About the Event

The Carnegie AI Safety Initiative (CASI) at Carnegie Mellon University is hosting the inaugural Pittsburgh AI Policy Hackathon, a hybrid policy writing competition open to undergraduate and graduate students across the greater Pittsburgh area.

Teams of 1–2 will choose one of three brackets addressing pressing AI policy challenges and write a policy brief; top teams will be invited to present in person to a panel of judges on CMU's campus.

Event Details

Writing Period
April 6–15, 2026
Prompts released; work asynchronously
Submission Deadline
Wednesday, April 15
11:59 PM ET
Finals Day
Sunday, April 19, 2026
Posner Hall, CMU
Prize Pool
$4,000+ across brackets
Eligibility
Students at any Pittsburgh-area university

Key Dates

April 6
Monday: Prompts released
April 15
Wednesday: Submission deadline, 11:59 PM ET
April 17
Friday: Finalist notifications
April 19
Sunday: Final round (invite-only) in Posner Hall. Logistics shared in finalist notifications.

Office Hours

Drop by our office (Unit 1E, 201 S Craig St) if you want to talk through the prompts, get feedback on your approach, or ask logistical questions.

  • Week 1: Wednesday, April 8, 4:00–6:00 PM
  • Week 2: Wednesday, April 15, 4:00–6:00 PM

Brackets & Prompts

Teams choose one of the three brackets below. Each bracket includes a full prompt, background, and a set of Narrowing Questions.

Bracket 1

Central Question

How should the federal government assign legal responsibility for AI-enabled consumer fraud across a multi-party deployment chain, and what enforcement mechanism can operate at the speed and scale of autonomous agents rather than human investigators?

Background and Context

Consider a scenario drawn from documented fraud patterns. An elderly woman in Pittsburgh receives a call from someone who sounds exactly like her grandson. He is stranded and needs $40,000 wired immediately. The voice is an AI-generated clone, deployed by a fraud ring through a commercial agentic AI platform and configured by an operator who has since disappeared. She transfers the money. Law enforcement identifies the agentic system and traces the deployment chain, but no single actor clearly violated an existing criminal statute. The model developer did not know about the fraud. The platform marketed its tools for legitimate sales use. The operator is unreachable. Nobody is prosecuted. The woman has no civil recourse.

This is not a hypothetical edge case. AI-powered voice cloning is being used to run grandparent scams, impersonate bank representatives, and conduct fraud at a scale no human operation could match. A single agentic call system can place thousands of simultaneous calls, adapting scripts in real time, at a cost approaching zero per call.

The FTC has made clear that using AI tools to deceive or defraud people is illegal under existing law. Its enforcement action against Air AI may be the first consumer protection case to allege deception specifically in the marketing of agentic AI capabilities. But enforcement keeps running into the same structural wall.

The agentic AI deployment ecosystem involves at least four distinct parties: AI model and voice-cloning developers, platform companies, deploying businesses and operators, and the consumers who are victimized. No existing federal statute cleanly assigns liability across this chain. Section 5 of the FTC Act prohibits unfair or deceptive acts in commerce but assumes identifiable responsible parties. Wire fraud statutes require proof of intent. Section 230 may shield platform intermediaries. The result is a system in which each actor has plausible grounds to disclaim responsibility.

The Telephone Consumer Protection Act of 1991 offers a relevant precedent — when robocall technology enabled fraud at scale, Congress responded with structural restrictions on the technology itself rather than trying to prosecute every bad actor.

Your Task

Your team is jointly advising the FTC Bureau of Consumer Protection and the Senate Commerce Committee's consumer protection staff. Develop a federal policy proposal that does one or more of the following:

  • Establishes a liability allocation framework that assigns responsibility across the agentic AI deployment chain under a coherent legal theory
  • Modifies existing statutes (FTC Act, wire fraud, or TCPA) to address agentic AI fraud without requiring entirely new legislation
  • Proposes disclosure and audit requirements for businesses deploying agentic AI in consumer contexts
  • Creates a victim redress mechanism that gives consumers practical recourse when agentic AI fraud causes financial harm

Narrowing Questions

  • In the Pittsburgh grandmother scenario, who do you name as the liable party, under what legal theory, and what does she actually recover?
  • Should liability be strict, negligence-based, or tied to specific knowledge of misuse? What are the deterrence and innovation tradeoffs?
  • Your framework must be implementable under existing congressional authority. Which existing statute provides the most viable enforcement hook?
  • How does your framework distinguish between a small business using an off-the-shelf agentic platform legitimately vs. a bad actor using the same platform for fraud?
  • Existing enforcement tools operate at human speed. AI-enabled fraud operates at machine speed. What mechanism closes this gap?
  • How do you design a victim redress mechanism that is practically accessible to the elderly and low-income consumers least likely to navigate civil litigation?

Relevant Regulations and Resources

  • FTC Act Section 5: unfair or deceptive acts or practices in commerce
  • Wire Fraud Statute (18 U.S.C. Section 1343)
  • Telephone Consumer Protection Act (1991)
  • Section 230, Communications Decency Act
  • CFPB Circular on AI in Financial Products (2024)
  • FTC v. Air AI (2024–2025)
  • FTC Voice Cloning Challenge (2023)
  • FTC Impersonation Rule (2024)
  • Brookings: “Governing AI Agents” (2024)
  • Stanford HAI: “Generative AI and Consumer Fraud” (2024)

Bracket 2

Central Question

How should the federal government limit the use of AI-powered surveillance tools by law enforcement and intelligence agencies, defining with operational specificity which uses are permissible and which are prohibited, and establishing an institutional mechanism that enforces that boundary and prevents exceptions from expanding over time, while preserving legitimate national security applications?

Background and Context

U.S. Customs and Border Protection currently deploys facial recognition technology at airports, land border crossings, and seaports, scanning travelers — including U.S. citizens — without individualized suspicion. Whether the border search exception to the Fourth Amendment extends to AI-powered mass biometric surveillance remains constitutionally unsettled.

The technical limitations of these systems are central to the policy problem. Independent audits have documented facial recognition error rates for darker-skinned women up to 35 percentage points higher than for lighter-skinned men. In 2020, Robert Williams of Detroit was wrongfully arrested based on a facial recognition misidentification, one of at least three documented wrongful arrests of this kind.

The history of surveillance mission creep is directly relevant. COINTELPRO operated under national security authority from 1956 to 1971 and was used primarily against civil rights leaders and political organizers. AI surveillance tools initially deployed at the border have since been documented targeting Black Lives Matter protests, Muslim communities, and immigration advocates far from any port of entry.

AI surveillance is qualitatively different from prior generations: a single system can process the biometric data of every person passing through a major airport and flag individuals without any human involvement until a flag is generated.

Your Task

Your team is advising the Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law. Develop a federal policy proposal that does one or more of the following:

  • Defines with operational specificity which AI surveillance uses are permissible and which are prohibited, including at the border where constitutional protections are weakest
  • Establishes an oversight mechanism with a structural design that resists historical patterns of exception creep
  • Proposes technical standards — including accuracy, demographic parity, and audit requirements — that AI surveillance systems must meet before deployment
  • Creates a community accountability mechanism that gives affected populations a role in surveillance governance beyond litigation

Narrowing Questions

  • Does the border search exception extend to AI-powered mass biometric surveillance of every person at a port of entry? What standard should Congress establish and on what constitutional basis?
  • Define the national security exception your framework permits as a specific, bounded category. What institutional mechanism prevents that exception from expanding?
  • Does technical inaccuracy alone, without discriminatory intent, create a constitutional equal protection problem? What accuracy and demographic parity standards should apply?
  • AI surveillance tools built for counterterrorism have been documented targeting political organizers. How does your framework prevent that migration, and who has standing to challenge a violation?
  • Should a federal framework preempt more protective municipal bans (San Francisco, Boston, Portland), preserve them, or set a floor that localities can exceed?

Relevant Regulations and Resources

  • Fourth Amendment; Almeida-Sanchez v. United States (1973); Carpenter v. United States (2018)
  • Foreign Intelligence Surveillance Act (FISA) and Section 702
  • Executive Order 12333
  • Facial Recognition and Biometric Technology Moratorium Act (introduced, not enacted)
  • EU AI Act Article 5 (2024): prohibition on real-time remote biometric identification, with narrow exceptions
  • CBP Facial Recognition Privacy Impact Assessments (2020–2024)
  • Church Committee Final Report (1975–76)
  • MIT Media Lab: “Gender Shades” (Buolamwini and Gebru, 2018)
  • Georgetown Law: “The Perpetual Line-Up” (2016)
  • Brennan Center: “AI and Government Surveillance” (2024)
  • ACLU: “Face Recognition Technology and CBP” (2023)

Bracket 3

Central Question

Training AI models on existing creative work is now an established practice, legally contested but commercially entrenched. How should the United States design a prospective licensing framework that compensates creators for future use of their work without creating transaction costs that slow AI development or systematically exclude independent creators in favor of institutional rights holders with the leverage to negotiate private deals?

Background and Context

This prompt is not about the legal battles over past training. Those questions are working their way through the courts in cases including Bartz v. Anthropic, Thomson Reuters v. Ross Intelligence, and Getty Images v. Stability AI. This prompt asks a different and more urgent question: what framework should govern AI training going forward?

Modern foundation models are trained on datasets containing hundreds of billions of tokens from millions of individual creators without their knowledge or consent. The U.S. Copyright Office's May 2025 report concluded that some training uses fall outside fair use and explicitly deferred to Congress for a legislative solution.

Congress has a relevant precedent: the Copyright Act of 1909 established a compulsory licensing regime for the mechanical reproduction of musical works. Whether a similar structure is appropriate for AI training data, and how to design it so that independent creators can actually participate, are central questions your brief should address.

The private licensing market forming in the absence of legislation creates a two-tier system where institutional rights holders (major labels, news organizations) have negotiated compensation while independent creators have no equivalent pathway. The window to design an inclusive licensing ecosystem is closing.

Your Task

Your team is advising the Senate Judiciary Subcommittee on Intellectual Property, preparing draft legislation for the 2026 session. Develop a federal policy proposal that does one or more of the following:

  • Designs a prospective licensing framework specifying the default rule (opt-in vs. opt-out), the rate-setting mechanism, and the administrative body responsible for royalty distribution
  • Proposes a technical standard for training data provenance and disclosure that is feasible at scale and meaningful for creators
  • Creates a mechanism specifically designed to reach independent creators, not just institutional rights holders
  • Proposes a compulsory licensing regime analogous to Section 115, specifying what it permits, what it prohibits, and how it handles creators who affirmatively opt out

Narrowing Questions

  • Should a prospective licensing regime be opt-in or opt-out? What are the equity implications for independent creators without legal or administrative resources?
  • What does meaningful disclosure look like at the scale of hundreds of billions of tokens from millions of creators? Is there a difference between category-level and individual-creator-level disclosure?
  • How do you set a statutory royalty rate for AI training use when the market is still forming? What mechanism applies, and who has standing to participate?
  • What is the maximum friction your framework can impose while remaining genuinely accessible to an independent musician with no label, no legal department, and no standardized catalog?
  • How does your framework handle AI systems that generate outputs in the style of a specific artist without reproducing their work verbatim? Is that a training data problem, an output problem, or neither?

Relevant Regulations and Resources

  • Copyright Act Section 107 (Fair Use)
  • Copyright Act Section 115 (Compulsory Licensing)
  • Digital Millennium Copyright Act (DMCA)
  • Music Modernization Act (2018)
  • U.S. Copyright Office Report on AI and Copyright (May 2025)
  • Bartz v. Anthropic (N.D. Cal. 2025)
  • Thomson Reuters v. Ross Intelligence (D. Del. 2025)
  • Authors Guild v. OpenAI (ongoing)
  • C2PA / Content Credentials Standard
  • Mechanical Licensing Collective (MLC)
  • Copyright Alliance: “AI Training and Creator Compensation” (2024)
  • Electronic Frontier Foundation: “Fair Use and AI Training” (2024)
  • EU Text and Data Mining Exception (DSM Directive Article 4)

Prizes

$4,000+ across three brackets. Each bracket will have its own winners. The prize breakdown will be announced closer to finals.

Submission Guidelines

Your submission is a policy brief responding to one of the three prompts. You may submit to only one bracket.

What to Submit

A policy brief of under 1,500 words (excluding references). Your brief should include:

  • An executive summary
  • A clear statement of the problem
  • Your proposed policy solution
  • Implementation considerations (who implements it, how, what it costs, what the timeline looks like)
  • Anticipated objections or tradeoffs, and how you’d address them

Formatting

  • PDF format
  • 1-inch margins, 12pt Times New Roman, 1.15 line spacing
  • Under 1,500 words (references not counted)
  • Include your name(s) and bracket number in the header; to allow anonymous review, do not include names in the document body.
  • File name example: Member1FullName_Member2FullName_Bracket1.pdf

Citations

Please cite your sources! Any standard format (APA, Chicago, etc.) is fine, as long as it is used consistently. References do not count toward the word limit.

AI Tool Use

You may use AI tools for research and editing assistance, but the ideas and arguments must be your own. Judges may ask about your reasoning during Q&A. Include a brief disclosure at the end of your brief noting any AI tools used (also not counted toward the word limit).

How to Submit

Submit your policy brief as a PDF through the submission form. Only one person per team needs to submit. Deadline: Wednesday, April 15 at 11:59 PM ET.

Resubmission Policy

You may resubmit as many times as you like before the deadline. Only your most recent submission will be reviewed. After the deadline, no further changes are accepted.

What Happens After Submission

The CASI team and the judges will review all briefs. By Friday, April 17, teams selected for the in-person finals will be notified by email. Finalists will present their proposals on April 19 at CMU.

Resources and Guides

Agenda for Final Competition Day (April 19)

The final round will be held in person at Posner Hall, CMU (invite-only). Full logistics will be shared with finalists. Please check back later for the finalized agenda.

Judging Rubric

Submissions will be evaluated on:

  • Problem analysis: Do you demonstrate a clear understanding of the issue?
  • Policy solution: Is your proposal specific, well-designed, and actionable?
  • Feasibility: Is it politically, legally, and practically viable?
  • Anticipation of objections: Do you address counterarguments and tradeoffs?
  • Clarity and persuasiveness: Is the brief well-written and well-organized?

Frequently Asked Questions

Who is eligible to participate?

The hackathon is open to undergraduate and graduate students at any Pittsburgh-area university.

Do I need a teammate?

No. You can participate solo or as a team of two. Use the “Looking for a Partner?” link at the top if you’d like to find a teammate.

Do I need to know how to code?

No. This is a policy writing competition, not a coding event. You’ll be writing a policy brief, not software.