The AI Forum promotes thoughtful discussion of the legal and societal challenges posed by AI.
Responsible AI in the News
Tech and music leaders urged Congress to pass the NO FAKES Act, a bipartisan bill protecting Americans from AI-generated deepfakes that replicate their voices and likenesses without consent. The proposed law would build on the Take It Down Act, creating federal safeguards, holding platforms accountable, and empowering victims to remove unauthorized digital replicas quickly, addressing urgent privacy and creative rights concerns.
In February 2024, a teenager shot himself after months of conversations with a chatbot whose final message urged him to “come home to me as soon as possible.” A federal judge has rejected AI firm Character.AI’s claim that its chatbot’s outputs are protected free speech, allowing a wrongful death lawsuit to proceed. The case will likely set a major precedent for AI liability and regulation as technology outpaces legal safeguards.
The Chicago Sun-Times and Philadelphia Inquirer syndicated a 50-page AI-generated "summer guide" filled with fake books, dubious tips, and non-existent individuals, exposing systemic editorial failures. The freelancer behind “Heat Index” admitted using unchecked AI content from King Features, highlighting journalism's crisis as financial strains and AI "slop" erode trust in legacy media. Though the immediate harm was limited, the incident points to the problematic AI future we could be heading toward.
As AI integration accelerates, state attorneys general are leveraging existing privacy, consumer protection, and anti-discrimination laws to address AI risks like deepfakes, bias, and fraud. California, Massachusetts, Oregon, New Jersey, and Texas lead enforcement, signaling heightened scrutiny for businesses deploying AI systems without robust compliance safeguards.
The viral “AI Barbie” trend, where users create Barbie-themed AI images, raises thorny legal issues from copyright and trademark infringement to data privacy and regulatory uncertainty. As generative AI collides with pop culture, creators and developers must navigate a shifting legal landscape to avoid infringing on intellectual property and exposing themselves and others to cybersecurity risks.
Legal experts and judges agree that AI can help process information and streamline judicial tasks, but it lacks the reasoning, empathy, and moral judgment needed for true adjudication. While courts experiment with AI in administrative roles, the consensus is that human judges remain essential, at least for now.
A federal judicial panel has advanced a draft rule to regulate AI-generated evidence in court, seeking public input to ensure such evidence meets reliability standards. The move reflects growing pressure for regulations to keep pace with the judiciary’s need to adapt to rapid technological change, balanced with concern about the trustworthiness of machine-generated information.
The California State Bar faces scrutiny after using AI to generate bar exam questions, drawing widespread criticism. Issues regarding conflicts of interest, questionable test writing, and the viability of online testing have arisen amid a financial crisis. Now the subject of lawsuits and an audit, the bar illustrates the risks and accountability challenges that arise when legal institutions experiment with AI in high-stakes settings.
President Trump’s latest executive order pushes AI education nationwide, prioritizing AI in federal grant programs and establishing a White House task force. While questions remain about how schools should responsibly bring AI into the classroom, the order sets the stage for new legal frameworks and public-private partnerships in AI policy and compliance.
Finding the Intersection of Law and Society
While we might joke about the impending robot takeover, the humans are still responsible for how AI is deployed and regulated. Alex and Ted Alben explore the promise, pitfalls, and policy challenges of AI in society today and share a few conclusions they’ve come to about where humans should be paying most attention. Spoiler: the humans aren’t dead, but we do need to stay alert.
How does 2025 remind Merrill Brown of 1995? And how can journalists use AI responsibly and efficiently? In this episode of the Responsible AI podcast, we speak with the trailblazing journalist and advisor about the future of content automation and local journalism and how the two can work together.
How is artificial intelligence reshaping modern workplaces? In this episode of the Responsible AI podcast, cybersecurity expert Ted Alben discusses AI's impact on business and the need for human oversight. We discuss the arrival of ChatGPT and the evolving landscape of workplace technology and security challenges.
How can AI be used responsibly in litigation? On The Legal Department podcast, Stacy Bratcher speaks with Alex Alben about ensuring ethical AI use, identifying AI-generated content, and mining legal data for better outcomes. Don’t miss this episode exploring the future of AI-driven lawyering and its impact on litigation.
President Trump has rescinded Biden's AI Executive Order, leaving a policy vacuum. What will replace it? Alex Alben analyzes the implications of this decision and poses five key questions that will shape the future of AI regulation in the United States.
AI Forum Contributors
Alex Alben, Director
Nicole Alonso, Syro
David Alpert, Covington & Burling
Chris Caren, Turnitin
William Covington, UW Law
Tiffany Georgievski, Sony
Bill Kehoe, State of WA
Ashwini Rao, Eydle
Linden Rhoads, UW Regent
Sheri Sawyer, State of WA
Eugene Volokh, UCLA Law