We have curated articles that express a range of viewpoints and news related to AI.

Consultation on High-Risk AI
The European Commission has launched a six-week public consultation on implementing rules for high-risk AI systems under the EU AI Act. The consultation has surfaced debates over compliance burdens, requests from American tech giants to simplify the code, worries about European competitiveness in innovation, and expert concerns about transparency and downstream obligations for high-risk AI applications.

Industry leaders urge Senate to protect against AI deepfakes with No Fakes Act
Tech and music leaders urged Congress to pass the No Fakes Act, a bipartisan bill protecting Americans from AI-generated deepfakes that steal voices and likenesses. The proposed law would build upon the Take It Down Act, creating federal safeguards, holding platforms accountable, and empowering victims to remove unauthorized digital replicas quickly, addressing urgent privacy and creative rights concerns.

In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights
In February 2024, a teenager shot himself after months of conversation with a chatbot whose last message asked him to “come home to me as soon as possible.” A federal judge has now rejected AI firm Character.AI’s claim that its chatbot’s outputs are protected free speech, allowing a wrongful death lawsuit to proceed. The case is likely to set a major precedent for AI liability and regulation as technology outpaces legal safeguards.

At Least Two Newspapers Syndicated AI Garbage
The Chicago Sun-Times and Philadelphia Inquirer syndicated a 50-page AI-generated "summer guide" filled with fake books, dubious tips, and non-existent individuals, exposing systemic editorial failures. The freelancer behind “Heat Index” admitted using unchecked AI content from King Features, highlighting journalism's crisis as financial strains and AI "slop" erode trust in legacy media. While relatively harmless in itself, the incident points to the problematic AI future we could be heading toward.

State AGs fill the AI regulatory void
As AI integration accelerates, state attorneys general leverage existing privacy, consumer protection, and anti-discrimination laws to address AI risks like deepfakes, bias, and fraud. California, Massachusetts, Oregon, New Jersey, and Texas lead enforcement, signaling heightened scrutiny for businesses deploying AI systems without robust compliance safeguards.

Plastic, fantastic ... and potentially litigious: AI Barbie goes from dollhouse to courtroom
The viral “AI Barbie” trend, where users create Barbie-themed AI images, raises thorny legal issues from copyright and trademark infringement to data privacy and regulatory uncertainty. As generative AI collides with pop culture, creators and developers must navigate a shifting legal landscape to avoid infringing on intellectual property and exposing themselves and others to cybersecurity risks.

AI Can Assist Human Judges, but It Can’t Replace Them (Yet)
Legal experts and judges agree that AI can help process information and streamline judicial tasks, but it lacks the reasoning, empathy, and moral judgment needed for true adjudication. While courts experiment with AI in administrative roles, the consensus is that human judges remain essential, at least for now.

US judicial panel advances proposal to regulate AI-generated evidence
A federal judicial panel has advanced a draft rule to regulate AI-generated evidence in court, seeking public input to ensure such evidence meets reliability standards. The move reflects growing pressure for regulations to keep pace with the judiciary’s need to adapt to rapid technological change, balanced with concern about the trustworthiness of machine-generated information.

State Bar of California admits it used AI to develop exam questions, triggering new furor
The California State Bar faces scrutiny after using AI to generate bar exam questions, drawing widespread criticism. Issues of conflict of interest, questionable test writing, and the viability of online testing have arisen amid a financial crisis. Now facing lawsuits and an audit, the agency illustrates the risks and accountability challenges legal institutions take on when they experiment with AI in high-stakes, high-impact settings.

Executive Order Issued Calling for Advancement of Artificial Intelligence Education for American Youth
President Trump’s latest executive order pushes AI education nationwide, prioritizing AI in federal grant programs and establishing a White House task force. While questions remain about how schools should responsibly adopt AI into the classroom, the order sets the stage for new legal frameworks and public-private partnerships in AI policy and compliance.

Inside the First Major U.S. Bill Tackling AI Harms—and Deepfake Abuse
In a historic move, the U.S. House and Senate passed the Take It Down Act, the nation’s first major law directly addressing AI-induced harms such as deepfakes. The bipartisan bill garnered support from tech companies like Meta and Snapchat and aims to give victims of malicious AI-generated content a legal path to demand removal and seek damages. Despite concerns over its enforceability and potential for abuse, this landmark bill signals a new era for AI accountability.

New Federal Agency Policies and Protocols for Artificial Intelligence Utilization and Procurement Can Provide Useful Guidance for Private Entities
The U.S. Office of Management and Budget recently released sweeping new guidelines for how federal agencies buy and use AI, aiming to spur innovation and cut red tape. These policies could ripple out to the private sector, reshaping how companies approach responsible AI governance and compliance.

China's love of open-source AI may shut down fast
China’s embrace of open-source AI is potentially vulnerable as regulatory pressures mount. Once seen as a pathway to innovation, the approach now faces government scrutiny that could stifle development. Will China’s AI ambitions clash with its tightening controls? Discover how shifting policies may reshape the nation’s, and the world’s, AI landscape.

Tony Blair Institute AI copyright report sparks backlash
The Tony Blair Institute's recent report advocates for the UK to relax copyright laws, enabling AI firms to utilize protected materials without explicit permission. This proposal has ignited backlash from the creative industry, which fears exploitation and economic harm. Governments must decide whether to prioritize advancement in the fiercely competitive AI race or the protection of creative works by human authors.

NYT case against OpenAI and Microsoft can advance
A U.S. District Court judge allowed The New York Times' lawsuit alleging copyright infringement by OpenAI and Microsoft to proceed, marking a significant moment in the intersection of AI and intellectual property law. The eventual final ruling is likely to draw upon the U.S. Copyright Office's Report on Copyright and Artificial Intelligence and hinge upon the nature of machine learning used by OpenAI.

Trump Administration Receives 8,755 Comments for AI Action Plan — AI: The Washington Report
The Trump administration received numerous comments in response to its Request for Information (RFI) for the development of an AI Action Plan. The comments highlight stakeholders' priorities on AI policies and reflect debates over copyrighted information for AI model training, federal preemption of state AI laws, and export controls. The AI Action Plan is expected to be announced by mid-July 2025.

Third Draft of the General-Purpose AI Code of Practice Misses the Mark on Fundamental Rights
The third draft of the EU's General-Purpose AI Code of Practice faces criticism for inadequately addressing fundamental rights risks. Despite the addition of illegal discrimination to the covered risks, mitigation of most such risks remains optional, raising concerns about the code's effectiveness in ensuring AI providers address these risks under the EU AI Act.

State Legislatures Consider New Wave of 2025 AI Regulation
State legislatures are introducing hundreds of AI bills in 2025, focusing on consumer protection, sector-specific regulations, chatbot transparency, generative AI oversight, data center energy usage, and frontier model safety. These diverse proposals aim to shape the U.S. AI regulatory landscape amid a current absence of federal action and will have implications for business development and consumer interactions.

New Proposed Regulations Will Significantly Shape How Businesses Leverage AI in Personnel Decisions
The California Civil Rights Council is developing new regulations to curb AI-driven employment discrimination by requiring anti-bias testing and extended record-keeping. Employers must demonstrate proactive measures to prevent unlawful bias in hiring and recruitment processes, ensuring AI systems align with fair employment practices.

AI Meets HIPAA Security: Understanding HHS’s Risk Strategies and Proposed Changes
HHS's proposed HIPAA changes would address AI security risks amid the technology's rapid integration into healthcare, applying a previously tech-agnostic regulation to AI. Covered entities would be required to incorporate AI into risk assessments and manage vulnerabilities in how they handle ePHI. This shift aims to ensure the confidentiality and integrity of sensitive health data as AI technologies evolve.