Fact Check: The Responsible Innovation and Safe Expertise Act is the first federal legislation to offer clear guidelines for AI liability in a professional context.
What We Know
The Responsible Innovation and Safe Expertise (RISE) Act was introduced by Senator Cynthia Lummis on June 12, 2025. The legislation aims to clarify the legal responsibilities of professionals, such as physicians, attorneys, engineers, and financial advisors, who use AI systems in their practices. It establishes that these professionals must exercise due diligence and verify the outputs of any AI systems they use, and that they retain legal accountability for the advice they provide (Lummis Introduces AI Legislation).
The RISE Act is described as the first targeted liability reform legislation specifically addressing professional-grade AI, a significant step in federal regulation of AI technologies. Rather than mandating disclosure outright, the act conditions a civil liability safe harbor on AI developers publicly disclosing their model specifications, a condition intended to give professionals the information they need to evaluate the AI tools they use (Liability Rules and Standards). The legislation is positioned as a response to the current patchwork of state liability standards, which creates legal uncertainty and hinders innovation in AI (NBC News).
Analysis
The claim that the RISE Act is the first federal legislation to provide clear guidelines for AI liability in a professional context is supported by the absence of any comparable federal framework. While federal agencies such as the Federal Trade Commission and the Department of Justice have taken action on AI accountability and liability, those efforts have largely applied existing laws to AI technologies rather than establishing new liability rules (Liability Rules and Standards).
The RISE Act distinguishes itself by specifically targeting the professional use of AI and establishing clear legal responsibilities for licensed professionals. This is a notable departure from prior regulatory frameworks, which have not provided explicit liability guidelines for AI (Safe, Secure, and Trustworthy Development and Use of AI).
Critically, while the act does not grant blanket immunity to AI developers, it does create a conditional safe harbor for them, which has raised concerns about weakened accountability (NBC News, AI companies could soon be protected from most lawsuits). Even so, its emphasis on transparency and professional responsibility is a significant advancement in the legal landscape surrounding AI.
Conclusion
Verdict: True
The claim that the Responsible Innovation and Safe Expertise Act is the first federal legislation to offer clear guidelines for AI liability in a professional context is accurate. The legislation establishes a framework that specifically addresses the responsibilities of professionals using AI, something that had not previously been codified at the federal level. Its focus on transparency and accountability marks a critical development in the regulation of AI technologies in professional settings.
Sources
- Lummis Introduces AI Legislation to Foster Development & Strengthen Professional Responsibility
- Liability Rules and Standards
- Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
- New GOP bill would protect AI companies from lawsuits if they disclose how their systems work
- AI companies could soon be protected from most lawsuits under new GOP bill
- Senate Bill Would Shield AI Developers From Civil Liability in Certain Uses of Their Tools
- The State AI Laws Likeliest To Be Blocked by a Moratorium