Fact Check: "The Responsible Innovation and Safe Expertise Act clarifies that licensed professionals using AI tools retain legal liability for errors, provided AI developers disclose how their systems work."
What We Know
The Responsible Innovation and Safe Expertise (RISE) Act, introduced by Senator Cynthia Lummis, addresses the legal responsibilities associated with the use of artificial intelligence (AI) in professional settings. The legislation explicitly states that licensed professionals—including physicians, attorneys, engineers, and financial advisors—retain legal liability for any errors made while using AI tools, provided that AI developers disclose how their systems work. According to Lummis, this means professionals must exercise due diligence and verify the outputs of the AI systems they use (source-1, source-2).
The RISE Act is characterized as the first federal legislation to provide clear guidelines regarding AI liability in professional contexts. It requires AI developers to publicly disclose model specifications, which allows professionals to make informed decisions about the AI tools they choose to use (source-4, source-5).
Analysis
The claim that the RISE Act clarifies the legal liability of licensed professionals using AI tools is supported by multiple sources. Lummis’s statements emphasize that the legislation does not grant blanket immunity to AI developers; rather, it establishes that professionals are responsible for their decisions and must ensure the reliability of the AI outputs they rely on (source-1, source-2).
The bill aims to create a unified federal standard for liability, addressing the current confusion stemming from varying state laws. This is particularly important as AI becomes more integrated into critical decision-making processes across various fields (source-4, source-6).
However, it is essential to consider the potential biases of the sources. The primary statements come from Senator Lummis and her office, which may present the legislation in a favorable light. Nevertheless, independent news outlets corroborate these claims, lending additional credibility to the information presented (source-2, source-4).
Conclusion
The claim that the Responsible Innovation and Safe Expertise Act clarifies that licensed professionals using AI tools retain legal liability for errors, provided AI developers disclose how their systems work, is True. The legislation explicitly states that professionals must verify AI outputs and are responsible for their decisions, while also requiring developers to disclose critical information about their AI systems.
Sources
1. Lummis Introduces AI Legislation to Foster Development ...
2. GOP bill would shield AI companies from lawsuits if they're ...
3. Lummis says the RISE Act protects AI developers from liability
4. Senate Bill Would Shield AI Developers From Civil Liability ...
5. AI companies could soon be protected from most lawsuits ...
6. Lummis says the RISE Act protects AI developers from ...
7. New GOP bill would protect AI companies from lawsuits if ...