Senator Cynthia Lummis (R-WY) has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, a legislative proposal designed to clarify liability frameworks for artificial intelligence (AI) used by professionals.
The bill would require transparency from AI developers, while stopping short of requiring models to be open source.
In a press release, Lummis said the RISE Act would mean that professionals, such as physicians, attorneys, engineers, and financial advisors, remain legally responsible for the advice they provide, even when it is informed by AI systems.
At the same time, the AI developers who create these systems can shield themselves from civil liability when things go awry only if they publicly release model cards.
The proposed bill defines model cards as detailed technical documents that disclose an AI system’s training data sources, intended use cases, performance metrics, known limitations, and potential failure modes. This documentation is intended to help professionals assess whether the tool is appropriate for their work.
"Wyoming values both innovation and accountability; the RISE Act creates predictable standards that encourage safer AI development while preserving professional autonomy,” Lummis said in a press release.
“This legislation doesn’t create blanket immunity for AI,” Lummis continued.
The immunity granted under the act, however, has clear boundaries. The legislation excludes protection for developers in instances of recklessness, willful misconduct, fraud, knowing misrepresentation, or when actions fall outside the defined scope of professional usage.
Additionally, developers face a duty of ongoing accountability under the RISE Act. AI documentation and specifications must be updated within 30 days of deploying new versions or discovering significant failure modes, reinforcing continuous transparency obligations.
Stops short of open source
The RISE Act, as it's written now, stops short of mandating that AI models become fully open source.
Developers can withhold proprietary information, but only if the redacted material isn’t related to safety, and each omission is accompanied by a written justification explaining the trade secret exemption.
In a prior interview with CoinDesk, Simon Kim, the CEO of Hashed, one of Korea's leading VC funds, spoke about the danger of centralized, closed-source AI that's effectively a black box.
"OpenAI is not open, and it is controlled by very few people, so it's quite dangerous. Making this type of [closed source] foundational model is similar to making a 'god', but we don't know how it works," Kim said at the time.