


Senators Propose Bipartisan AI Framework To Regulate Artificial Intelligence

In September 2023, Senator Richard Blumenthal (D-CT) and Senator Josh Hawley (R-MO) announced the Bipartisan Framework for U.S. AI Act (“Framework”). The two Senators serve as, respectively, Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. The Framework can be seen as Congress taking steps toward regulating AI.

This Framework is one of the first blueprints for enforceable AI protections at the federal level in the United States, and it holds the potential to begin addressing the growing concerns around the development and use of AI. The Framework contains five key goals: (1) establish a licensing regime administered by an independent oversight body, (2) ensure legal accountability for harms, (3) defend national security and international competition, (4) promote transparency, and (5) protect consumers and children.

Establish a Licensing Regime Administered by an Independent Oversight Body

The Framework would create a new Independent Oversight Body to oversee and regulate the use of AI by US companies. For example, the Independent Oversight Body would have the authority to conduct audits of companies seeking licenses, cooperate with other enforcement bodies, and monitor and report on the technological developments and economic impacts of AI.

The Independent Oversight Body would ensure that companies are complying with explicit licensing requirements; such requirements might include registration of information about AI models, and the approval of such licenses would be contingent on developers adopting and maintaining risk management, pre-deployment testing, data governance, and adverse incident reporting programs.

Ensure Legal Accountability for Harms

Under the Framework, consumers would have a private right of action, empowering them to sue technology providers whose AI models violate consumer privacy or civil rights, or cause other harms. The Framework also states that Congress should clarify that Section 230 of the Communications Decency Act, which shields tech companies from legal consequences for content posted by third parties, does not extend that protection to AI.

Defend National Security and International Competition

The Framework states that Congress should utilize existing trade controls (e.g., export controls, sanctions, and other legal restrictions) to limit the transfer of AI models and other related technologies to foreign adversaries and countries engaged in human rights violations.

Promote Transparency

The Framework provides numerous recommendations to promote transparency on AI use:

  • Companies should be required to disclose “essential information” regarding the development of their AI models. This may include training data, limitations, accuracy, and safety of AI models.
  • Companies should provide notice to users when they are interacting with an AI model, and AI-generated content should be identified, such as through watermarking.
  • AI system providers should be required to disclose “deepfakes” — digitally altered images or videos that realistically depict real people.
  • The Independent Oversight Body should establish a public database and reporting system for consumers to have easy access to information relating to compliance.

Protect Consumers and Children

The Framework states that companies using AI in “high risk or consequential situations” (e.g., facial recognition) should be required to employ “safety brakes,” which can include providing notice to affected individuals and allowing for human review of AI-driven decisions. The Framework also recommends strict limitations on generative AI involving children.

Contact one of the privacy experts in McGrath North’s Privacy and Cybersecurity team for all your questions related to the Bipartisan Framework for U.S. AI Act and potential future regulation of AI use, both in the United States and abroad.