U.S. Supports Open Models to Promote AI Innovation

Published: August 7, 2024
  • U.S. officials said the report provides a roadmap for responsible AI innovation
  • To monitor for emerging risks, the report calls for the U.S. government to develop an ongoing program to collect evidence of risks and benefits.

The Biden Administration recently issued policy recommendations embracing openness in artificial intelligence (AI) while calling for active monitoring of risks in powerful AI models.

The Department of Commerce’s National Telecommunications and Information Administration (NTIA) Report on Dual-Use Foundation Models with Widely Available Model Weights recommends that the U.S. government develop new capabilities to monitor for potential risks, but refrain from immediately restricting the wide availability of open model weights in the largest AI systems.

“Today’s report provides a roadmap for responsible AI innovation and American leadership by embracing openness and recommending how the U.S. government can prepare for and adapt to potential challenges ahead,” said U.S. Secretary of Commerce Gina Raimondo in a press statement.

U.S. Guidelines

As part of an executive order on AI last year, President Joe Biden gave the U.S. Commerce Department until July to consult experts and return with recommendations on how to manage the potential benefits and risks of so-called open models.

Open models allow developers to build upon and adapt previous work, broadening the availability of AI tools to small companies, researchers, nonprofits, and individuals. The NTIA was tasked with reviewing the risks and benefits of large AI models with widely available weights, and with developing policy recommendations that maximize those benefits while mitigating the risks.

“The openness of the largest and most powerful AI systems will affect competition, innovation and risks in these revolutionary tools,” said Assistant Secretary of Commerce for Communications and Information and NTIA Administrator Alan Davidson.

Open AI Debate

“NTIA’s report recognizes the importance of open AI systems and calls for more active monitoring of risks from the wide availability of model weights for the largest AI models,” continued Davidson. “Government has a key role to play in supporting AI development while building capacity to understand and address new risks.”

As noted in an AP News story, this is the federal government’s first time delving into a tech industry debate between developers such as ChatGPT-maker OpenAI advocating closing off their models’ inner workings to guard against misuse, and others, such as Meta Platforms CEO Mark Zuckerberg, who have lobbied for a more open approach they say favors innovation.

Ongoing Oversight

To monitor for emerging risks, the report calls for the U.S. government to develop an ongoing program to collect evidence of risks and benefits, evaluate that evidence, and act on those evaluations, including possible restrictions on model weight availability, if warranted. This falls into three categories:

  • Collecting Evidence, which includes performing research into the safety of powerful and consequential AI models, as well as their downstream uses; supporting external research into the present and future capabilities of dual-use foundation models and risk mitigations; and maintaining a set of risk-specific indicators.
  • Evaluating Evidence, which includes developing and maintaining thresholds for the risk-specific indicators to signal a potential change in policy; continually reassessing and tailoring benchmarks and definitions for monitoring and action; and maintaining professional capabilities in technical, legal, social science, and policy domains to support the evaluation of evidence.
  • Acting on Evidence. If a future evaluation of evidence leads to the determination that further action is needed, the government could place restrictions on access to models or engage in other risk mitigation measures as deemed appropriate, such as restricting access to material components in the case of bio-risk concerns.
