Biden Administration Sets Deadline for AI Use Disclosures in Government

By James Hickey, Managing Editor, RFIDJournal.com


By Dec. 1, federal agencies will have to detail their uses of AI

The Biden Administration is requiring federal agencies using artificial intelligence (AI) to adopt “concrete safeguards” by Dec. 1 as the federal government expands its use in a wide range of applications.

The Office of Management and Budget (OMB) directive instructs federal agencies to monitor, assess and test AI’s impacts “on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI,” according to a report from Reuters. Agencies must conduct risk assessments and set operational and governance metrics.

The increased use of generative AI—in private and public offices—to create text, photos and videos in response to open-ended prompts has met with worries about job losses and the creation of deepfakes.

Federal Safeguards

Biden Administration officials said they want federal agencies “to implement concrete safeguards when using AI in a way that could impact Americans’ rights or safety.”

At the top of that list is a detailed public disclosure so the public knows how and when AI is being used by the government.

To that end, the federal government is requiring its agencies to release inventories of AI use cases, report metrics about AI use, and release government-owned AI code, models and data where doing so does not pose risks.

Agencies Using AI

AI is currently being employed by assorted government agencies, including the Federal Emergency Management Agency (FEMA), to assess structural hurricane damage; the Centers for Disease Control and Prevention (CDC), to predict the spread of disease and detect opioid use; and the Federal Aviation Administration (FAA), to help “deconflict” air traffic to improve travel time at major airports.

The White House plans to hire 100 AI professionals to promote the safe use of AI and is requiring federal agencies to designate chief AI officers within 60 days.

This is the latest initiative by the Biden Administration to provide rules for AI use in government.

Previous Biden Actions

President Joe Biden signed an executive order in October 2023 invoking the Defense Production Act to require developers of AI systems posing risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government before those systems are publicly released.

In January, the Biden administration proposed requiring U.S. cloud companies to determine whether foreign entities are accessing U.S. data centers to train AI models through “know your customer” rules.