Amid Deep Concerns, Biden Administration Secures AI Safety Commitments from Tech Companies
Introduction
Amid deep concerns about the risks posed by artificial intelligence, the Biden administration has lined up commitments from seven tech companies — including OpenAI, Google and Meta — to abide by safety, security and trust principles in developing AI.
Voluntary Commitments from Leading AI Companies
Representatives of seven “leading AI companies” — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — are scheduled to attend an event Friday at the White House to announce that the Biden-Harris administration has secured voluntary commitments from the companies to “help move toward safe, secure, and transparent development of AI technology,” according to the White House.
Promoting Safety and Security of AI
“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the Biden administration said in a statement Friday. “To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”
Voluntary Agreements and Legislative Efforts
Note that the voluntary agreements from Meta, Google, OpenAI and the others are just that — they’re promises to follow certain principles. To ensure legal protections in the AI space, the Biden administration said, it will “pursue bipartisan legislation to help America lead the way in responsible innovation” in artificial intelligence.
Principles the AI Companies Have Committed To
- Develop ways for consumers to identify AI-generated content, such as through watermarks;
- Engage independent experts to assess the security of their tools before releasing them to the public;
- Share information on best practices and attempts to get around safeguards with other industry players, governments and outside experts;
- Allow third parties to look for and report vulnerabilities in their systems;
- Report the limitations of their technology and provide guidance on appropriate uses of AI tools;
- Prioritize research on societal risks of AI, including around discrimination and privacy; and
- Develop AI with the goal of helping mitigate societal challenges such as climate change and disease.
Global Collaboration on AI Safety
The White House said it has consulted on voluntary AI safety commitments with other countries, including Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE and the U.K.
Policy Guidance for Federal Agencies
The White House said the Office of Management and Budget will soon release draft policy guidance for federal agencies to ensure that the development, procurement and use of AI systems are centered around safeguarding Americans’ rights and safety.