Marking the latest development in the Biden administration’s increasingly urgent push to respond to artificial intelligence, seven top tech firms have committed to following a voluntary set of AI safeguards in concert with the federal government.
The companies, all of which have made significant investments in AI development, include some of the world’s most recognizable tech giants: Amazon, Google, Meta and Microsoft. ChatGPT-maker OpenAI and startups Anthropic and Inflection have also signed on.
“Artificial Intelligence is going to transform the lives of people around the world,” Biden said in remarks to announce the commitments on Friday. “The group here will be critical in shepherding that innovation with responsibility and safety by design to earn the trust of Americans.”
The obligations the companies said they will hold themselves to are threefold: ensuring AI products are safe before they are introduced to the public, building AI systems that put security first, and working to earn the public’s trust.
Biden added Friday that the companies also agreed to explore how AI can be used to meet societal challenges, including addressing cancer and climate change and investing in new jobs and education.
Administration officials noted that the commitment was voluntary on the part of the companies and not legally binding.
“These commitments are real and they’re concrete,” Biden said. “They're going to help the industry fulfill this fundamental obligation to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”
The president added during his remarks that companies will implement these practices “immediately.”
In a statement to Spectrum News, Amazon spokesperson Tim Doyle said the company is "committed to continued collaboration with the White House, policymakers, the industry, researchers, and the AI community."
"As one of the world’s leading developers and deployers of AI tools and services, Amazon supports these voluntary commitments to foster the safe, responsible, and effective development of AI technology," Doyle said.
Artificial intelligence, in its many forms, is at the bleeding edge of many industries and a key concern for those watching the future of labor. AI has been a central issue for members of the Writers Guild of America in their ongoing strike against film and television producers, a concern that SAG-AFTRA, the union representing actors, echoed in a letter to producers ahead of their own work stoppage.
The technology has also been at the forefront of debates over the future of media and communication. Experts warn that deepfakes, computer-generated media that depicts events that never happened in order to mislead viewers, may be deployed by bad actors seeking to manipulate the 2024 election cycle.
“We must be clear-eyed and vigilant about the threats emerging technologies can pose – don't have to – but can pose to our democracy and our values,” Biden said.
But reporters reminded administration officials that a voluntary commitment is only as good as the companies’ willingness to follow through, pointing to their uneven record of self-policing the platforms they own or operate. In response, an official said Thursday that the Biden-Harris administration is also “working on an executive order ... to make sure we manage the risks posed by this technology.”
That order, the official said, would seek to “govern the use of AI,” weighing ongoing Biden administration priorities such as equity, consumer protection, workers’ rights and national security, though the official declined to discuss the potential order in greater detail.
“These commitments are a promising step,” Biden said, adding that the companies and the administration “have a lot more work to do together” and that he will work with both parties on legislation and regulation.
Experts called the pledge a "good first step" but questioned whether it will be effective without a firmer government mandate.
"I don't think it has much teeth in it, right? There's no way of enforcing it," said Vasant Dhar, professor of Technology and Operations at New York University. "It's not even clear what this means. Fundamentally, I think the issue is that we don't really understand what the issues are ourselves, in the sense that we don't really understand what the risks are."
"It’s a great start, but only a start," said Gary Marcus, an AI expert who testified before Congress about the technology in June. "It’s voluntary, and we will need laws to mandate these things."
"The biggest omission is any kind of requirement on the companies to disclose their training data, which we need for many reasons, including fighting bias, understanding the models well enough to mitigate risks and compensating creators whose work is leveraged," Marcus added.
The past two presidential administrations have made artificial intelligence a priority, though the Biden White House in particular has publicly positioned itself as protecting Americans from harmful AI technology and bad-acting developers, publishing its Blueprint for an AI Bill of Rights in October 2022.
OpenAI CEO Sam Altman added attention and urgency to the issue in Washington when he testified about the technology at a Senate hearing in May. Federal Election Commission filings released on Saturday show Altman donated $200,000 to Biden’s joint fundraising committee, Biden Victory Fund, in June.