Sam Altman, the CEO of ChatGPT maker OpenAI, is set to testify before Congress on Tuesday as lawmakers look to analyze the benefits and drawbacks of artificial intelligence and push for oversight of the emerging technology.
Altman will testify Tuesday before the Senate Judiciary Committee alongside IBM executive Christina Montgomery, who serves as the company’s Chief Privacy and Trust Officer, and New York University professor emeritus Gary Marcus, an “outspoken critic of the current AI hype.”
The hearing, titled “Oversight of A.I.: Rules for Artificial Intelligence,” will aim to help lawmakers understand AI and figure out how best to oversee, and potentially regulate, the technology, which has seen a massive surge in popularity – and concerns – in recent weeks and months.
“Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” Sen. Richard Blumenthal, D-Conn., who chairs the Judiciary subcommittee holding the hearing, said in a statement. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology. I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”
“I am not an expert at all,” Sen. Josh Hawley, R-Mo., the ranking Republican on the panel, told Spectrum News. “At all, at all … what all members of Congress need to do is learn more about it, learn where the technology is moving.”
Democrats have introduced legislation in both the House and Senate to address AI. One bill would create a new agency to manage U.S. policies on AI. Another would require political campaigns to disclose the use of AI in their advertising. Experts warn the groundbreaking technology could be exploited to push disinformation that might influence elections, including the 2024 presidential race.

“There’s a lot that we don't know,” California Rep. Ted Lieu, who is calling for the creation of a bipartisan commission to issue recommendations on regulating AI, told Spectrum News in a recent interview. “It's only been the second public release of [the AI chatbot] ChatGPT. What does ChatGPT version 12 look like? Where does AI go two years, four years, six years from now?”

As for elections, Lieu, who is also a computer programmer, said he thinks “the American public is starting to learn with every passing day that you just shouldn't believe everything you see on the internet.”
“People should be concerned about AI in the upcoming election because it's going to blur the line between fake and real,” said Darrell West, a senior fellow with the think tank the Brookings Institution who specializes in technology innovation and governance studies. “Even experts are going to have difficulty distinguishing the false material.”
In March, before Donald Trump was criminally charged with falsifying business records, someone posted photos to social media of the former president resisting arrest in New York. The user disclosed that the images were generated by AI, but they were shared by others without context.
Immediately after President Joe Biden announced last month that he is seeking reelection, the Republican National Committee released a video ad that used realistic, AI-created images to paint a doomsday picture of what the U.S. might look like if Biden serves another four years.
It showed China invading Taiwan, banks closing, migrants swarming bridges and armed officers guarding a San Francisco shut down amid escalating crime. The 30-second ad included a disclaimer saying the images were generated by artificial intelligence.
In March, a group of prominent tech leaders, including Tesla CEO Elon Musk, an early investor in OpenAI, penned a letter calling for a six-month pause in artificial intelligence experiments, warning of “profound risks to society and humanity.”
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter, also signed by Marcus, as well as Apple co-founder Steve Wozniak and 2020 presidential candidate Andrew Yang, reads. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
“Such decisions must not be delegated to unelected tech leaders,” the letter continues. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Spectrum News' Ryan Chatelain contributed to this report.