A few weeks into the 118th Congress, Rep. Ted Lieu, D-Calif., introduced a resolution asking Congress to ensure that artificial intelligence be developed and deployed in a way that is “safe, ethical” and respectful of American rights and privacy.
That resolution, while introduced by Lieu, wasn’t written by him, nor by a member of his staff. Rather, it was written by ChatGPT, the artificial language model that can take in complex user prompts and, within a matter of minutes, produce a response that looks, sounds and reads as if a human wrote it.
“It looks like any other resolution that someone could have introduced about trying to regulate artificial intelligence,” Lieu told Spectrum News. The point of the resolution, he said, was to encourage Congress to get a better grasp on what AI is and how it can be used, for good and for bad — lest the U.S. risk being left in the dust.
“Artificial intelligence is going to increase dramatically in the next few years,” said Lieu. Right now, AI is in its horse-drawn wagon stage, he added. “But it's going to become, essentially, a jet airplane with a personality in just a few years.”
Artificial intelligence can help make our lives easier, from audio transcription services that can help to produce articles like this, to facial recognition software, to giving targeted suggestions for consumer products.
“If you want to watch a movie on any one of your favorite service providers, Netflix, Paramount+, Disney+, or Pandora — you know, any of these sorts of services — they all use AI algorithms” to recommend new content, explained David Broniatowski, Associate Director for the George Washington University’s Institute for Data, Democracy and Politics.
Such recommendation programs look at a user's history of what they're watching or listening to, compare that behavior to other similar users, and serve suggestions based on what those other users like.
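The approach the article describes is a form of collaborative filtering. A minimal sketch of the idea, with invented users and titles (none of this reflects how any particular streaming service actually implements it): compare a viewer's watch history to other users' histories, then suggest titles the most similar users watched.

```python
# Hypothetical user-based collaborative filtering sketch.
# All user names and titles below are made up for illustration.

def similarity(history_a, history_b):
    """Jaccard similarity: how much two users' watch histories overlap."""
    a, b = set(history_a), set(history_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target, histories, top_n=2):
    """Suggest titles that similar users watched but the target hasn't."""
    watched = set(histories[target])
    scores = {}
    for user, history in histories.items():
        if user == target:
            continue
        sim = similarity(histories[target], history)
        # Weight each unseen title by how similar its watcher is to the target.
        for title in set(history) - watched:
            scores[title] = scores.get(title, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

histories = {
    "alice": ["Stranger Things", "The Crown", "Dark"],
    "bob":   ["Stranger Things", "Dark", "Black Mirror"],
    "carol": ["The Crown", "Bridgerton"],
}
print(recommend("alice", histories))  # → ['Black Mirror', 'Bridgerton']
```

Here "alice" overlaps more with "bob" than with "carol", so bob's unseen title ranks first. Production systems use the same intuition over millions of users, typically with learned embeddings rather than raw set overlap.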
But Broniatowski, who has been studying artificial intelligence for over a decade, warns that what we don’t know about AI can be dangerous.
“Treat it like a very, very sophisticated knife. You can use a knife to help cook your dinner, you can use a knife to hurt someone,” he said. “There are people out there who will use [AI] to cause harm. We've seen that in terms of things like the employment of armies of bots, to engage in online harassment or to engage in attempts to manipulate political outcomes,” Broniatowski said.
Lieu said that his prompt, in which he asked ChatGPT to write as if it were the congressman himself, is just one example of how deceptive AI can be.
“It's really hard right now, with ChatGPT, for a professor or a teacher to know if an essay was written by your student, or by a computer. And it's only going to get even harder because ChatGPT will have a new version coming out in a couple months that's going to be even better at writing essays and articles,” Lieu said.
As Broniatowski explained, ChatGPT is trained on vast swaths of internet text, which it uses to predict what it should produce next in response to a prompt. It’s not unlike predictive text on a smartphone. But with those learning inputs can come falsehoods and biases the technology cannot understand.
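The predictive-text comparison can be made concrete. A toy next-word predictor (a vastly simplified stand-in for what large language models do with billions of parameters) just counts which word follows which in its training text and emits the most frequent successor; the training sentence here is invented:

```python
# Toy next-word predictor: the same core idea as smartphone predictive
# text, scaled down. It has no notion of truth — it only repeats the
# statistics of whatever text it was trained on.

from collections import Counter, defaultdict

def train(text):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict(counts, word):
    """Return the most frequent word seen after `word`, or None."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # → cat  ("cat" follows "the" most often)
```

Note what this illustrates about Broniatowski's point: if the training text contained a falsehood, the model would reproduce it just as confidently, because frequency, not accuracy, is all it measures.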
“The biggest sort of obvious thing is that the model will tell you things that just aren’t true. But the model doesn’t know if it’s true or not, because the model doesn’t have the ability to tell the difference,” Broniatowski said. The model can only repeat what’s been fed into it — and, in the worst of hands, that might include conspiracy theories and debunked misinformation.
“It's not like somebody's gone into the training data and checked each one of those billions of documents, and said, 'Well, this one's okay, this one is not okay,'” Broniatowski said. “The fact of the matter is that the people who are designing these are making value judgments, and they may not even know it.”
Broniatowski is encouraged by Congress’ attention to the issue, saying that while legislation like Lieu’s may open AI to politicization, new policy is necessary.
“I think we need a concerted research effort, in order to really understand how the models work, and how the data that go into the models shape the output, and whether we need to think about putting potential restrictions on what data can go in … or at the very least warning labels, so people know what they're getting themselves into,” said Broniatowski.
As for how Congress will respond to the rise in AI use, House Speaker Kevin McCarthy told reporters in January that “those on the Intel committee, Republican and Democrat alike, take the courses for AI and Quantum, the same courses our generals in our military take,” to protect national security.