For the past several weeks, Senate Majority Leader Chuck Schumer has met with at least 100 experts to craft legislation that would regulate artificial intelligence (AI) and install safeguards around it.
Tech experts testified at a Senate hearing on May 16 and laid bare the risks posed by the exploding advances of AI, and it’s clear that Congress is facing challenges in keeping up.
Congress has struggled to regulate technology before. Lawmakers missed windows to create guardrails for the internet and social media that could have prevented the spread of disinformation online. Why? Most members didn’t fully understand the technology and couldn’t figure out how to solve these problems.
Sen. Josh Hawley, R-Mo., wants to be involved in the development of AI law, but admits he’s got some catching up to do.
“I’ve got to get educated,” he said. “For me right now, the power of AI to influence elections is a huge concern. So I think we’ve got to figure out what is the threat level there, and then what can we reasonably do about it?”
Ifeoma Ajunwa, co-founder of an AI research program at the University of North Carolina at Chapel Hill, says there aren’t enough experts in both computer science and law on Capitol Hill, and that this makes AI lawmaking all the more challenging.
“AI, or automated decision-making technologies, are advancing at breakneck speed,” she said. “There is this AI race… yet… the regulations are not keeping pace.”
Now, as AI steamrolls its way into our society, lawmakers are having to play catch up with the technology that even the savviest of users are still trying to understand.
Sam Altman, CEO of OpenAI – the company behind ChatGPT, the application that uses AI to write text in response to prompts – testified at the hearing that even he recognizes the dangers of AI and believes the government can help mitigate them.
This isn’t based on hearsay. This is a real threat. AI can be used to sway public opinion, impersonate world leaders, and spread disinformation, such as false medical advice.
There’s also fear that AI-driven weapons could spiral out of human control – up to and including nuclear war. Terrifying.
The hearing earlier this month raised two solutions for Congress.
First, Congress should mandate disclosure and choice. What does that mean? AI-generated material should specify that it was generated by AI, especially since these systems have been known to fabricate information – a phenomenon the industry calls hallucination.
Second, there should be a federal agency that will license AI products. An independent group of scientists working for the agency would test the AI products and question the companies to address potential safety risks before the products can be used by the public.
These two solutions can and should be implemented by Congress. Lawmakers need to regulate now to help mitigate dangers that may come later.
We have got to put the brakes on the runaway train that is artificial intelligence, and that isn’t possible if our lawmakers don’t understand what they’re regulating.