As artificial intelligence becomes more powerful and more mainstream, even the White House has focused attention on tackling the ethical and safety concerns that accompany its development.
Last week, Vice President Kamala Harris and, for a brief time, President Joe Biden met with industry leaders to talk about how to advance the technology while safeguarding Americans’ rights. Speaking to executives from Google, Microsoft, OpenAI and the start-up Anthropic, they urged caution regarding the rapidly advancing technology.
“The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products,” Harris said afterward in a statement. “And every company must comply with existing laws to protect the American people.”
Before the meeting, the White House announced new initiatives for promoting responsible artificial intelligence. First, the National Science Foundation will invest $140 million to build seven new National A.I. Research Institutes, bringing the total number to 25 across the country. And this summer, the government will accept public comments on a draft of guidelines for A.I. use in federal agencies.
Additionally, the administration said leading developers have agreed to participate in a public evaluation of their A.I. models, which would allow “thousands of community partners and A.I. experts” to assess how they comply with guidelines released by the government last year, per a statement.
“Artificial Intelligence is one of the most powerful tools of our time, but to seize its opportunities, we must first mitigate its risks,” President Biden wrote on Twitter on May 4, 2023. “Today, I dropped by a meeting with AI leaders to touch on the importance of innovating responsibly and protecting people’s rights and safety.”
The White House’s focus on A.I. comes as the industry explodes with new innovations such as ChatGPT, Google Bard and DALL-E. OpenAI’s ChatGPT, a chatbot released in November 2022, quickly emerged as a leader in the field and set the record for the fastest-growing application in history, hitting one million users just five days after its launch. ChatGPT’s potential to disrupt the industry alarmed competitors (Google issued a “code red” in response) and triggered an A.I. “arms race,” wrote Kevin Roose for the New York Times in February.
But the speed at which A.I. technology is developing has raised red flags among industry leaders for other reasons, too, such as its potential to spread misinformation, manipulate users and upend the job market.
Last week, artificial intelligence pioneer Geoffrey Hinton announced he was leaving his part-time position at Google so he could more freely speak about the dangers of A.I.
And in March 2023, thousands of leaders, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter urging a six-month pause on major A.I. projects. Before progress continues, they argued, independent experts should develop and put in place shared safety protocols for A.I. design.
“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter states.
Robert Weissman, president of the consumer rights nonprofit Public Citizen, tells the Guardian’s Dan Milmo that the White House’s announcement is a “useful step,” but it doesn’t go far enough.
“At this point, Big Tech companies need to be saved from themselves,” he tells the Guardian. “The companies and their top A.I. developers are well aware of the risks posed by generative A.I. But they are in a competitive arms race, and each believes themselves unable to slow down.”