Sam Altman, CEO of ChatGPT-maker OpenAI, recently told a Senate hearing that requiring government approval to release powerful artificial intelligence software would be “disastrous” for the U.S. lead in AI technology. This marks a significant shift from his stance two years ago, when he advocated creating a new agency to license AI technology. Altman’s change in position reflects a broader transformation in how tech companies and the U.S. government discuss AI: warnings that AI poses an existential risk and calls for preemptive regulation have diminished, replaced by a consensus among top tech executives and Trump administration officials that the U.S. must let companies move faster to capture AI’s economic benefits and maintain its edge over China.

Critics warn that AI is already harming individuals and society. Researchers have shown that AI systems can perpetuate racism and other biases, and a bipartisan bill criminalizing the posting of nonconsensual sexual images, including AI-generated ones, passed in April. Rumman Chowdhury, a former U.S. science envoy for AI, said the tech industry’s focus on existential concerns distracted lawmakers from addressing real-world harms. The industry’s approach enabled a “bait and switch,” she said, in which executives pitched regulation around concepts like self-replicating AI before shifting their emphasis to the need to outpace China.

Early warnings about superintelligence gained traction in Washington, but the Trump administration swiftly reversed the AI agenda set under President Joe Biden. Trump repealed Biden’s AI executive order on his first day back in office and appointed several Silicon Valley figures to his administration. Altman’s statements reflect how tech companies have aligned with the administration’s tone on AI risks and regulation: Microsoft President Brad Smith now advocates a “light touch” regulatory framework, and Google’s AI lab DeepMind scrapped a pledge not to develop AI for weapons or surveillance.

Max Tegmark, a professor at MIT who researches AI, criticized the lack of AI regulation in the U.S., noting that while sandwich shops must meet safety standards, AI companies are free to release potentially dangerous technology. — new from The Washington Post
