At the beginning of the current AI boom, governments seemed open to the doomers’ point of view. In 2023, the UK organized an event at Bletchley Park with a heavy focus on safety from “frontier risks,” including existential threats. By the third iteration of the event, in Paris this February, it had been rebranded as the “AI Action Summit,” with an agenda laden with calls for investment and innovation.
The Biden White House issued an executive order in 2023 mandating that American AI companies share their safety research with the US government so it could independently assess their risks. This year, by contrast, a draft of the Trump-backed Big Beautiful Bill included a proposal to pause state and local AI regulation for a decade.
And slowing down or stopping is anathema to Silicon Valley’s ethos. The AI boom has already drawn hundreds of billions of dollars in investment.
The industry’s biggest players want hundreds of billions more. To convince investors that bet will pay off, you need to promise a moonshot: nothing less than the total reworking of the global knowledge economy.
“I don’t think the risk has been re-evaluated downward,” said Hélène Landemore, a professor of political science at Yale who studies the politics and ethics of AI. “It’s more that the economic interests are so enormous, they dwarf the fear.”
Read more | BLOOMBERG