AI desperately needs more adult supervision

The critical challenge is to build institutions that protect us from tech companies and the state
{"text":[[{"start":6.2,"text":"The riveting legal case currently being heard in the Ronald V Dellums courthouse in Oakland is nailing the lie that self-regulation of frontier AI is sufficient. The world’s richest man, Elon Musk, has accused OpenAI’s chief executive Sam Altman of mendaciously reneging on the AI lab’s founding agreement as a charity and unjustly enriching himself. Altman’s lawyers have countered that Musk, an early funder of OpenAI who later founded the rival xAI, is an unreliable witness because of memory lapses caused by his use of the drug “rhino ket.”"}],[{"start":40.85,"text":"At times, the testimony has resembled the personal insults and grudges of a schoolyard scrap, even though the fate of multibillion-dollar businesses is at stake. But the case has also shone an unforgiving light on how two of the world’s best-funded frontier AI labs are run. Do we really want such win-at-all-cost billionaires in unconstrained charge of developing the most powerful technology of our times? No matter what their entrepreneurial talents, both Musk and Altman are in clear and desperate need of adult supervision."}],[{"start":72,"text":"Whether their companies’ boards or the US federal government are willing and able to provide such supervision is another question. Several tech titans may have bent the proverbial knee at President Donald Trump’s inauguration, suggesting they were subordinate to the White House. But their gesture was also a calculated trade: they played Trump when it came to AI regulation. In Trump’s second term “permissionless innovation” has been the administration’s mantra as the US races to outpace China. At present, more regulatory restrictions are imposed on nail salons than frontier AI companies. "}],[{"start":106.65,"text":"However, Anthropic’s controlled release of Claude Mythos, a frontier model capable of finding thousands of cyber security vulnerabilities, has made it impossible to ignore the national security risks. “Mythos has changed the debate,” Dean Ball, who helped draft Trump’s 2025 AI Action Plan, tells me. “We cannot pretend that catastrophic risks do not exist. And catastrophic risks obviously implicate the state.”"}],[{"start":134.9,"text":"Ball, who now writes the Hyperdimensional newsletter, says he is opposed to almost all AI regulation except for protecting against catastrophic risks, such as mass cyber attacks or bioweapons. It is the responsibility of states to prevent catastrophic risks wherever possible. Market actors have no incentive to do so."}],[{"start":154.8,"text":"But Ball is equally concerned by what happens if the Leviathan wakes and the state acquires too much power over frontier AI. Not only would the state monopolisation of advanced AI be an unprecedented instrument of government tyranny, it might also mean humanity would lose many of the undoubted benefits of AI. “This is the ultimate dystopia to avoid,” he writes."}],[{"start":177.5,"text":"The critical challenge is how to create independent institutions that simultaneously protect us from AI companies and the state. A partial answer, at least, has been sketched out in an intriguing paper by Christoph Winter and Charlie Bullock at the Institute for Law and AI. Their argument is that societies need to develop what they call radical optionality."}],[{"start":197.75,"text":"The “let the market rip” approach pursued in the US carries the dangers outlined by Ball. 
But the EU’s insistence on the precautionary principle means it is prematurely trying to regulate a technology that is still evolving and risks stifling the industry."}],[{"start":212.65,"text":"The authors’ alternative solution is to build well-funded expert institutions, secure information-sharing channels, whistleblower protections, lab security standards and legal authorities to monitor emerging risks. That will give well-informed authorities options to intervene at critical points. One such institution is the UK’s AI Security Institute, which emerged from the Bletchley Park summit hosted by the then British prime minister, Rishi Sunak, in 2023. The institute, which is being emulated around the world, has provided the valuable testing and evaluation of Mythos."}],[{"start":249.10000000000002,"text":"The cautious way in which Anthropic released Mythos, shared the technology with more than 40 partner organisations and launched Project Glasswing to help identify and plug security gaps shows self-regulation can be important. But, as Winter points out, Anthropic was under no external obligation to act that way. “We should be very, very worried about any concentrations of power in the age of AI,” he says."}],[{"start":273.65000000000003,"text":"Even Anthropic’s chief executive Dario Amodei is concerned about the “chaos agents” and bad actors who can misuse AI and is explicitly calling for stricter regulation of frontier models. We urgently need to create options for doing so."}],[{"start":295.65000000000003,"text":""}]],"url":"https://audio.ftcn.net.cn/album/a_1778888988_7353.mp3"}

Copyright notice: The copyright of this article belongs to FT中文网. Without permission, no organisation or individual may reproduce, copy or otherwise use this article in whole or in part; infringement will be pursued.