It is time AI started to play by the rules

Creating regulations for something so fast-changing is difficult, but that is no reason not to try

Late last year, California almost passed a law that would force makers of large artificial intelligence models to come clean about their potential for causing large-scale harms. It failed. Now New York is trying a law of its own. Such proposals have wrinkles and risk slowing the pace of innovation. But they are still better than doing nothing.

The risks from AI have increased since California’s fumble last September. Chinese developer DeepSeek has shown that powerful models can be made on a shoestring. Engines capable of complex “reasoning” are supplanting those that simply spit out quick-fire answers. And perhaps the biggest shift: AI developers are furiously building “agents”, designed to carry out tasks and engage with other systems, with minimal human supervision.

How to create rules for something so fast-moving? Even deciding what to regulate is a challenge. Law firm BCLP has tracked hundreds of bills on everything from privacy to accidental discrimination. New York’s bill focuses on safety: large developers would have to create plans to reduce the risk that their models produce mass casualties or large financial losses, withhold models that present “unreasonable risk” and notify the state authorities within three days when an incident occurs.

Even with the best intentions, laws governing new technologies can end up ageing like milk. But as AI scales up, so do the concerns. A report published on Tuesday by a band of California AI luminaries outlines a few: for example, OpenAI’s o3 model outperforms 94 per cent of expert virologists. Evidence that a model could facilitate the production of chemical or nuclear weapons, it adds, is emerging in real time.

Disseminating dangerous information to bad actors is only one danger. Models’ adherence to users’ objectives is also raising concerns. Already, the California report notes mounting evidence of “alignment scheming”, where models follow orders in the lab, but not in the wild. Even the pope fears AI could pose a threat to “human dignity, justice and labour.”

Many AI boosters disagree, of course. Venture capital firm Andreessen Horowitz, a backer of OpenAI, argues rules should target users, not models. That lacks logic in a world where agents are designed to act with minimal user input.

Nor does Silicon Valley appear willing to meet in the middle. Andreessen has described the New York law as “stupid”. A lobby group it founded has proposed that New York’s law exempt any developer with $50bn or less of AI-specific revenue, Lex has learned. That would spare OpenAI, Meta and Google — in other words, everyone of substance.

Big Tech should reconsider this stance. Guardrails benefit investors too, and there is scant likelihood of meaningful federal rulemaking. As Lehman Brothers or AIG’s former shareholders can attest, backing a company that brings about systemic calamity is no fun.

The path ahead involves much horse-trading; New York governor Kathy Hochul has until the end of 2025 to request amendments to the state’s bill. Some Republicans in Congress have proposed blocking states from regulating AI altogether. And with every week that passes, AI reveals new powers. The regulatory landscape is a mess, but leaving it to chance will create one far bigger and harder to clean up.
