AI should not be a black box

Spats at OpenAI highlight the need for companies to become more transparent

[Image caption: Sam Altman, chief executive of OpenAI. Researchers once released papers on their work, but the rush for market share has ended such disclosures.]

Proponents and detractors of AI tend to agree that the technology will change the world. The likes of OpenAI’s Sam Altman see a future where humanity will flourish; critics prophesy societal disruption and excessive corporate power. Which prediction comes true depends in part on foundations laid today. Yet the recent disputes at OpenAI — including the departure of its co-founder and chief scientist — suggest key AI players have become too opaque for society to set the right course.

An index developed at Stanford University finds transparency at AI leaders Google, Amazon, Meta and OpenAI falls short of what is needed. Though AI emerged through collaboration by researchers and experts across platforms, the companies have clammed up since OpenAI’s ChatGPT ushered in a commercial AI boom. Given the potential dangers of AI, these companies need to revert to their more open past.

Transparency in AI falls into two main areas: the inputs and the models. Large language models, the foundation for generative AI such as OpenAI’s ChatGPT or Google’s Gemini, are trained by trawling the internet to analyse and learn from “data sets” that range from Reddit forums to Picasso paintings. In AI’s early days, researchers often disclosed their training data in scientific journals, allowing others to diagnose flaws by weighing the quality of inputs.

Today, key players tend to withhold the details of their data to protect against copyright infringement suits and eke out a competitive advantage. This makes it difficult to assess the veracity of responses generated by AI. It also leaves writers, actors and other creatives without insight into whether their privacy or intellectual property has been knowingly violated.

The models themselves lack transparency too. How a model interprets its inputs and generates language depends upon its design. AI firms tend to see the architecture of their model as their “secret sauce”: the ingenuity of OpenAI’s GPT-4 or Meta’s Llama pivots on the quality of its computation. AI researchers once released papers on their designs, but the rush for market share has ended such disclosures. Yet without an understanding of how a model functions, it is difficult to rate an AI’s outputs, limits and biases.

All this opacity makes it hard for the public and regulators to assess AI safety and guard against potential harms. That is all the more concerning as Jan Leike, who helped lead OpenAI’s efforts to steer super-powerful AI tools, claimed after leaving the company this month that its leaders had prioritised “shiny products” over safety. The company has insisted it can regulate its own product, but its new security committee will report to the very same leaders.

Governments have started to lay the foundation for AI regulation through a conference last year at Bletchley Park, President Joe Biden’s executive order on AI and the EU’s AI Act. Though welcome, these measures focus on guardrails and “safety tests”, rather than full transparency. The reality is that most AI experts are working for the companies themselves, and the technologies are developing too quickly for periodic safety tests to be sufficient. Regulators should call for model and input transparency, and experts at these companies need to collaborate with regulators.

AI has the potential to transform the world for the better — perhaps with even more potency and speed than the internet revolution. Companies may argue that transparency requirements will slow innovation and dull their competitive edge, but the recent history of AI suggests otherwise. These technologies have advanced on the back of collaboration and shared research. Reverting to those norms would only serve to increase public trust, and allow for more rapid, but safer, innovation.
