AI should not be a black box - FT中文网

Spats at OpenAI highlight the need for companies to become more transparent
[Image caption: Sam Altman, chief executive of OpenAI. Researchers once released papers on their work, but the rush for market share has ended such disclosures]

Proponents and detractors of AI tend to agree that the technology will change the world. The likes of OpenAI’s Sam Altman see a future where humanity will flourish; critics prophesy societal disruption and excessive corporate power. Which prediction comes true depends in part on foundations laid today. Yet the recent disputes at OpenAI, including the departure of its co-founder and chief scientist, suggest key AI players have become too opaque for society to set the right course.

An index developed at Stanford University finds transparency at AI leaders Google, Amazon, Meta and OpenAI falls short of what is needed. Though AI emerged through collaboration by researchers and experts across platforms, the companies have clammed up since OpenAI’s ChatGPT ushered in a commercial AI boom. Given the potential dangers of AI, these companies need to revert to their more open past.

Transparency in AI falls into two main areas: the inputs and the models. Large language models, the foundation for generative AI such as OpenAI’s ChatGPT or Google’s Gemini, are trained by trawling the internet to analyse and learn from “data sets” that range from Reddit forums to Picasso paintings. In AI’s early days, researchers often disclosed their training data in scientific journals, allowing others to diagnose flaws by weighing the quality of inputs.

Today, key players tend to withhold the details of their data to protect against copyright infringement suits and eke out a competitive advantage. This makes it difficult to assess the veracity of responses generated by AI. It also leaves writers, actors and other creatives without insight into whether their privacy or intellectual property has been knowingly violated.

The models themselves lack transparency too. How a model interprets its inputs and generates language depends upon its design. AI firms tend to see the architecture of their model as their “secret sauce”: the ingenuity of OpenAI’s GPT-4 or Meta’s Llama pivots on the quality of its computation. AI researchers once released papers on their designs, but the rush for market share has ended such disclosures. Yet without an understanding of how a model functions, it is difficult to rate an AI’s outputs, limits and biases.

All this opacity makes it hard for the public and regulators to assess AI safety and guard against potential harms. That is all the more concerning as Jan Leike, who helped lead OpenAI’s efforts to steer super-powerful AI tools, claimed after leaving the company this month that its leaders had prioritised “shiny products” over safety. The company has insisted it can regulate its own product, but its new security committee will report to the very same leaders.

Governments have started to lay the foundation for AI regulation through a conference last year at Bletchley Park, President Joe Biden’s executive order on AI and the EU’s AI Act. Though welcome, these measures focus on guardrails and “safety tests” rather than full transparency. The reality is that most AI experts work for the companies themselves, and the technologies are developing too quickly for periodic safety tests to be sufficient. Regulators should call for model and input transparency, and experts at these companies need to collaborate with regulators.

AI has the potential to transform the world for the better, perhaps with even more potency and speed than the internet revolution. Companies may argue that transparency requirements will slow innovation and dull their competitive edge, but the recent history of AI suggests otherwise. These technologies have advanced on the back of collaboration and shared research. Reverting to those norms would only serve to increase public trust, and allow for more rapid, but safer, innovation.

[Audio: https://creatives.ftacademy.cn/album/156972-1717174727.mp3]

Copyright notice: This article is copyrighted by FT中文网. Without permission, no organisation or individual may reproduce, copy or otherwise use all or part of this article; infringement will be pursued.
