Artificial Intelligence

Voice phishing is AI fraud in real time

Just as we learnt to treat emails with caution, we must now learn to doubt a human-sounding voice

The writer is an AI researcher at Bramble Intelligence and worked on the State of AI Report 2025

Until recently, building an artificial intelligence system that could hold a convincing phone conversation was a laborious task. You had to combine separate tools for speech recognition, language processing and speech synthesis, all linked through fragile telephony software.

This is no longer true. The arrival of real-time, speech-native AI models such as OpenAI’s Realtime API, launched last year, means a system that once required multiple components can now be created in minutes.

Publicly available code can connect these models to a phone line. The AI model listens, “thinks” and responds in an instant. The result is a synthetic voice that can converse fluently, improvise naturally and sustain a dialogue in a way that feels human.

In the past year we have moved from the theoretical possibility of widescale AI-enabled voice phishing — or vishing — scams to the reality. Last year, UK engineering firm Arup was defrauded of $25mn in a deepfake scam, while a vishing attack on Cisco succeeded in extracting information from a cloud-based customer relationship management system it used.

What once demanded expert knowledge is now available, pre-packaged, for anyone to exploit. Low-latency voice-native models have removed the final technical barriers to real-time AI voice fraud.

In testing, it took me only a few lines of instruction to make such a system act like an HR manager calling about the payroll or a fraud officer warning of suspicious activity. Because AI can reason and change strategy in real time, its manipulation is adaptive.

The technology itself has legitimate uses, such as healthcare follow-ups, customer service or language tutoring. But the same accessibility that enables innovation also enables harm. A single operator could in theory launch hundreds of thousands of fraudulent calls a day, each one tailored to its target.

This threat is compounded by the increasing realism and low costs of platforms like ElevenLabs or Cartesia, which can facilitate voice cloning with very short audio samples.

In the case of public figures, it is possible — and relatively easy — to gather hours of audio and produce a compelling approximation of their voice without their knowledge. Public officials have already been impersonated in such attacks, according to the FBI. It has warned the public not to assume that messages claiming to be from a senior US official are authentic.

MIT’s Risk Repository, a database of over 1,600 AI risks, shows that in the past five years, the proportion of AI incidents associated with fraud has increased from around 9 per cent to around 48 per cent.

The scale of this cyber crime means voice-verification systems that identify customers by their speech patterns are now a liability. Sensitive requests and high-value transactions should require multi-factor verification that does not depend on how someone sounds.

For the rest of us, the lesson is simple: the voice on the other end of the line is no longer evidence of who is speaking. Just as we have learnt to treat emails with caution, we must now learn to doubt a human-sounding voice. In time, we may need to create vocal watermarks or digital signatures that verify speech as genuine.

Debates around AI are sometimes framed in existential terms. But it is the smaller risks that will reach us first.

Fraud and impersonation corrode trust in everyday communication. These supposedly mundane crimes are the front line of the AI transition. The same ingenuity that created the tools must be applied to securing them.

The real disruption of generative AI — the quiet, invisible kind — has already arrived. It will not announce itself with superhuman intelligence but with a phone call.

Copyright notice: The copyright of this article belongs to FT中文网. Without permission, no organisation or individual may reproduce, copy or otherwise use all or part of this article; infringement will be pursued.
