Apps to Detect AI-Generated Voice Scams
As artificial intelligence rapidly evolves, Japanese telecommunications companies are stepping up efforts to combat the rising threat of deepfake audio: realistic AI-generated voice impersonations used in fraud and misinformation campaigns. The move comes amid growing concern over AI-enabled scams, especially impersonation calls that trick individuals and organizations into divulging sensitive information or transferring money.
At the forefront of this initiative, NTT East Japan (NTT東日本) has partnered with NABLAS, an AI startup spun out of the University of Tokyo, to develop and test detection technology specifically designed to identify synthetic phone calls generated by AI tools. The project is part of a broader national effort backed by Japan’s Ministry of Internal Affairs and Communications to build systems that help counter “fake information” spreading over phone networks and social platforms.
Why Deepfake Audio Detection Matters
AI voice synthesis has become so advanced that even moderately trained models can produce speech nearly indistinguishable from real human voices. Such technology has been exploited in social engineering attacks, where scammers impersonate trusted figures like family members or executives to defraud victims or bypass security protocols. In some markets, fraud losses linked to AI-enabled scams have grown dramatically, highlighting the urgency for practical defenses.
Traditional fraud detection methods struggle against these highly realistic audio deepfakes, prompting the need for specialized solutions capable of identifying subtle artifacts in synthetic speech that are invisible to the human ear.
How the New Detection System Works
The system under development by NTT East and NABLAS focuses on integrating deepfake audio detection directly into telephone communication services and related applications. According to developers:
- The tool analyzes incoming and outgoing voice streams to flag likely AI-generated content.
- It is trained to remain effective across a range of telephony conditions including different audio formats and network quality variations.
- The technology is being trialed in real phone apps to confirm it operates reliably in everyday use.
- If a deepfake is detected, the app can alert users or associated systems to take corrective action.
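The article does not describe the detector’s internals, which in practice would be a trained neural model. As a purely illustrative sketch of the flag-per-call workflow above, the toy code below scores audio frames with a single hand-crafted cue (spectral flatness, one weak signal sometimes cited for synthetic speech) and raises a flag when the average score crosses a hypothetical threshold. The function names, the cue, and the threshold are all assumptions, not NTT East’s or NABLAS’s method.

```python
import math

def spectral_flatness(frame):
    """Geometric mean / arithmetic mean of the power spectrum (0..1).
    A single weak cue used here only for illustration; production
    detectors combine many learned features."""
    n = len(frame)
    mags = []
    # Naive O(n^2) DFT magnitudes — fine for a short illustrative frame.
    for k in range(1, n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(re * re + im * im + 1e-12)  # epsilon avoids log(0)
    geo = math.exp(sum(math.log(m) for m in mags) / len(mags))
    arith = sum(mags) / len(mags)
    return geo / arith

def flag_stream(frames, threshold=0.5):
    """Flag a call if the mean frame score crosses a (hypothetical)
    threshold — standing in for a trained model's decision."""
    score = sum(spectral_flatness(f) for f in frames) / len(frames)
    return score > threshold, score
```

A pure tone yields a flatness near 0 while broadband noise scores much higher, which is the kind of separation a real model would learn across far richer features and telephony conditions.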
In addition to the core phone detection tool, the consortium plans to develop broader misinformation prevention systems aimed at local government use. These include tools to verify creator identity through decentralized identifiers, detect manipulated images or video, and use watermarking and AI-assisted fact-checking to curb the spread of deceptive media.
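The verification workflow behind the watermarking idea can be illustrated with a deliberately simplistic integrity tag. Real audio watermarks are embedded inside the signal itself and are designed to survive re-encoding; the HMAC sketch below only demonstrates the publish-then-verify flow, and every name in it (key handling, tag layout) is an assumption for illustration, not part of the consortium’s design.

```python
import hmac
import hashlib

TAG_LEN = 32  # SHA-256 digest length in bytes

def embed_tag(audio_bytes, key):
    """Append an HMAC tag to the media — a crude stand-in for a
    robust, in-signal audio watermark."""
    return audio_bytes + hmac.new(key, audio_bytes, hashlib.sha256).digest()

def verify_tag(tagged, key):
    """Return True only if the media has not been altered since tagging."""
    body, tag = tagged[:-TAG_LEN], tagged[-TAG_LEN:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

Tampering with even one byte of the tagged media causes verification to fail, which mirrors (in a much weaker form) how watermark checks let a recipient distrust altered media.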
Pilot Programs and Broader Deployment
The first real-world evaluations are scheduled to take place in Nagano Prefecture’s Ina City, where telephony and misinformation detection will be tested in collaboration with municipal authorities. Results from these pilot programs will inform future rollouts that could integrate directly with consumer phone services and public safety systems.
While NTT East’s project is among the most high-profile public efforts, other Japanese technology and security firms have also been exploring AI-powered fraud detection, reflecting a wider industry trend toward strengthening defenses against synthetic media misuse.
The Stakes Are Rising
Experts say these innovations are critical as AI-driven scams grow more sophisticated. Deepfake audio has the potential not only to defraud individuals but also to disrupt business communications and weaken trust in digital channels. Detection tools built into telecommunication infrastructure could become a frontline defense, giving users and businesses an early warning when something sounds “off.”
As Japan and the world navigate this rapidly changing landscape, the development of real-time deepfake audio detection may soon move from cutting-edge research to an expected part of everyday communication technology.
Originally written by NHK World
Link to the article: https://www3.nhk.or.jp/nhkworld/en/news/20260105_B4/