Advancements in voice cloning technology have raised concerns about its potential misuse for election fraud, misinformation, impersonation, and identity theft.
Visar Berisha, an associate dean of research and commercialization at Arizona State University’s Ira A. Fulton Schools of Engineering, participated in a roundtable discussion on voice cloning held by the United States National Security Council in Washington, D.C., this month. The discussion included over 20 experts from academia, industry, and government who examined both the applications and potential misuses of voice cloning.
Berisha leads a team that recently won the U.S. Federal Trade Commission's Voice Cloning Challenge. His team developed OriginStory, a new type of microphone that verifies human speech and watermarks it as authentically human. The team includes ASU faculty members Daniel Bliss and Julie Liss.
In an interview with ASU News, Berisha explained the ease with which voice cloning can be achieved: "Anyone can do this... You would go to a company like ElevenLabs... upload the short clip of my voice, and then you would use it to generate whatever audio you wanted to generate."
Berisha highlighted that while companies take steps to prevent unauthorized use of their services, such as requiring permission from the person whose voice is being cloned, it remains quite easy overall to clone a voice from publicly available recordings.
He noted that although voice cloning technology has existed for about a decade, recent advancements have made it difficult to distinguish between real and fake speech. "Until very recently...you’re always willing to trust it. But that’s changing now," he said.
While acknowledging some positive uses for voice cloning—such as aiding individuals who lose their voices due to medical conditions—Berisha emphasized his concerns about its harmful applications. He cited examples including scams targeting elderly individuals and attempts at election interference through cloned voices.
Discussing potential preventive measures against fraud enabled by this technology, Berisha mentioned AI systems designed to detect fake audio, but noted that rapid improvements in deepfake technology make detection difficult. Another approach is watermarking: either marking AI-generated content when it is synthesized, or marking human speech as it is recorded using devices like the OriginStory microphone.
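The article does not describe how OriginStory embeds its watermark, but the general idea of marking audio at the point of capture and verifying it later can be illustrated with a textbook spread-spectrum scheme: a low-amplitude pseudorandom sequence, derived from a secret key, is added to the recording and later detected by correlation. The Python sketch below is a simplified illustration under those assumptions; the function names, key handling, signal parameters, and threshold are invented for clarity and do not represent the team's actual method.

import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> np.ndarray:
    """Add a low-amplitude pseudorandom sequence, derived from a secret key,
    to the recorded signal (hypothetical illustration, not OriginStory's method)."""
    rng = np.random.default_rng(key)
    mark = rng.standard_normal(audio.shape[0])
    return audio + strength * mark

def detect_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> bool:
    """Regenerate the keyed sequence and test for it by correlation.
    For marked audio the average sample-wise product is close to `strength`;
    for unmarked audio it stays near zero, so half of `strength` serves as a
    simple decision threshold."""
    rng = np.random.default_rng(key)
    mark = rng.standard_normal(audio.shape[0])
    score = float(np.dot(audio, mark) / audio.shape[0])
    return score > strength / 2

if __name__ == "__main__":
    # Simulate one second of 16 kHz "speech" and check the round trip.
    rng = np.random.default_rng(0)
    speech = 0.1 * rng.standard_normal(16_000)
    marked = embed_watermark(speech, key=42)
    print(detect_watermark(marked, key=42))   # True: keyed watermark found
    print(detect_watermark(speech, key=42))   # False: no watermark present

A deployed system would also need the mark to survive compression, resampling, and re-recording; the correlation test above only demonstrates the basic embed-and-verify loop.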
Asked whether mobile devices might one day ship with specialized microphones like OriginStory's to help prevent misuse of voice cloning, Berisha expressed hope for collaboration with large mobile phone manufacturers.
Looking ahead, Berisha does not expect the growth of voice cloning technology to slow: "We’re sort of at the point of exponential increase...I think over time people will start developing some vigilance because they’ll know of its existence."