Google has launched a speaker identification system aimed at commercial call centres – while leaving some biometric security questions unanswered.
According to the Chocolate Factory, Speaker ID is a way of identifying callers using just their voice, avoiding annoying and time-consuming ID checks by call centre agents. It even works, so Google says, without requiring a special phrase or password – voices can be identified from a sample of natural speech.
The omnipresent search and email giant has launched the product in conjunction with its existing Contact Center AI (CCAI), claiming the speech bot package has created a 20 to 35 per cent call deflection away from agents, between $1.3m and $3.7m productivity gains per centre in reduced average call times, and up to 75 per cent reduction in effort to manage contact centre solutions. All this according to a study commissioned by Google and conducted by Forrester Consulting in August 2020.
In Google’s plan for Speaker ID, callers new to the system will have to first enrol, which starts with authentication using the existing process, after which callers provide a brief voice sample that can be used “in lieu of the old process to fast track verification in the future.”
Google said Cloud Speaker ID is “text independent”, meaning that once a user is enrolled, they can be identified from any audio snippet as short as three seconds. No password phrase or special text is required. Existing partners seem pleased, at least according to this pre-canned quote.
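Google hasn't published how Speaker ID works under the hood, but text-independent speaker verification is typically done by extracting a fixed-length voice embedding from an audio sample and comparing it to the embedding stored at enrolment. Here is a minimal sketch of that general technique – the function names, dimensions, and threshold are illustrative assumptions, not Google's API, and real systems derive the embeddings from a trained neural network rather than raw audio:

```python
import numpy as np

def enroll(embedding: np.ndarray) -> np.ndarray:
    """Store an L2-normalised voiceprint computed from the enrolment sample."""
    return embedding / np.linalg.norm(embedding)

def verify(voiceprint: np.ndarray, embedding: np.ndarray,
           threshold: float = 0.7) -> bool:
    """Accept the caller if cosine similarity to the stored voiceprint
    meets an operator-chosen threshold (0.7 is an arbitrary example)."""
    candidate = embedding / np.linalg.norm(embedding)
    return float(np.dot(voiceprint, candidate)) >= threshold

# Toy demonstration with synthetic 256-dimensional "embeddings"
rng = np.random.default_rng(0)
enrolment_sample = rng.normal(size=256)
voiceprint = enroll(enrolment_sample)

# A later call from the same speaker: small intra-speaker variation
same_caller = enrolment_sample + rng.normal(scale=0.1, size=256)
# An unrelated speaker: an independent random embedding
other_caller = rng.normal(size=256)

print(verify(voiceprint, same_caller))   # expected to pass: high similarity
print(verify(voiceprint, other_caller))  # expected to fail: near-orthogonal
```

The threshold is the security/usability dial: set it too low and impostors (or morphed voices) whose embeddings land near the voiceprint get accepted; set it too high and legitimate callers on a noisy line get bounced back to the old verification process.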
“Our customers are excited by the opportunity to add ML-based verification on top of their existing [conversation bot] agents with no additional telephony or technology integrations necessary,” said Eric Rossman, VP of Technology Partners and Alliances, at Avaya, a cloud comms company. Speaker ID is generally available now.
Google does, however, offer the assurance that it is committed to ensuring that its products are aligned with its “AI Principles”, listed in seven steps here.
Not everyone will be reassured, though. Quite aside from who owns the training data, and what use Google could have for those datasets once they have been created, there is the matter of whether the system is secure.
One study used voice-morphing techniques to imitate victims' voices and launch attacks on voice ID systems. “Our results show that the automated systems are largely ineffective to our attacks. The average rates for rejecting fake voices were under 10-20 per cent for most victims,” according to the study from the European Symposium on Research in Computer Security published in 2015.
Another study published in 2017 showed that human impersonators can fool voice biometric systems.
The Register has asked Google for further comment. ®
* With apologies to the BBC Scotland’s Burnistoun sketch comedy show