Google’s Human-Sounding AI to Answer Calls at Contact Centers

As Microsoft uses AI to chase down Amazon in retail, Google is drawing on its freakishly human-sounding Duplex technology to beat Microsoft in the call center.

Google is hatching a plan to win over contact centers using its AI, announcing a slew of new features for its Dialogflow Enterprise Edition, which borrows from its own deep learning and AI research, as well as groundbreaking AI from Alphabet-owned DeepMind. The service, which launched in beta in April, aims to help customers build natural-sounding chatbots for the contact center, as well as phone-based conversational agents.

It comes just days after Microsoft revealed its roadmap for Dynamics 365, including the new Dynamics 365 AI for Sales app, which will be in public preview in October, and is meant to help sales teams make decisions using call sentiment analysis and to deliver warnings about deals at risk.

Google, which now styles itself an AI company rather than a search company, is releasing Dialogflow Phone Gateway, currently in beta, which lets a company assign a phone number to a chatbot and have that bot begin taking calls “in less than a minute”, all on Google’s infrastructure. Customers gain access to telephony tools including Google’s speech recognition, speech synthesis, natural language understanding and automated orchestration systems. The service aims to provide a complete virtual agent, saving users from having to cobble together components from multiple vendors.


The second beta release is Dialogflow Knowledge Connectors, which interpret unstructured documents such as FAQs and knowledge-base articles. They are meant to supplement hand-built intents with automated responses drawn from internal documents, providing a more natural conversational experience.

This extra contextual information is integrated with the Dialogflow-based agent and is meant to save the time and money spent on hand-rolled integrations. Customers on Google’s Enterprise Edition Plus plan will be eligible for unlimited usage of the service when it launches in a few weeks.

Other beta Dialogflow releases include Automatic Spelling Correction and Sentiment Analysis, which scores how much a customer loves or hates the service. Sentiment Analysis relies on the Google Cloud Natural Language API. Others include Agent Assist and contact center analytics.
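To make the Sentiment Analysis feature concrete, here is a minimal sketch of what a Dialogflow v2 detectIntent request with per-query sentiment enabled might look like, along with a toy routing check on the returned score. The field names follow the public Dialogflow v2 REST API; the `is_negative` helper and its threshold are hypothetical, and no network call is made here.

```python
# Hypothetical sketch: a detectIntent request body asking Dialogflow to
# score the caller's sentiment (via the Cloud Natural Language API)
# alongside intent matching. Field names follow the Dialogflow v2 REST API.
payload = {
    "queryInput": {
        "text": {"text": "My order arrived broken.", "languageCode": "en"}
    },
    "queryParams": {
        # Enables sentiment scoring for this query's text.
        "sentimentAnalysisRequestConfig": {"analyzeQueryTextSentiment": True}
    },
}

def is_negative(sentiment_score: float, threshold: float = -0.25) -> bool:
    """Sentiment scores range from -1.0 (very negative) to 1.0 (very positive).

    A contact center might use a check like this (threshold is illustrative)
    to escalate unhappy callers to a human agent.
    """
    return sentiment_score < threshold

print(is_negative(-0.6))  # a strongly negative caller utterance -> True
print(is_negative(0.4))   # a positive caller utterance -> False
```

In practice the score would come back in the response's `queryResult.sentimentAnalysisResult` field rather than being supplied by hand.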

In Google terms, beta means that a service is ready for any customer to use but there are no SLA or technical support obligations.

There’s also a new text-to-speech beta, which beefs up Dialogflow’s existing text-to-speech with native audio responses built on WaveNet, the speech synthesis technology from Alphabet-owned DeepMind.

“By taking advantage of DeepMind’s WaveNet technology, it closes the voice-quality gap with human voice (based on the Mean Opinion Score for voice quality) by over 70 percent. And when used in conjunction with the Dialogflow Phone Gateway, it’s optimized for phone calls,” Google Cloud said.

While all of these features seem to overlap with Google’s Duplex, Google emphasizes that the Dialogflow services only “share some underlying components” with Duplex, and that they are otherwise distinct technology stacks.

Given the jitters and ethical questions raised by Duplex, Google highlights that it and contact center partners such as Cisco, Genesys, and Twilio are discussing the “responsible use of Cloud AI”.

“This includes best practices such as disclosing when customers are talking to a bot, and education around issues such as unconscious bias and the future of work. We want to make sure we’re using technology in ways employees and users will find fair, empowering, and worthy of their trust,” wrote Google.


Source: ZDNet
