
Google's talking AI sounds human

28 December 2017


Robots of the future will sound like us

The robot with an artificial-sounding voice is enshrined in science fiction, but the reality will probably be robots that sound much like us.

A research paper published by Google this month details a text-to-speech system called Tacotron 2, which it claims generates speech from text that is nearly indistinguishable from a recording of a human speaker.

The system is Google's second generation of the technology and consists of two deep neural networks. The first network translates the text into a spectrogram, a visual representation of audio frequencies over time.

That spectrogram is then fed into WaveNet, a system from Alphabet's AI research lab DeepMind, which reads the chart and generates the corresponding audio.
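To make that two-stage flow concrete, here is a minimal, hypothetical sketch of the pipeline in Python. The class names, shapes and placeholder outputs are illustrative assumptions for this article, not Google's or DeepMind's actual code.

```python
import numpy as np

class SpectrogramPredictor:
    """Stage 1: a network that maps text to a spectrogram --
    audio frequencies plotted over time."""
    def predict(self, text: str) -> np.ndarray:
        # Placeholder: a real model would return a learned
        # [time-frames, frequency-bins] array for this text.
        n_frames, n_bins = len(text) * 5, 80
        return np.zeros((n_frames, n_bins))

class NeuralVocoder:
    """Stage 2: a WaveNet-style network that reads the spectrogram
    and generates the raw audio waveform it describes."""
    def synthesise(self, spectrogram: np.ndarray, sample_rate: int = 24000) -> np.ndarray:
        # Placeholder: a real vocoder generates audio samples
        # conditioned on the spectrogram frames.
        n_samples = spectrogram.shape[0] * (sample_rate // 80)
        return np.zeros(n_samples)

def text_to_speech(text: str) -> np.ndarray:
    """Chain the two stages: text -> spectrogram -> waveform."""
    spectrogram = SpectrogramPredictor().predict(text)
    return NeuralVocoder().synthesise(spectrogram)

audio = text_to_speech("Tacotron 2 turns this sentence into speech.")
print(audio.shape)
```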

The Google researchers also demonstrate that Tacotron 2 can handle hard-to-pronounce words and names, and that it alters its delivery based on punctuation and capitalisation. Capitalised words, for instance, are stressed, the way a speaker emphasises a word that is important to the sentence.


Last modified on 28 December 2017