Are we on our way to computers that really understand what we say? I mean, not just recognise each word, but really understand the meaning of what is said?
On January 22 came the news that “software understands, um, language”, explaining that the team of the Luna project, a Europe-wide effort to dramatically advance the power and intelligence of speech recognition, has created a data mining model, or more precisely a text mining model, that understands spoken language. The first languages covered are Polish and Italian.
It was probably a huge job, because if you want to do data mining or text mining, you need data, um, text. And not just data or text, but also the meaning of those texts. That means a lot of people have to translate, so to speak, these spoken texts into “meanings” a computer can understand. Then come the mining algorithms, which learn from this information base to understand spoken texts.
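To give a feel for what such an information base might look like, here is a minimal, purely illustrative sketch: a handful of hypothetical utterances hand-annotated with a “meaning” label, and a toy bag-of-words learner that guesses the meaning of a new utterance. This is my own simplification, not the Luna project's actual model.

```python
from collections import Counter, defaultdict

# Hypothetical training data: transcribed utterances paired with
# hand-annotated "meanings" (intents), as a human annotator might produce.
annotated = [
    ("turn on the light", "light_on"),
    ("switch the light off", "light_off"),
    ("please turn the light on", "light_on"),
    ("turn off the light now", "light_off"),
]

# Learn simple word-to-intent co-occurrence counts from the annotations.
word_counts = defaultdict(Counter)
for text, intent in annotated:
    for word in text.split():
        word_counts[word][intent] += 1

def guess_intent(utterance):
    """Score each intent by summing per-word votes; pick the highest."""
    scores = Counter()
    for word in utterance.split():
        scores.update(word_counts.get(word, Counter()))
    return scores.most_common(1)[0][0] if scores else None

print(guess_intent("turn the light on"))  # -> light_on
```

The real systems are of course far more sophisticated, but the principle is the same: the more annotated utterances you feed in, the better the guesses get, which is exactly why building the annotated base is such an enormous job.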
At the University of Essex, Jon Chamberlain is trying to solve the problem of the enormous job of creating this information base (think of all of human knowledge to get an idea of what we are talking about in the end) by means of an internet game. People can join the “Phrase Detectives” site to add information to the database. So far it has collected 40,000 annotations in four weeks. Not bad for a start.
So in the end we will simply tell our robot what we want, instead of reciting pre-defined commands.