Will Lewis

will be the keynote speaker on Day 2: Friday, 27 November 2015

His keynote address will be on:

Skype Translator: Breaking Down Language and Hearing Barriers
A Behind the Scenes Look at Near Real-Time Speech Translation

William Lewis is Principal Technical Program Manager with the Microsoft Translator team at Microsoft Research. He has led the Microsoft Translator team’s efforts to build Machine Translation engines for a variety of the world’s languages, including threatened and endangered languages, and has been working with the Translator team to build Skype Translator.

He has been leading the efforts to support the features that allow deaf and hard of hearing users to make calls over Skype. This work has been extended to the classroom in Seattle Public Schools, where “mainstreamed” deaf and hard of hearing children are using MSR’s speech recognition technology to participate fully in the “hearing” classroom.

Before joining Microsoft, Will was Assistant Professor and founding faculty for the Computational Linguistics Master’s Program at the University of Washington, where he continues to hold an Affiliate Appointment and to teach classes on Natural Language Processing. Before that, he was faculty at California State University (CSU) Fresno, where he helped found the Computational Linguistics and Cognitive Science programs at the university. He received a Bachelor’s degree in Linguistics from the University of California, Davis and a Master’s and Doctorate in Linguistics, with an emphasis in Computational Linguistics, from the University of Arizona in Tucson.

In addition to regularly publishing in the fields of Natural Language Processing and Machine Translation, Will is on the editorial board of the Journal of Machine Translation, is on the board of the Association for Machine Translation in the Americas (AMTA), served as a program chair for the North American Chapter of the Association for Computational Linguistics (NAACL) conference in June 2015, serves as a program chair for the Machine Translation Summit in October 2015, regularly reviews papers for a number of Computational Linguistics conferences, and has served multiple times as a panelist for the National Science Foundation.

See also the entry about him in the People section at Microsoft Research: http://research.microsoft.com/en-us/people/wilewis/

Extended Abstract

In 1966, Star Trek introduced us to the notion of the Universal Translator. Such a device allowed Captain Kirk and his crew to communicate with alien species, such as the Gorn, who did not speak their language, or even to converse with species who did not speak at all (e.g., the Companion). In 1979, Douglas Adams introduced us to the “Babel fish” in The Hitchhiker’s Guide to the Galaxy, which, when inserted into the ear, allowed the main character to do essentially the same thing: communicate with alien species who spoke different languages.

Although flawless communication using speech and translation technology is beyond the current state of the art, major improvements in these technologies over the past decade have brought us many steps closer. Skype Translator puts together the current state of the art in these technologies and provides a speech translation service within a Voice over Internet Protocol (VoIP) service, namely Skype. With Skype Translator, a Skype user who speaks, say, English can call a colleague or friend who speaks, say, Spanish, and hold a bilingual conversation mediated by the translator.
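The cascaded architecture described above can be sketched in a few lines. This is a toy illustration only, assuming a simple pipeline of speech recognition, transcript cleanup, and translation; the function names and the word-lookup “translator” are hypothetical stand-ins, not Microsoft’s actual components, which use far more sophisticated statistical models.

```python
# Toy sketch of a cascaded speech-translation pipeline:
# audio -> speech recognition -> transcript cleanup -> translation.
# Every component here is a hypothetical stand-in for illustration.

def recognize_speech(audio_frames):
    # Stand-in ASR: pretend decoding the audio yields this raw,
    # disfluent transcript.
    return "um hello how are you"

def clean_transcript(text):
    # Conversational speech is full of fillers and disfluencies;
    # removing them before translation is a real step in such systems.
    fillers = {"um", "uh", "er"}
    return " ".join(w for w in text.split() if w not in fillers)

# Stand-in English-to-Spanish lexicon; real systems translate whole
# sentences with statistical or neural models, not word lookup.
TOY_LEXICON = {"hello": "hola", "how": "cómo", "are": "estás", "you": ""}

def translate(text):
    words = [TOY_LEXICON.get(w, w) for w in text.split()]
    return " ".join(w for w in words if w)

def speech_translation_pipeline(audio_frames):
    transcript = recognize_speech(audio_frames)
    cleaned = clean_transcript(transcript)
    translation = translate(cleaned)
    # A caller could display `cleaned` as captions and
    # show or synthesize `translation` for the other party.
    return cleaned, translation

cleaned, translation = speech_translation_pipeline(audio_frames=[])
print(cleaned)      # hello how are you
print(translation)  # hola cómo estás
```

The point of the cascade is that each stage can fail independently, which is why the transcript itself (the captions) is surfaced to users alongside the translation.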

In the Skype Translator project, we set ourselves the ambitious goal of enabling successful open-domain conversations between Skype users in different parts of the world, speaking different languages. As one might imagine, putting together error-prone technologies such as speech recognition and machine translation raises some unique challenges. But it also offers great promise.

The promise of the technologies is most evident with children and young adults, who accept and adapt to the error-prone technology readily. They understand that the technology is not perfect, yet work around and within its limitations without hesitation. The ability to communicate with children their own age, irrespective of language, gives them access to worlds that fascinate and intrigue them. The stunning simplicity of the questions they ask, e.g., “Do you have phones?” or “Do you like wearing uniforms in school?”, shows how big the divide can be (or is perceived to be), but it also shows how strongly they wish to connect. They also readily adapt the modality of the conversation, e.g., switching to the keyboard when speech recognition or translation is not working for them, which means they readily accept the use of the technology to break down other barriers as well. Transcriptions of a Skype call, a crucial cog in the process of speech translation, are essential for those who cannot hear, as are the text translations of those transcripts. Freely mixing modalities, and readily accepting them, offers access to those who might otherwise be barred from it. Adjusting the design of Skype Translator to accommodate users who are deaf or hard of hearing added features that benefit all users. The technologies behind Skype Translator not only break down the language barrier, they also break down the hearing barrier.

In this talk, we will look at Skype Translator and how it works. We will cover the issues we had to address in order to design a system that transcribes and translates speech in an existing Voice over Internet Protocol (VoIP) service, namely Skype. We will conclude with a demonstration of the service.