How Automated Captioning Saved TEDxSeattle for This Fan
By Swetha Machanavajhala
I’m a huge fan of TED talks. They expose everyone’s creative thinking and how that thinking can be applied in innovative ways to change the world. Some of my favorite talks include Simon Sinek’s “How Great Leaders Inspire Action” and Aji Piper’s “Kids Sue the Government to Fight Climate Change - and Win.”
My first live experience was TEDxSeattle 2017, and it was incredible to be in the audience. Needless to say, I was looking forward to attending this year’s TEDxSeattle event.
Imagine my shock and disappointment when organizers asked whether I’d be willing to accept a refund for my ticket, because they couldn’t afford to hire a CART interpreter. Since they had been able to provide this service during the 2017 event, I was particularly upset by this news.
A CART interpreter is someone who’s trained as a stenographer—like a court reporter—who uses a stenography machine to transcribe whatever speakers say during a live presentation. The result is captions that appear on a video screen. CART interpreters, or alternatively, ASL interpreters, capture spoken content for audience members who are deaf or hard of hearing.
I hear with the help of hearing aids. But, due to my profound hearing loss, I can make out the words only with the combination of lip reading and hearing. Lip reading is a challenge if the person isn’t a few inches away from me, and having to constantly read lips throughout the day makes me extremely tired. It’s quite impossible to lip read someone who’s standing on a stage or in group conversations where people may not face me directly. In noisy surroundings, the hearing aids amplify the ambient noise, requiring even more concentration on my part.
I certainly sympathized with TEDxSeattle’s situation. Years ago, I might have resigned myself to not having access, since there were no resources to provide it. But I couldn’t stand the thought of being excluded and, worse, missing my colleague Anirudh Koul give his TEDx talk, “How AI Can Empower the Blind Community.”
This is 2018. As an engineer, I know that there are many artificial intelligence technology solutions that can address exactly this kind of challenge. For example, multiple speech-to-text software products on the market enable people who are deaf or hard of hearing, as well as people who aren’t fluent in English, to follow conversations. Microsoft Translator, in particular, has the ability to provide live, near real-time captioning and translation. Using this feature, a person can start a conversation in his or her native language and any number of people can join that conversation in their respective languages on their own devices.
Being more inclusive
There is no excuse for not making events or our built environment accessible. This is what made me push TEDxSeattle organizers to try out Microsoft Translator. (Full disclosure: I work at Microsoft.) The automated captioning feature has been used publicly since the fall of 2016, but mostly in schools, universities, and at conferences. It had not, however, been used at large-scale, general-public events like TEDx. But I knew it could work.
I am always looking for solutions to make the auditory world inclusive. I also wanted to raise awareness of the challenges that people who are deaf or hard of hearing experience, and how technology can help alleviate some of those issues. So I introduced the TEDxSeattle team to my colleague Will Lewis’s team from Microsoft Translator. Will is a manager on the Microsoft Translator team and has led his team’s efforts to foster inclusivity using their tools. He also has a long history of working in computational linguistics and with underserved communities.
While TEDxSeattle organizers were hesitant at first, they were impressed after we worked with the producer to demonstrate the technology at McCaw Hall, where TEDxSeattle would take place. Satisfied with the results, they agreed to use Microsoft Translator. With some additional support from audiovisual experts at Microsoft Studios, the TEDxSeattle team was able to provide live captioning for the day of talks.
With Microsoft Translator, I didn’t have the headache of trying to catch bits and pieces of information. I could simply relax and “listen” to the talks through the transcripts, watch my colleague speak, meet friends, and spend the day at leisure. The joy I experienced was boundless.
About the Author: Swetha Machanavajhala
Swetha Machanavajhala is a software engineer on the Azure Networking team at Microsoft. Apart from her day job, she is passionate about building products that help people who are deaf or hard of hearing. Since joining Microsoft in 2013, she has gained experience ranging from developing software to becoming a seasoned hacker, leading teams across the country, and giving talks at conferences.