When it comes to inclusivity and accessibility, technology can leave some people behind. People with disabilities make up about 19% of the U.S. population; considered as a minority group, they would be the largest in the country. Experts write about the potential for information and communications technology to enhance access for people with disabilities, yet inclusivity has not kept pace. This gap stems not from a shortage of tools and resources but from a lack of accessibility expertise in the tech industry. The good news is that as more tech companies cultivate accessibility skills among their developers, designers, and UI professionals, widespread inclusion shows promise.
Global Accessibility Awareness Day (GAAD) is on May 17th this year, and at Evia, we want to help promote awareness by highlighting how tech companies are working to bridge the digital divide. For example, just this week, Microsoft pledged $25 million toward an Artificial Intelligence for Accessibility project. Artificial intelligence (AI) would be used here to create software solutions that assist people with disabilities at work, in everyday life, and in making human connections. In this article, I will delve into one facet of inclusive technology: captioning for people who are deaf or hard of hearing.
THE IMPORTANCE OF CLOSED CAPTIONING
Since the start of digital broadcasting, demand for closed captions has only grown. Displayed on video programming, these text descriptions of audio are critical for people who are hard of hearing or are watching dialogue in another language. For both live and pre-recorded video broadcasts, the challenges associated with the traditional captioning workflow range from high costs and video latency to risky errors and extra time for manual correction. Live streaming introduces another level of complexity to the closed captioning process.
LIVE STREAMING EXPERTS TACKLE CAPTIONS IN THE CLOUD
This past January, Evia was approached by a client who needed live closed captioning for their event. Due to a new initiative by the company’s accessibility group, closed captions were needed to accommodate attendees who were deaf or hard of hearing. Using Evia’s services, the client wanted to produce video programming with both pre-recorded and live elements. The digital event also needed to include captioned footage available both on the live stream and on-demand recording, and they wanted viewers to have the ability to turn captions on/off as needed.
Accommodating the client’s request for closed captions would traditionally require a time-consuming, customized approach. Since the event was less than a month away, development would have to be rushed, an extremely costly option. However, because Evia had recently migrated to the Microsoft Azure cloud platform, the solution we provided was far more cost-efficient and streamlined.
Evia’s Solutions Analyst configured the media player in the cloud to enable a closed captioning button on the player interface. Viewers could turn on semi-transparent captions in the lower corner of the screen, creating a seamless viewing experience. Evia kept as much of the workflow in the cloud as possible to minimize broadcast delay, and a caption service provider transcribed the captions in real time to meet our standards for accuracy. With this replicable cloud-based workflow, the team at Evia eliminated the need for rushed development or extra infrastructure on a fast-approaching deadline and saved the client between $4,000 and $8,000.
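Under the hood, a toggleable caption track like this is usually delivered as a sidecar file the player loads alongside the video. As a rough sketch of that piece (the segment timings and text below are invented sample data, and this is not Evia's actual tooling), here is how timed transcript segments could be serialized to WebVTT, the caption format most web players understand:

```python
# Illustrative sketch: serializing timed transcript segments to WebVTT,
# the sidecar caption format most web-based video players can load.
# The sample segments are invented data, not real event captions.

def to_timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}"

def build_webvtt(segments) -> str:
    """segments: iterable of (start_sec, end_sec, text) tuples."""
    lines = ["WEBVTT", ""]  # required file header, then a blank line
    for start, end, text in segments:
        lines.append(f"{to_timestamp(start)} --> {to_timestamp(end)}")
        lines.append(text)
        lines.append("")  # a blank line terminates each cue
    return "\n".join(lines)

segments = [
    (0.0, 2.5, "Welcome to the live event."),
    (2.5, 6.0, "Captions can be toggled on or off in the player."),
]
print(build_webvtt(segments))
```

The player's captions button then simply attaches or detaches this track; the video stream itself is untouched, which is what lets viewers switch captions on and off at will.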
FUTURE OF TRANSCRIPTION TECHNOLOGY IS PROMISING (THANKS TO AI)
The cloud is one area where technology is supporting accessibility; another is artificial intelligence (AI). A prime example can be found at the Rochester Institute of Technology in New York. The Institute’s students who are deaf or hard of hearing take courses alongside hearing students, with some help from AI: the professor wears a headset that generates captions from her speech as she lectures. The headset is powered by Microsoft Translator, an AI communication technology. According to Microsoft, “The system uses an advanced form of automatic speech recognition to convert raw spoken language – ums, stutters and all – into fluent, punctuated text. The removal of disfluencies and addition of punctuation leads to higher-quality translations into the more than 60 languages that the translator technology supports.”
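As a toy illustration of the disfluency-removal idea in that quote (not Microsoft's actual method, which relies on trained speech and language models and also restores punctuation), a first approximation can be sketched with a regular expression over the raw transcript:

```python
import re

def remove_fillers(raw: str) -> str:
    """Toy disfluency removal: strip standalone filler words ("um", "uh",
    "er", "hmm"), then collapse any leftover whitespace."""
    cleaned = re.sub(r"\b(um+|uh+|er+|hmm+)\b,?\s*", "", raw,
                     flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", cleaned).strip()

print(remove_fillers("Um so the cloud workflow uh keeps latency low"))
# so the cloud workflow keeps latency low
```

A real captioning system has to do far more (handle restarts and stutters, punctuate, recover casing), which is why statistical models rather than hand-written rules are used in practice.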
In April, Amazon introduced Amazon Transcribe, a service that converts audio from speech to text. Partnering with Echo360, a video platform for higher education institutions, Amazon aims to provide another tool to improve access for students. During lectures, the service sends automated captions to a screen, where they are shown side by side with the video, and the transcription notes remain accessible after class. While the technology isn’t perfect, it helps deaf and hard-of-hearing students follow lectures more easily and supports learners of a new language.
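"Isn't perfect" can be put in numbers: automatic transcripts are commonly scored by word error rate (WER), the word-level edit distance between a human reference transcript and the machine output, divided by the reference length. A minimal, self-contained sketch (the sample sentences are invented, and this is not Amazon's internal metric code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance between the
    reference and the hypothesis, divided by the reference word count."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)

reference = "the lecture notes are available after class"
hypothesis = "the lecture notes are avail after class"
print(round(word_error_rate(reference, hypothesis), 3))  # 0.143
```

Lower is better: a WER of 0 means the machine transcript matches the reference exactly, and every substituted, dropped, or inserted word pushes the score up.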
For the student communities using these AI innovations, tools that make spoken language accessible bring them closer to speakers and encourage deeper communication and engagement.
From a pure product engineering design perspective, if you design for accessibility, you design for all, including the 1 billion plus with disabilities.
- Jenny Lay-Flurrie
The future of accessible technology is exciting as technology companies continue to close the access gap through innovations like cloud computing and AI. Companies like Amazon and Microsoft are making their services more accessible to all people. A profound summation comes from Microsoft’s Chief Accessibility Officer, Jenny Lay-Flurrie. “From a pure product engineering design perspective, if you design for accessibility, you design for all, including the 1 billion plus with disabilities.”
To better understand how more than a billion people worldwide live with disabilities, we’ve provided some resources for learning about and promoting accessibility awareness.