Three Ways Technology Can Drive Inclusive Communication – by Kenya McPheeters

The ability to communicate is essential to inclusion in professional, learning, or social settings. A Deaf employee, for example, can’t fully contribute to a business unless they can participate in impromptu meetings or hallway chats with colleagues. If English is a second language for a medical student, they need detailed and accurate notes to retain critical information. For a senior aging into hearing loss, losing the ability to connect with family members by phone can be devastatingly isolating. I know this situation all too well: in my work as a sign language interpreter, I’ve seen how connections are lost when communication isn’t readily accessible.

In all of these instances, inclusive communication enhances diversity by facilitating involvement, acceptance, and belonging. Today, innovative technology is creating new opportunities for people of different backgrounds, experiences, and linguistic modes to seamlessly share information, collaborate, and engage. Three examples are outlined below.

On-Demand Access to ASL Interpreters

American Sign Language (ASL) interpreters allow a Deaf person both to understand hearing counterparts and to converse with them. Moreover, for most Deaf people ASL is a “native” or primary language; as such, ASL interpretation provides linguistic equality by supporting a Deaf person’s ability to articulate ideas and thoughts fluently.

The challenge is accessibility. Bringing an interpreter onsite to a business or an event poses logistical, scheduling, and supply-and-demand issues. While video conferencing simplifies things, interpreters have traditionally had to be scheduled in advance. For a business, that means Deaf employees can actively participate in pre-arranged live or virtual meetings, but they are largely excluded from informal discussions and urgent matters that arise.

Artificial Intelligence and Data Analytics

Today, new services are emerging that allow Deaf users to virtually access an ASL interpreter on demand, via a Zoom call or other platform. This functionality is a potential game-changer for human resources and DEI strategies aimed at facilitating ongoing, real-time collaboration, and engagement. Similar services are being developed to enhance customer experiences for the Deaf. In a retail setting that provides such a solution, for example, Deaf shoppers can have on-demand access to ASL interpreters as soon as they enter a store, giving them the option to ask questions and engage with store personnel. Retail staff, meanwhile, can help Deaf customers find products, inquire about needs and preferences, recommend new offerings, and engage on a personal level. Here again, on-demand availability has a significant impact on communication, inclusion, and engagement.

To enable this capability, providers are leveraging Artificial Intelligence (AI) and data analytics to develop platforms that – similarly to ride-sharing systems – constantly monitor and match the supply and skill sets of available interpreters against demand for services from Deaf users. The intelligent tools also predict demand curves, identify potential trouble spots, offer work shifts, and measure and monitor quality.
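The matching idea described above can be illustrated with a minimal sketch. This is not any provider’s actual system – the interpreter names, skills, and matching rule are all hypothetical – but it shows the core ride-sharing-style dispatch step: pair an incoming request with an available interpreter whose skill set covers what the request needs.

```python
from dataclasses import dataclass

@dataclass
class Interpreter:
    name: str
    skills: set          # e.g. {"ASL", "medical"} -- hypothetical skill tags
    available: bool = True

def match_request(interpreters, required_skills):
    """Return the first available interpreter covering every required skill."""
    for interp in interpreters:
        if interp.available and required_skills <= interp.skills:
            interp.available = False  # mark as assigned to this request
            return interp
    return None  # no match: queue the request or alert on-call staff

# A toy interpreter pool and an on-demand request from a retail setting
pool = [
    Interpreter("A. Rivera", {"ASL", "medical"}),
    Interpreter("B. Chen", {"ASL", "retail"}),
]
assigned = match_request(pool, {"ASL", "retail"})
```

A production platform would layer the predictive pieces the article mentions – demand forecasting, shift offers, quality metrics – on top of this basic supply-to-demand matching loop.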

Real-Time Transcription

Communication Access Real-Time Translation (CART) services transcribe words into text or captions as they are spoken during a classroom lecture, business meeting, or public speech. For people who are Deaf or hard-of-hearing, who aren’t fluent in English, or who have auditory processing disorders, CART services support understanding and participation, particularly in higher education. CART also provides written documentation of an event in real time, and, when combined with signing interpretation, can reinforce learning and information retention for Deaf people (and, for that matter, anyone whose mind wanders during a lecture).

As with ASL interpreters, access to CART services has traditionally been limited. In addition to typing up to 260 words per minute, CART stenographers have required training in the specialized medical, legal, or scientific terminology used in university lectures. This has resulted in high costs, hard-to-find skill sets, and limited availability.

Smart Tools Plus Smart People

Today, AI-enabled Automated Speech Recognition (ASR) computer software is becoming increasingly adept at replicating the stenographer’s role of transcribing text. Specifically, the software is getting better at understanding accents and jargon, and at analyzing word clusters to contextualize a discussion and accurately predict the words a speaker will use. Despite this progress, smart tools still can’t deliver CART services with the accuracy and understanding that many environments – such as a medical school lecture hall – require.

By complementing AI-enabled ASR with human intervention, providers are leveraging the respective strengths and capabilities of intelligent tools and human knowledge. Specifically, ASR applies processing speed and contextual analysis to do the bulk of the transcription, while human agents address nuances, ensure the accuracy of technical terminology, and correct errors. This greatly reduces the level of specialized training a CART captioning agent requires, significantly expanding the availability of CART services. Easier access to CART services, meanwhile, creates new opportunities to improve communication and enhance inclusion for people with unique learning styles.
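The division of labor described above – software does the bulk of the work, humans handle the hard parts – is often implemented by routing on confidence scores. The sketch below is a simplified illustration under that assumption (the segments, scores, and threshold are invented): high-confidence ASR output publishes automatically, while low-confidence segments, often specialized jargon, are flagged for a human captioning agent.

```python
def route_segments(segments, threshold=0.90):
    """Split ASR output: high-confidence text auto-publishes;
    low-confidence segments are queued for a human captioning agent."""
    auto, review = [], []
    for text, confidence in segments:
        (auto if confidence >= threshold else review).append(text)
    return auto, review

# Hypothetical ASR output from a medical lecture, as (text, confidence) pairs
asr_output = [
    ("the patient presented with", 0.97),
    ("idiopathic thrombocytopenic purpura", 0.62),  # jargon: flag for human
    ("and was admitted overnight", 0.95),
]
auto, review = route_segments(asr_output)
```

Tuning the threshold trades off human workload against the risk of publishing an uncorrected error – a medical lecture hall would sit at the cautious end of that trade-off.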

Speech-to-Text Transcription

CART services – which involve specialized terminology and require rigorous accuracy – rely on human intervention. Speech-to-text applications for smartphones, meanwhile, are entirely AI-driven, and are another example of how ASR can enhance linguistic inclusion. These easily downloadable apps convert speech to text in real time. During a cell phone call, one user’s spoken words are converted into text that appears on the other user’s smartphone screen, allowing one or both users to read along with the conversation. Speech-to-text transcription can be especially helpful for individuals with hearing loss because telephones don’t transmit the full range of frequencies used in human speech, so turning up the volume simply creates louder garbled sound. Text also helps those who have difficulty understanding certain accents or voices, or who may miss parts of the conversation due to background noise.

As with CART services, advances in AI technology and ASR algorithms are key. Today’s speech-to-text apps deliver real-time transcription with increasing accuracy, without the longstanding latency issues that, until recently, produced choppy text blocks and disrupted conversational flow. The applications’ ease of use is another important consideration, particularly for seniors aging into hearing loss, who may be hesitant to adopt unfamiliar devices.
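One common technique behind the smoother, less choppy captions mentioned above is to display only the “stable prefix” of a streaming recognizer’s output: as the recognizer revises its guess, the app commits only the words that successive hypotheses agree on, so text on screen doesn’t flicker or rewrite itself. A minimal sketch, with hypothetical hypotheses standing in for a real recognizer:

```python
def stable_prefix(prev_words, new_words):
    """Return the run of words shared from the start of two
    successive ASR hypotheses -- the part safe to display."""
    stable = []
    for a, b in zip(prev_words, new_words):
        if a != b:
            break
        stable.append(a)
    return stable

# Successive partial hypotheses from a streaming recognizer (invented):
h1 = "turn up the valium".split()          # early misrecognition
h2 = "turn up the volume please".split()   # revised as more audio arrives
committed = stable_prefix(h1, h2)          # only the agreed prefix is shown
```

Real captioning apps combine this with recognizer-reported stability scores, but the effect for the reader is the same: text appears quickly and then grows, rather than jumping around.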

Diversity, equity, and inclusion encompass a wide range of criteria and characteristics. These include racial, gender, and ethnic identities, as well as belief systems and physical abilities. As public and private institutions, organizations, and businesses define new strategies to create more diverse and accepting cultures and environments, linguistic inclusion should be part of the conversation.

