The World Health Organization estimates that around 466 million people worldwide are deaf or hard of hearing, a number expected to grow to 900 million by 2050. This is why Google is introducing two new Android accessibility features designed to empower communication for deaf and hard-of-hearing users: Live Transcribe and Sound Amplifier.
Live Transcribe, as the name suggests, transcribes speech to text in real time. It enables deaf users to have two-way conversations with people who do not speak sign language. Powered by Google Cloud, the feature supports more than 70 languages and can be launched with a single tap from the accessibility icon in the system tray.
As you may have guessed, Live Transcribe is also powered by AI, building on the company's previous work on automatic speech recognition (ASR), which is behind automated captions on YouTube, in Slides presentations, and more. The technology has seen major improvements in recent years; however, until now, automated continuous transcription has required 'expensive access to connectivity'. Google wants to change that:
"To do this, we implemented an on-device neural network-based speech detector, built on our previous work with AudioSet. This network is an image-like model, similar to our published VGGish model, which detects speech and automatically manages network connections to the cloud ASR engine, minimizing data usage over long periods of use."
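The gating idea Google describes, an on-device speech detector deciding when to send audio to the cloud ASR engine, can be illustrated with a minimal sketch. Everything here is hypothetical: the `SpeechGate` class, the threshold, and the "hangover" behavior are illustrative assumptions, not Google's actual implementation.

```python
# Hypothetical sketch: a lightweight on-device speech detector decides,
# frame by frame, whether audio should be streamed to a cloud ASR engine.
# All names and thresholds are illustrative, not Google's real code.
from dataclasses import dataclass


@dataclass
class SpeechGate:
    """Keeps the network connection open only while speech is likely present."""
    threshold: float = 0.5      # minimum speech probability to start streaming
    hangover_frames: int = 3    # keep streaming briefly after speech fades
    streaming: bool = False
    sent_frames: int = 0
    total_frames: int = 0
    _quiet_count: int = 0

    def process(self, speech_prob: float) -> bool:
        """Return True if this audio frame should be sent to the cloud ASR."""
        self.total_frames += 1
        if speech_prob >= self.threshold:
            self._quiet_count = 0
            self.streaming = True       # speech detected: open/keep connection
        elif self.streaming:
            self._quiet_count += 1
            if self._quiet_count > self.hangover_frames:
                self.streaming = False  # sustained silence: close connection
        if self.streaming:
            self.sent_frames += 1
        return self.streaming


# Simulated per-frame speech probabilities from an on-device detector:
# silence, a burst of speech, then silence again.
probs = [0.1, 0.2, 0.9, 0.8, 0.7, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
gate = SpeechGate()
for p in probs:
    gate.process(p)
print(f"streamed {gate.sent_frames} of {gate.total_frames} frames")
```

The payoff is in the last line: only the speech burst (plus a short tail) reaches the network, which is how such a detector could minimize data usage over long sessions.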
Live Transcribe is currently rolling out in limited beta for Pixel users.
Sound Amplifier, on the other hand, is already available. Announced at last year's Google I/O, this feature is designed to filter background or unwanted noise without making already loud sounds louder. It works with headphones and can be downloaded now from Google Play.
What do you think about these new features? Let us know in the comments below.