Introducing Whisper-Zero

Published on Apr 2024

Today, we're thrilled to release a new breakthrough ASR system, Whisper-Zero: a complete rework of Whisper combined with multiple state-of-the-art models, trained on over 1.5 million hours of diverse audio, including phone-quality and noisy data from real-life environments.

The biggest product milestone for Gladia to date, Whisper-Zero removes virtually all hallucinations from transcription, providing better accuracy, faster speed, enhanced language support, and more features to our users. All in a single production-ready transcription and audio intelligence API.

Our story with optimizing Whisper

Gladia’s core product has been based on the Whisper architecture since our inception. Released by OpenAI in 2022, the transformer-based Whisper model set a new standard for automatic speech recognition (ASR) in accuracy and multilingual capabilities. Despite its many advantages, the model came with usage limitations and hardware requirements that made it impractical for enterprise needs and scale.

In the months following Whisper's release, Gladia transformed the open-source version of the model into a production-grade transcription API for companies. Compared to the original, Gladia delivered better accuracy, extended multilingual support, and additional high-value features like live streaming transcription, translation, speaker diarization, word timestamps and code-switching (i.e., detecting a language change in an audio recording).
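To make the feature list above concrete, here is a minimal sketch of how such options might be toggled when building a transcription request. The parameter names and helper below are illustrative assumptions for this post, not Gladia's documented API schema:

```python
import json

def build_transcription_request(audio_url: str,
                                diarization: bool = True,
                                word_timestamps: bool = True,
                                code_switching: bool = True) -> str:
    """Assemble a JSON request body toggling the high-value features
    mentioned above. Field names are hypothetical, for illustration only."""
    payload = {
        "audio_url": audio_url,
        "diarization": diarization,          # label which speaker said what
        "word_timestamps": word_timestamps,  # per-word start/end times
        "code_switching": code_switching,    # detect mid-audio language changes
    }
    return json.dumps(payload)

request_body = build_transcription_request("https://example.com/meeting.wav")
```

In practice, this body would be sent to the provider's transcription endpoint along with an API key; consult the official API reference for the actual field names.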

There was one pain point we were yet to solve — hallucinations, a phenomenon where an ASR system produces transcriptions that include words or phrases that were not present in the original audio.

Towards hallucination-free audio transcription

Powered by a predecessor of GPT-3 at the decoding phase, Whisper is notoriously prone to hallucinations, which stem from internal factors, such as training data and model architecture, and external ones, like complex input audio. It's even been reported that the latest version of the model, Whisper large-v3, released a few weeks back by OpenAI, is in fact more likely to hallucinate than the most accurate of the 'Whispers', large-v2.

Despite being described by the CEO of OpenAI as the "magic of AI", hallucinations are in reality a huge pain point for any company that relies on transcription to improve its operations and deliver a better user experience. By reducing the overall accuracy of transcription, they make it harder for companies to leverage transcripts to build ASR-powered apps, especially in use cases where the data extracted from transcriptions is used to feed one's database directly, as in the case of automated CRM enrichment, or showcase the transcript in real-time to the final user via live captions.

Gladia has committed to fixing this issue once and for all. In addition to upgrading the existing feature set, we have improved the model’s architecture to mitigate Whisper’s hallucination flaw. The resulting word error rate (WER), a metric used to assess the accuracy of speech recognition systems, is 10-15% better than that of both Whisper large-v2 and v3.
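For readers unfamiliar with the metric, WER is the word-level edit distance between a reference transcript and the system's output, divided by the number of reference words. A minimal sketch of the standard computation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions)
    divided by the number of words in the reference transcript."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, dropping one word from a six-word reference yields a WER of 1/6. A lower WER means a more accurate system, which is why it is the standard yardstick for comparing ASR models.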

Delivering the best version of enterprise Whisper

Moreover, Whisper-Zero has been optimized specifically for complex environments to address another Whisper limitation: the base model was trained on large volumes of data collected from the internet, making it a versatile yet generalist audio model that is statistically biased towards phrases with little relevance to professional audio data.

With the fine-tuning and prompt engineering done by Gladia, our customers from online meetings, media, call centers, and other domains can now enjoy better precision in real-life, non-sterile scenarios.

In addition, for this release we have put special emphasis on enhancing transcription accuracy in multilingual environments, with Whisper-Zero fine-tuned to recognize a wide variety of accents.

In a nutshell, today we’re offering the market the best enterprise-grade version of Whisper, which removes its biggest limitations, boosts performance, and enhances its capabilities with more features. You can now enjoy the best version of Whisper in the cloud, without limitations, addressing enterprise scale and needs.

Find out more about the release on our dedicated landing page.

