We’ve had this moment of panic internally at Wispr Flow more than once.
Yesterday, transcription felt great.
Today, it suddenly feels off.
Did we break something?
Almost always, the answer is no. When transcription quality seems to drop suddenly, it’s rarely the model itself. Much more often, it’s a change in the audio.
That’s frustrating, because if you can’t trust the transcript, speaking stops being faster than typing. Accuracy matters more to us than anything else.
First: No, we didn’t secretly downgrade the model.
We’re extremely careful when deploying new models. Every change is validated, rolled out gradually, and closely monitored. If we see real regressions, we roll them back.
We’re not cutting corners or swapping in worse models to save money. We are constantly experimenting to improve speed and accuracy across 100+ languages, and occasionally that experimentation can surface edge cases. That’s exactly why user reports matter.
So what actually causes transcription quality to feel worse?
If you’ve ever been on a Zoom or phone call where someone says “I can’t hear you,” it’s the same underlying problem here. Wispr Flow often still produces something usable, but accuracy can suffer.
Some of the most common reasons we see:
- Microphone changes: Bluetooth mics like AirPods often have worse audio quality and can clip the beginning or end of speech, leading to missing or incorrect words.
- Environmental changes: Background noise or nearby conversations can cause the model to pick up unintended speech.
- Changes in how you’re speaking: As people get comfortable with Flow, they often mumble more, speak more quietly, or dictate while half-awake, hunched over, or lying down. I personally see worse results when I’m tired or speaking softly.
- Subtle system settings: One of our longest internal “the model is broken” scares turned out to be a macOS microphone input volume set too low. The audio sounded fine to us, but was subtly distorted.
- Wrong microphone selected: Bluetooth headphones can sometimes pair even while they’re still in your pocket. With essentially no audio coming through, Wispr Flow can produce preposterous hallucinations in these cases.
All of these can look like “the AI got worse,” even though nothing about the model actually changed.
What bad audio can sound like
Here are a couple of real audio clips from our team. When you hear them, it’s easier to understand why even strong models can struggle with clipped or distorted audio.
Here’s what usually helps
Before assuming anything deeper is wrong, these steps resolve the majority of issues we see:
- Force quit and restart the app: Just like hanging up and calling someone again often fixes mic issues, restarting Wispr Flow can reset audio state.
- Listen to the recorded audio: In the desktop app, go to your history, click the three-dot menu, download the audio, and play it back. If it sounds distorted, clipped, or oddly quiet, try relaunching the app or switching microphones.
- Speak a little louder and more clearly: Often, simply retrying the dictation with slightly more projection makes a big difference.
- Check microphone input volume: On macOS especially, mic input volume can drift very low. Audio can sound “okay” to your ears while still being distorted enough to hurt transcription quality.
- Retry the transcription from history: If retrying fixes the issue, it may be related to temporary audio compression (especially on mobile networks). We’re actively working to improve this.
- Check the size of your dictionary: Very large dictionaries can sometimes hurt accuracy. If you’ve configured hundreds or thousands of special words, the model has to work harder to decide when to apply them. If you’re seeing strange name substitutions or overcorrections, try trimming your dictionary to the most important terms. You can always add words back as needed.
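If you want to go beyond listening by ear, a rough way to check a downloaded recording for clipping or an overly quiet signal is to measure its peak level. This is just an illustrative sketch, not part of Wispr Flow: it assumes a 16-bit WAV file (you may need to convert the downloaded audio first), and the `peak_dbfs` helper is a name made up for this example.

```python
import math
import struct
import wave

def peak_dbfs(path):
    """Peak level of a 16-bit PCM WAV file, in dB relative to full scale."""
    with wave.open(path, "rb") as w:
        if w.getsampwidth() != 2:
            raise ValueError("expected 16-bit samples")
        frames = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    peak = max(abs(s) for s in samples)
    return float("-inf") if peak == 0 else 20 * math.log10(peak / 32768)

# Synthesize a one-second 440 Hz tone at ~10% of full scale as a stand-in
# for a downloaded recording, then measure it.
with wave.open("test_tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)   # 16-bit
    w.setframerate(16000)
    amp = int(32768 * 0.1)
    data = b"".join(
        struct.pack("<h", int(amp * math.sin(2 * math.pi * 440 * i / 16000)))
        for i in range(16000)
    )
    w.writeframes(data)

level = peak_dbfs("test_tone.wav")
print("peak: %.1f dBFS" % level)
```

A peak sitting right at 0 dBFS usually means the signal clipped; a peak far below that (say, under -30 dBFS) often means the microphone input volume is set too low, which matches the macOS issue described above.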
How your feedback helps us improve
We’re working on surfacing these kinds of issues directly in the app as we automate ways of detecting them. When transcription accuracy is lower than expected, submitting transcript feedback genuinely helps us identify patterns and catch regressions faster.
We can’t respond to every single piece of transcript feedback (imagine if Apple replied every time I muttered “WTF” at Siri), but we do review all of it closely and use it to continuously improve our models.
When this is a real bug
If part of a transcription is missing:
- but the full audio is present when you download it, or
- retrying the transcript restores the missing text
Then we consider that a serious software issue and treat it with high priority.
If you contact Support (through the app or the Support Portal), please include the specific transcript from your history and note that you’ve tried the steps above. That helps us move much faster.
We’re always working to get better
Our research team is fully focused on improving transcription accuracy, especially in difficult real-world conditions. It’s easy to make transcription fast by sacrificing accuracy. That’s a tradeoff we’ll never make, because editing costs far more time than waiting a fraction of a second longer.
We’re building toward a future where you can trust your words to land correctly, even in imperfect environments. And we’re going to keep pushing until we get there.
