Why Your AI Prompts Suck (And How Voice Fixes Them)
Most developers write terrible AI prompts because typing is slow. Voice typing removes the bottleneck. See real before/after examples.
TL;DR: You write short, lazy AI prompts because typing long ones is exhausting. This makes AI tools seem less capable than they are. Voice input lets you speak detailed prompts in seconds, dramatically improving AI output quality.
The Uncomfortable Truth
Your AI coding tools are not underperforming. You are under-prompting.
When you type "fix the bug" into Cursor and get a mediocre result, the problem is not Cursor. The problem is that you gave it nothing to work with. You know where the bug is, what it does, and what the fix should look like. But you did not type any of that because it would take too long.
This is the prompt quality vs typing effort tradeoff, and it is the single biggest reason developers are disappointed with AI coding tools.
Why People Write Short, Lazy Prompts
It is not laziness. It is economics. Every word you type has a cost:
- At 70 WPM, a 100-word prompt takes about 85 seconds
- Add thinking about phrasing, fixing typos, and restructuring, and it takes even longer
- In a terminal (where many AI agents run), editing long text is especially painful
- After 8 hours of typing code, your fingers and wrists are tired
So you optimize. You cut corners. You write "add auth" instead of explaining the entire authentication flow. You write "fix test" instead of describing which test, what it tests, and why it is failing.
And then you spend 10 minutes in a back-and-forth loop with the AI, providing the context you should have given upfront.
The irony: you save 60 seconds by writing a short prompt, then spend 10 minutes on iterations. The "shortcut" costs you 9 extra minutes.
The Prompt Quality Spectrum
Here is what the same request looks like at different quality levels:
Level 1: Minimum Effort (5 seconds to type)
fix the login
AI result: Makes a random guess about what is wrong with login. Changes the wrong file. You iterate three times. Total time: 8 minutes.
Level 2: Some Context (30 seconds to type)
The login function returns 401 for valid users.
Check the token validation.
AI result: Finds the token validation function and makes a plausible fix. It is close but misses the refresh token edge case. One more iteration. Total time: 4 minutes.
Level 3: Detailed (90 seconds to type)
The login endpoint in routes/auth.ts returns 401 for users whose
access token has expired but still have a valid refresh token. The
verifyToken function in middleware/auth.ts checks the access token
expiry on line 45 but returns 401 immediately without checking the
refresh token. Fix this by adding a refresh token check before the
401 response, and add a test for this scenario in auth.test.ts.
AI result: Fixes the exact issue, adds the test. Done in one iteration. Total time: 2 minutes.
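As a sketch of the fix that prompt describes (the function and flag names here are illustrative, not taken from a real codebase), the logic boils down to consulting the refresh token before returning 401:

```typescript
// Hypothetical sketch of the middleware fix. checkTokens, accessExpired,
// and refreshValid are illustrative names, not real identifiers.
type AuthResult = { status: 200 | 401; refreshedAccessToken: boolean };

function checkTokens(accessExpired: boolean, refreshValid: boolean): AuthResult {
  if (!accessExpired) {
    return { status: 200, refreshedAccessToken: false };
  }
  // The old code returned 401 here. Check the refresh token first instead.
  if (refreshValid) {
    return { status: 200, refreshedAccessToken: true };
  }
  return { status: 401, refreshedAccessToken: false };
}
```

Because the prompt named the file, the function, and the edge case, the AI can produce exactly this kind of targeted change on the first try.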
Level 3 by voice (20 seconds to speak)
The same Level 3 prompt above takes only 20 seconds to dictate. That is less time than the Level 2 prompt takes to type.
This is the key insight: voice makes a Level 3 prompt nearly as cheap as a typed Level 1 prompt. You get the quality without the effort.
How Voice Removes the Bottleneck
Speaking is 2-3x faster than typing for natural language. But the speed difference is only part of the story. Voice also changes how you think about prompts:
You Stop Self-Editing
When typing, you constantly evaluate whether each word is "worth" the effort. You trim sentences, skip context, and abbreviate. It is an unconscious optimization loop that degrades prompt quality.
When speaking, words flow at the speed of thought. You say what comes to mind, including the context, the constraints, the edge cases, and the examples that make AI output great.
You Include the "Why"
Typed prompt: "Add caching to the user endpoint."
Spoken prompt: "The user endpoint is too slow because it queries the database on every request. Add Redis caching with a 5-minute TTL. Invalidate the cache when a user updates their profile. Use our existing Redis client in lib/redis."
The spoken version includes the why (too slow), the how (Redis, 5-minute TTL), the edge case (invalidation on update), and the implementation detail (use existing client). You would never type all of that. But you naturally say it because speaking is effortless.
You Provide Examples
Typed prompt: "Format the output better."
Spoken prompt: "The API response currently returns the raw database object with snake_case field names and includes internal fields like created_at and deleted_at. I want it to return a clean DTO with camelCase field names, only the public fields like id, name, email, and role. Follow the same pattern we use in the product endpoint response."
Examples and references to existing patterns are incredibly valuable for AI tools. They almost never appear in typed prompts because they are "too much effort." They naturally appear in spoken prompts.
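A minimal sketch of the DTO mapping that spoken prompt describes might look like this (the row and DTO shapes are assumptions built from the fields the prompt mentions):

```typescript
// Illustrative row and DTO shapes; the field lists come from the spoken prompt.
type UserRow = {
  id: string;
  name: string;
  email: string;
  role: string;
  created_at: string;        // internal, must not leak
  deleted_at: string | null; // internal, must not leak
};

type UserDto = { id: string; name: string; email: string; role: string };

// Generic snake_case -> camelCase helper for field names.
function snakeToCamel(s: string): string {
  return s.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());
}

// Keep only the public fields from the raw database object.
function toUserDto(row: UserRow): UserDto {
  const { id, name, email, role } = row;
  return { id, name, email, role };
}
```

The prompt's reference to "the same pattern we use in the product endpoint" is what tells the AI where to look for the real convention; the sketch above only shows the shape of the transformation.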
Ready to try voice coding?
Try Murmur free for 7 days with all Pro features. Start dictating in any app.
Download for free

Before/After Prompt Examples
Example 1: Bug Fix
Typed (Level 1):
fix the search
Spoken (Level 3):
The search endpoint in controllers/search.ts is returning duplicate
results when the user's query matches both the title and description
of the same item. The issue is that we are doing two separate queries
and concatenating the results without deduplication. Fix this by either
using a single query with OR conditions or by deduplicating the results
based on item ID before returning them. Keep the relevance sorting.
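The deduplication option in that prompt can be sketched in a few lines (the result shape is illustrative, and the sketch assumes results arrive already sorted by relevance, so keeping the first occurrence preserves the sort):

```typescript
// Illustrative result shape; not from a real codebase.
type SearchResult = { id: number; title: string };

function dedupeById(results: SearchResult[]): SearchResult[] {
  const seen = new Set<number>();
  return results.filter((r) => {
    if (seen.has(r.id)) return false; // item matched both title and description
    seen.add(r.id);
    return true;
  });
}
```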
Example 2: New Feature
Typed (Level 1):
add dark mode
Spoken (Level 3):
Add dark mode support to the application. Create a ThemeContext that
stores the current theme in localStorage so it persists across sessions.
Add a toggle button in the navigation bar. The dark theme should use
the design tokens we already have in styles/tokens.ts. I want the
theme to default to the user's system preference using the
prefers-color-scheme media query, but they should be able to override
it with the toggle.
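The core decision in that prompt, system preference unless the user has toggled, reduces to one small function. This is a hedged sketch; the names are illustrative, and in a real app `stored` would come from localStorage and `systemPrefersDark` from the prefers-color-scheme media query:

```typescript
type Theme = "light" | "dark";

// stored: the user's explicit toggle choice persisted in localStorage
// (null if they never toggled). systemPrefersDark: the result of
// matchMedia("(prefers-color-scheme: dark)").matches.
function resolveTheme(stored: Theme | null, systemPrefersDark: boolean): Theme {
  if (stored !== null) return stored; // explicit override wins
  return systemPrefersDark ? "dark" : "light"; // otherwise follow the system
}
```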
Example 3: Refactoring
Typed (Level 1):
clean up the user service
Spoken (Level 3):
The UserService in services/user.ts has grown to 400 lines and mixes
business logic with database queries and email sending. Refactor it by
extracting the database operations into a UserRepository class, the
email operations into an EmailService that we inject through the
constructor, and keep only the business logic in UserService. Create
interfaces for both the repository and email service so we can mock
them in tests. Follow the same patterns as the ProductService refactor
we did last sprint.
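The structure that prompt asks for might be sketched like this (the interfaces and method names are assumptions for illustration, not from a real codebase):

```typescript
// Illustrative interfaces for the extraction the prompt describes.
interface UserRepository {
  findById(id: string): { id: string; email: string } | null;
}

interface EmailService {
  send(to: string, subject: string): void;
}

class UserService {
  private repo: UserRepository;
  private email: EmailService;

  // Dependencies injected through the constructor, so tests can pass mocks.
  constructor(repo: UserRepository, email: EmailService) {
    this.repo = repo;
    this.email = email;
  }

  // Business logic only: no SQL, no SMTP details.
  sendWelcome(id: string): boolean {
    const user = this.repo.findById(id);
    if (user === null) return false;
    this.email.send(user.email, "Welcome!");
    return true;
  }
}
```

Because both dependencies are interfaces, a test can hand UserService trivial in-memory fakes instead of a database and a mail server.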
Example 4: Code Review
Typed (Level 1):
LGTM, some minor issues
Spoken (Level 3):
The overall approach looks good but I have a few concerns. The database
query on line 45 could cause N+1 issues when there are many users.
Consider batching it. The error handling in the catch block on line 72
swallows the error silently. At minimum log it to our monitoring
service. Also, the new utility function should be in the shared utils
directory since the billing module will need the same logic.
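The batching suggestion from that review can be sketched as follows. `queryManyByIds` is an assumed helper standing in for a single `SELECT ... WHERE id IN (...)` query:

```typescript
// Replace one query per user id (N+1) with a single batched lookup.
async function loadUsers(
  ids: string[],
  queryManyByIds: (ids: string[]) => Promise<{ id: string; name: string }[]>,
): Promise<({ id: string; name: string } | null)[]> {
  const unique = [...new Set(ids)];
  const rows = await queryManyByIds(unique); // one round trip instead of N
  const byId = new Map<string, { id: string; name: string }>();
  for (const r of rows) byId.set(r.id, r);
  return ids.map((id) => byId.get(id) ?? null); // preserve input order
}
```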
The Compound Effect
Better prompts do not just save time on one interaction. They compound:
- Better first output means fewer iterations
- Fewer iterations means less context pollution in AI chat history
- Cleaner chat history means the AI maintains better context for subsequent prompts
- Better subsequent prompts (because you are now in the habit of being detailed) compound the quality advantage
Developers who switch to voice prompts typically report:
- 60-70% fewer AI iterations per task
- 2-3x more features completed per day
- Noticeably higher code quality from AI-generated code
Getting Started
You do not need to change how you think. You just need a tool that lets you speak instead of type.
Murmur works in any application including Cursor, VS Code, Claude Code's terminal, and every other tool where you write prompts. One keyboard shortcut activates it. Speak your prompt. Done.
Try this experiment for one day:
- Download Murmur (free tier: 5 dictations/day)
- Use voice for every AI prompt you write today
- Notice how much more detail your prompts contain
- Notice how much less you iterate
The difference is not subtle. It is the difference between telling an AI "fix this" and giving it the context it needs to actually fix it.
Conclusion
Your AI tools are only as good as the prompts you give them. Most developers under-prompt because typing is slow and painful. Voice typing removes this bottleneck completely, letting you speak Level 3 prompts in the time it takes to type Level 1 prompts.
If you have been disappointed with AI coding tools, try speaking your prompts before blaming the tools. You might discover that the AI was always capable. You were just too tired to type what it needed.