How I 3x'd My Coding Speed Using Voice in Cursor
A developer's real experience using voice typing in Cursor IDE. Learn the workflows that tripled coding productivity.
TL;DR: Typing detailed prompts to Cursor's AI was my biggest bottleneck. Switching to voice input for prompts, comments, and documentation tripled my effective coding speed. Here is exactly how.
The Bottleneck I Did Not See
I thought I was fast. Mechanical keyboard, Vim keybindings, every shortcut memorized. I could navigate a codebase like a fighter pilot. But when Cursor became my daily driver, I noticed something: I was spending more time typing instructions to the AI than actually writing code.
A typical Cursor session looked like this:
- Open a file, identify what needs to change
- Press Ctrl+K or Ctrl+L to open the AI panel
- Type a prompt explaining what I want
- Wait for generation
- Review, accept or refine
- Repeat
Steps 1, 4, and 5 were fast. Step 3 was the bottleneck. And worse, I was unconsciously writing shorter, lazier prompts because typing detailed instructions was slow. Short prompts meant worse output, which meant more iterations.
The Experiment
I installed Murmur on a Monday morning. The setup took about two minutes. Hold Ctrl+Space, speak, release. That is the entire workflow.
I decided to track my productivity for two weeks: one week of pure typing, one week with voice for all AI prompts.
Week 1: Typing Only (Baseline)
- Average prompt length: 15-25 words
- Average iterations per feature: 3-4
- Features completed per day: 4-5
- Typical prompt: "Add error handling to the login function"
Week 2: Voice for All Prompts
- Average prompt length: 50-80 words
- Average iterations per feature: 1-2
- Features completed per day: 12-15
- Typical prompt: "The login function in auth.ts needs proper error handling. It should catch network errors separately from auth errors, show a user-friendly toast message for each case, and log the full error to our monitoring service. Keep the existing retry logic but add a maximum of three retries for network errors only."
The 3x improvement came from two compounding effects: prompts were faster to produce, and they were detailed enough that the AI's output needed fewer iterations.
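To make the difference concrete, here is a sketch of what Cursor might generate from that detailed Week 2 prompt. Everything here is illustrative: `NetworkError`, `AuthError`, and the `showToast`/`logError` callbacks are hypothetical stand-ins for a real toast component and monitoring client.

```typescript
// Hypothetical error classes standing in for whatever the real auth layer throws.
class NetworkError extends Error {}
class AuthError extends Error {}

// Sketch of the login flow described in the prompt: network and auth errors are
// caught separately, each shows its own toast, everything is logged, and only
// network errors are retried (up to three attempts total).
async function login(
  attempt: () => Promise<string>,        // stand-in for the real auth call
  showToast: (msg: string) => void,      // stand-in for the toast component
  logError: (err: unknown) => void,      // stand-in for the monitoring service
  maxAttempts = 3
): Promise<string | null> {
  for (let tries = 0; ; tries++) {
    try {
      return await attempt();
    } catch (err) {
      logError(err); // always log the full error to monitoring
      if (err instanceof NetworkError && tries < maxAttempts - 1) {
        continue; // retry network errors only
      }
      showToast(
        err instanceof AuthError
          ? "Sign-in failed. Check your credentials."
          : "Network problem. Please try again."
      );
      return null;
    }
  }
}
```

Notice how every clause of the spoken prompt maps to a line of code. A 20-word typed prompt leaves all of those decisions to the model's guesswork.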
Cursor Workflows That Benefit Most from Voice
1. Inline Edits (Ctrl+K)
Cursor's inline edit feature is where voice shines brightest. You select some code, press Ctrl+K, and describe what you want changed.
Typing scenario: You select a function. You type: "refactor to use async/await." Cursor makes a guess. It is not quite right. You type another prompt. Two more iterations.
Voice scenario: You select the same function. You press your Murmur shortcut and say: "Refactor this function to use async/await instead of promise chains. Keep the error handling but convert the .catch blocks to try-catch. Also rename the variable 'res' to 'response' for clarity and add a return type annotation."
One iteration. Done. Fifteen seconds of speaking replaced three rounds of typing and reviewing.
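The before/after of that spoken refactor might look like this. The function and its API callback are invented for illustration; the point is that every instruction in the prompt (async/await, try-catch, the `res` → `response` rename, the return type annotation) has an obvious home in the result.

```typescript
// Before: promise chain, generic variable name, no return type annotation.
function getUserLegacy(api: (id: number) => Promise<{ name: string }>, id: number) {
  return api(id)
    .then((res) => res.name)
    .catch((err) => {
      console.error("fetch failed", err);
      return "unknown";
    });
}

// After: async/await, .catch converted to try-catch, `res` renamed to
// `response`, and an explicit return type added.
async function getUser(
  api: (id: number) => Promise<{ name: string }>,
  id: number
): Promise<string> {
  try {
    const response = await api(id);
    return response.name;
  } catch (err) {
    console.error("fetch failed", err);
    return "unknown";
  }
}
```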
2. Chat Panel Conversations (Ctrl+L)
The chat panel is for longer conversations about architecture, debugging, and planning. These conversations benefit enormously from voice because you can explain context the way you would to a colleague.
Instead of typing a terse "why is this failing," you can say:
"I am getting a TypeScript error on line 34 of the UserService class. The error says Property 'email' does not exist on type 'unknown'. I think the issue is that the API response type is not properly narrowed after the fetch call. Can you show me how to add a type guard that validates the response shape before accessing properties?"
That level of context means Cursor gives you the right answer immediately, with a proper type guard implementation, not a generic suggestion.
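A type guard of the kind that prompt asks for might look like this. The `UserResponse` shape is assumed from the error message in the prompt; a real response would have more fields.

```typescript
// Hypothetical shape of the API response discussed in the prompt above.
interface UserResponse {
  email: string;
}

// User-defined type guard: narrows `unknown` to UserResponse, so accessing
// `.email` afterwards compiles without error.
function isUserResponse(value: unknown): value is UserResponse {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as { email?: unknown }).email === "string"
  );
}

// Usage: `data` starts as `unknown`, e.g. the result of `await res.json()`.
const data: unknown = JSON.parse('{"email":"a@example.com"}');
if (isUserResponse(data)) {
  console.log(data.email); // safely narrowed to UserResponse here
}
```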
3. Multi-File Edits with Composer
Cursor's Composer mode lets you make changes across multiple files. The prompts for this need to be detailed because you are describing changes to several files at once.
A voice prompt for Composer might sound like:
"I need to add a new API endpoint for user preferences. Create a route in routes/preferences.ts that handles GET and PUT requests. Add a PreferencesService in services/preferences.ts with methods to fetch and update preferences. Create a Zod schema in schemas/preferences.ts for validation. Update the route index file to include the new routes. Use the same patterns as the existing user routes."
Typing that would take a minute or more. Speaking it takes about 20 seconds. And you naturally include more architectural context when speaking.
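As one slice of what Composer might produce from that prompt, here is a sketch of the `PreferencesService`. The `Preferences` shape is invented, and an in-memory `Map` stands in for the real database layer; the file and class names come straight from the spoken prompt.

```typescript
// Hypothetical sketch of services/preferences.ts from the Composer prompt above.
interface Preferences {
  theme: "light" | "dark";
  emailNotifications: boolean;
}

class PreferencesService {
  // In-memory store standing in for the real database layer.
  private store = new Map<string, Preferences>();

  // Fetch a user's preferences, falling back to defaults if none are saved.
  async getPreferences(userId: string): Promise<Preferences> {
    return this.store.get(userId) ?? { theme: "light", emailNotifications: true };
  }

  // Update and return a user's preferences.
  async updatePreferences(userId: string, prefs: Preferences): Promise<Preferences> {
    this.store.set(userId, prefs);
    return prefs;
  }
}
```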
4. Explaining Bugs and Issues
When you encounter a bug, explaining it verbally is more natural than typing. You tend to include more context about what you expected, what actually happened, and what you have already tried.
Voice prompt: "This component re-renders every time the parent updates even though its props have not changed. I already wrapped it in React.memo but the issue persists. I suspect the problem is that the onClick handler is being recreated on every render because it is an inline arrow function. Can you refactor this to use useCallback and check if there are any other props that might cause unnecessary re-renders?"
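The root cause named in that prompt is referential equality, which can be shown without React at all. React.memo does a shallow comparison of props, and a freshly created arrow function is a brand-new reference every render. The `stable` helper below is a deliberately simplified stand-in for what useCallback does (it ignores dependency arrays), just to show why a cached reference fixes the comparison.

```typescript
// Two separately created arrow functions are never equal:
const inlineA = () => console.log("click");
const inlineB = () => console.log("click");
console.log(inlineA === inlineB); // false — React.memo's shallow compare sees a "new" prop

// Simplified stand-in for useCallback: hand back the cached function
// instead of the freshly created one, so the reference stays stable.
const cache: { fn?: () => void } = {};
function stable(fn: () => void): () => void {
  if (!cache.fn) cache.fn = fn;
  return cache.fn;
}

const first = stable(() => console.log("click"));
const second = stable(() => console.log("click"));
console.log(first === second); // true — shallow compare passes, memoized child skips the re-render
```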
Tips for Speaking Effective Prompts to Cursor
Be Specific About File Paths and Names
Instead of saying "the auth file," say "the auth middleware in middleware/auth.ts." Cursor has context about your project, and specific file references help it locate the right code.
Describe the Pattern, Not Just the Outcome
Instead of: "Make this work."
Say: "Follow the same error handling pattern used in the UserController. Wrap the database call in a try-catch, map known error codes to HTTP status codes using the errorMapper utility, and let unknown errors propagate to the global error handler."
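Spelled out in code, the pattern that prompt describes might look like the sketch below. The `errorMapper` utility, the error codes, and `findUser` are all hypothetical names taken from or invented for the prompt.

```typescript
// Hypothetical mapping of known error codes to HTTP status codes,
// standing in for the errorMapper utility named in the prompt.
const errorMapper: Record<string, number> = {
  NOT_FOUND: 404,
  DUPLICATE_KEY: 409,
};

class HttpError extends Error {
  constructor(public status: number, message: string) {
    super(message);
  }
}

// The database call is wrapped in try-catch; known error codes become
// HttpErrors, and unknown errors propagate to the global error handler.
async function getUserHandler(
  findUser: (id: string) => Promise<{ id: string }>, // stand-in for the DB call
  id: string
): Promise<{ id: string }> {
  try {
    return await findUser(id);
  } catch (err) {
    const code = (err as { code?: string }).code;
    if (code && errorMapper[code]) {
      throw new HttpError(errorMapper[code], `User lookup failed: ${code}`);
    }
    throw err; // unknown errors propagate to the global error handler
  }
}
```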
Include Constraints
When speaking, naturally mention what you do NOT want:
"Add pagination to this endpoint. Use cursor-based pagination, not offset. Do not change the existing response format, just add a 'nextCursor' field. Keep backward compatibility for clients that do not send a cursor parameter."
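Here is one way that constrained prompt could land, sketched with an in-memory array standing in for the database query. The `Item` shape is invented; the constraints from the prompt are preserved: cursor-based (the cursor is the last returned id), the response format only gains a `nextCursor` field, and clients that send no cursor start from the beginning.

```typescript
interface Item {
  id: number;
  name: string;
}

// Cursor-based pagination: the cursor is the id of the last item the client
// saw. Clients without a cursor start at the beginning (backward compatible).
function listItems(
  items: Item[],   // stand-in for the ordered database query
  cursor?: number,
  limit = 2
): { items: Item[]; nextCursor: number | null } {
  const start = cursor ? items.findIndex((i) => i.id > cursor) : 0;
  const page = start === -1 ? [] : items.slice(start, start + limit);
  const last = page[page.length - 1];
  const hasMore = last ? items.some((i) => i.id > last.id) : false;
  return { items: page, nextCursor: hasMore && last ? last.id : null };
}
```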
Think Out Loud
One of the biggest advantages of voice is that you can think out loud. You do not have to have a perfectly formed prompt before you start speaking.
"Okay so this function is supposed to validate the input but it is not handling the edge case where the array is empty. Actually, it is also not handling null. Let me think... I think the cleanest approach is to add a guard clause at the top that checks for null or empty array and returns an empty result early. Then the rest of the function can assume it has valid data."
Cursor handles this stream-of-consciousness style well because it extracts the intent from natural speech.
Ready to try voice coding?
Try Murmur free for 7 days with all Pro features. Start dictating in any app.
Download for free
The Hybrid Workflow
After a month, I settled into a rhythm:
- Voice: All AI prompts, commit messages, PR descriptions, documentation, code review comments
- Keyboard: Navigation, small edits, shortcuts, accepting/rejecting AI suggestions
- Mouse: Selecting code blocks to refactor, clicking through Cursor's suggestions
Voice handles maybe 40% of my input by volume but saves 60% of my time because it targets the highest-friction activities.
Numbers After One Month
Here are my rough before/after stats:
| Metric | Typing Only | With Voice |
|---|---|---|
| Prompts per hour | 15 | 35 |
| Average prompt length | 20 words | 65 words |
| AI iterations per task | 3.2 | 1.4 |
| Features completed/day | 5 | 14 |
| Time on documentation | 45 min/day | 15 min/day |
The speed increase is not just about talking faster than typing. It is about the compound effect of better prompts leading to better AI output leading to fewer iterations.
Getting Started
If you want to replicate this workflow:
- Download Murmur and set up the keyboard shortcut
- Start by using voice only for Cursor's chat panel (Ctrl+L)
- After a few days, start using it for inline edits (Ctrl+K) too
- Eventually use it for commit messages, docs, and everything else
The first day feels strange. By day three it feels natural. By week two you will wonder why you ever typed prompts.
Conclusion
The 3x speed claim is not about typing speed. I already typed at 100+ WPM. The speed gain comes from removing the friction between your thoughts and Cursor's AI. When the cost of a detailed prompt drops from 60 seconds of typing to 15 seconds of speaking, you naturally write better prompts. Better prompts produce better code. Better code needs fewer iterations.
Voice input is not replacing my keyboard. It is filling the gap that keyboards are bad at: quickly expressing complex, nuanced instructions to AI tools. If you use Cursor every day, adding voice input is probably the highest-ROI change you can make to your workflow.