Setting Up Voice Coding in VS Code: Step-by-Step
Complete tutorial for setting up voice coding in VS Code. Install, configure, and start dictating code in minutes.
TL;DR: Voice coding in VS Code takes under 5 minutes to set up. Install a voice typing tool, configure one shortcut, and start dictating. This guide walks you through everything from installation to advanced workflows.
Why Voice Coding in VS Code?
VS Code is where developers spend most of their time. It is also where you do the most typing that is not actual code: AI prompts in Copilot Chat, search queries, commit messages, comments, documentation, and terminal commands in the integrated terminal.
Voice coding does not replace your keyboard for writing code. It supercharges everything else. The result is less time typing boilerplate and more time thinking about architecture and logic.
Step 1: Install a Voice Typing Tool
For voice coding in VS Code on Windows, we recommend Murmur. Here is why:
- Works inside any application, including VS Code, its terminal, and its extensions
- AI-powered transcription with excellent accuracy for technical vocabulary
- Single keyboard shortcut activation (no mode switching)
- Free tier available (5 dictations/day) with affordable Pro
Installation
- Go to murmur-app.com/download
- Download the Windows installer
- Run the installer (takes about 30 seconds)
- Launch Murmur from the Start menu or system tray
On first launch, Murmur will ask for microphone permissions. Grant them.
Step 2: Configure Your Shortcut
Murmur's default shortcut is Ctrl+Space. This works well in most contexts, but VS Code uses Ctrl+Space for IntelliSense autocomplete.
You have two options:
Option A: Change the Murmur Shortcut
Open Murmur's settings and change the activation shortcut to something that does not conflict. Good alternatives:
- Ctrl+Shift+Space — easy to remember, rarely used
- Alt+Space — one-handed activation
- F9 or another function key — dedicated voice key
Option B: Change VS Code's IntelliSense Shortcut
If you prefer keeping Ctrl+Space for voice (since you will use it more often), remap IntelliSense in VS Code:
- Open VS Code's Keyboard Shortcuts editor (Ctrl+K Ctrl+S)
- Search for "trigger suggest"
- Click the pencil icon next to the Trigger Suggest command
- Press Ctrl+I (or another key) and hit Enter
Either approach works. Pick the one that feels more natural to you.
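If you go with Option B, you can also make the remap directly in keybindings.json (File > Preferences > Keyboard Shortcuts, then the "Open Keyboard Shortcuts (JSON)" icon). A minimal sketch; the second entry with the leading minus removes the default Ctrl+Space binding:

```json
[
  {
    "key": "ctrl+i",
    "command": "editor.action.triggerSuggest"
  },
  {
    "key": "ctrl+space",
    "command": "-editor.action.triggerSuggest"
  }
]
```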
Step 3: Test Your Setup
- Open any file in VS Code
- Place your cursor where you want text to appear (a comment line is a good test)
- Press your Murmur shortcut
- Say "This is a test of voice coding in VS Code"
- The text should appear at your cursor position
If it works, you are set up. If not, check:
- Murmur is running in the system tray
- Your microphone is not muted
- The correct shortcut is configured
- No other app is capturing the shortcut
Ready to try voice coding?
Try Murmur free for 7 days with all Pro features. Start dictating in any app.
Download for free
VS Code-Specific Voice Coding Tips
Use Voice for Copilot Chat and AI Extensions
The highest-value use of voice in VS Code is for AI interactions. When you open Copilot Chat (Ctrl+Shift+I) or any AI extension's chat panel:
- Click in the chat input field
- Press your voice shortcut
- Speak your prompt in detail
"I need to refactor the authentication module to support both JWT and API key authentication. The current code only handles JWT. Add a strategy pattern where each auth method is a separate strategy. Keep backward compatibility with existing JWT tokens."
This 15-second spoken prompt produces dramatically better AI output than the 10-word typed version most developers would write.
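To see why the detail matters, here is a minimal sketch of the strategy pattern that prompt describes. The names (JwtStrategy, ApiKeyStrategy, authenticate) are illustrative, not from any real codebase:

```javascript
// Each auth method is a separate strategy with the same interface.
class JwtStrategy {
  supports(req) {
    return (req.headers.authorization || "").startsWith("Bearer ");
  }
  authenticate(req) {
    const token = req.headers.authorization.slice("Bearer ".length);
    // Real code would verify the token signature; this sketch only checks presence.
    return token.length > 0 ? { method: "jwt", token } : null;
  }
}

class ApiKeyStrategy {
  supports(req) {
    return Boolean(req.headers["x-api-key"]);
  }
  authenticate(req) {
    const key = req.headers["x-api-key"];
    return key.length > 0 ? { method: "api-key", key } : null;
  }
}

// JWT is checked first, so existing JWT callers keep working unchanged.
const strategies = [new JwtStrategy(), new ApiKeyStrategy()];

function authenticate(req) {
  const strategy = strategies.find((s) => s.supports(req));
  return strategy ? strategy.authenticate(req) : null;
}
```

A vague ten-word prompt leaves the AI to guess at all of these decisions; the detailed spoken prompt pins them down.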
Dictate Comments and Documentation
When writing JSDoc, docstrings, or inline comments, voice is faster:
Place your cursor above a function, type /** to start a JSDoc block, then speak:
"This function validates the user registration form data. It checks email format, password strength with minimum 8 characters including a number and special character, and username availability against the database. Returns an object with isValid boolean and an array of error messages."
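The dictated sentence lands as a ready-made JSDoc block. A sketch of the result, with a hypothetical validateRegistration implementation added for illustration:

```javascript
/**
 * Validates the user registration form data. Checks email format,
 * password strength (minimum 8 characters including a number and a
 * special character), and username availability against the database.
 * Returns an object with an isValid boolean and an array of error messages.
 */
function validateRegistration(form, takenUsernames = []) {
  const errors = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(form.email)) {
    errors.push("Invalid email format");
  }
  const strongPassword =
    form.password.length >= 8 &&
    /\d/.test(form.password) &&
    /[^A-Za-z0-9]/.test(form.password);
  if (!strongPassword) {
    errors.push("Password too weak");
  }
  // Stand-in for a database lookup.
  if (takenUsernames.includes(form.username)) {
    errors.push("Username taken");
  }
  return { isValid: errors.length === 0, errors };
}
```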
Speak Your Commit Messages
In VS Code's Source Control panel (Ctrl+Shift+G):
- Click in the commit message input
- Press your voice shortcut
- Describe your changes naturally
"Refactored the payment processing module to use the strategy pattern. Added support for Stripe and PayPal payment methods. Updated tests to cover both payment strategies. Fixed a bug where the webhook handler was not validating the signature for PayPal events."
That is a great commit message. It took 10 seconds to dictate. Most developers would have typed "payment refactor" and moved on.
Terminal Commands
VS Code's integrated terminal works perfectly with voice typing:
- Open the terminal (Ctrl+`)
- Press your voice shortcut
- Speak your command: "git log dash dash oneline dash 10"
- Murmur transcribes it as
git log --oneline -10
Murmur's AI-powered transcription helps here because it accurately handles technical terms and command syntax.
For more terminal workflows, read Voice Typing in the Terminal.
Search and Navigation
VS Code's search (Ctrl+Shift+F) and Command Palette (Ctrl+Shift+P) both accept text input. Voice works for both:
- Search: Press Ctrl+Shift+F, then use voice to describe your search: "useState cleanup function" or "error handling middleware"
- Command Palette: Press Ctrl+Shift+P and speak: "toggle word wrap" or "format document"
Extensions That Complement Voice Coding
These VS Code extensions work well alongside voice typing:
Error Lens
Shows inline error messages directly in your code. When you can see the error without hovering, you can immediately speak a fix description to your AI tool without context-switching.
GitLens
Displays git blame information inline. See who wrote a line and when, then voice your questions to AI about the code's history and intent.
Todo Tree
Highlights TODO, FIXME, and HACK comments. Voice makes adding these annotations painless:
"TODO: This needs to handle the case where the API returns a 429 rate limit error. Add exponential backoff retry logic."
Project Manager
If you switch between multiple projects, voice typing for search and navigation becomes even more valuable when you can also voice-switch between projects.
Common Issues and Fixes
Voice Text Appears in the Wrong Window
Murmur inserts text where your cursor is. If text appears in the wrong window:
- Make sure VS Code is focused before pressing the shortcut
- Click in the exact input field where you want text to appear
- Some VS Code panels steal focus. Click back in your target field if needed.
Transcription Includes Filler Words
If you say "um" or "uh," Murmur may transcribe them. Tips to reduce this:
- Pause silently instead of using filler words
- After a few days of practice, filler words naturally decrease
- Murmur's AI processing often filters these out, but occasional ones slip through
Technical Terms Are Misspelled
AI-powered tools like Murmur handle most technical terms correctly. But for unusual terms:
- Say the term once and correct it if needed; the transcription learns from context.
- Proprietary terms (your company's internal naming) may need correcting the first few times before they transcribe reliably.
IntelliSense Conflicts
If both Murmur and IntelliSense activate on the same shortcut, follow the shortcut reconfiguration steps in Step 2 above. Only one tool should use each shortcut.
A Day with Voice Coding in VS Code
Here is what a typical workday looks like after adopting voice coding:
9:00 AM: Open VS Code. Voice-dictate a plan as comments in the relevant file:
"Today I need to implement the user notification preferences. Start with the database schema, then the API endpoint, then the frontend settings component."
9:30 AM: Working on code. Use keyboard for actual code. Voice for AI prompts:
"Create a Prisma migration that adds a notifications_preferences table with columns for user_id, email_enabled, push_enabled, digest_frequency, and updated_at."
11:00 AM: Code review. Voice-dictate review comments:
"The error handling here silently swallows the error. We should at least log it to our monitoring service. Also, the retry count should be configurable through an environment variable, not hardcoded to 3."
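The fix that review comment asks for is small. A minimal sketch, with hypothetical names (RETRY_COUNT, reportError, withRetries) standing in for the real codebase:

```javascript
// Retry count comes from an environment variable, defaulting to 3.
const MAX_RETRIES = Number(process.env.RETRY_COUNT ?? 3);

// Stand-in for a real monitoring-service client.
function reportError(err) {
  console.error("[monitoring]", err.message);
}

async function withRetries(task) {
  let lastError;
  for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
    try {
      return await task();
    } catch (err) {
      lastError = err;
      reportError(err); // log instead of silently swallowing
    }
  }
  throw lastError;
}
```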
12:00 PM: Voice-dictate standup notes:
"Completed the notification preferences API. Started the frontend component. Blocked on the push notification service integration, waiting for DevOps to set up the message queue."
3:00 PM: Documentation sprint. Voice-dictate module docs:
"The notification preferences module allows users to configure how they receive notifications. It supports email, push, and digest notifications with configurable frequency..."
5:00 PM: Voice-dictate a commit message summarizing the day's work.
Conclusion
Setting up voice coding in VS Code takes five minutes and immediately improves your workflow for AI prompts, documentation, commits, and code reviews. The key is to start with one use case, build the habit, and then expand.
Download Murmur, configure your shortcut, and try dictating your next AI prompt. The difference is immediate. For the full picture on voice coding tools and strategies, check out our complete voice coding guide.
Related Articles
Write Code Documentation with Voice: Comments, READMEs, and Docs That Don't Suck
Learn how voice dictation makes writing code docs faster. Comments, READMEs, API docs, and docstrings in minutes instead of hours.
Voice Coding with Claude Code: Speak Your Prompts
Use voice typing with Claude Code to write better prompts faster. Step-by-step setup and real examples inside.
Talon Voice vs Murmur: Which Is Right for You?
An honest comparison of Talon Voice and Murmur for developers. Features, setup, pricing, and who should use which.