Write Code Documentation with Voice: Comments, READMEs, and Docs That Don't Suck
Learn how voice dictation makes writing code docs faster. Comments, READMEs, API docs, and docstrings in minutes instead of hours.
TL;DR: Developers avoid writing documentation because it forces a painful switch from code to prose. Voice dictation removes that friction. You read the code, then speak the explanation. The result is better docs written in a fraction of the time. This guide covers workflows for inline comments, docstrings, READMEs, and API documentation using voice coding tools like Murmur.
Why Developers Hate Writing Documentation
Let's be honest. Most developers would rather refactor a legacy codebase than write a README.
It is not laziness. It is friction. Writing documentation requires a fundamentally different mode of thinking than writing code. Code is precise, structured, and terse. Documentation is explanatory, conversational, and verbose. Switching between the two is cognitively expensive.
Here is what typically happens:
- You finish building a feature
- You know you should document it
- You open the README or a doc file
- You stare at the blank page
- You type a few reluctant sentences
- You decide "the code is self-documenting" and move on
The result is codebases with outdated READMEs, missing docstrings, and inline comments that say // TODO: add documentation here from three years ago.
The core problem is not motivation. It is the input method. Typing natural language when your brain is in code mode is slow and unnatural. You can think the explanation faster than you can type it. And that gap between thought speed and typing speed is where documentation dies.
Voice Removes the Bottleneck
When you explain code to a colleague, you do not struggle for words. You look at the function, understand what it does, and say it out loud. The explanation flows naturally because speech is the native format for explanations.
Voice dictation brings that same flow to writing documentation. Instead of typing prose character by character, you read the code, press a shortcut, and speak the explanation as if a junior developer just asked "what does this do?"
The speed difference is significant. Most developers type prose at 40-60 words per minute. Speaking comfortably lands around 130-150 words per minute. That is roughly 3x faster for the exact kind of writing that documentation requires.
But speed is only half the story. Voice-dictated documentation tends to be more thorough and more natural. When you speak an explanation, you include context and caveats that you would skip when typing because "it's not worth the effort." Those extra details are exactly what makes documentation useful.
Workflow 1: Inline Comments and Code Annotations
Inline comments are the simplest documentation to dictate. The workflow is straightforward:
- Read the code block you want to annotate
- Place your cursor above the relevant line
- Press your voice shortcut (Ctrl+Space in Murmur)
- Speak the explanation
For example, you are looking at this function:
function reconcileInventory(orders, stockLevels) {
  const adjustments = orders.reduce((acc, order) => {
    const current = stockLevels.get(order.sku) ?? 0;
    const delta = order.type === 'return' ? order.qty : -order.qty;
    acc.set(order.sku, (acc.get(order.sku) ?? current) + delta);
    return acc;
  }, new Map());
  return adjustments;
}
Instead of typing a comment, you speak:
"This function takes an array of orders and a map of current stock levels, then calculates the net inventory adjustment for each SKU. Sales orders decrease stock and returns increase it. The result is a map of SKU to adjusted stock level."
That took about eight seconds to say. Typing it would have taken 30-40 seconds, and you probably would have written something shorter and less helpful.
Tips for Better Inline Comments by Voice
Start with "why," not "what." The code already shows what happens. Your voice comment should explain why. Say "We check for null here because the legacy API sometimes returns null instead of an empty array" rather than "Check if value is null."
Speak in complete sentences. Modern AI transcription works best with full thoughts. Do not pause between every few words. Let the thought flow naturally and the punctuation will follow.
Do not worry about formatting on the first pass. Speak the explanation, then quickly edit the result with your keyboard. Voice for the first draft, keyboard for polish. This hybrid approach is covered in our complete voice coding guide.
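Applied to the "why, not what" tip above, a dictated comment might land like this after a quick keyboard polish. This is a Python sketch; the `LegacyClient` stand-in and its None-returning behavior are hypothetical, modeled on the legacy-API example above:

```python
class LegacyClient:
    """Stand-in for a real API client; returns None when there are no tags."""
    def get_tags(self):
        return None

def fetch_tags(client):
    """Return the account's tag list, never None."""
    tags = client.get_tags()
    # Why, not what: the legacy API sometimes returns None instead of
    # an empty list when an account has no tags, so we normalize here
    # rather than making every caller handle both cases.
    return [] if tags is None else tags

print(fetch_tags(LegacyClient()))  # []
```

The comment explains the reason for the null check, which is exactly the context a typed comment tends to omit.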
Workflow 2: Docstrings and Function Documentation
Docstrings are where voice dictation really shines. A good docstring explains what a function does, what its parameters expect, what it returns, and what can go wrong. That is a lot of typing. But it is only about 15 seconds of speaking.
Here is the workflow in VS Code:
- Place your cursor inside the docstring opening (after """ in Python, /** in JavaScript/TypeScript)
- Press your voice shortcut
- Describe the function as if explaining it to someone who has never seen the code
Python example:
You see this function:
def sync_user_preferences(user_id: str, source: str = "api") -> dict:
    ...
You speak:
"Synchronizes user preferences from the specified source to the local cache. Takes a user ID string and an optional source parameter that defaults to API. Can also accept webhook or migration as source values. Returns a dictionary containing the merged preferences and a timestamp of the last sync. Raises a UserNotFoundError if the user ID does not exist, and a SyncConflictError if the remote and local preferences have diverging timestamps."
That produces a comprehensive docstring in 12 seconds. The equivalent typing time would be over a minute, and most developers would have stopped at "Syncs user preferences."
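Cleaned up with a quick keyboard pass, the dictated description above might land as a docstring like this. The exception classes are assumed to exist in your project, and the body is a stub:

```python
class UserNotFoundError(Exception):
    """Raised when no user exists for the given ID."""

class SyncConflictError(Exception):
    """Raised when remote and local preferences have diverging timestamps."""

def sync_user_preferences(user_id: str, source: str = "api") -> dict:
    """Synchronize user preferences from the given source to the local cache.

    Args:
        user_id: The user's ID.
        source: Where to pull preferences from. One of "api" (default),
            "webhook", or "migration".

    Returns:
        A dict containing the merged preferences and a timestamp of the
        last sync.

    Raises:
        UserNotFoundError: If the user ID does not exist.
        SyncConflictError: If the remote and local preferences have
            diverging timestamps.
    """
    ...  # implementation elided
```

Notice that the structure (Args, Returns, Raises) is keyboard work, but every sentence inside it came from the spoken explanation.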
TypeScript/JSDoc example:
For a TypeScript function, the same approach works. Place your cursor inside the JSDoc block and speak:
"Validates and normalizes an incoming webhook payload from Stripe. Verifies the webhook signature against the configured secret, parses the event type, and returns a normalized event object. Throws an InvalidSignatureError if the signature does not match. The timeout parameter controls how long to wait for signature verification and defaults to 5000 milliseconds."
Batch Documenting Existing Code
One of the best uses of voice dictation is documentation sprints on existing codebases. The workflow:
- Open a file with undocumented functions
- Read the first function
- Speak the docstring
- Move to the next function
- Repeat
You can document an entire module in 15-20 minutes that would have taken an hour or more of typing. The key insight is that you already understand the code. The bottleneck was always the typing, not the thinking.
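To pick targets for a sprint like this, a small script can list the functions in a module that still lack docstrings. Here is a sketch using Python's standard ast module; the sample source is illustrative:

```python
import ast

def undocumented_functions(source: str) -> list[str]:
    """Return the names of functions in `source` that have no docstring."""
    tree = ast.parse(source)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # ast.get_docstring returns None when no docstring is present
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing

sample = '''
def documented():
    """Has a docstring."""

def bare():
    pass
'''
print(undocumented_functions(sample))  # ['bare']
```

Run it over a module, work down the list by voice, and rerun until the list is empty.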
Workflow 3: README Files
README files are the front door of every project. They are also the most neglected documentation artifact in most codebases. Voice dictation changes the economics of writing them.
The Voice-First README Process
Instead of staring at a blank file, open your README and speak each section:
Project description:
"This is a middleware library for Express.js that handles rate limiting with Redis-backed storage. It supports both fixed window and sliding window algorithms, configurable per-route limits, and automatic IP-based and API key-based client identification. Designed for production use with built-in support for clustering and horizontal scaling."
Installation section:
"Install using npm install at rate limiter middleware. Requires Node 18 or higher and a running Redis instance. For the default configuration, no additional setup is needed. For custom configurations, see the configuration section below."
Usage section:
Speak through a usage example as if you were walking a colleague through it:
"Import the rate limiter from the package. Create a new instance passing your Redis connection URL. Then use it as Express middleware with app.use. You can pass an options object to configure the window size, maximum requests, and the algorithm. Here is a basic example..."
Then add the code block with your keyboard. This hybrid approach (voice for prose, keyboard for code) is the most efficient way to write technical documentation.
README Tips for Voice Dictation
Dictate the prose, type the code. Code blocks need precision that keyboard input handles better. But all the explanatory text around those code blocks is perfect for voice.
Speak to a persona. Imagine a developer who just found your project on GitHub. What do they need to know first? Speak to that person.
Use section headers as prompts. Create your header structure first (## Installation, ## Usage, ## Configuration), then fill in each section by voice. The headers give you a framework so you do not have to figure out what to say next.
Ready to try voice coding?
Try Murmur free for 7 days with all Pro features. Start dictating in any app.
Download for free
Workflow 4: API Documentation
API documentation is repetitive and verbose, which makes it ideal for voice dictation. Each endpoint needs a description, parameters, request/response examples, and error codes. That is a lot of writing, but the pattern is predictable.
Dictating Endpoint Descriptions
For each API endpoint, speak through these elements:
"POST slash API slash users. Creates a new user account. Requires a JSON body with email, password, and an optional display name. The email must be unique across all accounts. The password must be at least 8 characters with one uppercase letter and one number. Returns a 201 with the created user object including the generated user ID and a timestamp. Returns 409 if the email already exists. Returns 422 if the request body fails validation."
That describes the endpoint more thoroughly than most developers would type, and it took about 15 seconds.
Dictating Schema Descriptions
Data model documentation benefits from the same approach:
"The user object contains an ID which is a UUID v4 generated on creation, an email which is unique and case-insensitive, a display name which defaults to the portion of the email before the at sign, a created at timestamp in ISO 8601 format, and a status field that can be active, suspended, or deleted."
Tips for Dictating Technical Content
Handling Abbreviations and Technical Terms
AI-powered transcription tools like Murmur handle most technical vocabulary correctly because they use context to understand intent. However, a few tips help with edge cases:
Say acronyms naturally. "API," "JSON," "REST," "JWT" are all recognized when spoken as acronyms. You do not need to spell them out.
Use context for ambiguous terms. If you say "route handler for the users endpoint," the transcription understands "route" in a programming context. Speaking in full sentences provides the context that makes transcription accurate.
Spell out unusual names when needed. For proprietary terms or unusual library names, you may need to spell them once and then correct. After the first occurrence, the AI transcription engine often picks up the pattern.
Punctuation and Formatting
Modern AI transcription handles punctuation well when you speak naturally. A few specifics:
- Periods and commas are inserted automatically based on your speech patterns
- Code terms like function names can be spoken naturally and then formatted afterward with backticks using your keyboard
- Lists work well when you signal them verbally: "First... Second... Third..." The transcription usually picks up the structure
The Hybrid Approach
The most productive documentation workflow combines voice and keyboard:
- Voice for all explanatory prose, descriptions, and natural language content
- Keyboard for code blocks, formatting (headers, bold, links), and precise edits
- Voice again for reviewing your draft. Read it back and dictate corrections or additions
This approach leverages the strengths of each input method. You are not fighting the keyboard for long prose, and you are not fighting voice for precise formatting.
Where Murmur Fits In
Murmur is designed for exactly this workflow. It works inside any application on your PC, which means you can dictate documentation in:
- VS Code directly in your source files
- Terminal editors like Vim or Nano
- Browser-based tools like GitHub's web editor, Notion, or Confluence
- Claude Code for prompting AI to generate or improve documentation
One keyboard shortcut activates dictation. Speak your documentation. The text appears where your cursor is. No app switching, no copy-pasting, no mode changes.
With a free tier of 5 dictations per day, or €29.97 for a lifetime Pro license, the cost of better documentation is negligible. Compare that to the cost of onboarding a new developer who spends three days figuring out an undocumented codebase.
The Documentation Habit
The biggest barrier to good documentation is not tools. It is habit. Voice dictation lowers the effort enough to make documentation a realistic part of your daily workflow rather than a quarterly guilt trip.
Here is a simple habit to build:
- Every time you finish a function, spend 10 seconds speaking a docstring
- Every time you close a PR, spend 30 seconds speaking a summary comment
- Every Friday, spend 10 minutes voice-dictating updates to your project README
Ten seconds per function. That is all it takes when you can speak instead of type. And the compound effect of those 10-second investments is a codebase that new developers can actually understand.
Getting Started
- Download Murmur (free, 2-minute setup)
- Open a file with undocumented code
- Place your cursor above a function
- Press Ctrl+Space and explain what the function does
- Repeat for the next function
Start with one file. Document every function in it by voice. Time yourself. You will be surprised how fast it goes when the bottleneck is thinking, not typing.
Your codebase deserves documentation that does not suck. Your voice can deliver it.
Further Reading
Related Articles
voice coding
Agentic Coding by Voice: The Future of Dev Productivity
Why voice is the natural input for AI coding agents like Cursor and Claude Code. Explore the future of development.
voice coding
Voice Coding with Claude Code: Speak Your Prompts
Use voice typing with Claude Code to write better prompts faster. Step-by-step setup and real examples inside.
voice coding
Voice Coding in 2026: The Complete Guide
Everything you need to know about voice coding in 2026. Tools, setup, tips, and workflows to code faster with your voice.