Hold a key. Speak. Release.

The fastest offline voice-to-text app for macOS. Whisper runs on your Apple Neural Engine. Your voice never leaves your Mac.

Whisper · ~1.0s
100% On-device
0 Telemetry
~1s Latency
MIT License

How it works

Three motions. Zero friction.

01

Press

Hold Right Option (or any key you pick) anywhere on your Mac. The HUD slides in with a live waveform.

02

Speak

Talk normally. Audio is captured at 16 kHz mono. Nothing leaves your machine.

03

Release

Whisper transcribes locally on the Neural Engine. The text is auto-pasted exactly where your cursor was.
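The capture side of the three steps above can be sketched in Swift. This is an illustrative pipeline, not VoxPrompt's actual code: it taps the default input device and converts the hardware format to the 16 kHz mono PCM that Whisper expects. The variable names and buffer sizes are assumptions.

```swift
import AVFoundation

// Hypothetical sketch: tap the microphone and resample to 16 kHz mono,
// the format Whisper consumes. Not VoxPrompt's actual implementation.
let engine = AVAudioEngine()
let input = engine.inputNode
let hwFormat = input.outputFormat(forBus: 0)
let whisperFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                  sampleRate: 16_000,
                                  channels: 1,
                                  interleaved: false)!
let converter = AVAudioConverter(from: hwFormat, to: whisperFormat)!

input.installTap(onBus: 0, bufferSize: 1024, format: hwFormat) { buffer, _ in
    let out = AVAudioPCMBuffer(pcmFormat: whisperFormat,
                               frameCapacity: 16_000)!
    var err: NSError?
    converter.convert(to: out, error: &err) { _, status in
        status.pointee = .haveData
        return buffer
    }
    // Append `out` to the temporary WAV file here; it is deleted
    // as soon as transcription finishes.
}
try engine.start()
```

On release, the accumulated WAV is handed to WhisperKit and the file is removed once the text comes back.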

Features

Built like a native macOS app should be.

100% on-device

Whisper runs on your Neural Engine. Audio never touches a server.

Press, speak, release

One hotkey. No menus, no commands. The app stays out of your way.

Auto-paste cascade

CGEvent ⌘V → AppleScript fallback. Reliable in TextEdit, Terminal, Chrome, Cursor, Slack.

Glossary fuzzy-match

Add proper nouns and brands. Levenshtein corrects Whisper's near-misses on the way out.
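The glossary pass described above amounts to an edit-distance comparison. A minimal sketch, assuming a plain word-level pass with a hypothetical `maxDistance` threshold (the function names and threshold are illustrative, not VoxPrompt's API):

```swift
// Classic Levenshtein distance with a rolling single-row table.
func levenshtein(_ a: String, _ b: String) -> Int {
    let a = Array(a), b = Array(b)
    if a.isEmpty { return b.count }
    if b.isEmpty { return a.count }
    var prev = Array(0...b.count)
    for i in 1...a.count {
        var cur = [i] + Array(repeating: 0, count: b.count)
        for j in 1...b.count {
            let cost = a[i - 1] == b[j - 1] ? 0 : 1
            cur[j] = min(prev[j] + 1,        // deletion
                         cur[j - 1] + 1,     // insertion
                         prev[j - 1] + cost) // substitution
        }
        prev = cur
    }
    return prev[b.count]
}

// Swap a transcribed word for its closest glossary entry
// when the entry is within `maxDistance` edits.
func correct(_ word: String, glossary: [String], maxDistance: Int = 2) -> String {
    guard let hit = glossary.min(by: { levenshtein(word, $0) < levenshtein(word, $1) }),
          levenshtein(word, hit) <= maxDistance else { return word }
    return hit
}

correct("WisperKit", glossary: ["WhisperKit", "VoxPrompt"])  // → "WhisperKit"
```

A near-miss like "WisperKit" is one edit away from the glossary entry, so it is corrected; unrelated words pass through untouched.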

Multilingual

All 99 Whisper languages. French and English auto-detect, or pin one in Preferences.

Open source · MIT

Read every line. Fork it. Self-host. No paywall, no upsell, no freemium hooks.

Privacy

Your voice never
leaves your Mac.

Not as a marketing claim. As a structural fact about how the app is built.

  • Audio is captured locally and deleted right after transcription
  • Whisper runs on your Apple Neural Engine, no API call
  • No analytics, no telemetry, no crash reporter
  • No third-party SDK has access to your voice or text
  • Exactly one network call, ever: downloading the Whisper model on first run

FAQ

Quick answers.

Is VoxPrompt really free?

Yes. MIT license, no paywall, no upsell, no freemium. Read the source if you want to be sure.

How does it compare to Superwhisper or MacWhisper?

Same on-device approach, same Whisper backend. VoxPrompt is fully open source and free. Use whichever fits you.

Does it work offline?

After the initial Whisper model download (~632 MB on first run), yes, completely offline.

Which Macs does it support?

Apple Silicon only (M1, M2, M3, M4) on macOS 14 or later. WhisperKit needs the Neural Engine.

Which apps does the auto-paste work with?

Cocoa native (TextEdit, Notes), Electron (Slack, Cursor, VS Code), browsers, terminals (Terminal, iTerm), most text fields. The cascade falls back to AppleScript on apps that reject synthetic CGEvents.

Where is the audio stored?

In a temporary WAV file, just long enough to be transcribed. Then it is deleted. Nothing is logged unless you explicitly enable VOXPROMPT_DEBUG.

Can I change the hotkey?

Yes, in Preferences. Right Option, Left Option, Right Control, or any of F13 to F16.

Why is the auto-paste cascade in v0.1.1 such a big deal?

Before v0.1.1, the simulated ⌘V did not always reach the focused app on macOS 14 and later, including macOS 26. The cascade captures the target app at hotkey press, re-activates it, posts the event directly to its PID, and falls back to AppleScript via System Events. It is now reliable.
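The cascade can be sketched in Swift. This is a hedged illustration of the mechanism described above, not VoxPrompt's exact code; `targetPID` is assumed to be captured at hotkey press, and `0x09` is the `kVK_ANSI_V` key code.

```swift
import AppKit

// Step 1–2 of the cascade: re-activate the app that was focused when the
// hotkey went down, then post a synthetic ⌘V straight to its PID.
// Illustrative sketch only; requires Accessibility permission.
func pasteInto(pid targetPID: pid_t) {
    NSRunningApplication(processIdentifier: targetPID)?
        .activate(options: [.activateIgnoringOtherApps])

    let src = CGEventSource(stateID: .hidSystemState)
    let vDown = CGEvent(keyboardEventSource: src, virtualKey: 0x09, keyDown: true)
    let vUp   = CGEvent(keyboardEventSource: src, virtualKey: 0x09, keyDown: false)
    vDown?.flags = .maskCommand
    vUp?.flags = .maskCommand
    vDown?.postToPid(targetPID)   // bypasses the global event tap,
    vUp?.postToPid(targetPID)     // targeting the captured process directly
}

// Final fallback: drive System Events via AppleScript when an app
// rejects synthetic CGEvents.
func pasteViaSystemEvents() {
    let script = NSAppleScript(source:
        "tell application \"System Events\" to keystroke \"v\" using command down")
    var error: NSDictionary?
    script?.executeAndReturnError(&error)
}
```

Posting to a PID rather than the system event tap is what makes the paste land in the app that was focused at press time, even if focus briefly moved to the HUD.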