Three stages. Each one local.
The whole pipeline runs on your machine. No audio leaves your hardware, no transcript is ever written to disk, and no remote model is contacted; the rolling 30-second transcript lives only in RAM until your next ping replaces it. What follows is the exact path a single utterance takes from your microphone to the popover on your screen.
step01 · Detect
MeetPing watches Core Audio for mic activation and the foreground app. When Zoom, Meet, or any meeting app starts using the microphone, the listener arms automatically.
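That arming decision can be sketched as a small gate. The `shouldArm` helper and the bundle-ID list below are illustrative assumptions, not MeetPing's actual code; in the real app the frontmost app would come from `NSWorkspace.shared.frontmostApplication` and mic state from the Core Audio watcher.

```swift
import Foundation

// Illustrative bundle IDs for common meeting apps (not MeetPing's real list).
let meetingBundleIDs: Set<String> = [
    "us.zoom.xos",          // Zoom
    "com.microsoft.teams2", // Microsoft Teams
    "com.google.Chrome",    // Meet usually runs in a browser tab
]

/// Arm the listener only when a known meeting app is frontmost
/// AND the microphone is actually in use.
func shouldArm(frontmostBundleID: String?, micActive: Bool) -> Bool {
    guard micActive, let id = frontmostBundleID else { return false }
    return meetingBundleIDs.contains(id)
}
```

Keeping the gate a pure function of (frontmost app, mic state) makes the "idle by default" behavior easy to reason about: no meeting app or no live mic means the listener never spins up.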
MicWatcher · AppDetector · NSWorkspace

step02 · Listen
Audio streams into Parakeet TDT v3 running on Apple Neural Engine. Partial transcripts arrive every ~1.4s and are scanned against your watchword list with per-word boundary regex.
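The per-word boundary scan could look like the sketch below. `firstWatchwordHit` is a hypothetical name; the real KeywordWatcher also layers fuzzy strategies on top of the exact-match tier.

```swift
import Foundation

/// Scan a partial transcript for watchwords using a word-boundary
/// regex per keyword, so "art" does not fire inside "partial".
func firstWatchwordHit(in transcript: String, watchwords: [String]) -> String? {
    for word in watchwords {
        // \b anchors keep matches on whole-word boundaries.
        let pattern = "\\b\(NSRegularExpression.escapedPattern(for: word))\\b"
        if transcript.range(of: pattern,
                            options: [.regularExpression, .caseInsensitive]) != nil {
            return word
        }
    }
    return nil
}
```

Because partials arrive every ~1.4s, each scan only ever sees a short string, so a simple linear pass over the watchword list stays well inside the latency budget.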
FluidAudio · KeywordWatcher · ANE

step03 · Ping
On a match, MeetPing snapshots the past 30s of transcript, opens a popover with the keyword highlighted, and fires alerts on every channel you've enabled. The next 30s fills in live.
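The rolling 30s window can be sketched as a plain in-memory buffer: old segments fall off as new ones arrive, and a snapshot is just a join of what remains. `RollingTranscript` is an illustrative type, not MeetPing's actual buffer.

```swift
import Foundation

/// Keeps only the last `window` seconds of transcript in RAM.
/// Nothing is persisted; evicted segments are simply dropped.
struct RollingTranscript {
    private var segments: [(time: TimeInterval, text: String)] = []
    let window: TimeInterval

    init(window: TimeInterval) { self.window = window }

    mutating func append(_ text: String, at time: TimeInterval) {
        segments.append((time, text))
        // Evict anything older than the window relative to "now".
        segments.removeAll { time - $0.time > window }
    }

    /// Snapshot taken the moment a watchword matches.
    func snapshot() -> String {
        segments.map { $0.text }.joined(separator: " ")
    }
}
```

A value type with no persistence hooks also makes the privacy claim checkable: the only place transcript text can live is this array, and it is bounded by the window.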
AlertCenter · Notification · Sound · Screen flash
Next: the six instruments inside the menubar app.
See features ↗

Go deeper.
Auto-arm on mic activity
How MeetPing decides when to start listening. Idle by default, armed only inside a real meeting.
read ›

Parakeet TDT v3 on Mac
The streaming chunk config, ANE constraints, and the latency budget for the live pipeline.
read ›

Keyword watch
Regex + Soundex + Levenshtein matching, with confidence bands per strategy.
read ›
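The Levenshtein tier of that matching stack can be sketched as below. The `fuzzyHit` helper and its `maxEdits` threshold are hypothetical, standing in for MeetPing's tuned per-strategy confidence bands.

```swift
import Foundation

/// Classic Levenshtein edit distance over lowercased characters,
/// computed with a rolling two-row table.
func levenshtein(_ a: String, _ b: String) -> Int {
    let a = Array(a.lowercased()), b = Array(b.lowercased())
    if a.isEmpty { return b.count }
    if b.isEmpty { return a.count }
    var prev = Array(0...b.count)
    for i in 1...a.count {
        var row = [i] + Array(repeating: 0, count: b.count)
        for j in 1...b.count {
            let cost = a[i - 1] == b[j - 1] ? 0 : 1
            row[j] = min(prev[j] + 1,        // deletion
                         row[j - 1] + 1,     // insertion
                         prev[j - 1] + cost) // substitution
        }
        prev = row
    }
    return prev[b.count]
}

/// Hypothetical fuzzy tier: accept near-misses from the recognizer
/// (e.g. a misspelled name) within a small edit budget.
func fuzzyHit(_ heard: String, keyword: String, maxEdits: Int = 1) -> Bool {
    levenshtein(heard, keyword) <= maxEdits
}
```

In a tiered matcher like this, an exact regex hit would carry the highest confidence, a small edit distance a lower one, and Soundex would catch phonetic near-misses the edit budget rejects.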