I built this because I wanted a meeting tool that didn't compromise privacy.
It’s an application that handles transcription locally and turns conversations into structured knowledge.
It has a live mode that lets you ask it questions during the meeting, plus a grounded research mode.
whisper.cpp is fast enough to transcribe an hour of meeting audio in just 20-30 seconds on my M4 Mac.
Why it’s different:
* Local-First: Transcription via whisper.cpp happens 100% offline.
* Actionable: It doesn't just summarize; it automatically creates GitHub/GitLab issues from
meeting action items.
* Integrated: Generates structured Obsidian notes (with Mermaid.js graphs) and sleek,
HTML reports.
* Live Copilot: Press [Space] to ask the AI questions about the current meeting context in
real-time.
The Stack:
C++17, whisper.cpp, FTXUI (for the TUI), PortAudio, and libcurl (supporting Gemini, OpenAI, or
local Ollama).
Looking forward to your input.