Your Private
Chief of Staff.
The old way: Pasting your sensitive company data into ChatGPT and hoping for the best.
The MiniMe way: An open-weights local LLM that can chat with your entire digital footprint without a single byte leaving your machine.
Omniscient, yet completely air-gapped.
Ask natural language questions about your own life.
Flawless RAG Citations
Hallucinations are unacceptable when the data is your own. Every answer in the UI cites its exact local sources, whether that's a Slack message you read yesterday or a PDF you scrolled through last week.
Native Voice UI
Talk directly to MiniMe hands-free using Web Speech API dictation.
Proactive System Insights
You don't always have to ask. MiniMe observes your habits and sends unobtrusive push notifications with high-value suggestions.
Flow State Warning
You've been deep in focus for 2.5 hours. Taking a break now will boost afternoon retention by 40%.
Agentic Plugins
Your AI Copilot doesn't just answer questions; it takes action. Extend MiniMe's capabilities by installing local plugins to automate repetitive tasks based on your graph data.
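A plugin system like this is usually a small dispatch loop over graph events. The sketch below is purely illustrative: MiniMe's real plugin API is not shown here, so every class and method name is a hypothetical stand-in.

```python
# Hypothetical sketch of a local agentic plugin system; all names here
# are illustrative assumptions, not MiniMe's actual API.
from dataclasses import dataclass


@dataclass
class GraphEvent:
    """An event emitted from the local knowledge graph."""
    kind: str      # e.g. "meeting_created"
    payload: dict


class Plugin:
    """Interface a local plugin might implement."""
    def matches(self, event: GraphEvent) -> bool:
        raise NotImplementedError

    def run(self, event: GraphEvent) -> str:
        raise NotImplementedError


class MeetingPrepPlugin(Plugin):
    """Example: drafts a prep note whenever a meeting appears in the graph."""
    def matches(self, event: GraphEvent) -> bool:
        return event.kind == "meeting_created"

    def run(self, event: GraphEvent) -> str:
        return f"Prep note drafted for: {event.payload['title']}"


def dispatch(plugins: list[Plugin], event: GraphEvent) -> list[str]:
    # Fan the event out to every plugin that opts in, collecting results.
    return [p.run(event) for p in plugins if p.matches(event)]
```

Because dispatch happens entirely in-process against local graph data, a plugin never needs network access to act on your behalf.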
BYO Intelligence.
Your private Chief of Staff is powered by state-of-the-art weights running directly on your hardware, connected to your semantic vector store.
Ollama Native
Seamlessly connect to Ollama to run Llama 3, Mistral, or other open-weights models entirely locally, ensuring zero-trust execution for sensitive queries.
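Talking to a local Ollama daemon is a single HTTP call to its default endpoint on port 11434. A minimal sketch, assuming Ollama is running locally with a model already pulled:

```python
# Minimal sketch: query a local Ollama daemon via its /api/chat endpoint.
# Assumes Ollama is running on its default port with the model pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"


def build_payload(model: str, question: str) -> dict:
    """Build a non-streaming chat request for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }


def ask_local(model: str, question: str) -> str:
    """Send the question to the local daemon; nothing leaves the machine."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

Since the request never resolves beyond localhost, the query text and the model's answer both stay on your hardware.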
Qdrant Vector DB
Semantic search is powered by a local Qdrant instance, enabling lightning-fast Retrieval-Augmented Generation with results mapped directly back to your Neo4j graph.
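Under the hood, the retrieval step is a nearest-neighbor search over embeddings. Qdrant handles this at scale; the pure-Python sketch below is a stand-in that shows the same idea with cosine similarity over toy two-dimensional vectors (the document ids and embeddings are made up for illustration):

```python
# Toy illustration of the vector-retrieval step behind RAG:
# rank documents by cosine similarity to the query embedding.
# A real deployment would delegate this to Qdrant.
import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def top_k(query_vec: list[float], docs: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """docs: (doc_id, embedding) pairs. Returns the k closest doc ids."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

The ids returned here are what lets every answer cite its exact local sources: each retrieved vector maps back to a concrete document in the graph.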
AES Encrypted API Keys
Need more reasoning power? Opt-in to use OpenAI or Anthropic directly. Your keys are stored locally and encrypted using AES-256-GCM.
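Encrypt-at-rest with AES-256-GCM typically means a random nonce per encryption, stored alongside the ciphertext. A minimal sketch using the third-party `cryptography` package (an assumption; MiniMe's actual implementation is not shown here):

```python
# Illustrative sketch of local API-key storage with AES-256-GCM, using
# the third-party `cryptography` package. The scheme (12-byte random
# nonce prepended to the ciphertext) is a common convention, assumed here.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_api_key(master_key: bytes, api_key: str) -> bytes:
    """Encrypt an API key for local storage; the 12-byte nonce is prepended."""
    nonce = os.urandom(12)  # standard GCM nonce size; never reuse per key
    ciphertext = AESGCM(master_key).encrypt(nonce, api_key.encode(), None)
    return nonce + ciphertext


def decrypt_api_key(master_key: bytes, blob: bytes) -> str:
    """Recover the API key; GCM also authenticates the ciphertext."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(master_key).decrypt(nonce, ciphertext, None).decode()
```

Because GCM is an authenticated mode, any tampering with the stored blob makes decryption fail loudly instead of yielding a corrupted key.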
Perfect for Information Hoarders.
Ready to run it locally?
Discover the hyper-optimized Rust architecture that powers the Desktop Guardian tracking agent.
Explore Desktop Architecture