
Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

A local-first AI assistant built in Rust — persistent memory, autonomous tasks, ~27 MB binary. Inspired by and compatible with OpenClaw.

cargo install localgpt

  • Single binary — no Node.js, Docker, or Python required
  • Local-first — runs entirely on your machine; your data stays yours
  • Persistent memory — markdown-based knowledge store with full-text and semantic search
  • Autonomous heartbeat — delegate tasks and let it work in the background
  • Multiple interfaces — CLI, web UI, desktop GUI
  • Multiple LLM providers — Anthropic (Claude), OpenAI, Ollama
  • OpenClaw compatible — works with SOUL, MEMORY, HEARTBEAT markdown files and skills format

# Initialize configuration
localgpt config init

# Start interactive chat
localgpt chat

# Ask a single question
localgpt ask "What is the meaning of life?"

# Run as a daemon with heartbeat, HTTP API, and web UI
localgpt daemon start

LocalGPT uses plain markdown files as its memory:

~/.localgpt/workspace/
├── MEMORY.md            # Long-term knowledge (auto-loaded each session)
├── HEARTBEAT.md         # Autonomous task queue
├── SOUL.md              # Personality and behavioral guidance
└── knowledge/           # Structured knowledge bank (optional)
    ├── finance/
    ├── legal/
    └── tech/
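
Since everything is plain markdown, the files can be read and edited by hand. As a purely illustrative sketch (the exact task format LocalGPT expects isn't shown here), a HEARTBEAT.md might look like:

```markdown
# HEARTBEAT.md — autonomous task queue

- [ ] Summarize yesterday's notes into MEMORY.md
- [ ] Check for new Rust release announcements
- [x] Reindex the knowledge/ directory
```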

Files are indexed with SQLite FTS5 for fast keyword search, and with sqlite-vec for semantic search using local embeddings.
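
When a query hits both indexes, the two ranked result lists have to be merged somehow. Here is a reciprocal-rank-fusion sketch of one common way to do that — purely illustrative, not LocalGPT's actual ranking code:

```rust
use std::collections::HashMap;

/// Merge two ranked result lists (e.g. FTS5 keyword hits and
/// sqlite-vec semantic hits) with reciprocal rank fusion:
/// each list contributes 1 / (k + rank) to a document's score.
fn rrf_merge(keyword: &[&str], semantic: &[&str], k: f64) -> Vec<String> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in [keyword, semantic] {
        for (rank, doc) in list.iter().enumerate() {
            *scores.entry(doc.to_string()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    // Sort by fused score, highest first.
    let mut ranked: Vec<(String, f64)> = scores.into_iter().collect();
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    ranked.into_iter().map(|(doc, _)| doc).collect()
}

fn main() {
    let keyword = ["MEMORY.md", "knowledge/tech/rust.md"];
    let semantic = ["knowledge/tech/rust.md", "SOUL.md"];
    // A document appearing in both lists outranks single-list hits.
    println!("{:?}", rrf_merge(&keyword, &semantic, 60.0));
}
```

The constant k (60 here, a conventional default) damps the advantage of top-ranked items so that appearing in both lists matters more than rank within one list.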

Configuration is stored at ~/.localgpt/config.toml:

[agent]
default_model = "claude-cli/opus"

[providers.anthropic]
api_key = "${ANTHROPIC_API_KEY}"

[heartbeat]
enabled = true
interval = "30m"
active_hours = { start = "09:00", end = "22:00" }

[memory]
workspace = "~/.localgpt/workspace"
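
The heartbeat interval is given as a short duration string ("30m" above). A minimal sketch of how such a string could be parsed into a std Duration — the suffixes and behavior here are assumptions, not LocalGPT's actual parser:

```rust
use std::time::Duration;

/// Parse interval strings like "45s", "30m", or "2h".
/// Returns None for anything it doesn't recognize.
fn parse_interval(s: &str) -> Option<Duration> {
    // Guard against empty/1-char input and non-ASCII split panics.
    if s.len() < 2 || !s.is_ascii() {
        return None;
    }
    let (num, unit) = s.split_at(s.len() - 1);
    let n: u64 = num.parse().ok()?;
    match unit {
        "s" => Some(Duration::from_secs(n)),
        "m" => Some(Duration::from_secs(n * 60)),
        "h" => Some(Duration::from_secs(n * 3600)),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_interval("30m"), Some(Duration::from_secs(1800)));
    assert_eq!(parse_interval("2h"), Some(Duration::from_secs(7200)));
    assert_eq!(parse_interval("90x"), None);
    println!("ok");
}
```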

# Chat
localgpt chat                     # Interactive chat
localgpt chat --session <id>      # Resume session
localgpt ask "question"           # Single question

# Daemon
localgpt daemon start             # Start background daemon
localgpt daemon stop              # Stop daemon
localgpt daemon status            # Show status
localgpt daemon heartbeat         # Run one heartbeat cycle

# Memory
localgpt memory search "query"    # Search memory
localgpt memory reindex           # Reindex files
localgpt memory stats             # Show statistics

# Config
localgpt config init              # Create default config
localgpt config show              # Show current config

When the daemon is running:

Endpoint                           Description
GET  /health                       Health check
GET  /api/status                   Server status
POST /api/chat                     Chat with the assistant
GET  /api/memory/search?q=<query>  Search memory
GET  /api/memory/stats             Memory statistics

Why I Built LocalGPT in 4 Nights — the full story with commit-by-commit breakdown.

Built with Rust, Tokio, Axum, SQLite (FTS5 + sqlite-vec), fastembed, and eframe.


Apache-2.0