Cloud is a great way to try Moltis. For the full experience, run it on your Mac, a VM, or any dedicated host.
Need setup details? See the Cloud Deploy docs.
One self-contained binary. No runtime dependencies: just download and run.
Run your own models locally. Automatic download and setup included.
HTTPS by default. Password, token, and passkey access.
Run browser sessions in isolated Docker containers for safer automation.
Blocks loopback, private, and link-local IPs from LLM-initiated fetches (SSRF protection).
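The blocking rule above can be sketched as an address check before any fetch. This is a minimal illustration, not Moltis's actual implementation; the function name and exact address ranges are assumptions. (IPv6 link-local and unique-local checks are done on raw segments because the corresponding `std` helpers are not yet stable.)

```rust
use std::net::IpAddr;

/// Hypothetical SSRF guard: reject fetch targets that resolve to
/// loopback, private, or link-local addresses.
fn is_blocked(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => {
            v4.is_loopback()           // 127.0.0.0/8
                || v4.is_private()     // 10/8, 172.16/12, 192.168/16
                || v4.is_link_local()  // 169.254.0.0/16
                || v4.is_unspecified() // 0.0.0.0
        }
        IpAddr::V6(v6) => {
            let seg = v6.segments();
            v6.is_loopback()                        // ::1
                || v6.is_unspecified()              // ::
                || (seg[0] & 0xffc0) == 0xfe80      // link-local fe80::/10
                || (seg[0] & 0xfe00) == 0xfc00      // unique-local fc00::/7
        }
    }
}

fn main() {
    for addr in ["127.0.0.1", "10.0.0.8", "169.254.1.1", "93.184.216.34"] {
        let ip: IpAddr = addr.parse().unwrap();
        println!("{addr}: blocked = {}", is_blocked(ip));
    }
}
```

A real guard must also apply this check after DNS resolution, so a hostname cannot smuggle in a private address.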
First token appears instantly. Smooth replies, even during long runs.
Plugins, hooks, and MCP tool servers over stdio or HTTP/SSE, with auto-restart.
Full filesystem or per-session Docker/Apple Container isolation.
Hybrid vector + full-text search. Your agent remembers context.
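One common way to combine vector and full-text rankings is reciprocal-rank fusion (RRF); the sketch below shows the idea, though Moltis's actual fusion strategy is an assumption here, as are the document names and the `k = 60` constant.

```rust
use std::collections::HashMap;

/// Reciprocal-rank fusion: each ranking contributes 1 / (k + rank)
/// per document, and the summed scores decide the merged order.
fn rrf(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for ranking in rankings {
        for (rank, doc) in ranking.iter().enumerate() {
            // rank is 0-based, so the divisor is k + rank + 1.
            *scores.entry(doc.to_string()).or_insert(0.0) +=
                1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut out: Vec<_> = scores.into_iter().collect();
    out.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    out
}

fn main() {
    let vector_hits = vec!["note-3", "note-1", "note-7"];   // by embedding similarity
    let fulltext_hits = vec!["note-3", "note-5", "note-1"]; // by keyword match
    for (doc, score) in rrf(&[vector_hits, fulltext_hits], 60.0) {
        println!("{doc}: {score:.4}");
    }
}
```

RRF needs only ranks, not raw scores, which makes it robust when the two retrievers score on incompatible scales.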
Pi-inspired self-extension: the agent creates its own skills at runtime, with session branching and hot-reload.
Web UI, Telegram, or API. One agent, multiple frontends.
Talk to your assistant with multiple cloud and local TTS/STT providers.
More providers coming soon.
Ferris molted. The shell cracked open.
Local AI assistants are still early software. Treat Moltis as alpha: run it carefully, review tool permissions, and avoid giving broad system access you do not need.