Run AI agents on your own server. Bring your own API keys or run open-source models on dedicated GPUs. Connect to WhatsApp, Slack, and Telegram. One binary. No DevOps.
Or self-host: curl -fsSL https://getsolon.dev/install.sh | sh
Open-source models, secured by Solon. See what runs on your hardware.
curl -fsSL https://getsolon.dev/install.sh | sh

Not another chatbot. Solon agents run autonomously, connected to your tools, your channels, and your data.
Deploy agents with tools, skills, and MCP server connections. They research, write, analyze, and execute tasks on their own — not just respond to prompts.
Connect agents to WhatsApp, Telegram, Slack, and Discord from the dashboard. Your agents meet your users where they already are.
Control what each agent can do. Tier 1: inference only. Tier 2: tools and internet. Tier 3: persistent storage. Tier 4: full capabilities with MCP and custom skills.
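The tier model above is a simple capability ladder: each tier includes everything below it. A minimal sketch of how such a check could work; the tier and capability names here are illustrative, not Solon's actual API.

```python
from enum import IntEnum

class Tier(IntEnum):
    # Hypothetical names mirroring the four tiers described above
    INFERENCE = 1   # Tier 1: inference only
    TOOLS = 2       # Tier 2: tools and internet
    STORAGE = 3     # Tier 3: persistent storage
    FULL = 4        # Tier 4: MCP and custom skills

# Minimum tier required for each capability (example mapping)
REQUIRED_TIER = {
    "chat": Tier.INFERENCE,
    "web_search": Tier.TOOLS,
    "save_file": Tier.STORAGE,
    "mcp_call": Tier.FULL,
}

def is_allowed(agent_tier: Tier, capability: str) -> bool:
    """An agent may use a capability if its tier meets the minimum."""
    return agent_tier >= REQUIRED_TIER[capability]

print(is_allowed(Tier.TOOLS, "web_search"))  # True
print(is_allowed(Tier.TOOLS, "save_file"))   # False
```

Because the tiers are ordered, a single integer comparison covers the whole ladder.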
Use any model from any provider — or run your own. One unified API with auth, rate limiting, and analytics built in.
Use your existing Anthropic, OpenAI, or NVIDIA API keys. Solon proxies requests with auth, rate limiting, and usage tracking — one API for all your providers.
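Per-key rate limiting in a proxy like this is commonly a token bucket: each key accrues request tokens at a steady rate, up to a burst cap. A minimal sketch of the pattern, not Solon's implementation:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=10)      # 1 req/s, burst of 10
print(sum(bucket.allow() for _ in range(15)))  # 10: the burst drains, the rest are rejected
```

A real proxy would keep one bucket per API key (e.g. in a dict or Redis) and return HTTP 429 when `allow()` is false.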
Pull and run Llama, Gemma, Qwen, Mistral, and more on dedicated NVIDIA GPUs. Or run locally on your Mac with llama.cpp and Metal acceleration.
Drop-in replacement for the OpenAI API. Any tool, SDK, or framework that works with OpenAI works with Solon. Zero code changes.
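"Drop-in" means the request shape is the standard OpenAI `/v1/chat/completions` JSON, so existing clients only need a different base URL and key. A stdlib-only sketch; the URL, key, and model name below are placeholders:

```python
import json
import urllib.request

# Placeholders: substitute your own instance URL and API key
BASE_URL = "https://your-instance.example.com/v1"
API_KEY = "sk-your-solon-key"

body = {
    "model": "llama3.1",  # whichever model your instance serves
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(body).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it. With the official openai SDK,
# the only change is: OpenAI(base_url=BASE_URL, api_key=API_KEY)
print(req.full_url)
```

The same redirection works for any framework that accepts a custom base URL.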
Dedicated hardware, mandatory security, and full data sovereignty. We manage the server — you own everything on it.
Every managed instance runs on its own server. No shared compute, no noisy neighbors. Full tenant isolation with automatic TLS.
Every request requires an API key. Keys are bcrypt-hashed, never stored in plaintext. There is no --no-auth flag. Security is the default, not an option.
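The pattern behind that claim is hash-on-write, compare-on-read: the server stores only a salted hash and discards the plaintext key. A sketch of the pattern; Solon states it uses bcrypt, but since bcrypt is a third-party package this sketch substitutes stdlib PBKDF2 so it runs anywhere.

```python
import hashlib
import hmac
import os

def hash_key(api_key: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store; the plaintext key is never persisted."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", api_key.encode(), salt, 100_000)
    return salt, digest

def verify_key(api_key: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", api_key.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_key("sk-example")
print(verify_key("sk-example", salt, digest))  # True
print(verify_key("sk-wrong", salt, digest))    # False
```

Either way, a leaked database yields only hashes: an attacker cannot recover working API keys from them.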
Solon is MIT-licensed. Run it yourself for free forever, or let us manage the infrastructure so you can focus on building.
Your AI, your rules — without the infrastructure headache. Get everything set up in minutes instead of months.
| Task | Solon | DIY |
|---|---|---|
| Provision a server | 5 minutes | 2-4 hours |
| Install AI runtime + auth + TLS | Included | 1-2 days |
| Deploy agents with tools | Pre-configured | 1-2 weeks |
| Connect WhatsApp/Slack/Telegram | Dashboard toggle | 1-2 weeks |
| Deploy open-source model | One click | 4-8 hours |
| Add API keys (Anthropic, OpenAI) | Dashboard | Custom integration |
| Security hardening + monitoring | Included | 1-2 days |
| Ongoing maintenance | $0 extra | $50-100K/yr engineer time |
The current default path for self-hosted AI is a security disaster: Ollama instances sit exposed to the open internet without authentication. Solon makes auth mandatory; there is no --no-auth flag.
Deploy a managed instance in minutes, or self-host on your own hardware. Either way, you own it.
Prefer to self-host?
curl -fsSL https://getsolon.dev/install.sh | sh