Get up and running with Solon in under a minute.
```sh
curl -fsSL https://getsolon.dev | sh
```

On first run, Solon automatically creates an admin API key and displays it. Save this key; it won't be shown again.
Start the server and pull a model:

```sh
solon serve
solon models pull llama3.2:3b
```

Then send your first request:

```sh
curl http://localhost:8420/v1/chat/completions \
  -H "Authorization: Bearer sol_sk_live_xxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2:3b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
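The request above returns a JSON body in the standard OpenAI chat-completion shape. As a minimal sketch, here is how you might pull the assistant's reply out of it with Python's standard library; the response body shown is an illustrative example, not actual Solon output:

```python
import json

# Illustrative response in the OpenAI chat-completion shape
# (field values here are made up, not actual Solon output).
raw = """
{
  "id": "chatcmpl-123",
  "model": "llama3.2:3b",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! How can I help?"},
      "finish_reason": "stop"
    }
  ]
}
"""

data = json.loads(raw)
# The assistant's reply lives at choices[0].message.content.
reply = data["choices"][0]["message"]["content"]
print(reply)
```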
Visit http://localhost:8420 in your browser to access the built-in dashboard for managing models and API keys, and for monitoring requests.
Because Solon exposes an OpenAI-compatible API, you can point the official OpenAI Python client at it:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8420/v1",
    api_key="sol_sk_live_xxxx",
)

response = client.chat.completions.create(
    model="llama3.2:3b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```