What you’ll need
- A self-hosted Skyvern deployment
- Ollama installed locally, or an OpenAI-compatible endpoint
Option A: Ollama
Use Ollama to run open-source models locally.

Step 1: Start Ollama

Run ollama serve. By default, Ollama listens on http://localhost:11434.
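As a sketch, starting the server and pulling a model might look like the following (the model name is illustrative; pick one that exists in the Ollama library):

```shell
# Start the Ollama server (listens on http://localhost:11434 by default).
ollama serve &

# Download a model to run locally, then confirm it is available.
ollama pull llava
ollama list
```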
Step 2: Configure Skyvern
Add to your .env file:
| Variable | Description |
|---|---|
| ENABLE_OLLAMA | Enable Ollama integration. |
| OLLAMA_SERVER_URL | Ollama server URL. Defaults to http://localhost:11434. |
| OLLAMA_MODEL | Model name. Check available models with ollama list. |
| OLLAMA_SUPPORTS_VISION | Enable vision support for multimodal models like qwen2-vl or llava. |
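Putting the variables together, a minimal .env for Ollama might look like this (the model name is illustrative; use one reported by ollama list):

```
ENABLE_OLLAMA=true
OLLAMA_SERVER_URL=http://localhost:11434
OLLAMA_MODEL=llava
OLLAMA_SUPPORTS_VISION=true
```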
Step 3: Verify the connection
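One quick check that Ollama is reachable (assuming the default port) is to query its tags endpoint; a JSON response listing your models confirms the server is up:

```shell
# Lists the models served by the local Ollama instance.
curl http://localhost:11434/api/tags
```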
Option B: LiteLLM
Use LiteLLM as an OpenAI-compatible proxy to connect any model provider.

Step 1: Start LiteLLM
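One possible setup (the model name, file name, and port are illustrative) is to write a minimal LiteLLM proxy config and start the proxy with it:

```shell
# Write a minimal LiteLLM config routing an alias to a local Ollama model
# (the alias "local-model" and the backing model are assumptions).
cat > litellm_config.yaml <<'EOF'
model_list:
  - model_name: local-model
    litellm_params:
      model: ollama/llava
      api_base: http://localhost:11434
EOF

# Start the proxy; it exposes an OpenAI-compatible API at http://localhost:4000/v1.
litellm --config litellm_config.yaml --port 4000
```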
Step 2: Configure Skyvern
Add to your .env file:
| Variable | Description |
|---|---|
| ENABLE_OPENAI_COMPATIBLE | Enable OpenAI-compatible provider. |
| OPENAI_COMPATIBLE_MODEL_NAME | Model identifier. |
| OPENAI_COMPATIBLE_API_KEY | API key for the proxy. |
| OPENAI_COMPATIBLE_API_BASE | Base URL. Must end with /v1. |
| OPENAI_COMPATIBLE_SUPPORTS_VISION | Enable vision support for multimodal models. |
| OPENAI_COMPATIBLE_REASONING_EFFORT | Set to low, medium, or high. |
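For example, a .env pointing Skyvern at a LiteLLM proxy on port 4000 might look like this (the model name, key, and port are illustrative and must match what your proxy accepts):

```
ENABLE_OPENAI_COMPATIBLE=true
OPENAI_COMPATIBLE_MODEL_NAME=local-model
OPENAI_COMPATIBLE_API_KEY=sk-anything
OPENAI_COMPATIBLE_API_BASE=http://localhost:4000/v1
OPENAI_COMPATIBLE_SUPPORTS_VISION=true
```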
Step 3: Verify the connection
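A simple reachability check (port and key are illustrative) is to hit the proxy's OpenAI-compatible model list; a JSON response confirms the proxy is up and authenticating:

```shell
# Lists models exposed by the proxy; the bearer token must match the
# key the proxy is configured to accept.
curl http://localhost:4000/v1/models -H "Authorization: Bearer sk-anything"
```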
Step 4: Start Skyvern
After configuring your .env, start the server:
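The exact command depends on how you deployed Skyvern; for a docker-compose based self-hosted setup, for example:

```shell
# From the Skyvern repository root: restart the services so the
# updated .env values are picked up.
docker compose up -d
```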
Troubleshooting
| Issue | Solution |
|---|---|
| Model not responding | Ensure ollama serve is running and the model exists (ollama list). |
| LiteLLM 401 error | Set OPENAI_COMPATIBLE_API_KEY to a value the proxy accepts. |
| Model not visible | Set ENABLE_OLLAMA=true or ENABLE_OPENAI_COMPATIBLE=true and restart. |
| Wrong base URL | Confirm OPENAI_COMPATIBLE_API_BASE ends with /v1. |
Next steps
- API Quickstart: Get started with Skyvern
- Run a Task: Learn the task API

