Self-hosted Skyvern runs entirely on your infrastructure: your servers, your browsers, your LLM API keys. This guide helps you decide if self-hosting fits your needs and which deployment method to choose.

Architecture

Self-hosted Skyvern has three components running on your infrastructure:

| Component | Role |
| --- | --- |
| Skyvern API Server | Orchestrates tasks, processes LLM responses, and stores results. Includes an embedded Playwright-managed Chromium browser that executes web automation. |
| PostgreSQL | Stores task history, workflows, credentials, and organization data. |
| LLM Provider | Analyzes screenshots and determines actions. You provide the API key (OpenAI, Anthropic, Azure OpenAI, Google Vertex AI, Amazon Bedrock, Groq, OpenRouter, or local via Ollama). |

How a task executes

Skyvern runs a perception-action loop for each task step:
  1. Screenshot: The browser captures the current page state
  2. Analyze: The screenshot is sent to your LLM, which identifies interactive elements and decides the next action
  3. Execute: Skyvern performs the action in the browser (click, type, scroll, extract data)
  4. Repeat: Steps 1-3 loop until the task goal is met or the step limit (MAX_STEPS_PER_RUN) is reached
This loop is why LLM choice and browser configuration are the two most impactful self-hosting decisions. They affect every task step.
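The loop above can be sketched in Python. The names below (`run_task`, `StubBrowser`, `StubLLM`, `Action`) are illustrative stand-ins, not Skyvern's actual internals; only the `MAX_STEPS_PER_RUN` step limit comes from Skyvern's configuration.

```python
from dataclasses import dataclass

MAX_STEPS_PER_RUN = 10  # step limit, mirrors Skyvern's MAX_STEPS_PER_RUN setting


@dataclass
class Action:
    kind: str  # e.g. "click", "type", "complete" (hypothetical action kinds)
    target: str = ""


def run_task(browser, llm, goal: str) -> str:
    """Perception-action loop: screenshot -> analyze -> execute, until done or limit."""
    for _ in range(MAX_STEPS_PER_RUN):
        screenshot = browser.capture()          # 1. Screenshot current page state
        action = llm.decide(screenshot, goal)   # 2. LLM picks the next action
        if action.kind == "complete":           #    goal met: stop looping
            return "completed"
        browser.execute(action)                 # 3. Perform the action in the browser
    return "max_steps_reached"                  # 4. Repeated until the step limit hit


# Toy stand-ins to show the control flow:
class StubBrowser:
    def capture(self):
        return b"png-bytes"

    def execute(self, action):
        pass


class StubLLM:
    def __init__(self, plan):
        self.plan = iter(plan)

    def decide(self, screenshot, goal):
        # Emit planned actions, then signal completion when the plan runs out.
        return next(self.plan, Action("complete"))


result = run_task(StubBrowser(), StubLLM([Action("click"), Action("type")]), "log in")
```

Here `result` is `"completed"` after two executed actions; with a plan longer than the step limit, the loop returns `"max_steps_reached"` instead.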

What changes from Cloud

| You gain | You manage |
| --- | --- |
| Full data control: browser sessions and results stay on your network | Infrastructure: servers, scaling, uptime |
| Any LLM provider, including local models via Ollama | LLM API costs: pay your provider directly |
| No per-task pricing | Proxies: bring your own provider |
| Full access to browser configuration and extensions | Software updates: pull new Docker images manually |
| Deploy in air-gapped or restricted networks | Database backups and maintenance |
The most significant operational difference is proxies. Skyvern Cloud routes browser traffic through managed residential proxies to avoid bot detection; in a self-hosted deployment, you must configure your own proxy provider.
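Skyvern's embedded browser is Playwright-managed, and at the Playwright layer a proxy is attached as a launch option. The dict below follows Playwright's documented `launch(proxy=...)` shape; how your Skyvern version surfaces this through its own configuration may differ, and the endpoint and credentials are placeholders.

```python
# Playwright-style proxy settings. The dict shape is Playwright's documented
# launch option; the endpoint and credentials below are placeholders.
proxy_settings = {
    "server": "http://proxy.example.com:8080",  # your proxy provider's endpoint
    "username": "proxy_user",                   # placeholder credential
    "password": "proxy_pass",                   # placeholder credential
}

# In raw Playwright this would be passed at browser launch, e.g.:
#   chromium.launch(proxy=proxy_settings)
```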

Prerequisites

Before deploying, ensure you have:
  1. Docker and Docker Compose: required for containerized deployment (see Install Docker)
  2. 4GB+ RAM: browser instances are memory-intensive; production deployments benefit from 8GB+
  3. LLM API key: from OpenAI, Anthropic, Azure OpenAI, Google Gemini, or AWS Bedrock. Alternatively, run local models with Ollama
  4. Proxy provider (recommended): for automating external websites at scale. Not required for internal tools or development
PostgreSQL 14+ is included in the Docker Compose setup. If you prefer an external database, you can configure DATABASE_STRING to point to your own instance.
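For an external database, the connection string goes in DATABASE_STRING. A hypothetical example follows, assuming a SQLAlchemy-style `postgresql+psycopg` dialect prefix; verify the exact format, host, and credentials against your version's `.env.example`.

```shell
# Hypothetical external-database configuration; the dialect prefix, host,
# port, credentials, and database name are placeholders to verify.
DATABASE_STRING=postgresql+psycopg://skyvern_user:change-me@db.internal:5432/skyvern
```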

Choose your deployment method

| Method | Best for |
| --- | --- |
| Docker Compose | Getting started, small teams, single-server deployments |
| Kubernetes | Production at scale, teams with existing K8s infrastructure, high-availability requirements |
Most teams start with Docker Compose. It’s the fastest path to a working deployment. Move to Kubernetes when you need horizontal scaling or want to integrate with existing orchestration infrastructure.
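A minimal Docker Compose bring-up looks like the following, assuming the `docker-compose.yml` shipped in the Skyvern repository; the service name in the logs command is an assumption, so check your compose file.

```shell
# Clone the repository and start the stack with the bundled compose file.
git clone https://github.com/Skyvern-AI/skyvern.git
cd skyvern
docker compose up -d

# Tail the API server logs ("skyvern" service name assumed; verify in
# docker-compose.yml).
docker compose logs -f skyvern
```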

Next steps

Docker Setup

Get Skyvern running in 10 minutes with Docker Compose

Kubernetes Deployment

Deploy to production with Kubernetes manifests