Laminar is an observability platform for AI applications. When integrated with Skyvern, it captures traces of your automation runs in Laminar’s dashboard. What you see depends on how you’re running Skyvern:
  • Skyvern Cloud (via SDK) — Laminar wraps the run_task / run_workflow call, so you get a trace span around the API request and response: latency, status, errors, and the returned output.
  • Self-hosted — Skyvern’s server can export full traces to Laminar, including every LLM call (prompts, responses, token usage), browser actions, and workflow step execution. See self-hosted tracing setup below.
Laminar traces complement artifacts. Use artifacts for per-run debugging (screenshots, recordings, logs) and Laminar for tracking patterns across runs — failure rates, response times, and which tasks are slowest.
This guide is also available in the Laminar documentation.

Prerequisites

The Laminar integration requires the Skyvern SDK and the Laminar SDK:
pip install skyvern 'lmnr[all]'
You will also need a Skyvern API key and a Laminar project API key.

Set up environment variables

Add both keys to your .env file:
.env
SKYVERN_API_KEY=your-skyvern-api-key
LMNR_PROJECT_API_KEY=your-laminar-api-key
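Before wiring up tracing, it can help to confirm both keys are actually visible to your process. A minimal sketch (the `require_env` helper is hypothetical, not part of either SDK):

```python
import os

def require_env(*names: str) -> dict:
    """Return the requested environment variables, raising if any are unset."""
    missing = [n for n in names if not os.getenv(n)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return {n: os.environ[n] for n in names}

# After load_dotenv(), this raises immediately if a key was not picked up:
# keys = require_env("SKYVERN_API_KEY", "LMNR_PROJECT_API_KEY")
```

Failing fast here is easier to debug than a silent Laminar no-op or a 401 from Skyvern later on.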

Run a traced task

This example scrapes the top 3 posts from Hacker News with Laminar tracing enabled. Call Laminar.initialize() before any Skyvern calls — it reads LMNR_PROJECT_API_KEY from your environment automatically.
import os
import asyncio
from dotenv import load_dotenv
load_dotenv()

from lmnr import Laminar
Laminar.initialize()

from skyvern import Skyvern

client = Skyvern(api_key=os.getenv("SKYVERN_API_KEY"))

async def main():
    result = await client.run_task(
        prompt="Get the title and URL of the top 3 posts on Hacker News.",
        url="https://news.ycombinator.com",
        wait_for_completion=True,
    )
    print(f"Status: {result.status}")
    print(f"Output: {result.output}")

if __name__ == "__main__":
    asyncio.run(main())
Expected output:
{
  "status": "completed",
  "output": {
    "posts": [
      {"title": "Zig – Type Resolution Redesign and Language Changes", "url": "https://ziglang.org/devlog/2026/..."},
      {"title": "Create value for others and don't worry about the returns", "url": "https://geohot.github.io/..."},
      {"title": "U+237C ⍼ Is Azimuth", "url": "https://ionathan.ch/2026/02/16/angzarr.html"}
    ]
  }
}
The task runs, the output is returned, and the full trace — every HTTP call, timing, and payload — appears in your Laminar dashboard.
Python only: You will see a ForgeApp is not initialized error in stderr on startup. This is harmless — lmnr[all] tries to instrument Skyvern’s server-side internals, which aren’t present when using the client SDK. Your traces still work correctly.
Python only: LaminarLiteLLMCallback is deprecated and unnecessary. Laminar instruments LiteLLM directly — no callback setup is needed.

What traces capture

What shows up in Laminar depends on your setup.

Skyvern Cloud (via SDK)

When calling run_task or run_workflow through the SDK, Laminar traces the client-side call:
  • API request/response — the full round-trip to Skyvern’s API: status, latency, payload size
  • Task output — the extracted data or completion result
  • Errors — HTTP errors, timeouts, and task failures
This is useful for monitoring how your application interacts with Skyvern — tracking which tasks fail, how long they take, and what outputs you’re getting back.
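For a quick client-side roll-up of that same information — say, a failure rate across a batch of runs — you can aggregate the statuses the SDK already returns without opening the dashboard. A sketch (`summarize_runs` is a hypothetical helper; the run example above shows that each result exposes a `status` attribute):

```python
from collections import Counter

def summarize_runs(statuses):
    """Aggregate task statuses into per-status counts and an overall failure rate."""
    counts = Counter(statuses)
    total = sum(counts.values())
    failed = total - counts.get("completed", 0)
    return {
        "counts": dict(counts),
        "failure_rate": failed / total if total else 0.0,
    }

# Usage with a list of completed task results:
# summary = summarize_runs(r.status for r in results)
```

Laminar's dashboard gives you the cross-run view with timing attached; a summary like this is just a convenience for alerting or logging inside your own application.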

Self-hosted

When running Skyvern on your own infrastructure, you get deep server-side traces by configuring Laminar in Skyvern’s environment. This gives you visibility into everything happening inside the agent:
  • LLM interactions — every prompt sent to the model and its response, including token counts
  • Browser actions — each click, type, and navigation the agent performed
  • Workflow steps — sequential block execution and data passed between blocks
  • Image tracing — screenshots sent to the LLM for analysis
  • Performance metrics — latency and cost per LLM call
  • Errors — exceptions at any layer: LLM, browser, workflow engine

Self-hosted tracing setup

If you’re running Skyvern on your own infrastructure, add these to your server’s environment:
.env
LMNR_PROJECT_API_KEY=your-laminar-api-key
Skyvern’s server includes a built-in LaminarTrace integration that initializes Laminar with the LiteLLM callback, capturing every LLM call, token count, and cost. It disables the automatic Skyvern/Patchright instrumentors (to avoid conflicts) and uses Laminar’s @observe decorator on internal methods instead. No code changes needed — once the env var is set, traces appear in your Laminar project automatically.
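If the server runs in a container rather than reading a .env file directly, the same variable just needs to reach the server process. A sketch of passing it through and verifying it landed (the container name `skyvern` is an assumption about your deployment):

```shell
# Pass the key into the container at run time
docker run -e LMNR_PROJECT_API_KEY="$LMNR_PROJECT_API_KEY" ... 

# Verify the running server process actually sees it
docker exec skyvern printenv LMNR_PROJECT_API_KEY
```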

Next steps

Using Artifacts

Per-run recordings, screenshots, logs, and network data

Troubleshooting Guide

Common issues and how to fix them