- Skyvern Cloud (via SDK) — Laminar wraps the `run_task`/`run_workflow` call, so you get a trace span around the API request and response: latency, status, errors, and the returned output.
- Self-hosted — Skyvern’s server can export full traces to Laminar, including every LLM call (prompts, responses, token usage), browser actions, and workflow step execution. See the self-hosted tracing setup below.
This guide is also available in the Laminar documentation.
Prerequisites
Laminar integration requires the Skyvern SDK and the Laminar SDK, plus two API keys:
- A Skyvern API key — get one at app.skyvern.com/settings
- A Laminar API key — sign up at lmnr.ai and create a project
Set up environment variables
Add both keys to your `.env` file:
.env
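For example, the file might look like this (placeholder values; `LMNR_PROJECT_API_KEY` is the variable the Laminar SDK reads, and `SKYVERN_API_KEY` is assumed here as the variable the Skyvern SDK picks up):

```
SKYVERN_API_KEY=<your-skyvern-api-key>
LMNR_PROJECT_API_KEY=<your-laminar-project-api-key>
```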
Run a traced task
This example scrapes the top 3 posts from Hacker News with Laminar tracing enabled. Call `Laminar.initialize()` before any Skyvern calls — it reads `LMNR_PROJECT_API_KEY` from your environment automatically.
Python only: You will see a `ForgeApp is not initialized` error in stderr on startup. This is harmless — `lmnr[all]` tries to instrument Skyvern’s server-side internals, which aren’t present when using the client SDK. Your traces still work correctly.

What traces capture
What shows up in Laminar depends on your setup.

Skyvern Cloud (via SDK)
When calling `run_task` or `run_workflow` through the SDK, Laminar traces the client-side call:
| Trace data | What it shows |
|---|---|
| API request/response | The full round-trip to Skyvern’s API — status, latency, payload size |
| Task output | The extracted data or completion result |
| Errors | HTTP errors, timeouts, and task failures |
Self-hosted
When running Skyvern on your own infrastructure, you get deep server-side traces by configuring Laminar in Skyvern’s environment. This gives you visibility into everything happening inside the agent:

| Trace data | What it shows |
|---|---|
| LLM interactions | Every prompt sent to the model and its response, including token counts |
| Browser actions | Each click, type, and navigation the agent performed |
| Workflow steps | Sequential block execution and data passed between blocks |
| Image tracing | Screenshots sent to the LLM for analysis |
| Performance metrics | Latency and cost per LLM call |
| Errors | Exceptions at any layer — LLM, browser, workflow engine |
Self-hosted tracing setup
If you’re running Skyvern on your own infrastructure, add these to your server’s environment:

.env
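For example (assuming the server reads the same `LMNR_PROJECT_API_KEY` variable that the client SDK uses):

```
LMNR_PROJECT_API_KEY=<your-laminar-project-api-key>
```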
Setting this enables Skyvern’s LaminarTrace integration, which initializes Laminar with the LiteLLM callback, capturing every LLM call, token count, and cost. It disables the automatic Skyvern/Patchright instrumentors (to avoid conflicts) and uses Laminar’s `@observe` decorator on internal methods instead.
No code changes are needed — once the env var is set, traces appear in your Laminar project automatically.
Next steps
Using Artifacts
Per-run recordings, screenshots, logs, and network data
Troubleshooting Guide
Common issues and how to fix them

