Introducing
Browser Sandbox
Give your agents a secure browser environment. Let them run code safely to gather data and take action on the web.
Take actions on the web
Your agent installs the CLI and adds the Firecrawl skill to its toolbox.
npx -y firecrawl-cli@latest init --all --browser

Your agent calls the CLI and gets a remote browser with Playwright pre-loaded, ready to go.
firecrawl browser "open https://news.ycombinator.com"

Navigate pages, fill forms, collect data, and click through multi-step flows.
firecrawl browser "snapshot"
firecrawl browser "click @e5"

Why a browser sandbox?
Playwright, agent browser, and Chromium come pre-installed in every session. No local installs, no Docker, no dependency management.
Every session runs in an isolated, disposable environment. Malicious sites, leaked credentials, or misbehaving agents can't touch your infrastructure.
Save and reuse browser state across sessions. Stay logged in, preserve cookies, and keep preferences intact with named profiles.
Just add the Firecrawl skill or CLI — Claude Code, Codex, and any coding agent will instantly know how to browse, scrape, and extract from the web.
The Web Data Toolkit - now complete
$ firecrawl scrape https://example.com/pricing
{
  "title": "Pricing — Example",
  "markdown": "# Pricing\n\n## Starter...",
  "metadata": {
    "statusCode": 200,
    "sourceURL": "https://example.com/pricing"
  }
}

$ firecrawl search "best web scraping APIs 2025"
[
  {
    "title": "Top Web Scraping APIs",
    "url": "https://...",
    "markdown": "# Top Web Scraping..."
  },
  ...
]

$ firecrawl browser "open https://example.com"
{
  "success": true,
  "id": "<string>",
  "cdpUrl": "<string>",
  "liveViewUrl": "<string>",
  "interactiveLiveViewUrl": "<string>",
  "expiresAt": "2023-11-07T05:31:56Z"
}
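Each command above prints JSON, so an agent or any script can consume the result directly. A minimal sketch in Python, parsing a payload shaped like the scrape output shown above (field names come from that example; real responses may carry additional keys):

```python
import json

# Sample payload mirroring the `firecrawl scrape` output above
raw = """
{
  "title": "Pricing - Example",
  "markdown": "# Pricing\\n\\n## Starter...",
  "metadata": {"statusCode": 200, "sourceURL": "https://example.com/pricing"}
}
"""

doc = json.loads(raw)

# Only trust pages that actually resolved
if doc["metadata"]["statusCode"] == 200:
    print(doc["title"])                  # Pricing - Example
    print(doc["metadata"]["sourceURL"])  # https://example.com/pricing
```

The same pattern applies to the search and browser commands: parse once, then branch on `statusCode` or `success` before handing the markdown to a model.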
Works with every agent framework

Vercel AI SDK (TypeScript)
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { createMCPClient } from "ai";
const client = await createMCPClient({
transport: { type: "sse",
url: "https://mcp.firecrawl.dev/sse" },
});
const { text } = await generateText({
model: openai("gpt-4o"),
tools: await client.tools(),
prompt: "Open a browser and scrape the page",
});

OpenAI Agents SDK (Python)
from agents import Agent, Runner
from agents.mcp import MCPServerSse
firecrawl = MCPServerSse(
url="https://mcp.firecrawl.dev/sse",
headers={
"Authorization": "Bearer fc-YOUR-API-KEY"
},
)
agent = Agent(
name="Browser Agent",
instructions="Browse the web for the user.",
mcp_servers=[firecrawl],
)
result = await Runner.run(
agent, "Get the top stories from HN"
)

Anthropic (Python)
import anthropic
client = anthropic.Anthropic()
response = client.messages.create(
model="claude-sonnet-4-20250514",
max_tokens=1024,
tools=[{
"type": "mcp",
"server_url":
"https://mcp.firecrawl.dev/sse",
"server_label": "firecrawl",
}],
messages=[{
"role": "user",
"content": "Open a browser and "
"scrape example.com"
}],
)

LangChain / LangGraph (Python)
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
client = MultiServerMCPClient({
"firecrawl": {
"url": "https://mcp.firecrawl.dev/sse",
"transport": "sse",
}
})
tools = await client.get_tools()
agent = create_react_agent(
ChatOpenAI(model="gpt-4o"), tools
)
result = await agent.ainvoke({
"messages": "Get the HN front page"
})

Get data that scraping can't reach
Track pricing changes and feature diffs across competitor dashboards weekly.
$ firecrawl claude competitive-intel "Linear, Asana, Monday.com"

Filter and paginate YC, Crunchbase, or G2 into CRM-ready company lists.
$ firecrawl claude company-directories "YC Series A B2B SaaS"

Extract auth-gated docs portals into structured JSON or markdown.
$ firecrawl claude knowledge-ingest https://docs.internal.company.com

Pull revenue, margins, and earnings from financial portals and SEC filings.
$ firecrawl claude market-research "cloud infrastructure market"

Search Apollo, LinkedIn, or Crunchbase and extract contact details at scale.
$ firecrawl claude lead-gen "CTOs at Series B fintech startups"

Pull KPIs from GA, Mixpanel, Stripe, and Grafana into one report.
$ firecrawl claude dashboard-reporting "analytics.google.com"

Just send code, we run it
Playwright, Chromium, and browser automation frameworks come ready in every session. No local installs, no dependency management.
Send Playwright code, Python scripts, or JS — the sandbox executes it against the running browser and returns the result.
Every session is sandboxed and destroyed after use. Malicious sites or misbehaving code can't leak to your infrastructure or other sessions.
import Firecrawl from '@mendable/firecrawl-js';

const firecrawl = new Firecrawl({ apiKey: "fc-YOUR-API-KEY" });

// Create a browser session — sandbox included
const session = await firecrawl.browser();

// Execute Playwright code in the secure sandbox
const result = await firecrawl.browserExecute(session.id, {
  code: `
    await page.goto("https://news.ycombinator.com");
    const titles = await page.$$(".titleline > a");
    for (const t of titles.slice(0, 5)) {
      console.log(await t.innerText());
    }
  `,
  language: "node",
});

console.log(result.result); // Extracted titles

// Close the session
await firecrawl.deleteBrowser(session.id);
from firecrawl import Firecrawl

app = Firecrawl(api_key="fc-YOUR-API-KEY")

# Create a browser session — sandbox included
session = app.browser()

# Execute Playwright code in the secure sandbox
result = app.browser_execute(
    session.id,
    code="""
    await page.goto("https://news.ycombinator.com")
    titles = await page.query_selector_all(".titleline > a")
    for t in titles[:5]:
        print(await t.inner_text())
    """,
    language="python",
)

print(result.result)  # Extracted titles

# Close the session
app.delete_browser(session.id)
# Install the Firecrawl CLI
npm install -g firecrawl-cli

# Shorthand - auto-launches a session
firecrawl browser "open https://news.ycombinator.com"
firecrawl browser "snapshot"
firecrawl browser "scrape"

# Close when done
firecrawl browser close
# 1. Launch a session
curl -X POST "https://api.firecrawl.dev/v2/browser" \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json"

# 2. Execute code
curl -X POST "https://api.firecrawl.dev/v2/browser/SESSION_ID/execute" \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "code": "await page.goto(\"https://news.ycombinator.com\")\nprint(await page.title())",
    "language": "python"
  }'

# 3. Close
curl -X DELETE "https://api.firecrawl.dev/v2/browser/SESSION_ID" \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY"
// Output
> Session created: 550e8400-e29b-41d4-a716
> CDP URL: wss://cdp-proxy.firecrawl.dev/cdp/550e8400-...
> Executing code in sandbox...

Mass Recall of Tesla Cybertrucks
Show HN: I built a real-time flight tracker
The Art of PostgreSQL
Why SQLite Is So Great for the Edge
Ask HN: What are you working on?

✓ Code executed successfully