Firecrawl CLI gives agents the complete web data toolkit for searching, scraping, and interacting. Try it now →
Turn websites into
LLM-ready data
Power your AI apps with clean web data
from any website. It's also open source.
Trusted by 80,000+
companies of all sizes
[ 01 / 07 ]
·
Main Features
//
Developer First
//
Start scraping
today
Enhance your apps with industry leading web scraping and crawling capabilities.
# pip install firecrawl-py
from firecrawl import Firecrawl

app = Firecrawl(api_key="fc-YOUR_API_KEY")

# Scrape a website:
app.scrape('firecrawl.dev')

[ .MD ]
# Firecrawl

Firecrawl is a powerful web scraping
tool that makes it easy to extract
clean data from any website.

## Features

- Scrape: Markdown from any page
- Search: Search + scrape the web
- Map: Discover all site URLs
- Agent: Extract with AI prompts

[ 02 / 07 ]
·
Power your agent
//
Agent Ready
//
Easily connect with your
AI agents
Connect Firecrawl to any AI agent or MCP client in minutes.
One command
Skill. Give your agent harness easy access to real-time web data.
npx -y firecrawl-cli@latest init --all --browser
3Quick config
MCP. Connect any MCP-compatible client to the web in seconds.
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-YOUR_API_KEY"
      }
    }
  }
}

For AI agents
Agent Onboarding. Are you an AI agent? Fetch this skill to sign up your user, get an API key, and start building with Firecrawl.
View the skill

curl -s https://firecrawl.dev/agent-onboarding/SKILL.md

[ 03 / 07 ]
·
Core
//
Built for Performance
//
Fast, reliable, and easy to integrate.
And it's open source
Built from the ground up to outperform traditional scraping tools
No proxy headaches
Industry-leading reliability. Covers 96% of the web, including JS-heavy pages. No proxies, no puppets, just clean data.
[Chart: scrape success rate, Firecrawl vs. Puppeteer vs. cURL]
Speed that feels invisible
Blazingly fast. P95 latency of 3.4s across millions of pages, built for real-time agents and dynamic apps.
[Chart: live crawl and scrape latency per URL]
Integrations
Use well-known tools
Fully integrated with the most popular existing tools and workflows.
See all integrations

firecrawl/firecrawl
Public
Star
97.5K
[python-SDK] improvs/async
#1337
·
Apr 18, 2025
·
rafaelsideguide
feat(extract): cost limit
#1473
·
Apr 17, 2025
·
mogery
feat(scrape): get job result from GCS, avoid Redis
#1461
·
Apr 15, 2025
·
mogery
Extract v2/rerank improvs
#1437
·
Apr 11, 2025
·
rafaelsideguide
+90
Open Source
Code you can trust
Developed transparently and collaboratively. Join our community of contributors.
Check out our repo

[ 04 / 07 ]
·
Features
//
Zero configuration
//
We handle the hard stuff
Rotating proxies, orchestration, rate limits, JS-blocked content, and more.
Docs to data
Media parsing. Firecrawl can parse and output content from web-hosted PDFs, DOCX files, and more.
https://example.com/docs/report.pdf
https://example.com/files/brief.docx
https://example.com/docs/guide.html
Knows the moment
Smart wait. Firecrawl intelligently waits for content to load, making scraping faster and more reliable.
https://example-spa.com
Request Sent
Scrapes the real thing
Cached, when you need it. Selective caching lets you choose your own caching patterns, backed by a growing web index.

User
Firecrawl
Cache & Web
Advanced web coverage
Enhanced mode. Reaches every corner of the web with comprehensive coverage and high reliability.
Interactive scraping
Actions. Click, scroll, write, wait, press and more before extracting content.
https://example.com
Navigate
Click
Type
Wait
Scroll
Press
Screenshot
Scrape
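As an illustration, a scrape request that performs a few actions before extracting content might look like the sketch below. The field names follow the general shape of Firecrawl's actions API, but treat the selectors and values as placeholders and consult the API docs for the authoritative schema:

```json
{
  "url": "https://example.com",
  "formats": ["markdown"],
  "actions": [
    { "type": "wait", "milliseconds": 2000 },
    { "type": "click", "selector": "#load-more" },
    { "type": "write", "text": "firecrawl" },
    { "type": "press", "key": "ENTER" },
    { "type": "scroll", "direction": "down" },
    { "type": "screenshot" }
  ]
}
```

Actions run in order before the final extraction, so the scraped content reflects the page state after all interactions complete.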
[ 05 / 07 ]
·
Pricing
[ 06 / 07 ]
·
Testimonials
//
Community
//
People love
building with Firecrawl
Discover why developers choose Firecrawl every day.
Firecrawl is an open-source framework that takes a URL, crawls it, and conver..."

Upload a CSV of emails and..."
[ 07 / 07 ]
·
Use Cases
//
Use cases
//

AI Assistant
with Firecrawl
Terminal
Works with Claude Code, Cursor, Windsurf, Codex, Gemini CLI, and more
Extracting leads from directory...
Tech startups
With contact info
Decision makers
Funding stage
Ready to engage





Claude Code

Cursor

Windsurf
✻
Welcome to Claude Code!
/help for help, /status for your current setup
> Try "how do I log an error?"
[ 08 / 07 ]
·
FAQ
//
FAQ
//
Frequently
asked questions
Everything you need to know about Firecrawl.
General
Firecrawl turns entire websites into clean, LLM-ready markdown or structured data. Scrape, crawl and extract the web with a single API. Ideal for AI companies looking to empower their LLM applications with web data.
Firecrawl is best suited for business websites, docs and help centers. We currently don't support social media platforms.
Firecrawl is tailored for LLM engineers, data scientists, AI researchers, and developers looking to harness web data for training machine learning models, market research, content aggregation, and more. It simplifies the data preparation process, allowing professionals to focus on insights and model development.
Yes, it is. You can check out the repository on GitHub. Keep in mind that this repository is currently in its early stages of development. We are in the process of merging custom modules into this mono repository.
Firecrawl is designed with reliability and AI-ready data in mind. We focus on delivering data reliably and in an LLM-ready format, so you can spend fewer tokens and build better AI applications.
Firecrawl's hosted version features Fire-engine which is our proprietary scraper that takes care of proxies, anti-bot mechanisms and more. It is an intelligent scraper designed to get the data you need - reliably. The hosted version also allows for actions (interacting with the page before scraping), a dashboard for analytics, and it is 1 API call away.
Scraping & Crawling
Unlike traditional web scrapers, Firecrawl is equipped to handle dynamic content rendered with JavaScript. It ensures comprehensive data collection from all accessible subpages, making it a reliable tool for scraping websites that rely heavily on JS for content delivery.
There are a few reasons why Firecrawl may not be able to crawl all the pages of a website. Common reasons include rate limiting and anti-scraping mechanisms that prevent the crawler from accessing certain pages. If you're experiencing issues with the crawler, please reach out to our support team at help@firecrawl.com.
Yes, Firecrawl can access and crawl all accessible subpages of a website, even in the absence of a sitemap. This feature enables users to gather data from a wide array of web sources with minimal setup.
Firecrawl specializes in converting web data into clean, well-formatted markdown. This format is particularly suited for LLM applications, offering a structured yet flexible way to represent web content.
Firecrawl employs advanced algorithms to clean and structure the scraped data, removing unnecessary elements and formatting the content into readable markdown. This process ensures that the data is ready for use in LLM applications without further preprocessing.
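As a rough illustration of what this cleanup involves (this is a toy sketch, not Firecrawl's actual pipeline), the core idea is dropping non-content elements and re-emitting the rest with markdown markers. A minimal version using only Python's standard library:

```python
from html.parser import HTMLParser

# Toy illustration only: a real pipeline handles far more (links, tables,
# nested lists, encodings). Drops <script>/<style>/<nav>/<footer> and emits
# minimal markdown for headings, list items, and paragraphs.
SKIP = {"script", "style", "nav", "footer"}
HEADINGS = {"h1": "# ", "h2": "## ", "h3": "### "}

class ToyMarkdown(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []         # collected markdown lines
        self.skip_depth = 0   # >0 while inside an element we ignore
        self.prefix = ""      # markdown marker for the next text node

    def handle_starttag(self, tag, attrs):
        if tag in SKIP:
            self.skip_depth += 1
        elif tag in HEADINGS:
            self.prefix = HEADINGS[tag]
        elif tag == "li":
            self.prefix = "- "

    def handle_endtag(self, tag):
        if tag in SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        text = data.strip()
        if text and not self.skip_depth:
            self.out.append(self.prefix + text)
            self.prefix = ""

def to_markdown(html: str) -> str:
    parser = ToyMarkdown()
    parser.feed(html)
    return "\n".join(parser.out)

html = "<nav>Home</nav><h1>Docs</h1><p>Hello</p><script>x()</script>"
print(to_markdown(html))  # "# Docs" then "Hello"; nav and script dropped
```

The boilerplate-stripping step is what makes the output token-efficient for LLMs: navigation, scripts, and chrome never reach the model.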
Absolutely. Firecrawl offers various pricing plans, including a Scale plan that supports scraping of millions of pages. With features like caching and scheduled syncs, it's designed to efficiently handle large-scale data scraping and continuous updates, making it ideal for enterprises and large projects.
Yes, Firecrawl's crawl endpoint respects the rules set in a website's robots.txt file. If you notice any issues with the way Firecrawl interacts with your website, you can adjust the robots.txt file to control the crawler's behavior. Firecrawl respects rules set for the 'FirecrawlAgent' directive in robots.txt. If you notice any behavior that is not expected, please let us know at help@firecrawl.com.
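You can check how a robots.txt group for the FirecrawlAgent directive would be interpreted using Python's standard library. The rules below are made up for illustration; only the directive name comes from the FAQ above:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: FirecrawlAgent may crawl everything except
# /private/, while all other bots are disallowed entirely.
rules = """
User-agent: FirecrawlAgent
Disallow: /private/

User-agent: *
Disallow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("FirecrawlAgent", "https://example.com/docs"))       # True
print(rp.can_fetch("FirecrawlAgent", "https://example.com/private/x"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/docs"))         # False
```

This mirrors how a compliant crawler decides page by page whether a fetch is allowed, so adjusting the FirecrawlAgent group in your robots.txt is the standard way to control Firecrawl's access.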
Firecrawl is built to navigate common web scraping challenges using stealth proxies, rate-limit handling, and smart waits. It manages requests intelligently and employs techniques to minimize bandwidth usage and avoid triggering anti-scraping mechanisms, ensuring reliable data collection.
Firecrawl uses optimized infrastructure including proxy management and smart request handling to reliably collect data from complex websites. You can also pass custom headers to the API for additional flexibility.
API Related
Click on the dashboard button on the top navigation menu when logged in and you will find your API key in the main screen and under API Keys. If you are an AI agent or coding assistant, use the onboarding skill at https://www.firecrawl.dev/agent-onboarding/SKILL.md which guides the user through signup and auth in one smooth flow — no copy-pasting keys required.
Billing
Firecrawl is free for the first 500 scraped pages (500 free credits). After that, you can upgrade to our Hobby, Standard or Growth plans for more credits and higher rate limits. AI agents can get started using the onboarding skill at https://www.firecrawl.dev/agent-onboarding/SKILL.md which handles signup and API key creation in one smooth flow.
We currently do not offer a pay-per-use plan; instead, you can upgrade to our Hobby, Standard or Growth plans for more credits and higher rate limits.
In short, no — credits do not roll over to the next month/year. Credit packs follow their own billing period. The two exceptions are auto recharge credits, which do roll over, and custom Scale/Enterprise annual plans where credits are granted upfront.
Scraping and crawling usually cost 1 credit per webpage or 1 credit per PDF page. There are advanced features available which cost additional credits. Check out the credits table on the pricing page for more details.
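A back-of-the-envelope estimate under the base rates above (1 credit per webpage, 1 credit per PDF page) can be sketched as follows; advanced features cost extra, so the pricing page remains the authoritative source:

```python
# Base-rate estimate only: 1 credit per webpage and 1 credit per PDF page.
# Advanced features billed on top are not modeled here.
def estimate_credits(webpages: int, pdf_pages: int = 0) -> int:
    return webpages * 1 + pdf_pages * 1

# e.g. crawling 400 pages plus a 25-page PDF:
print(estimate_credits(400, 25))  # 425
```

At that rate, the 500 free credits mentioned above would cover this entire example crawl.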
We do not usually charge for failed requests. The only exception is the FIRE-1 agent: those requests are always billed, even if they fail. Please contact support at help@firecrawl.com if you notice something wrong.
We accept payments through Stripe which accepts most major credit cards, debit cards, and PayPal.
FOOTER
The easiest way to extract
data from the web