
How do I add web search to a Python script?

Adding web search to a Python script means calling a search API over HTTP and parsing the response. The two main options are using requests directly with any REST search API, or installing an SDK that wraps authentication and response handling. The difference matters when you need full page content: standard search APIs return snippets and URLs, so getting the actual article or document body requires a second HTTP call to scrape each result.

| Option | Library | Output | Extra scrape step needed |
| --- | --- | --- | --- |
| Raw HTTP call | requests | JSON with snippets and URLs | Yes, to get full content |
| SDK with search only | provider SDK | Structured snippets | Yes, to get full content |
| SDK with search and extract | firecrawl-py | Full page content in markdown | No |
| Browser automation | playwright, selenium | Full rendered HTML | Requires parsing |

For a basic setup with `requests`, you send a GET or POST to the search API endpoint with your query as a parameter and your API key in the header, then parse `response.json()` for the results list. The tradeoff is that snippets are 2-3 sentences and rarely contain the full context an LLM needs to reason over. For most RAG and agent workflows, you then loop over the top URLs and make a second request to scrape each one, which adds latency and complexity.
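A minimal sketch of that raw-`requests` pattern. The endpoint URL, auth header, and response field names (`results`, `url`, `snippet`) are placeholders, not any particular provider's API; substitute the values from your search provider's documentation:

```python
import requests


def parse_results(payload):
    """Pull (url, snippet) pairs out of a search API's JSON response.

    The key names here are placeholders; real providers each use
    their own response schema.
    """
    return [(r["url"], r["snippet"]) for r in payload.get("results", [])]


def search(query, api_key, endpoint="https://api.example.com/search"):
    # Hypothetical endpoint and bearer-token auth; adjust both to
    # match your provider.
    resp = requests.get(
        endpoint,
        params={"q": query, "limit": 5},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return parse_results(resp.json())


if __name__ == "__main__":
    hits = search("web scraping in python", api_key="your-api-key")
    # Snippets alone are short; for full page content you would still
    # make a second request per URL (e.g. requests.get(url).text)
    # and extract the article body yourself.
    for url, snippet in hits:
        print(url, "-", snippet)
```

This is the two-step shape the paragraph describes: one call to get snippets and URLs, then a scrape pass per URL if you need full text.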

The firecrawl-py SDK handles search and content extraction in one call:

```python
from firecrawl import Firecrawl

app = Firecrawl(api_key="your-api-key")
results = app.search("your query here", limit=5)

for r in results:
    print(r["url"])
    print(r["markdown"])  # full page content, ready for LLM context
```

Install with `pip install firecrawl-py`. The full search API reference covers filtering by date, domain, and content category.

Last updated: Apr 30, 2026