
Description
Open Deep Research
An open-source clone of OpenAI's Deep Research experiment. Instead of using a fine-tuned version of o3, this method uses Firecrawl's extract + search with a reasoning model to do deep research on the web.
Check out the demo here
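To make the flow concrete, here is a minimal sketch of how search and extract can be chained. This is illustrative only, not the project's actual code; it assumes Firecrawl's JavaScript SDK and a FIRECRAWL_API_KEY environment variable, and the query, schema, and field names are placeholders.

```ts
// Sketch: search the web for sources, then extract structured data from the
// top results with Firecrawl. Not the project's actual code.
import FirecrawlApp from "@mendable/firecrawl-js";
import { z } from "zod";

const firecrawl = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY ?? "" });

async function research(topic: string) {
  // 1. Search: feed realtime results to the model.
  const results = await firecrawl.search(topic, { limit: 5 });

  // 2. Extract: pull structured data from the URLs the search returned.
  const urls = (results.data ?? [])
    .map((r) => r.url)
    .filter((u): u is string => Boolean(u));

  const extraction = await firecrawl.extract(urls, {
    prompt: `Summarize what each page says about: ${topic}`,
    schema: z.object({ findings: z.array(z.string()) }), // placeholder schema
  });

  return extraction;
}
```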
Features
- Firecrawl Search + Extract
- Feed realtime data to the AI via search
- Extract structured data from multiple websites via extract
- Next.js App Router
- Advanced routing for seamless navigation and performance
- React Server Components (RSCs) and Server Actions for server-side rendering and increased performance
- AI SDK
- Unified API for generating text, structured objects, and tool calls with LLMs
- Hooks for building dynamic chat and generative user interfaces
- Supports OpenAI (default), Anthropic, Cohere, and other model providers
- shadcn/ui
- Styling with Tailwind CSS
- Component primitives from Radix UI for accessibility and flexibility
- Data Persistence
- Vercel Postgres powered by Neon for saving chat history and user data
- Vercel Blob for efficient file storage
- NextAuth.js
- Simple and secure authentication
Model Providers
This template ships with OpenAI gpt-4o as the default. However, with the AI SDK, you can switch LLM providers to OpenAI, Anthropic, Cohere, and many more with just a few lines of code.
This repo is compatible with OpenRouter and OpenAI. To use OpenRouter, you need to set the OPENROUTER_API_KEY environment variable.
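As a sketch of what switching providers looks like (not the repo's exact wiring), the AI SDK's OpenAI provider can be pointed at OpenRouter with a custom base URL and the OPENROUTER_API_KEY; the model slug below is just an example of one that OpenRouter exposes:

```ts
// Sketch: route AI SDK calls through OpenRouter instead of OpenAI directly.
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const openrouter = createOpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const { text } = await generateText({
  model: openrouter("openai/gpt-4o"), // illustrative model slug
  prompt: "Summarize the latest research on solid-state batteries.",
});
```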
Function Max Duration
By default, the function timeout is set to 300 seconds (5 minutes). If you're using Vercel's Hobby tier, you'll need to reduce this to 60 seconds. You can adjust this by changing the MAX_DURATION environment variable in your .env file:
MAX_DURATION=60
Learn more about it here
Deploy Your Own
You can deploy your own version of the Next.js AI Chatbot to Vercel with one click:
Running locally
You will need to use the environment variables defined in .env.example to run the Next.js AI Chatbot. It's recommended you use Vercel Environment Variables for this, but a .env file is all that is necessary.
Note: You should not commit your .env file, or you will expose secrets that allow others to control access to your OpenAI and authentication provider accounts.
- Install the Vercel CLI:
npm i -g vercel
- Link your local instance with your Vercel and GitHub accounts (this creates a .vercel directory):
vercel link
- Download your environment variables:
vercel env pull
1. First, install all dependencies:
pnpm install
2. Then, run the database migrations:
pnpm db:migrate
3. Finally, run the app:
pnpm dev
Your app template should now be running on localhost:3000.
Model Dependencies
If you want to use a model other than the default, you will need to install the dependencies for that model.
TogetherAI's DeepSeek:
pnpm add @ai-sdk/togetherai
Note: TogetherAI enforces rate limits; see https://docs.together.ai/docs/rate-limits
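Once installed, the provider plugs into the AI SDK like any other. A minimal sketch (not the repo's exact code; the provider reads its TogetherAI API key from the environment by default):

```ts
// Sketch: use TogetherAI's hosted DeepSeek-R1 through the AI SDK.
import { togetherai } from "@ai-sdk/togetherai";
import { generateText } from "ai";

const { text } = await generateText({
  model: togetherai("deepseek-ai/DeepSeek-R1"),
  prompt: "Outline the key steps in a literature review.", // placeholder prompt
});
```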
Reasoning Model Configuration
The application uses a separate model for reasoning tasks (like research analysis and structured outputs). This can be configured using the REASONING_MODEL environment variable.
Available Options
| Provider | Models | Notes |
|---|---|---|
| OpenAI | gpt-4o, o1, o3-mini | Native JSON schema support |
| TogetherAI | deepseek-ai/DeepSeek-R1 | Requires BYPASS_JSON_VALIDATION=true |
Important Notes
- Only certain OpenAI models (gpt-4o, o1, o3-mini) natively support structured JSON outputs
- Other models (deepseek-reasoner) can be used but may require disabling JSON schema validation
- When using models that don't support JSON schema:
  - Set BYPASS_JSON_VALIDATION=true in your .env file. This allows non-OpenAI models to be used for reasoning tasks.
  - Note: Without JSON validation, the model responses may be less structured.
- The reasoning model is used for tasks that require structured thinking and analysis, such as:
  - Research analysis
  - Document suggestions
  - Data extraction
  - Structured responses
- If no REASONING_MODEL is specified, it defaults to o1-mini
- If an invalid model is specified, it will fall back to o1-mini
Usage
Add to your .env file:
# Choose one of: deepseek-reasoner, deepseek-ai/DeepSeek-R1
REASONING_MODEL=deepseek-ai/DeepSeek-R1
# Required when using models that don't support JSON schema (like deepseek-reasoner)
BYPASS_JSON_VALIDATION=true
The reasoning model is automatically used when the application needs structured outputs or complex analysis, regardless of which model the user has selected for general chat.
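Conceptually, the model resolution behaves like the sketch below. The function and constant names are hypothetical and not taken from the repo; it simply encodes the rules described above (fallback to o1-mini, bypass flag for non-OpenAI models).

```ts
// Illustrative sketch of resolving the reasoning model from environment variables.
// Names are hypothetical; the repo's actual implementation may differ.
const SUPPORTED_REASONING_MODELS = [
  "gpt-4o",
  "o1",
  "o3-mini",
  "o1-mini",
  "deepseek-reasoner",
  "deepseek-ai/DeepSeek-R1",
];

const DEFAULT_REASONING_MODEL = "o1-mini";

export function resolveReasoningModel(): string {
  const requested = process.env.REASONING_MODEL;

  // Fall back to o1-mini when the variable is missing or not recognized.
  if (!requested || !SUPPORTED_REASONING_MODELS.includes(requested)) {
    return DEFAULT_REASONING_MODEL;
  }

  // Non-OpenAI models need JSON schema validation bypassed.
  const openAiModels = ["gpt-4o", "o1", "o3-mini", "o1-mini"];
  if (!openAiModels.includes(requested) && process.env.BYPASS_JSON_VALIDATION !== "true") {
    console.warn(
      `${requested} does not natively support JSON schema; set BYPASS_JSON_VALIDATION=true`
    );
  }

  return requested;
}
```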