Generative UI with LangChain and Firecrawl
A template for building generative UI applications with LangChain Python and Firecrawl.

Overview (Python LangGraph Firecrawl 🔥 Tool)

This application provides a template for building generative UI applications with LangChain Python. It comes pre-built with a few UI features you can use to experiment with generative UI. The UI components are built with Shadcn.
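To give a concrete picture of the core idea, here is a minimal, illustrative sketch (not the template's actual code) of a Firecrawl-backed tool attached to a LangGraph agent, whose structured result a client could map onto a pre-built Shadcn component. The tool name scrape_page and the "web-result-card" component are hypothetical, and the firecrawl-py call signature may differ between releases.

# sketch.py -- illustrative only; the repository wires its backend up differently
import os

from firecrawl import FirecrawlApp              # firecrawl-py client
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

firecrawl = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])

@tool
def scrape_page(url: str) -> dict:
    """Scrape a web page with Firecrawl and return content the UI can render."""
    # The exact scrape_url signature and return type vary across firecrawl-py releases.
    result = firecrawl.scrape_url(url)
    # A structured payload lets the frontend choose a matching Shadcn component.
    return {"component": "web-result-card", "props": {"url": url, "content": result}}

agent = create_react_agent(ChatOpenAI(model="gpt-4o"), tools=[scrape_page])

# Example invocation: the agent decides when to call the Firecrawl tool.
agent.invoke({"messages": [("user", "Summarize https://www.firecrawl.dev")]})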

Getting Started

Installation

First, clone the repository and install dependencies:

git clone https://github.com/Gitmaxd/gen-ui-firecrawl-python.git

cd gen-ui-firecrawl-python

Install dependencies in the frontend and backend directories:

cd ./frontend
yarn install

cd ../backend
poetry install

Secrets

Next, if you plan on using the existing pre-built UI components, you'll need to set a few environment variables:

Copy the .env.example file to .env inside the backend directory.

LangSmith keys are optional, but highly recommended if you plan on developing this application further.

The OPENAI_API_KEY is required. Get your OpenAI API key from the OpenAI dashboard.

Sign up/in to LangSmith and get your API key.

Create a new GitHub PAT (Personal Access Token) with the repo scope.

Create a free Geocode account.

The FIRECRAWL_API_KEY is required for the web-based data extraction in this demo. Get your Firecrawl API key and 500 free credits from the Firecrawl dashboard.

# ------------------LangSmith tracing------------------
LANGCHAIN_API_KEY=...
LANGCHAIN_CALLBACKS_BACKGROUND=true
LANGCHAIN_TRACING_V2=true
# -----------------------------------------------------

GITHUB_TOKEN=...
OPENAI_API_KEY=...
GEOCODE_API_KEY=...
FIRECRAWL_API_KEY=...
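If the backend does not seem to pick up these variables, a quick sanity check such as the sketch below confirms they are visible to the process before any clients are constructed. It assumes python-dotenv is available, which may or may not be how this template actually loads its configuration.

# check_env.py -- illustrative sanity check, not part of the template
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current (backend) directory

for key in ("OPENAI_API_KEY", "FIRECRAWL_API_KEY", "GITHUB_TOKEN", "GEOCODE_API_KEY"):
    print(f"{key}: {'set' if os.getenv(key) else 'MISSING'}")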

Running the Application

cd ./frontend

yarn dev

This will start a development server on http://localhost:3000.

Then, in a new terminal window:

cd ../backend

poetry run start

Go further

If you're interested in ways to take this demo application further, I'd consider the following:

Generating entire React components to be rendered, instead of relying on pre-built components, or using the LLM to build custom components with a UI library like Shadcn (a rough sketch of this idea follows after this list).

Multi-tool and component usage: update the LangGraph agent to call multiple tools and append multiple different UI components to the client-rendered UI.

Generative UI outside the chatbot window: have the UI render dynamically in different areas of the screen, e.g. a dashboard where components are rendered based on the LLM's output.
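As a starting point for the first idea above, the sketch below has the model emit a structured description of a component and its props for the client to render. It uses LangChain's with_structured_output; the UIComponent schema and the component names are placeholders, not components shipped with this template.

# ui_spec.py -- illustrative sketch of letting the LLM pick a component
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class UIComponent(BaseModel):
    """A component the client knows how to render, plus its props."""
    name: str = Field(description="e.g. 'weather-card', 'repo-card', 'invoice-table'")
    props: dict = Field(description="Props to pass to the chosen component")

llm = ChatOpenAI(model="gpt-4o").with_structured_output(UIComponent)
spec = llm.invoke("Show the current weather in San Francisco as a card.")

# The frontend would map spec.name to a Shadcn-based component and spread spec.props into it.
print(spec.name, spec.props)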
