# LLM Podcast Engine

A Next.js application that uses Firecrawl, Groq, and ElevenLabs to turn news articles into a generated podcast.
## Getting Started

1. Clone the repository:

   ```bash
   git clone https://github.com/developersdigest/llm-podcast-engine.git
   ```
2. Install dependencies:

   ```bash
   cd llm-podcast-engine
   pnpm install
   ```
3. Set up environment variables:

   Create a `.env` file in the root directory and add the following variables:

   ```
   FIRECRAWL_API_KEY=your_firecrawl_api_key
   GROQ_API_KEY=your_groq_api_key
   ELEVENLABS_API_KEY=your_elevenlabs_api_key
   ```

   You can obtain these API keys from Firecrawl, Groq, and ElevenLabs, respectively.
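Because these keys are read at runtime, a missing one usually only surfaces when the corresponding request fails. A minimal startup check along these lines can catch that earlier — a sketch, not part of the repository itself (`missingEnvKeys` is a hypothetical helper):

```typescript
// Names of the environment variables the app expects (from the README's .env example).
const REQUIRED_KEYS = ["FIRECRAWL_API_KEY", "GROQ_API_KEY", "ELEVENLABS_API_KEY"];

// Returns the subset of required keys that are unset or empty in the given
// environment (defaults to process.env).
export function missingEnvKeys(
  env: Record<string, string | undefined> = process.env
): string[] {
  return REQUIRED_KEYS.filter((key) => !env[key]);
}

const missing = missingEnvKeys();
if (missing.length > 0) {
  console.warn(`Missing environment variables: ${missing.join(", ")}`);
}
```

Running this once at server startup turns a silent mid-request failure into an immediate, readable warning.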
4. Start the development server:

   ```bash
   pnpm dev
   ```

   This starts the Next.js development server; the application is available at [http://localhost:3000](http://localhost:3000).
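For orientation, the three API keys imply a pipeline of scraping articles (Firecrawl), writing a script (Groq), and voicing it (ElevenLabs). The script step might call Groq's OpenAI-compatible chat completions endpoint roughly as sketched below; the model id, prompt wording, and helper names (`buildScriptRequest`, `generateScript`) are illustrative assumptions, not taken from this repository:

```typescript
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

// Builds the chat completions payload from scraped article texts.
// The model id is an assumption — any Groq-hosted chat model would do.
export function buildScriptRequest(
  articles: string[]
): { model: string; messages: ChatMessage[] } {
  return {
    model: "llama-3.1-8b-instant",
    messages: [
      {
        role: "system",
        content:
          "You are a podcast host. Turn the following articles into a short, engaging script.",
      },
      { role: "user", content: articles.join("\n\n---\n\n") },
    ],
  };
}

// Sends the payload to Groq's OpenAI-compatible endpoint and returns the script text.
export async function generateScript(articles: string[]): Promise<string> {
  const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildScriptRequest(articles)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The resulting script text would then be sent to ElevenLabs for text-to-speech to produce the podcast audio.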