An open-source clone of OpenAI's Deep Research experiment. Instead of using a fine-tuned version of o3, this project uses Firecrawl's extract + search together with a reasoning model to do deep research on the web.
Check out the demo here
- Firecrawl Search + Extract
  - Feed real-time data to the AI via search
  - Extract structured data from multiple websites via extract
- Next.js App Router
  - Advanced routing for seamless navigation and performance
  - React Server Components (RSCs) and Server Actions for server-side rendering and increased performance
- AI SDK
  - Unified API for generating text, structured objects, and tool calls with LLMs
  - Hooks for building dynamic chat and generative user interfaces
  - Supports OpenAI (default), Anthropic, Cohere, and other model providers
- shadcn/ui
  - Styling with Tailwind CSS
  - Component primitives from Radix UI for accessibility and flexibility
- Data Persistence
  - Vercel Postgres powered by Neon for saving chat history and user data
  - Vercel Blob for efficient file storage
- NextAuth.js
  - Simple and secure authentication
This template ships with OpenAI `gpt-4o` as the default. With the AI SDK, however, you can switch LLM providers to OpenAI, Anthropic, Cohere, and many more with just a few lines of code.

This repo is compatible with both OpenAI and OpenRouter. To use OpenRouter, set the `OPENROUTER_API_KEY` environment variable.
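Because OpenRouter exposes an OpenAI-compatible endpoint, switching between the two can come down to which base URL and API key you hand the provider. Here is a minimal sketch of that selection logic; the helper name `resolveProviderConfig` is illustrative, not from this repo:

```typescript
// Illustrative sketch: prefer OpenRouter when OPENROUTER_API_KEY is set,
// otherwise fall back to the plain OpenAI API.
interface ProviderConfig {
  baseURL: string;
  apiKey: string;
}

function resolveProviderConfig(
  env: Record<string, string | undefined>
): ProviderConfig {
  if (env.OPENROUTER_API_KEY) {
    // OpenRouter speaks the OpenAI wire protocol at this endpoint.
    return {
      baseURL: "https://openrouter.ai/api/v1",
      apiKey: env.OPENROUTER_API_KEY,
    };
  }
  if (env.OPENAI_API_KEY) {
    return {
      baseURL: "https://api.openai.com/v1",
      apiKey: env.OPENAI_API_KEY,
    };
  }
  throw new Error("Set OPENROUTER_API_KEY or OPENAI_API_KEY");
}
```

The resulting config could then be passed to `createOpenAI` from `@ai-sdk/openai`, which accepts `baseURL` and `apiKey` options.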
You can deploy your own version of the Next.js AI Chatbot to Vercel with one click:
You will need to use the environment variables defined in `.env.example` to run Next.js AI Chatbot. It's recommended you use Vercel Environment Variables for this, but a `.env` file is all that is necessary.

> Note: You should not commit your `.env` file or it will expose secrets that will allow others to control access to your various OpenAI and authentication provider accounts.
- Install Vercel CLI: `npm i -g vercel`
- Link local instance with Vercel and GitHub accounts (creates `.vercel` directory): `vercel link`
- Download your environment variables: `vercel env pull`
```bash
pnpm install
pnpm db:migrate
pnpm dev
```
Your app template should now be running on localhost:3000.
If you want to use a model other than the default, you will need to install the dependencies for that model.

DeepSeek:

```bash
pnpm add @ai-sdk/deepseek
```

TogetherAI's DeepSeek:

```bash
pnpm add @ai-sdk/togetherai
```

> Note: TogetherAI enforces rate limits; see https://docs.together.ai/docs/rate-limits
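Each non-default reasoning model corresponds to one AI SDK provider package, per the install commands above. A small illustrative lookup (the helper itself is hypothetical, not part of this repo):

```typescript
// Hypothetical helper mapping a reasoning-model id to the AI SDK
// provider package that must be installed to use it.
const PROVIDER_PACKAGE_FOR_MODEL: Record<string, string> = {
  "deepseek-reasoner": "@ai-sdk/deepseek", // DeepSeek's own API
  "deepseek-ai/DeepSeek-R1": "@ai-sdk/togetherai", // DeepSeek-R1 hosted on TogetherAI
};

function providerPackageFor(model: string): string | undefined {
  // Returns undefined for models served by the default OpenAI provider.
  return PROVIDER_PACKAGE_FOR_MODEL[model];
}
```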
The application uses a separate model for reasoning tasks (like research analysis and structured outputs). This can be configured using the `REASONING_MODEL` environment variable.
| Provider | Models | Notes |
|---|---|---|
| OpenAI | `gpt-4o`, `o1`, `o3-mini` | Native JSON schema support |
| TogetherAI | `deepseek-ai/DeepSeek-R1` | Requires `BYPASS_JSON_VALIDATION=true` |
| DeepSeek | `deepseek-reasoner` | Requires `BYPASS_JSON_VALIDATION=true` |
- Only certain OpenAI models (`gpt-4o`, `o1`, `o3-mini`) natively support structured JSON outputs
- Other models (e.g. `deepseek-reasoner`) can be used but may require disabling JSON schema validation
- When using models that don't support JSON schema:
  - Set `BYPASS_JSON_VALIDATION=true` in your `.env` file; this allows non-OpenAI models to be used for reasoning tasks
  - Note: Without JSON validation, the model responses may be less structured
- The reasoning model is used for tasks that require structured thinking and analysis, such as:
  - Research analysis
  - Document suggestions
  - Data extraction
  - Structured responses
- If no `REASONING_MODEL` is specified, it defaults to `o1-mini`
- If an invalid model is specified, it will fall back to `o1-mini`
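The fallback rules above can be sketched as a small resolver. This is a hypothetical illustration of the described behavior, not the repo's actual implementation:

```typescript
// Hypothetical sketch of the REASONING_MODEL fallback rules described above:
// missing or unrecognized values fall back to o1-mini.
const SUPPORTED_REASONING_MODELS = new Set([
  "gpt-4o",
  "o1",
  "o1-mini",
  "o3-mini",
  "deepseek-ai/DeepSeek-R1",
  "deepseek-reasoner",
]);

const DEFAULT_REASONING_MODEL = "o1-mini";

function resolveReasoningModel(requested?: string): string {
  // No model specified, or an invalid one: use the default.
  if (!requested || !SUPPORTED_REASONING_MODELS.has(requested)) {
    return DEFAULT_REASONING_MODEL;
  }
  return requested;
}
```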
Add to your `.env` file:

```bash
# Choose one of: deepseek-reasoner, deepseek-ai/DeepSeek-R1
REASONING_MODEL=deepseek-reasoner

# Required when using models that don't support JSON schema (like deepseek-reasoner)
BYPASS_JSON_VALIDATION=true
```
The reasoning model is automatically used when the application needs structured outputs or complex analysis, regardless of which model the user has selected for general chat.
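For models without native JSON schema support, the bypass flag trades validation for compatibility. A rough sketch of what that check might look like (a hypothetical helper, not code from this repo):

```typescript
// Hypothetical sketch: decide whether a model's structured output should be
// validated against a JSON schema, honoring the BYPASS_JSON_VALIDATION flag.
function shouldValidateJson(
  model: string,
  env: Record<string, string | undefined>
): boolean {
  // These OpenAI models support JSON schema natively, so always validate.
  const nativeJsonModels = new Set(["gpt-4o", "o1", "o3-mini"]);
  if (nativeJsonModels.has(model)) return true;
  // For other models, validate unless the flag explicitly disables it.
  return env.BYPASS_JSON_VALIDATION !== "true";
}
```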