An open-source alternative to Gemini Deep Research, built to generate AI-powered reports from web search results with precision and efficiency. Supporting multiple AI platforms (Google, OpenAI, Anthropic) and models, it offers flexibility in choosing the right AI model for your research needs.
This app works in three key steps:
- Search Results Retrieval: Using either the Google Custom Search or Bing Search API (configurable), the app fetches comprehensive search results for the specified search term.
- Content Extraction: Leveraging JinaAI, it retrieves and processes the content of the selected search results, ensuring accurate and relevant information.
- Report Generation: With the curated search results and extracted content, the app generates a detailed report using your chosen AI model (Gemini, GPT-4, Sonnet, etc.), providing insightful, synthesized output tailored to your custom prompts.

On top of these steps, a Knowledge Base lets you save generated reports for future reference and easy retrieval.
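The three steps above can be sketched as a single pipeline. This is an illustrative sketch, not the app's actual code: the function names and signatures are assumptions, with the search, extraction (JinaAI in the real app), and report-generation stages injected as callbacks.

```typescript
// Illustrative sketch of the three-step flow; helper signatures are assumptions.
type SearchFn = (query: string) => Promise<string[]>                    // step 1: result URLs
type ExtractFn = (url: string) => Promise<string>                       // step 2: page content
type GenerateFn = (query: string, sources: string[]) => Promise<string> // step 3: AI report

async function runResearch(
  query: string,
  search: SearchFn,
  extract: ExtractFn,
  generate: GenerateFn
): Promise<string> {
  const urls = await search(query)                                // fetch search results
  const contents = await Promise.all(urls.map((u) => extract(u))) // pull page text
  return generate(query, contents)                                // synthesize the report
}
```

Because the stages are injected, each one can be swapped independently, which mirrors how the app lets you change the search provider or AI model without touching the rest of the flow.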
Open Deep Research combines powerful tools to streamline research and report creation in a user-friendly, open-source platform. You can tailor the app to your needs: select your preferred search provider and AI model, customize prompts, adjust rate limits, and configure how many results are fetched and how many can be selected.
- 🔍 Flexible web search with Google or Bing APIs
- ⏱️ Time-based filtering of search results
- 📄 Content extraction from web pages
- 🤖 Multi-platform AI support (Google Gemini, OpenAI GPT, Anthropic Sonnet)
- 🎯 Flexible model selection with granular configuration
- 📊 Multiple export formats (PDF, Word, Text)
- 🧠 Knowledge Base for saving and accessing past reports
- ⚡ Rate limiting for stability
- 📱 Responsive design
Try it out at: Open Deep Research
The Knowledge Base feature allows you to:
- Save generated reports for future reference (reports are saved in the browser's local storage)
- Access your research history
- Quickly load and review past reports
- Build a personal research library over time
The app's settings can be customized through the configuration file at `lib/config.ts`. Here are the key parameters you can adjust:
Control rate limiting and the number of requests allowed per minute for different operations:
```typescript
rateLimits: {
  enabled: true, // Enable/disable rate limiting (set to false to skip Redis setup)
  search: 5, // Search requests per minute
  contentFetch: 20, // Content fetch requests per minute
  reportGeneration: 5, // Report generation requests per minute
}
```
Note: If you set `enabled: false`, you can run the application without setting up Redis. This is useful for local development or when you don't need rate limiting.
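To illustrate what a per-minute budget like `search: 5` means, here is a minimal in-memory fixed-window limiter. It is only a sketch: the app itself enforces these limits through Upstash Redis, and the class and method names here are invented for illustration.

```typescript
// Minimal fixed-window limiter illustrating "N requests per minute".
// The real app uses Upstash Redis; this in-memory version is just a sketch.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>()
  private limit: number
  private windowMs: number

  constructor(limit: number, windowMs = 60_000) {
    this.limit = limit
    this.windowMs = windowMs
  }

  allow(key: string, now = Date.now()): boolean {
    const entry = this.hits.get(key)
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now }) // start a fresh window
      return true
    }
    if (entry.count < this.limit) {
      entry.count++ // still under the per-minute budget
      return true
    }
    return false // over budget: reject until the window resets
  }
}
```

With `new RateLimiter(5)`, a sixth call for the same key within a minute is rejected, matching the `search: 5` setting above.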
The app supports both the Google Custom Search and Bing Search APIs. You can configure your preferred search provider in `lib/config.ts`:
```typescript
search: {
  resultsPerPage: 10,
  maxSelectableResults: 3,
  provider: 'google', // 'google' or 'bing'
  safeSearch: {
    google: 'active', // 'active' or 'off'
    bing: 'moderate' // 'moderate', 'strict', or 'off'
  },
  market: 'en-US',
}
```
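For reference, the `safeSearch` and results-per-page settings map onto query parameters of the Custom Search JSON API. The sketch below builds such a request URL; the parameter names (`key`, `cx`, `q`, `num`, `safe`, `dateRestrict`) come from that API, while the helper itself is illustrative and not the app's code. `dateRestrict` is also how time-based filtering of results is typically expressed.

```typescript
// Illustrative builder for a Google Custom Search request URL.
// Parameter names follow the Custom Search JSON API; the function is a sketch.
function buildGoogleSearchUrl(opts: {
  apiKey: string
  cx: string
  query: string
  resultsPerPage?: number
  safeSearch?: 'active' | 'off'
  dateRestrict?: string // e.g. 'd1' (past day), 'w1' (week), 'm1' (month), 'y1' (year)
}): string {
  const params = new URLSearchParams({
    key: opts.apiKey,
    cx: opts.cx,
    q: opts.query,
    num: String(opts.resultsPerPage ?? 10), // Custom Search caps num at 10
    safe: opts.safeSearch ?? 'active',
  })
  if (opts.dateRestrict) params.set('dateRestrict', opts.dateRestrict)
  return `https://www.googleapis.com/customsearch/v1?${params}`
}
```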
To use Google Custom Search:
- Get your API key from Google Cloud Console
- Create a Custom Search Engine and get your CX ID from Google Programmable Search
- Add the credentials to your `.env.local` file:

```bash
GOOGLE_SEARCH_API_KEY="your-api-key"
GOOGLE_SEARCH_CX="your-cx-id"
```
To use Bing Search:
- Get your API key from Azure Portal
- Add the credential to your `.env.local` file:

```bash
AZURE_SUB_KEY="your-azure-key"
```
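For reference, a Bing request carries the key in a header rather than in the URL. The sketch below assembles such a request; the endpoint, the `Ocp-Apim-Subscription-Key` header, and the `q`/`count`/`safeSearch`/`mkt` parameters follow the Bing Web Search v7 API, but the helper itself is illustrative and not the app's code.

```typescript
// Illustrative builder for a Bing Web Search v7 request.
// Endpoint, header, and parameter names follow the Bing API; values are placeholders.
function buildBingSearchRequest(opts: {
  subscriptionKey: string
  query: string
  count?: number
  safeSearch?: 'Off' | 'Moderate' | 'Strict'
  market?: string
}): { url: string; headers: Record<string, string> } {
  const params = new URLSearchParams({
    q: opts.query,
    count: String(opts.count ?? 10),
    safeSearch: opts.safeSearch ?? 'Moderate',
    mkt: opts.market ?? 'en-US',
  })
  return {
    url: `https://api.bing.microsoft.com/v7.0/search?${params}`,
    headers: { 'Ocp-Apim-Subscription-Key': opts.subscriptionKey }, // key goes in a header
  }
}
```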
The Knowledge Base feature allows you to build a personal research library by:
- Saving generated reports with their original search queries
- Accessing and loading past reports instantly
- Building a searchable archive of your research
- Maintaining context across research sessions
Reports saved to the Knowledge Base include:
- The full report content with all sections
- Original search query and prompt
- Source URLs and references
- Generation timestamp
You can access your Knowledge Base through the dedicated button in the UI, which opens a sidebar containing all your saved reports.
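Conceptually, saving to the browser's local storage looks like the following sketch. The storage key and the `SavedReport` shape are assumptions mirroring the fields listed above, not the app's actual schema; the store is injected so the sketch also runs outside a browser.

```typescript
// Sketch of Knowledge Base persistence; key name and shapes are illustrative.
interface SavedReport {
  query: string      // original search query
  prompt: string     // custom prompt used for generation
  content: string    // full report content with all sections
  sources: string[]  // source URLs and references
  generatedAt: string // generation timestamp (ISO 8601)
}

// Minimal localStorage-like interface, injectable for testing outside a browser.
interface KVStore {
  getItem(key: string): string | null
  setItem(key: string, value: string): void
}

const KB_KEY = 'knowledge-base' // hypothetical storage key

function loadReports(store: KVStore): SavedReport[] {
  const raw = store.getItem(KB_KEY)
  return raw ? (JSON.parse(raw) as SavedReport[]) : []
}

function saveReport(store: KVStore, report: SavedReport): void {
  const all = loadReports(store)
  all.push(report)
  store.setItem(KB_KEY, JSON.stringify(all)) // append and persist
}
```

In the browser, `window.localStorage` satisfies the `KVStore` interface directly.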
Configure which AI platforms and models are available. The app supports multiple AI platforms (Google, OpenAI, Anthropic, DeepSeek) with various models for each platform. You can enable/disable platforms and individual models based on your needs:
```typescript
platforms: {
  google: {
    enabled: true,
    models: {
      'gemini-flash': {
        enabled: true,
        label: 'Gemini Flash',
      },
      'gemini-flash-thinking': {
        enabled: true,
        label: 'Gemini Flash Thinking',
      },
      'gemini-exp': {
        enabled: false,
        label: 'Gemini Exp',
      },
    },
  },
  openai: {
    enabled: true,
    models: {
      'gpt-4o': {
        enabled: false,
        label: 'GPT-4o',
      },
      'o1-mini': {
        enabled: false,
        label: 'o1-mini',
      },
      'o1': {
        enabled: false,
        label: 'o1',
      },
    },
  },
  anthropic: {
    enabled: true,
    models: {
      'sonnet-3.5': {
        enabled: false,
        label: 'Claude 3 Sonnet',
      },
      'haiku-3.5': {
        enabled: false,
        label: 'Claude 3 Haiku',
      },
    },
  },
  deepseek: {
    enabled: true,
    models: {
      'chat': {
        enabled: true,
        label: 'DeepSeek V3',
      },
      'reasoner': {
        enabled: true,
        label: 'DeepSeek R1',
      },
    },
  },
}
```
For each platform:
- `enabled`: Controls whether the platform is available

For each model:
- `enabled`: Controls whether the specific model is selectable
- `label`: The display name shown in the UI
Disabled models will appear grayed out in the UI but remain visible to show all available options. This allows users to see the full range of available models while clearly indicating which ones are currently accessible.
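As a sketch, such a config can be reduced to the list of currently selectable models with a small helper (hypothetical, not the app's code):

```typescript
// Derives "platform/model" identifiers for every selectable model.
// Mirrors the config shape above; the helper itself is illustrative.
type ModelConfig = { enabled: boolean; label: string }
type PlatformConfig = { enabled: boolean; models: Record<string, ModelConfig> }

function selectableModels(platforms: Record<string, PlatformConfig>): string[] {
  const out: string[] = []
  for (const [platform, p] of Object.entries(platforms)) {
    if (!p.enabled) continue // skip disabled platforms entirely
    for (const [model, m] of Object.entries(p.models)) {
      if (m.enabled) out.push(`${platform}/${model}`) // only enabled models are selectable
    }
  }
  return out
}
```

Disabled models are simply filtered out here; the UI instead keeps them visible but grayed out, as described above.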
To modify these settings, update the values in `lib/config.ts`. The changes take effect after restarting the development server.
When using advanced reasoning models like OpenAI's o1 or DeepSeek Reasoner, you may need to increase the serverless function duration limit as these models typically take longer to generate comprehensive reports. The default duration might not be sufficient.
For Vercel deployments, you can increase the duration limit in your `vercel.json`:

```json
{
  "functions": {
    "app/api/report/route.ts": {
      "maxDuration": 120
    }
  }
}
```
Or modify the duration in your route file:
```typescript
// In app/api/report/route.ts
export const maxDuration = 120 // Set to 120 seconds or higher
```
Note: The maximum duration limit may vary based on your hosting platform and subscription tier.
The app supports local model inference through Ollama integration. You can:
- Install Ollama on your machine
- Pull your preferred models using `ollama pull model-name`
- Configure the model in `lib/config.ts`:
```typescript
platforms: {
  ollama: {
    enabled: true,
    models: {
      'your-model-name': {
        enabled: true,
        label: 'Your Model Display Name'
      }
    }
  }
}
```
Local models through Ollama bypass rate limiting since they run on your machine. This makes them perfect for development, testing, or when you need unlimited generations.
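As a sketch, a non-streaming request to a local Ollama server can be assembled like this. The default port (11434) and the `model`/`prompt`/`stream` fields follow Ollama's `/api/generate` REST API; the helper itself and the model name are illustrative.

```typescript
// Illustrative builder for a request to Ollama's local /api/generate endpoint.
// Field names follow Ollama's REST API; the model name is a placeholder.
function buildOllamaRequest(model: string, prompt: string) {
  return {
    url: 'http://localhost:11434/api/generate', // Ollama's default local port
    method: 'POST',
    body: JSON.stringify({ model, prompt, stream: false }), // stream: false → single JSON response
  }
}
```

The resulting request can be sent with `fetch(req.url, { method: req.method, body: req.body })` once Ollama is running and the model has been pulled.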
- Node.js 20+
- npm, yarn, pnpm, or bun
- Clone the repository:
```bash
git clone https://github.com/btahir/open-deep-research
cd open-deep-research
```
- Install dependencies:
```bash
npm install
# or
yarn install
# or
pnpm install
# or
bun install
```
- Create a `.env.local` file in the root directory:
```bash
# Google Gemini Pro API key (required for AI report generation)
GEMINI_API_KEY=your_gemini_api_key

# OpenAI API key (optional - required only if OpenAI models are enabled)
OPENAI_API_KEY=your_openai_api_key

# Anthropic API key (optional - required only if Anthropic models are enabled)
ANTHROPIC_API_KEY=your_anthropic_api_key

# DeepSeek API key (optional - required only if DeepSeek models are enabled)
DEEPSEEK_API_KEY=your_deepseek_api_key

# Upstash Redis (required for rate limiting)
UPSTASH_REDIS_REST_URL=your_upstash_redis_url
UPSTASH_REDIS_REST_TOKEN=your_upstash_redis_token

# Bing Search API (optional - if using Bing as search provider)
AZURE_SUB_KEY="your-azure-subscription-key"

# Google Custom Search API (optional - if using Google as search provider)
GOOGLE_SEARCH_API_KEY="your-google-search-api-key"
GOOGLE_SEARCH_CX="your-google-search-cx"
```
Note: You only need to provide API keys for the platforms you plan to use. If a platform is enabled in the config but its API key is missing, those models will appear disabled in the UI.
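That rule can be summarized as a small predicate (hypothetical, not the app's code): a model is selectable only when its platform is enabled, the model itself is enabled, and the platform's API key is present.

```typescript
// Illustrative check for whether a model should appear selectable in the UI.
function isModelSelectable(
  platformEnabled: boolean,
  modelEnabled: boolean,
  apiKey: string | undefined
): boolean {
  // A missing or blank API key disables the model even if the config enables it.
  return platformEnabled && modelEnabled && Boolean(apiKey?.trim())
}
```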
- Start the development server:
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```
- Open http://localhost:3000 in your browser.
- Go to Azure Portal
- Create a Bing Search resource
- Get the subscription key from "Keys and Endpoint"
You'll need two components to use Google Custom Search:
1. Get an API key:
   - Visit the Get a Key page
   - Follow the prompts to create your API key
   - Copy it for the `GOOGLE_SEARCH_API_KEY` environment variable
2. Get a Search Engine ID (CX):
   - Visit the Programmable Search Engine Control Panel
   - Create a new search engine
   - After creation, find your Search Engine ID in the "Basic" section of the "Overview" page
   - Copy the ID (this is the `cx` parameter) for the `GOOGLE_SEARCH_CX` environment variable
- Visit Google AI Studio
- Create an API key
- Copy the API key
- Visit OpenAI Platform
- Sign up or log in to your account
- Go to API Keys section
- Create a new API key
- Visit Anthropic Console
- Sign up or log in to your account
- Go to API Keys section
- Create a new API key
- Visit DeepSeek Platform
- Sign up or log in to your account
- Go to API Keys section
- Create a new API key
- Sign up at Upstash
- Create a new Redis database
- Copy the REST URL and REST Token
- Next.js 15 - React framework
- TypeScript - Type safety
- Tailwind CSS - Styling
- shadcn/ui - UI components
- Google Gemini - AI model
- JinaAI - Content extraction
- Azure Bing Search - Web search
- Upstash Redis - Rate limiting
- jsPDF & docx - Document generation
The app will use the configured provider (default: Google) for all searches. You can switch providers by updating the `provider` value in the config file.
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
- Inspired by Google's Gemini Deep Research feature
- Built with amazing open source tools and APIs
If you're interested in following all the random projects I'm working on, you can find me on Twitter: