Binary file modified .DS_Store
Binary file not shown.
95 changes: 95 additions & 0 deletions DeepResearcherAgent/README.md
@@ -0,0 +1,95 @@
# Deep Research Agent

A multi-stage research workflow agent powered by Agno, Firecrawl, and Nebius AI Cloud. This application enables users to perform deep, structured research on any topic, with automated data collection, analysis, and report generation.

## Features
- **Streamlit UI**: Simple interface for entering research queries and API keys.
- **Agentic Workflow**: Multi-stage process (search, analyze, write) using Agno agents.
- **Web Search**: Uses Firecrawl for data gathering.
- **AI Analysis**: Nebius-powered models for insight extraction and report writing.
- **References**: All sources and references are included in the final report.

---

## Setup Guide

### 1. Clone the Repository
```bash
git clone https://github.com/Sumanth077/Hands-On-AI-Engineering.git
cd Hands-On-AI-Engineering/DeepResearcherAgent
```

### 2. Create a Python Virtual Environment (Recommended)
```bash
python3 -m venv venv
source venv/bin/activate
```

### 3. Install Dependencies
```bash
pip install -r requirements.txt
```

### 4. Set Up Environment Variables
Create a `.env` file in the `DeepResearcherAgent` directory with your API keys:
```
FIRECRAWL_API_KEY=your_firecrawl_api_key
NEBIUS_API_KEY=your_nebius_api_key
```
You can obtain API keys from [Firecrawl](https://firecrawl.dev/) and [Nebius AI Cloud](https://nebius.ai/).
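The app reads these variables at startup and also lets you type keys into the Streamlit sidebar. A minimal sketch of that precedence rule, sidebar value first, environment second (`resolve_key` is illustrative, not a function in this repo):

```python
import os

def resolve_key(name: str, sidebar_value: str = "") -> str:
    """Prefer a key typed in the Streamlit sidebar; fall back to the
    environment (populated from .env by load_dotenv() at startup)."""
    value = sidebar_value or os.getenv(name, "")
    if not value:
        raise RuntimeError(f"{name} is not set; add it to .env or the sidebar")
    return value
```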

---

## Usage Guide

### 1. Run the Streamlit App
```bash
streamlit run app.py
```
- Enter your API keys in the sidebar.
- Type your research query in the main area and click **Submit**.
- The agent will search, analyze, and generate a detailed report with references.

### 2. Run as a Python Script
You can also run the agent directly from the command line via its entry point in `agents.py`:
```bash
python agents.py
```

### 3. Run the MCP Server (Optional)
To expose the agent as an MCP tool:
```bash
python mcpserver.py
```
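If you want an MCP client such as Claude Desktop to launch this server for you, a config entry along the lines of the one used elsewhere in this repo might look like the following (the path is a placeholder you must fill in):

```json
{
  "mcpServers": {
    "deep-researcher-agent": {
      "command": "python",
      "args": ["/absolute/path/to/mcpserver.py"],
      "env": {
        "FIRECRAWL_API_KEY": "your_api_key_here",
        "NEBIUS_API_KEY": "your_api_key_here"
      }
    }
  }
}
```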

---

## How It Works
1. **Searcher Agent**: Uses Firecrawl to gather data from the web.
2. **Analyst Agent**: Analyzes findings, extracts insights, and lists actual references.
3. **Writer Agent**: Produces a polished, structured report with citations.

All agents are orchestrated using Agno's workflow system. The final output is a comprehensive markdown report.
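Stripped of the Agno specifics, the hand-off between the three agents is plain function composition, where each stage consumes the previous stage's text output (the stage functions below are illustrative stand-ins for the real agents):

```python
# Minimal sketch of the search -> analyze -> write pipeline.
def search(topic: str) -> str:
    return f"raw findings about {topic}"

def analyze(findings: str) -> str:
    return f"insights from: {findings}"

def write(insights: str) -> str:
    return f"# Report\n\n{insights}"

def deep_research(topic: str) -> str:
    # Each agent's output becomes the next agent's input.
    return write(analyze(search(topic)))
```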

---

## Troubleshooting
- Ensure your API keys are valid and set in the `.env` file or via the Streamlit sidebar.
- If you encounter missing dependencies, run `pip install -r requirements.txt` again.
- For issues with Streamlit, ensure you are using Python 3.8 or higher.

---

## License
MIT

## Author
Avikumar Talaviya

---

## References
- [Agno Documentation](https://github.com/agnolabs/agno)
- [Firecrawl](https://firecrawl.dev/)
- [Nebius AI Cloud](https://nebius.ai/)
- [Streamlit](https://streamlit.io/)
138 changes: 138 additions & 0 deletions DeepResearcherAgent/agents.py
@@ -0,0 +1,138 @@
# define deep research agent for the agentic application using agno, firecrawl

# import dependencies
from agno.agent import Agent
from agno.models.nebius import Nebius
from dotenv import load_dotenv
from typing import List, Dict, Any, Iterator
from agno.utils.log import logger
from agno.utils.pprint import pprint_run_response
from agno.tools.firecrawl import FirecrawlTools
from agno.workflow import Workflow, RunEvent, RunResponse
from pydantic import BaseModel, Field
import os

load_dotenv()

# define the deep research agent class with agno Workflow
class DeepResearchAgent(Workflow):
    """
    Deep Research Agent for in-depth research and analysis.
    The workflow collects and gathers data, analyzes it to generate insights,
    and produces a detailed report using an agno agent workflow.
    """

    # searcher: searches the web for the user query
    searcher: Agent = Agent(
        tools=[FirecrawlTools(api_key=os.getenv("FIRECRAWL_API_KEY"))],
        model=Nebius(
            id="deepseek-ai/DeepSeek-V3-0324", api_key=os.getenv("NEBIUS_API_KEY")),
        show_progress=True,
        show_tool_calls=True,
        markdown=True,
        description=(
            "A deep research agent for in-depth analysis and research."
            " It collects data, analyzes it to generate insights, and creates"
            " detailed reports from genuine, reliable, diverse sources of information."
        ),
        instructions=(
            "1. Start with a clear research question or objective.\n"
            "2. Use the search tools to gather data from various sources.\n"
            "3. Analyze the collected data to identify trends and insights.\n"
            "4. Compile the findings with proper statistics, reports, charts and tables.\n"
            "5. Organize your findings in a clear, structured format (e.g., markdown table or sections by source type).\n"
            "6. If the topic is ambiguous, clarify with the user before proceeding.\n"
            "7. Be as comprehensive and verbose as possible; err on the side of including more detail.\n"
            "8. You MUST cite the references and sources of the content."
        ),
    )

    # analyst: dissects the retrieved information and extracts the insights
    analyst: Agent = Agent(
        model=Nebius(
            id="deepseek-ai/DeepSeek-V3-0324", api_key=os.getenv("NEBIUS_API_KEY")),
        show_progress=True,
        show_tool_calls=True,
        markdown=True,
        description=(
            "You are AnalystBot-X, a critical thinker who synthesizes research findings "
            "into actionable insights. Your job is to analyze, compare, and interpret the "
            "information provided by the researcher."
        ),
        instructions=(
            "1. Identify key themes, trends, and contradictions in the research.\n"
            "2. Highlight the most important findings and their implications.\n"
            "3. Suggest areas for further investigation if gaps are found.\n"
            "4. Present your analysis in a structured, easy-to-read format.\n"
            "5. Extract and list ONLY the reference links or sources that were ACTUALLY found and provided by the researcher in their findings. Do NOT create, invent, or hallucinate any links.\n"
            "6. If no links were provided by the researcher, do not include a References section.\n"
            "7. Do not hallucinate or make up information. Use ONLY the links that were explicitly passed to you by the researcher.\n"
            "8. Verify that each link you include was actually present in the researcher's findings before listing it.\n"
            "9. If no links were found by the previous agent, simply say: No references found."
        ),
    )

    # writer: produces polished reports and summaries based on the research findings
    writer: Agent = Agent(
        model=Nebius(
            id="deepseek-ai/DeepSeek-V3-0324", api_key=os.getenv("NEBIUS_API_KEY")),
        show_progress=True,
        show_tool_calls=True,
        markdown=True,
        description=(
            "You are WriterBot-X, a skilled communicator who transforms research findings "
            "into clear, engaging reports. Your job is to write, edit, and polish the content "
            "produced by the researcher and analyst."
        ),
        instructions=(
            "1. Start with a clear understanding of the research question and objectives.\n"
            "2. Use the findings from the researcher and analyst to create a cohesive narrative.\n"
            "3. Include relevant statistics, quotes, and examples to support your points.\n"
            "4. Organize the report in a logical structure with headings and subheadings.\n"
            "5. Edit for clarity, conciseness, and coherence.\n"
            "6. Ensure proper citation of all sources and references.\n"
            "7. Use a professional tone and style appropriate for the target audience.\n"
            "8. Review and revise the report based on feedback from the team."
        ),
    )

    def run(self, topic: str) -> Iterator[RunResponse]:
        """
        Run the deep research workflow for the given topic, orchestrating
        the search, analysis, and writing stages.
        """
        logger.info(f"Starting deep research on topic: {topic}")

        # Step 1: Search for information
        search_response = self.searcher.run(topic)
        pprint_run_response(search_response)

        # Step 2: Analyze the findings (agents exchange plain text, not RunResponse objects)
        analysis_response = self.analyst.run(search_response.content)
        pprint_run_response(analysis_response)

        # Step 3: Write the final report
        write_response = self.writer.run(analysis_response.content)
        pprint_run_response(write_response)

        # Agent.run returns a single RunResponse, so yield it (not `yield from` it)
        yield write_response


def final_research(query: str) -> str:
    research_agent = DeepResearchAgent()
    results = research_agent.run(query)

    # collect the final report into a single string
    full_report = ""
    for response in results:
        if response.content:
            full_report += response.content

    logger.info(f"Deep research completed for query: {query}")
    return full_report


if __name__ == "__main__":
    topic = "Give a detailed guide on how to work with Git/GitHub"
    final_report = final_research(topic)
    print(final_report)
30 changes: 30 additions & 0 deletions DeepResearcherAgent/app.py
@@ -0,0 +1,30 @@
# Streamlit app with a simple UI: a sidebar to input API keys, and a main
# area where the user enters research queries for the deep research agent.

import os

import streamlit as st
from dotenv import load_dotenv

load_dotenv()

st.set_page_config(page_title="Deep Research Agent", page_icon=":mag_right:")

st.title("Deep Research Agent")

# Sidebar for API keys
st.sidebar.header("API Keys")
firecrawl_api_key = st.sidebar.text_input("Firecrawl API Key", type="password")
nebius_api_key = st.sidebar.text_input("Nebius API Key", type="password")

# Keys typed in the sidebar override any values loaded from .env
if firecrawl_api_key:
    os.environ["FIRECRAWL_API_KEY"] = firecrawl_api_key
if nebius_api_key:
    os.environ["NEBIUS_API_KEY"] = nebius_api_key

# Main content area
st.header("Research Query")
query = st.text_area("Enter your research query here:")

if st.button("Submit"):
    with st.spinner("Running deep research..."):
        # Imported here so the agents pick up keys set from the sidebar
        from agents import final_research

        final_report = final_research(query)

    st.success("Deep research completed!")
    st.markdown(final_report)
22 changes: 22 additions & 0 deletions DeepResearcherAgent/mcpserver.py
@@ -0,0 +1,22 @@
from mcp.server.fastmcp import FastMCP

from agents import final_research

# Create FastMCP instance
mcp = FastMCP("deep-researcher-agent")


@mcp.tool()
async def deep_research_tool(query: str) -> str:
    """
    Run the deep research agent on the provided query.

    Args:
        query: The user's research query.

    Returns:
        The research report produced by the deep research agent.
    """
    # final_research is synchronous, so call it directly (no await)
    return final_research(query)


# run the FastMCP server
if __name__ == "__main__":
    mcp.run()

18 changes: 18 additions & 0 deletions DeepResearcherAgent/projects.toml
@@ -0,0 +1,18 @@
[project]
name = "deep-researcher-agent"
version = "0.1.0"
description = "A multi-stage research workflow agent powered by agno, firecrawl, and nebius AI cloud"
authors = [
{name = "Avikumar Talaviya", email = "avikumar.talaviya@gmail.com"}
]
readme = "README.md"
requires-python = ">=3.8"
dependencies = [
"agno",
"openai",
"firecrawl",
"python-dotenv",
"mcp",
"streamlit",
"pydantic",
]
9 changes: 9 additions & 0 deletions DeepResearcherAgent/requirements.txt
@@ -0,0 +1,9 @@
agno
firecrawl
mcp
python-dotenv
streamlit
pydantic
openai
4 changes: 2 additions & 2 deletions Financial analyst using agno and GPT-OSS coder/README.md
@@ -83,7 +83,7 @@ python-dotenv>=1.0.0
"mcpServers": {
"financial-analyst-agno": {
"command": "python",
"args": ["/absolute/path/to/mcp_financial_main.py"],
"args": ["/absolute/path/to/main.py"],
"env": {
"FIRECRAWL_API_KEY": "your_api_key_here",
"OLLAMA_BASE_URL": "http://localhost:11434"
@@ -103,7 +103,7 @@ python-dotenv>=1.0.0
"mcpServers": {
"financial-analyst-agno": {
"command": "python",
"args": ["/absolute/path/to/mcp_financial_main.py"],
"args": ["/absolute/path/to/main.py"],
"env": {
"FIRECRAWL_API_KEY": "your_api_key_here"
}