Merged
4 changes: 2 additions & 2 deletions fern/apis/prod/openapi/openapi.yaml
@@ -2087,7 +2087,7 @@ components:
       description:
         title: Description
         type: string
-        default: An helpful agent.
+        default: A helpful agent.
       avatar:
         title: Avatar
         type: string
@@ -2239,7 +2239,7 @@ components:
       description:
         title: Description
         type: string
-        default: An helpful tool.
+        default: A helpful tool.
       type:
         title: Type
         type: string
8 changes: 6 additions & 2 deletions fern/docs.yml
@@ -44,8 +44,12 @@ navigation:
         path: ./mdx/sdk/workflows.mdx
       - page: Configuring Vector Database
         path: ./mdx/sdk/configure_vector_database.mdx
-      - page: SuperAgent Markup Language
-        path: ./mdx/sdk/saml.mdx
+      - section: SAML
+        contents:
+          - page: Intro
+            path: ./mdx/saml/intro.mdx
+          - page: Structured Outputs
+            path: ./mdx/saml/structured_outputs.mdx
       - section: Installation
         contents:
           - page: Docker Compose
8 changes: 4 additions & 4 deletions fern/mdx/apps/image-gen.mdx
@@ -65,7 +65,7 @@ With our client setup we can now go ahead and create a `Replicate` tool and corr
 <CodeBlocks>
 <CodeBlock title="Python">
 ```python
-promt = """You are an helpful AI Assistant that can use Playground AI API to generate images.
+promt = """You are a helpful AI Assistant that can use Playground AI API to generate images.

 Follow these steps:

@@ -100,7 +100,7 @@ With our client setup we can now go ahead and create a `Replicate` tool and corr
 <CodeBlock title="Javascript">
 ```javascript
 prompt = `
-You are an helpful AI Assistant that can use Playground AI API to generate images.
+You are a helpful AI Assistant that can use Playground AI API to generate images.

 Follow these steps:

@@ -185,7 +185,7 @@ Below you will find the full code for this assistant. You can optionaly use the
     base_url="https://api.beta.superagent.sh" # or your local environment
 )

-promt = """You are an helpful AI Assistant that can use Playground AI API to generate images.
+promt = """You are a helpful AI Assistant that can use Playground AI API to generate images.

 Follow these steps:

@@ -236,7 +236,7 @@ Below you will find the full code for this assistant. You can optionaly use the
 })

 prompt = `
-You are an helpful AI Assistant that can use Playground AI API to generate images.
+You are a helpful AI Assistant that can use Playground AI API to generate images.

 Follow these steps:

fern/mdx/sdk/saml.mdx → fern/mdx/saml/intro.mdx (file renamed without changes)
66 changes: 66 additions & 0 deletions fern/mdx/saml/structured_outputs.mdx
@@ -0,0 +1,66 @@
Besides the SDK, output schemas can also be configured directly in SAML. This page shows you how to set an output schema that forces the LLM to generate its output in a specific format.
An output schema describes the structure of the output you want back. It can be anything from a simple JSON object to a complex nested structure.


Note: We will use the `Scraper` tool in our examples. You can get an API key from [ScrapingBee's website](https://www.scrapingbee.com/).

### Natural Language Schema
This is the simplest and most intuitive way to define an output schema: free-form text describing the fields you want.

```yaml
workflows:
  - anthropic:
      llm: claude-3-haiku-20240307
      name: Structured Assistant
      tools:
        - scraper:
            name: browser
            use_for: searching the internet
            metadata:
              apiKey: <YOUR_SCRAPINGBEE_API_KEY>
      prompt: You're a helpful assistant.
      output_schema: |-
        [
          {
            product_url: Product URL that links to the Amazon page
            product_name: Name of the product
            product_price: Price of the product in U.S. dollars
          }
        ]
```
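A natural-language schema is passed to the model as-is rather than validated: as the `_get_prompt` changes in `libs/superagent/app/agents/langchain.py` further down in this diff show, the schema text is interpolated into the system prompt via `JSON_FORMAT_INSTRUCTIONS`, so any description the model can read will work.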


### JSON Schema
A JSON Schema is more powerful than a natural-language schema: it lets you define the output in a structured, machine-readable way.

```yaml
workflows:
  - anthropic:
      llm: claude-3-haiku-20240307
      name: Structured Assistant
      tools:
        - scraper:
            name: browser
            use_for: searching the internet
            metadata:
              apiKey: <YOUR_SCRAPINGBEE_API_KEY>
      prompt: You're a helpful assistant.
      output_schema:
        type: array
        items:
          type: object
          required:
            - product_url
            - product_name
            - product_price
          properties:
            product_url:
              type: string
            product_name:
              type: string
            product_price:
              type: number
              description: the price of the product in U.S. dollars
```

For example, you can ask: `I need a list of books about programming from Amazon.` The scraper tool will scrape Amazon's website and return the data in the format you defined in the output schema.
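With the JSON Schema above, a well-formed reply would look something like this (illustrative values, not real Amazon data):

```json
[
  {
    "product_url": "https://www.amazon.com/dp/XXXXXXXXXX",
    "product_name": "Clean Code: A Handbook of Agile Software Craftsmanship",
    "product_price": 32.99
  }
]
```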
4 changes: 2 additions & 2 deletions fern/mdx/sdk/basic_example.mdx
@@ -145,7 +145,7 @@ By seperating the creation of each object you can reuse, LLMs, Agents or any oth
     "isActive": True,
     "initialMessage": "Hi there! How can I help you?",
     "llmModel": "GPT_3_5_TURBO_16K_0613",
-    "prompt": "You are an helpful AI Assistant",
+    "prompt": "You are a helpful AI Assistant",
 })

 client.agent.add_llm(agent_id=agent.data.id, llm_id=llm.data.id)
@@ -184,7 +184,7 @@ By seperating the creation of each object you can reuse, LLMs, Agents or any oth
   isActive: true,
   initialMessage: "Hi there! How can I help you?",
   llmModel: "GPT_3_5_TURBO_16K_0613",
-  prompt: "You are an helpful AI Assistant",
+  prompt: "You are a helpful AI Assistant",
 })


4 changes: 2 additions & 2 deletions fern/mdx/sdk/structured_outputs.mdx
@@ -83,7 +83,7 @@ You can force your Assistant to reply using structured outputs. This can be bene
 ```python
 prediction = client.agent.invoke(
     agent_id=agent.data.id,
-    input="List the top 5 articles on https://news.ycombinator.com."
+    input="List the top 5 articles on https://news.ycombinator.com.",
     enable_streaming=False,
     session_id="my_session_id",
     output_schema="[{title: string, points: number, url: string}]" # Your desired output schema
@@ -103,7 +103,7 @@ You can force your Assistant to reply using structured outputs. This can be bene
 <CodeBlock title="Javascript">
 ```javascript
 const {data: prediction} = await client.agent.invoke(agent.id, {
-  input: "List the top 5 articles on https://news.ycombinator.com."
+  input: "List the top 5 articles on https://news.ycombinator.com.",
   enableStreaming: false,
   sessionId: "my_session_id",
   outputSchema: "[{title: string, points: number, url: string}]" // Your desired output schema
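Because `output_schema` here is free-form text rather than an enforced schema, the model's reply typically comes back as a JSON string that you parse yourself. A minimal sketch, assuming the invocation result exposes the model output under `data["output"]` (a field name this diff does not confirm):

```python
import json

prediction = client.agent.invoke(
    agent_id=agent.data.id,
    input="List the top 5 articles on https://news.ycombinator.com.",
    enable_streaming=False,
    session_id="my_session_id",
    output_schema="[{title: string, points: number, url: string}]",
)

articles = json.loads(prediction.data["output"])  # response shape assumed
for article in articles:
    print(f'{article["points"]:>5}  {article["title"]}  ({article["url"]})')
```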
1 change: 1 addition & 0 deletions libs/superagent/app/agents/base.py
@@ -69,6 +69,7 @@ async def get_agent(self):
             agent_id=self.agent_id,
             session_id=self.session_id,
             enable_streaming=self.enable_streaming,
+            output_schema=self.output_schema,
             callbacks=self.callbacks,
             llm_params=self.llm_params,
             agent_config=self.agent_config,
39 changes: 14 additions & 25 deletions libs/superagent/app/agents/langchain.py
@@ -23,7 +23,8 @@
 from app.tools import TOOL_TYPE_MAPPING, create_pydantic_model_from_object, create_tool
 from app.tools.datasource import DatasourceTool, StructuredDatasourceTool
 from app.utils.llm import LLM_MAPPING
-from prisma.models import LLM, Agent, AgentDatasource, AgentTool
+from lib.prompts import JSON_FORMAT_INSTRUCTIONS
+from prisma.models import LLM, AgentDatasource, AgentTool

 DEFAULT_PROMPT = (
     "You are a helpful AI Assistant, answer the users questions to "
@@ -159,10 +160,10 @@ def get_llm_params(self):
             "max_tokens": options.get("max_tokens"),
         }

-    async def _get_llm(self, llm: LLM, agent: Agent):
+    async def _get_llm(self, llm: LLM):
         if llm.provider == "OPENAI":
             return ChatOpenAI(
-                model=LLM_MAPPING[agent.llmModel],
+                model=LLM_MAPPING[self.agent_config.llmModel],
                 openai_api_key=llm.apiKey,
                 streaming=self.enable_streaming,
                 callbacks=self.callbacks,
@@ -176,26 +177,16 @@ async def _get_llm(self, llm: LLM, agent: Agent):
             **self.get_llm_params(),
         )

-    async def _get_prompt(self, agent: Agent):
-        base_prompt = agent.prompt or DEFAULT_PROMPT
-        if self.output_schema:
-            content = f"""
-            {base_prompt}\n\n"
-            Always answer using the below output schema.
-            The output should be formatted as a JSON instance that conforms to the JSON schema below.
-
-            As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
-            the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
+    async def _get_prompt(self):
+        base_prompt = self.agent_config.prompt or DEFAULT_PROMPT
+        content = f"{datetime.datetime.now().strftime('%Y-%m-%d')}"

-            Here is the output schema:
-            ```
-            {self.output_schema}
-            ```
-            """
+        if self.output_schema:
+            content += JSON_FORMAT_INSTRUCTIONS.format(
+                base_prompt=base_prompt, output_schema=self.output_schema
+            )
         else:
-            content = f"{base_prompt}"
-
-        content += f"\n\nCurrent date: {datetime.datetime.now().strftime('%Y-%m-%d')}"
+            content += f"{base_prompt}"

         return SystemMessage(content=content)

@@ -231,14 +222,12 @@ async def _get_memory(
         return memory

     async def get_agent(self):
-        llm = await self._get_llm(
-            llm=self.agent_config.llms[0].llm, agent=self.agent_config
-        )
+        llm = await self._get_llm(llm=self.agent_config.llms[0].llm)
         tools = await self._get_tools(
             agent_datasources=self.agent_config.datasources,
             agent_tools=self.agent_config.tools,
         )
-        prompt = await self._get_prompt(agent=self.agent_config)
+        prompt = await self._get_prompt()
         memory = await self._get_memory()

         if len(tools) > 0:
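`JSON_FORMAT_INSTRUCTIONS` now lives in `lib/prompts`, which this diff does not show. Judging from its call sites above (it is formatted with `base_prompt` and `output_schema`) and the near-identical template removed from `libs/superagent/app/workflows/base.py` further down, it plausibly looks something like this sketch (a reconstruction, not the actual file):

# Hypothetical lib/prompts.py -- reconstructed from the template removed from
# workflows/base.py; doubled braces survive str.format() as literal braces.
JSON_FORMAT_INSTRUCTIONS = """{base_prompt}

The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Here is the output schema:
```
{output_schema}
```
"""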
33 changes: 29 additions & 4 deletions libs/superagent/app/agents/llm.py
@@ -1,3 +1,4 @@
+import datetime
 import logging
 from typing import Optional

@@ -7,6 +8,7 @@
 from app.utils.callbacks import CustomAsyncIteratorCallbackHandler
 from app.utils.llm import LLM_REVERSE_MAPPING
 from app.utils.prisma import prisma
+from lib.prompts import JSON_FORMAT_INSTRUCTIONS
 from prisma.enums import AgentType, LLMProvider
 from prisma.models import Agent

@@ -59,6 +61,12 @@ async def init(self):
         return self


+DEFAULT_PROMPT = (
+    "You are a helpful AI Assistant, answer the users questions to "
+    "the best of your ability."
+)
+
+
 class LLMAgent(AgentBase):
     def get_llm_params(self):
         llm = self.agent_config.llms[0].llm
@@ -74,10 +82,31 @@ def get_llm_params(self):
             "max_tokens": options.get("max_tokens"),
         }

+    async def _get_prompt(self):
+        base_prompt = self.agent_config.prompt or DEFAULT_PROMPT
+        print("OUTPUT SCHEMA", self.output_schema)
+
+        prompt = f"Current date: {datetime.datetime.now().strftime('%Y-%m-%d')}\n"
+
+        if self.output_schema:
+            prompt += f"""
+            {JSON_FORMAT_INSTRUCTIONS.format(
+                base_prompt=base_prompt, output_schema=self.output_schema
+            )}
+            Always surround the output with "```json```" to ensure proper formatting.
+            """
+        else:
+            prompt = base_prompt
+
+        return prompt
+
     async def get_agent(self):
         enable_streaming = self.enable_streaming
         agent_config = self.agent_config
         session_id = self.session_id
+        model = agent_config.metadata.get("model", "gpt-3.5-turbo-0125")
+        api_key = agent_config.llms[0].llm.apiKey
+        prompt = await self._get_prompt()

         class CustomAgentExecutor:
             def __init__(self, llm_agent_instance: LLMAgent, *args, **kwargs):
@@ -102,10 +131,6 @@ async def ainvoke(self, input, *_, **kwargs):
                 input=input
             )

-            model = agent_config.metadata.get("model", "gpt-3.5-turbo-0125")
-            prompt = agent_config.prompt
-            api_key = agent_config.llms[0].llm.apiKey
-
             if function_calling_res.get("output"):
                 INPUT_TEMPLATE = "{input}\n Context: {context}\n"
                 input = INPUT_TEMPLATE.format(
11 changes: 11 additions & 0 deletions libs/superagent/app/agents/test.json
@@ -0,0 +1,11 @@
[
  {
    "product_url": "https://www.amazon.com/gp/product/B0B12BQ44J",
    "product_name": "HP 14\" Ultral Light Laptop for Students and Business, Intel Quad-Core N4120, 16GB RAM, 192GB Storage(64GB eMMC+128GB Ghost Manta SD), 1 Year Office 365, Webcam, HDMI, WiFi, USB-A&C, Win 11",
    "product_price": 799.99
  },
  {
    "product_url": "https://www.amazon.com/gp/product/B0B12BQ44J",
    "product_name": "HP 14\" Ultral Light Laptop for Students and Business, Intel Quad-Core N4120, 8GB RAM, 192GB Storage(64GB eMMC+128GB Micro SD), 1 Year Office 365, Webcam, HDMI, WiFi, USB-A&C, Win S"
  }
]
4 changes: 2 additions & 2 deletions libs/superagent/app/models/request.py
@@ -32,7 +32,7 @@ class Agent(BaseModel):
     prompt: Optional[str]
     llmModel: Optional[str]
     llmProvider: Optional[LLMProvider]
-    description: Optional[str] = "An helpful agent."
+    description: Optional[str] = "A helpful agent."
     avatar: Optional[str]
     type: Optional[AgentType] = AgentType.SUPERAGENT
     parameters: Optional[OpenAiAssistantParameters]
@@ -116,7 +116,7 @@ class DatasourceUpdate(BaseModel):

 class Tool(BaseModel):
     name: str
-    description: Optional[str] = "An helpful tool."
+    description: Optional[str] = "A helpful tool."
     type: str
     metadata: Optional[Dict[Any, Any]]
     returnDirect: Optional[bool] = False
2 changes: 1 addition & 1 deletion libs/superagent/app/tools/flow.py
@@ -84,7 +84,7 @@ async def generate_route(function_schema: Dict[str, Any]) -> str:
         model="openrouter/mistralai/mixtral-8x7b-instruct",
         api_key=config("OPENROUTER_API_KEY"),
         messages=[
-            {"role": "system", "content": "You are an helpful assistant."},
+            {"role": "system", "content": "You are a helpful assistant."},
             {
                 "role": "user",
                 "content": prompt,
2 changes: 1 addition & 1 deletion libs/superagent/app/tools/prompts.py
@@ -42,7 +42,7 @@ def create_function_response_prompt(input: str, context: str) -> str:
     """

     prompt = (
-        "You are an helpful AI Assistant, answer the question by "
+        "You are a helpful AI Assistant, answer the question by "
         "providing the most suitable response based on the context provided.\n\n"
         f"Input: {input}\n\n"
         f"Context:\n{context}"
13 changes: 1 addition & 12 deletions libs/superagent/app/workflows/base.py
@@ -9,18 +9,6 @@
 from app.agents.base import AgentBase
 from app.utils.callbacks import CustomAsyncIteratorCallbackHandler

-# Adapted from https://github.com/langchain-ai/langchain/blob/d1a2e194c376f241116bf8e520f1a9bb297cdf3a/libs/core/langchain_core/output_parsers/format_instructions.py
-JSON_FORMAT_INSTRUCTIONS = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
-
-As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
-the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
-
-Here is the output schema:
-```
-{schema}
-```
-"""
-

class WorkflowBase:
def __init__(
@@ -69,6 +57,7 @@ async def arun(self, input: Any):
                     "callbacks": self.callbacks[stepIndex],
                 },
             )
+            print("agent_response", agent_response)
             if output_schema:
                 # TODO: throw error if output is not valid
                 json_parser = SimpleJsonOutputParser()
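For reference, `SimpleJsonOutputParser` already strips the ```json fences that `LLMAgent._get_prompt` asks the model to emit, so the two changes compose cleanly. A minimal usage sketch, assuming the langchain-core import path of this era:

```python
from langchain_core.output_parsers.json import SimpleJsonOutputParser

parser = SimpleJsonOutputParser()
# parse() accepts raw JSON or JSON wrapped in a markdown fence.
parser.parse('```json\n[{"title": "Show HN: Superagent", "points": 128}]\n```')
# -> [{'title': 'Show HN: Superagent', 'points': 128}]
```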