Detailed guide on creating and managing agents within the CrewAI framework.
Agent
is an autonomous unit that can:
Researcher
agent might excel at gathering and analyzing information, while a Writer
agent might be better at creating content.Attribute | Parameter | Type | Description |
---|---|---|---|
Role | role | str | Defines the agent’s function and expertise within the crew. |
Goal | goal | str | The individual objective that guides the agent’s decision-making. |
Backstory | backstory | str | Provides context and personality to the agent, enriching interactions. |
LLM (optional) | llm | Union[str, LLM, Any] | Language model that powers the agent. Defaults to the model specified in OPENAI_MODEL_NAME or “gpt-4”. |
Tools (optional) | tools | List[BaseTool] | Capabilities or functions available to the agent. Defaults to an empty list. |
Function Calling LLM (optional) | function_calling_llm | Optional[Any] | Language model for tool calling, overrides crew’s LLM if specified. |
Max Iterations (optional) | max_iter | int | Maximum iterations before the agent must provide its best answer. Default is 20. |
Max RPM (optional) | max_rpm | Optional[int] | Maximum requests per minute to avoid rate limits. |
Max Execution Time (optional) | max_execution_time | Optional[int] | Maximum time (in seconds) for task execution. |
Verbose (optional) | verbose | bool | Enable detailed execution logs for debugging. Default is False. |
Allow Delegation (optional) | allow_delegation | bool | Allow the agent to delegate tasks to other agents. Default is False. |
Step Callback (optional) | step_callback | Optional[Any] | Function called after each agent step, overrides crew callback. |
Cache (optional) | cache | bool | Enable caching for tool usage. Default is True. |
System Template (optional) | system_template | Optional[str] | Custom system prompt template for the agent. |
Prompt Template (optional) | prompt_template | Optional[str] | Custom prompt template for the agent. |
Response Template (optional) | response_template | Optional[str] | Custom response template for the agent. |
Allow Code Execution (optional) | allow_code_execution | Optional[bool] | Enable code execution for the agent. Default is False. |
Max Retry Limit (optional) | max_retry_limit | int | Maximum number of retries when an error occurs. Default is 2. |
Respect Context Window (optional) | respect_context_window | bool | Keep messages under context window size by summarizing. Default is True. |
Code Execution Mode (optional) | code_execution_mode | Literal["safe", "unsafe"] | Mode for code execution: ‘safe’ (using Docker) or ‘unsafe’ (direct). Default is ‘safe’. |
Multimodal (optional) | multimodal | bool | Whether the agent supports multimodal capabilities. Default is False. |
Inject Date (optional) | inject_date | bool | Whether to automatically inject the current date into tasks. Default is False. |
Date Format (optional) | date_format | str | Format string for date when inject_date is enabled. Default is “%Y-%m-%d” (ISO format). |
Reasoning (optional) | reasoning | bool | Whether the agent should reflect and create a plan before executing a task. Default is False. |
Max Reasoning Attempts (optional) | max_reasoning_attempts | Optional[int] | Maximum number of reasoning attempts before executing the task. If None, will try until ready. |
Embedder (optional) | embedder | Optional[Dict[str, Any]] | Configuration for the embedder used by the agent. |
Knowledge Sources (optional) | knowledge_sources | Optional[List[BaseKnowledgeSource]] | Knowledge sources available to the agent. |
Use System Prompt (optional) | use_system_prompt | Optional[bool] | Whether to use system prompt (for o1 model support). Default is True. |
src/latest_ai_development/config/agents.yaml
file and modify the template to match your requirements.
{topic}
) will be replaced with values from your inputs when running the crew:CrewBase
:
agents.yaml
) should match the method names in your Python code.Agent
class. Here’s a comprehensive example showing all available parameters:
role
, goal
, and backstory
are required and shape the agent’s behaviorllm
determines the language model used (default: OpenAI’s GPT-4)memory
: Enable to maintain conversation historyrespect_context_window
: Prevents token limit issuesknowledge_sources
: Add domain-specific knowledge basesmax_iter
: Maximum attempts before giving best answermax_execution_time
: Timeout in secondsmax_rpm
: Rate limiting for API callsmax_retry_limit
: Retries on errorallow_code_execution
: Must be True to run codecode_execution_mode
:
"safe"
: Uses Docker (recommended for production)"unsafe"
: Direct execution (use only in trusted environments)multimodal
: Enable multimodal capabilities for processing text and visual contentreasoning
: Enable agent to reflect and create plans before executing tasksinject_date
: Automatically inject current date into task descriptionssystem_template
: Defines agent’s core behaviorprompt_template
: Structures input formatresponse_template
: Formats agent responsessystem_template
and prompt_template
are defined. The response_template
is optional but recommended for consistent output formatting.{role}
, {goal}
, and {backstory}
in your templates. These will be automatically populated during execution.memory
is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks.respect_context_window
parameter.
respect_context_window=True
)respect_context_window=False
)respect_context_window=True
)"Context length exceeded. Summarizing content to fit the model context window."
respect_context_window=False
)"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools."
respect_context_window=True
(Default) when:respect_context_window=False
when:verbose=True
to see context management in actionTrue
and False
to see which works better for your use caserespect_context_window
to your preferred behavior and CrewAI handles the rest!kickoff()
kickoff()
method. This provides a simpler way to interact with an agent when you don’t need the full crew orchestration capabilities.
kickoff()
Workskickoff()
method allows you to send messages directly to an agent and get a response, similar to how you would interact with an LLM but with all the agent’s capabilities (tools, reasoning, etc.).
Parameter | Type | Description |
---|---|---|
messages | Union[str, List[Dict[str, str]]] | Either a string query or a list of message dictionaries with role/content |
response_format | Optional[Type[Any]] | Optional Pydantic model for structured output |
LiteAgentOutput
object with the following properties:
raw
: String containing the raw output textpydantic
: Parsed Pydantic model (if a response_format
was provided)agent_role
: Role of the agent that produced the outputusage_metrics
: Token usage metrics for the executionresponse_format
:
kickoff_async()
with the same parameters:
kickoff()
method uses a LiteAgent
internally, which provides a simpler execution flow while preserving all of the agent’s configuration (role, goal, backstory, tools, etc.).allow_code_execution
, be cautious with user input and always validate itcode_execution_mode: "safe"
(Docker) in production environmentsmax_execution_time
limits to prevent infinite loopsrespect_context_window: true
to prevent token limit issuesmax_rpm
to avoid rate limitingcache: true
to improve performance for repetitive tasksmax_iter
and max_retry_limit
based on task complexityknowledge_sources
for domain-specific informationembedder
when using custom embedding modelssystem_template
, prompt_template
, response_template
) for fine-grained control over agent behaviorreasoning: true
for agents that need to plan and reflect before executing complex tasksmax_reasoning_attempts
to control planning iterations (None for unlimited attempts)inject_date: true
to provide agents with current date awareness for time-sensitive tasksdate_format
using standard Python datetime format codesmultimodal: true
for agents that need to process both text and visual contentallow_delegation: true
when agents need to work togetherstep_callback
to monitor and log agent interactionsllm
for complex reasoningfunction_calling_llm
for efficient tool usageinject_date: true
to provide agents with current date awareness for time-sensitive tasksdate_format
using standard Python datetime format codesreasoning: true
for complex tasks that benefit from upfront planning and reflectionuse_system_prompt: false
for older models that don’t support system messagesllm
supports the features you need (like function calling)max_rpm
respect_context_window