
Crews

What is a Crew?

A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.

Crew Attributes

| Attribute | Parameters | Description |
|-----------|------------|-------------|
| Tasks | tasks | A list of tasks assigned to the crew. |
| Agents | agents | A list of agents that are part of the crew. |
| Process (optional) | process | The process flow (e.g., sequential, hierarchical) the crew follows. |
| Verbose (optional) | verbose | The verbosity level for logging during execution. |
| Manager LLM (optional) | manager_llm | The language model used by the manager agent in a hierarchical process. Required when using a hierarchical process. |
| Function Calling LLM (optional) | function_calling_llm | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own function-calling LLM, which overrides the crew's. |
| Config (optional) | config | Optional configuration settings for the crew, in JSON or Dict[str, Any] format. |
| Max RPM (optional) | max_rpm | Maximum requests per minute the crew adheres to during execution. |
| Language (optional) | language | Language used for the crew; defaults to English. |
| Language File (optional) | language_file | Path to the language file to be used for the crew. |
| Memory (optional) | memory | Utilized for storing execution memories (short-term, long-term, entity memory). |
| Cache (optional) | cache | Specifies whether to use a cache for storing the results of tool executions. |
| Embedder (optional) | embedder | Configuration for the embedder to be used by the crew. Currently used mostly by memory. |
| Full Output (optional) | full_output | Whether the crew should return the full output with all task outputs or just the final output. |
| Step Callback (optional) | step_callback | A function called after each step of every agent. Useful for logging agent actions or performing other operations; it does not override an agent-specific step_callback. |
| Task Callback (optional) | task_callback | A function called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| Share Crew (optional) | share_crew | Whether to share the complete crew information and execution with the crewAI team to help improve the library and allow model training. |
| Output Log File (optional) | output_log_file | Whether to write a file with the complete crew output and execution. Set it to True to create a logs.txt file in the current folder, or pass a string with the full path and file name. |
| Manager Agent (optional) | manager_agent | Sets a custom agent to be used as the manager. |
| Manager Callbacks (optional) | manager_callbacks | A list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| Prompt File (optional) | prompt_file | Path to the prompt JSON file to be used for the crew. |
| Planning (optional) | planning | Adds planning ability to the crew. When activated, before each crew iteration all crew data is sent to an AgentPlanner that plans the tasks, and this plan is added to each task description. |
| Planning LLM (optional) | planning_llm | The language model used by the AgentPlanner in a planning process. |
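
For instance, a minimal sketch of enabling planning (reusing the agents and tasks defined in the example further below; the planning_llm value is an illustrative model identifier, not a requirement):

# Hedged sketch: enable planning so an AgentPlanner drafts a plan before each crew iteration.
planning_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    planning=True,
    planning_llm="gpt-4o",  # illustrative; use any LLM your setup supports
)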

Crew Max RPM

The max_rpm attribute sets the maximum number of requests per minute the crew can perform, helping you stay within rate limits. When set, it overrides the max_rpm settings of individual agents.
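
For example, a minimal sketch of capping a whole crew at 10 requests per minute (reusing the agents and tasks from the example below):

# Hedged sketch: max_rpm on the crew overrides any max_rpm set on the individual agents.
rate_limited_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    max_rpm=10,
)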

Creating a Crew

When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.

Example: Assembling a Crew

from crewai import Crew, Agent, Task, Process
from langchain_community.tools import DuckDuckGoSearchRun
from crewai_tools import tool

@tool('DuckDuckGoSearch')
def search(search_query: str):
    """Search the web for information on a given topic"""
    return DuckDuckGoSearchRun().run(search_query)

# Define agents with specific roles and tools
researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover innovative AI technologies',
    backstory="""You're a senior research analyst at a large company.
        You're responsible for analyzing data and providing insights
        to the business.
        You're currently working on a project to analyze the
        trends and innovations in the space of artificial intelligence.""",
    tools=[search]
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging articles on AI discoveries',
    backstory="""You're a senior writer at a large company.
        You're responsible for creating content for the business.
        You're currently working on a project to write about trends
        and innovations in the space of AI for your next meeting.""",
    verbose=True
)

# Create tasks for the agents
research_task = Task(
    description='Identify breakthrough AI technologies',
    agent=researcher,
    expected_output='A bullet list summary of the top 5 most important AI news'
)
write_article_task = Task(
    description='Draft an article on the latest AI technologies',
    agent=writer,
    expected_output='3 paragraph blog post on the latest AI technologies'
)

# Assemble the crew with a sequential process
my_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.sequential,
    full_output=True,
    verbose=True,
)

Crew Output

Understanding Crew Outputs

The output of a crew in the crewAI framework is encapsulated within the CrewOutput class. This class provides a structured way to access results of the crew's execution, including various formats such as raw strings, JSON, and Pydantic models. The CrewOutput includes the results from the final task output, token usage, and individual task outputs.

Crew Output Attributes

| Attribute | Parameters | Type | Description |
|-----------|------------|------|-------------|
| Raw | raw | str | The raw output of the crew. This is the default format for the output. |
| Pydantic | pydantic | Optional[BaseModel] | A Pydantic model object representing the structured output of the crew. |
| JSON Dict | json_dict | Optional[Dict[str, Any]] | A dictionary representing the JSON output of the crew. |
| Tasks Output | tasks_output | List[TaskOutput] | A list of TaskOutput objects, each representing the output of a task in the crew. |
| Token Usage | token_usage | Dict[str, Any] | A summary of token usage, providing insights into the language model's performance during execution. |

Crew Output Methods and Properties

| Method/Property | Description |
|-----------------|-------------|
| json | Returns the JSON string representation of the crew output if the output format is JSON. |
| to_dict | Converts the JSON and Pydantic outputs to a dictionary. |
| __str__ | Returns the string representation of the crew output, prioritizing Pydantic, then JSON, then raw. |
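
As a rough sketch of how these accessors behave (assuming the my_crew object from the example above, and that a task defines a JSON output format where noted):

result = my_crew.kickoff()

print(str(result))        # __str__: Pydantic output if present, else JSON, else raw
print(result.to_dict())   # dictionary built from the JSON or Pydantic output
if result.json_dict:      # result.json is only meaningful when the output format is JSON
    print(result.json)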

Accessing Crew Outputs

Once a crew has been executed, its output can be accessed through the output attribute of the Crew object. The CrewOutput class provides various ways to interact with and present this output.

Example

import json

from crewai import Crew

# Example crew execution (assumes the agents and tasks are already defined)
crew = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, write_article_task],
    verbose=True
)

crew_output = crew.kickoff()

# Accessing the crew output
print(f"Raw Output: {crew_output.raw}")
if crew_output.json_dict:
    print(f"JSON Output: {json.dumps(crew_output.json_dict, indent=2)}")
if crew_output.pydantic:
    print(f"Pydantic Output: {crew_output.pydantic}")
print(f"Tasks Output: {crew_output.tasks_output}")
print(f"Token Usage: {crew_output.token_usage}")

Memory Utilization

Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
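
A minimal sketch of enabling memory (the embedder configuration shown is illustrative and depends on your embedding provider):

# Hedged sketch: enable crew memory; the embedder dict is illustrative.
memory_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    memory=True,
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
)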

Cache Utilization

Caches can be employed to store the results of tools' execution, making the process more efficient by reducing the need to re-execute identical tasks.
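
Caching is controlled by the cache flag on the crew; a minimal sketch:

# Hedged sketch: cache tool results so identical tool calls are not re-executed.
cached_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    cache=True,
)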

Crew Usage Metrics

After the crew execution, you can access the usage_metrics attribute to view the language model (LLM) usage metrics for all tasks executed by the crew. This provides insights into operational efficiency and areas for improvement.

# Access the crew's usage metrics
crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew.kickoff()
print(crew.usage_metrics)

Crew Execution Process

  • Sequential Process: Tasks are executed one after another, allowing for a linear flow of work.
  • Hierarchical Process: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. Note: a manager_llm or manager_agent is required for this process, as it is essential for validating the process flow (see the sketch after this list).
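
A minimal sketch of a hierarchical crew (the manager_llm value is an illustrative model identifier; depending on your crewAI version you may pass an LLM object instead):

# Hedged sketch: hierarchical process coordinated by a manager LLM.
hierarchical_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # illustrative; alternatively set manager_agent to a custom Agent
)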

Kicking Off a Crew

Once your crew is assembled, initiate the workflow with the kickoff() method. This starts the execution process according to the defined process flow.

# Start the crew's task execution
result = my_crew.kickoff()
print(result)

Different Ways to Kick Off a Crew

Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: kickoff(), kickoff_for_each(), kickoff_async(), and kickoff_for_each_async().

  • kickoff(): Starts the execution process according to the defined process flow.
  • kickoff_for_each(): Runs the crew once for each item in a list of inputs.
  • kickoff_async(): Initiates the workflow asynchronously.
  • kickoff_for_each_async(): Runs the crew for each item in a list of inputs asynchronously.

# Start the crew's task execution
result = my_crew.kickoff()
print(result)

# Example of using kickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
    print(result)

# Example of using kickoff_async (a coroutine, so it must be awaited inside an async function)
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)

# Example of using kickoff_for_each_async (also awaited; runs the crew for each input)
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
    print(async_result)

These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.

Replaying from a Specific Task

You can now replay from a specific task using the CLI command replay.

The replay feature in CrewAI allows you to replay from a specific task using the command-line interface (CLI). By running the command crewai replay -t <task_id>, you can specify the task_id for the replay process.

Kickoffs now save the task outputs of the latest run locally so that you can replay from them.

Replaying from a Specific Task Using the CLI

To use the replay feature, follow these steps:

  1. Open your terminal or command prompt.
  2. Navigate to the directory where your CrewAI project is located.
  3. View the task_ids of the latest kickoff:

crewai log-tasks-outputs

  4. Replay from a specific task:

crewai replay -t <task_id>

These commands let you replay from your latest kickoff tasks while still retaining context from previously executed tasks.