What is a Crew?

A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.

Crew Attributes

Attribute | Parameter | Description
--- | --- | ---
Tasks | tasks | A list of tasks assigned to the crew.
Agents | agents | A list of agents that are part of the crew.
Process (optional) | process | The process flow (e.g., sequential, hierarchical) the crew follows. Defaults to sequential.
Verbose (optional) | verbose | The verbosity level for logging during execution. Defaults to False.
Manager LLM (optional) | manager_llm | The language model used by the manager agent in a hierarchical process. Required when using a hierarchical process.
Function Calling LLM (optional) | function_calling_llm | If passed, the crew will use this LLM for tool function calling for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling.
Config (optional) | config | Optional configuration settings for the crew, in Json or Dict[str, Any] format.
Max RPM (optional) | max_rpm | Maximum requests per minute the crew adheres to during execution. Defaults to None.
Language (optional) | language | Language used for the crew. Defaults to English.
Language File (optional) | language_file | Path to the language file to be used for the crew.
Memory (optional) | memory | Utilized for storing execution memories (short-term, long-term, entity memory). Defaults to False.
Cache (optional) | cache | Specifies whether to use a cache for storing the results of tools' execution. Defaults to True.
Embedder (optional) | embedder | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Defaults to {"provider": "openai"}.
Full Output (optional) | full_output | Whether the crew should return the full output with all task outputs or just the final output. Defaults to False.
Step Callback (optional) | step_callback | A function called after each step of every agent. Useful for logging the agent's actions or performing other operations; it does not override the agent-specific step_callback.
Task Callback (optional) | task_callback | A function called after the completion of each task. Useful for monitoring or additional operations post-task execution.
Share Crew (optional) | share_crew | Whether to share the complete crew information and execution with the crewAI team to help improve the library and allow model training.
Output Log File (optional) | output_log_file | Whether to write a file with the complete crew output and execution. Set it to True to write logs.txt in the current directory, or pass a string with the full path and file name.
Manager Agent (optional) | manager_agent | Sets a custom agent to be used as the manager.
Manager Callbacks (optional) | manager_callbacks | A list of callback handlers to be executed by the manager agent when a hierarchical process is used.
Prompt File (optional) | prompt_file | Path to the prompt JSON file to be used for the crew.
Planning (optional) | planning | Adds planning ability to the crew. When activated, before each crew iteration all crew data is sent to an AgentPlanner that plans the tasks, and the resulting plan is added to each task description.
Planning LLM (optional) | planning_llm | The language model used by the AgentPlanner in a planning process.

Crew Max RPM: The max_rpm attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents’ max_rpm settings if you set it.
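
As a minimal sketch, the snippet below shows how several of these attributes might be combined when constructing a crew. The agents, tasks, and file name are placeholders assumed to be defined elsewhere, not values prescribed by the library.

Code
from crewai import Crew, Process

# Sketch: combining several crew attributes.
# `researcher`, `writer`, `research_task`, and `write_task` are hypothetical
# agents and tasks defined elsewhere.
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,           # default; Process.hierarchical is also available
    verbose=True,
    memory=True,                          # store short-term, long-term, and entity memory
    cache=True,                           # cache tool results (default)
    max_rpm=100,                          # crew-wide rate limit; overrides agents' max_rpm
    output_log_file="crew_run_logs.txt",  # placeholder log file name
)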

Crew Output

The output of a crew in the CrewAI framework is encapsulated within the CrewOutput class. This class provides a structured way to access results of the crew’s execution, including various formats such as raw strings, JSON, and Pydantic models. The CrewOutput includes the results from the final task output, token usage, and individual task outputs.

Crew Output Attributes

Attribute | Parameter | Type | Description
--- | --- | --- | ---
Raw | raw | str | The raw output of the crew. This is the default format for the output.
Pydantic | pydantic | Optional[BaseModel] | A Pydantic model object representing the structured output of the crew.
JSON Dict | json_dict | Optional[Dict[str, Any]] | A dictionary representing the JSON output of the crew.
Tasks Output | tasks_output | List[TaskOutput] | A list of TaskOutput objects, each representing the output of a task in the crew.
Token Usage | token_usage | Dict[str, Any] | A summary of token usage, providing insights into the language model's performance during execution.

Crew Output Methods and Properties

Method/Property | Description
--- | ---
json | Returns the JSON string representation of the crew output if the output format is JSON.
to_dict | Converts the JSON and Pydantic outputs to a dictionary.
__str__ | Returns the string representation of the crew output, prioritizing Pydantic, then JSON, then raw.
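
As a quick sketch, assuming crew_output is the CrewOutput returned by crew.kickoff() and that the final task produces JSON or Pydantic output, these methods and properties might be used as follows:

Code
# Sketch: `crew_output` is assumed to be a CrewOutput from crew.kickoff()
print(crew_output.json)       # JSON string (only when the final task outputs JSON)
print(crew_output.to_dict())  # dictionary built from the JSON or Pydantic output
print(str(crew_output))       # Pydantic first, then JSON, then raw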

Accessing Crew Outputs

Once a crew has been executed, its output can be accessed through the output attribute of the Crew object. The CrewOutput class provides various ways to interact with and present this output.

Example

Code
import json

from crewai import Crew

# Example crew execution (agents and tasks are assumed to be defined elsewhere)
crew = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, write_article_task],
    verbose=True
)

crew_output = crew.kickoff()

# Accessing the crew output
print(f"Raw Output: {crew_output.raw}")
if crew_output.json_dict:
    print(f"JSON Output: {json.dumps(crew_output.json_dict, indent=2)}")
if crew_output.pydantic:
    print(f"Pydantic Output: {crew_output.pydantic}")
print(f"Tasks Output: {crew_output.tasks_output}")
print(f"Token Usage: {crew_output.token_usage}")

Memory Utilization

Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
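
As a minimal sketch, memory is enabled through the memory flag (optionally together with an embedder configuration) when constructing the crew; the agents and tasks referenced are assumed to be defined elsewhere.

Code
from crewai import Crew

# Sketch: enable crew memory (agents and tasks defined elsewhere)
crew = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, write_article_task],
    memory=True,                      # short-term, long-term, and entity memory
    embedder={"provider": "openai"},  # default embedder configuration
)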

Cache Utilization

Caches can be employed to store the results of tools’ execution, making the process more efficient by reducing the need to re-execute identical tasks.
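
Caching is controlled by the cache flag on the crew; for example, it can be disabled when tools should always re-execute (a sketch under the same assumptions as above).

Code
# Sketch: tool-result caching is on by default; set cache=False to force re-execution
crew = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, write_article_task],
    cache=False,
)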

Crew Usage Metrics

After the crew execution, you can access the usage_metrics attribute to view the language model (LLM) usage metrics for all tasks executed by the crew. This provides insights into operational efficiency and areas for improvement.

Code
# Access the crew's usage metrics
crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew.kickoff()
print(crew.usage_metrics)

Crew Execution Process

  • Sequential Process: Tasks are executed one after another, allowing for a linear flow of work.
  • Hierarchical Process: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. Note: a manager_llm or manager_agent is required for this process, and it is essential for validating the process flow (see the sketch below).
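
For instance, a hierarchical crew might be configured roughly as follows; the model name and the agents/tasks are placeholders, not required values.

Code
from crewai import Crew, Process

# Sketch: a hierarchical crew requires a manager_llm or a custom manager_agent.
# The model name and the agents/tasks below are placeholders.
crew = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, write_article_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # or pass manager_agent=<custom Agent> instead
)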

Kicking Off a Crew

Once your crew is assembled, initiate the workflow with the kickoff() method. This starts the execution process according to the defined process flow.

Code
# Start the crew's task execution
result = my_crew.kickoff()
print(result)

Different Ways to Kick Off a Crew

Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: kickoff(), kickoff_for_each(), kickoff_async(), and kickoff_for_each_async().

  • kickoff(): Starts the execution process according to the defined process flow.
  • kickoff_for_each(): Executes the crew once for each item in a list of inputs, returning a list of results.
  • kickoff_async(): Initiates the workflow asynchronously.
  • kickoff_for_each_async(): Executes the crew for each item in a list of inputs asynchronously, returning the results.
Code
import asyncio

# Start the crew's task execution
result = my_crew.kickoff()
print(result)

# Example of using kickoff_for_each: the crew runs once per input dictionary
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
    print(result)

# The async variants are coroutines and must be awaited
async def run_async_examples():
    # Example of using kickoff_async
    inputs = {'topic': 'AI in healthcare'}
    async_result = await my_crew.kickoff_async(inputs=inputs)
    print(async_result)

    # Example of using kickoff_for_each_async
    inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
    async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
    for async_result in async_results:
        print(async_result)

asyncio.run(run_async_examples())

These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.

Replaying from a Specific Task

You can now replay from a specific task using our CLI command replay.

The replay feature in CrewAI allows you to replay from a specific task using the command-line interface (CLI). By running the command crewai replay -t <task_id>, you can specify the task_id for the replay process.

Kickoffs now save the task outputs returned by the latest kickoff locally, so that you can replay from them.

Replaying from a Specific Task Using the CLI

To use the replay feature, follow these steps:

  1. Open your terminal or command prompt.
  2. Navigate to the directory where your CrewAI project is located.
  3. Run the following commands:

To view the latest kickoff task IDs, use:

crewai log-tasks-outputs

Then, to replay from a specific task, use:

crewai replay -t <task_id>

These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks.