# Agents Detailed guide on creating and managing agents within the CrewAI framework. ## Overview of an Agent In the CrewAI framework, an `Agent` is an autonomous unit that can: * Perform specific tasks * Make decisions based on its role and goal * Use tools to accomplish objectives * Communicate and collaborate with other agents * Maintain memory of interactions * Delegate tasks when allowed Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content. ## Agent Attributes | Attribute | Parameter | Type | Description | | :-------------------------------------- | :----------------------- | :------------------------------------ | :------------------------------------------------------------------------------------------------------- | | **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. | | **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. | | **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. | | **LLM** *(optional)* | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". | | **Tools** *(optional)* | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. | | **Function Calling LLM** *(optional)* | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. | | **Max Iterations** *(optional)* | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. | | **Max RPM** *(optional)* | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. | | **Max Execution Time** *(optional)* | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. | | **Memory** *(optional)* | `memory` | `bool` | Whether the agent should maintain memory of interactions. Default is True. | | **Verbose** *(optional)* | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. | | **Allow Delegation** *(optional)* | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. | | **Step Callback** *(optional)* | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. | | **Cache** *(optional)* | `cache` | `bool` | Enable caching for tool usage. Default is True. | | **System Template** *(optional)* | `system_template` | `Optional[str]` | Custom system prompt template for the agent. | | **Prompt Template** *(optional)* | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. | | **Response Template** *(optional)* | `response_template` | `Optional[str]` | Custom response template for the agent. | | **Allow Code Execution** *(optional)* | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. | | **Max Retry Limit** *(optional)* | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. | | **Respect Context Window** *(optional)* | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. 
| | **Code Execution Mode** *(optional)* | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. | | **Embedder Config** *(optional)* | `embedder_config` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. | | **Knowledge Sources** *(optional)* | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. | | **Use System Prompt** *(optional)* | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. | ## Creating Agents There are two ways to create agents in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**. ### YAML Configuration (Recommended) Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects. After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements. Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew: ```python Code crew.kickoff(inputs={'topic': 'AI Agents'}) ``` Here's an example of how to configure agents using YAML: ```yaml agents.yaml # src/latest_ai_development/config/agents.yaml researcher: role: > {topic} Senior Data Researcher goal: > Uncover cutting-edge developments in {topic} backstory: > You're a seasoned researcher with a knack for uncovering the latest developments in {topic}. Known for your ability to find the most relevant information and present it in a clear and concise manner. reporting_analyst: role: > {topic} Reporting Analyst goal: > Create detailed reports based on {topic} data analysis and research findings backstory: > You're a meticulous analyst with a keen eye for detail. You're known for your ability to turn complex data into clear and concise reports, making it easy for others to understand and act on the information you provide. ``` To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`: ```python Code # src/latest_ai_development/crew.py from crewai import Agent, Crew, Process from crewai.project import CrewBase, agent, crew from crewai_tools import SerperDevTool @CrewBase class LatestAiDevelopmentCrew(): """LatestAiDevelopment crew""" agents_config = "config/agents.yaml" @agent def researcher(self) -> Agent: return Agent( config=self.agents_config['researcher'], verbose=True, tools=[SerperDevTool()] ) @agent def reporting_analyst(self) -> Agent: return Agent( config=self.agents_config['reporting_analyst'], verbose=True ) ``` The names you use in your YAML files (`agents.yaml`) should match the method names in your Python code. ### Direct Code Definition You can create agents directly in code by instantiating the `Agent` class. 
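At its simplest, an agent needs only the three required attributes: `role`, `goal`, and `backstory`; every other attribute falls back to the defaults listed in the table above. A minimal sketch (the role, goal, and backstory strings are illustrative):

```python Code
from crewai import Agent

# Minimal agent: only role, goal, and backstory are required
analyst = Agent(
    role="Market Research Analyst",
    goal="Provide up-to-date analysis of industry trends",
    backstory="An experienced analyst with a sharp eye for market signals.",
)
```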
Here's a comprehensive example showing all available parameters: ```python Code from crewai import Agent from crewai_tools import SerperDevTool # Create an agent with all available parameters agent = Agent( role="Senior Data Scientist", goal="Analyze and interpret complex datasets to provide actionable insights", backstory="With over 10 years of experience in data science and machine learning, " "you excel at finding patterns in complex datasets.", llm="gpt-4", # Default: OPENAI_MODEL_NAME or "gpt-4" function_calling_llm=None, # Optional: Separate LLM for tool calling memory=True, # Default: True verbose=False, # Default: False allow_delegation=False, # Default: False max_iter=20, # Default: 20 iterations max_rpm=None, # Optional: Rate limit for API calls max_execution_time=None, # Optional: Maximum execution time in seconds max_retry_limit=2, # Default: 2 retries on error allow_code_execution=False, # Default: False code_execution_mode="safe", # Default: "safe" (options: "safe", "unsafe") respect_context_window=True, # Default: True use_system_prompt=True, # Default: True tools=[SerperDevTool()], # Optional: List of tools knowledge_sources=None, # Optional: List of knowledge sources embedder_config=None, # Optional: Custom embedder configuration system_template=None, # Optional: Custom system prompt template prompt_template=None, # Optional: Custom prompt template response_template=None, # Optional: Custom response template step_callback=None, # Optional: Callback function for monitoring ) ``` Let's break down some key parameter combinations for common use cases: #### Basic Research Agent ```python Code research_agent = Agent( role="Research Analyst", goal="Find and summarize information about specific topics", backstory="You are an experienced researcher with attention to detail", tools=[SerperDevTool()], verbose=True # Enable logging for debugging ) ``` #### Code Development Agent ```python Code dev_agent = Agent( role="Senior Python Developer", goal="Write and debug Python code", backstory="Expert Python developer with 10 years of experience", allow_code_execution=True, code_execution_mode="safe", # Uses Docker for safety max_execution_time=300, # 5-minute timeout max_retry_limit=3 # More retries for complex code tasks ) ``` #### Long-Running Analysis Agent ```python Code analysis_agent = Agent( role="Data Analyst", goal="Perform deep analysis of large datasets", backstory="Specialized in big data analysis and pattern recognition", memory=True, respect_context_window=True, max_rpm=10, # Limit API calls function_calling_llm="gpt-4o-mini" # Cheaper model for tool calls ) ``` #### Custom Template Agent ```python Code custom_agent = Agent( role="Customer Service Representative", goal="Assist customers with their inquiries", backstory="Experienced in customer support with a focus on satisfaction", system_template="""<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>""", prompt_template="""<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>""", response_template="""<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|>""", ) ``` ### Parameter Details #### Critical Parameters * `role`, `goal`, and `backstory` are required and shape the agent's behavior * `llm` determines the language model used (default: OpenAI's GPT-4) #### Memory and Context * `memory`: Enable to maintain conversation history * `respect_context_window`: Prevents token limit issues * `knowledge_sources`: Add domain-specific knowledge bases #### Execution Control * `max_iter`: 
Maximum attempts before giving best answer
* `max_execution_time`: Timeout in seconds
* `max_rpm`: Rate limiting for API calls
* `max_retry_limit`: Retries on error

#### Code Execution

* `allow_code_execution`: Must be True to run code
* `code_execution_mode`:
  * `"safe"`: Uses Docker (recommended for production)
  * `"unsafe"`: Direct execution (use only in trusted environments)

#### Templates

* `system_template`: Defines agent's core behavior
* `prompt_template`: Structures input format
* `response_template`: Formats agent responses

When using custom templates, you can use variables like `{role}`, `{goal}`, and `{input}` in your templates. These will be automatically populated during execution.

## Agent Tools

Agents can be equipped with various tools to enhance their capabilities. CrewAI supports tools from:

* [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools)
* [LangChain Tools](https://python.langchain.com/docs/integrations/tools)

Here's how to add tools to an agent:

```python Code
from crewai import Agent
from crewai_tools import SerperDevTool, WebsiteSearchTool

# Create tools
search_tool = SerperDevTool()
web_rag_tool = WebsiteSearchTool()

# Add tools to agent
researcher = Agent(
    role="AI Technology Researcher",
    goal="Research the latest AI developments",
    backstory="An expert researcher who tracks emerging AI technologies",
    tools=[search_tool, web_rag_tool],
    verbose=True
)
```

## Agent Memory and Context

Agents can maintain memory of their interactions and use context from previous tasks. This is particularly useful for complex workflows where information needs to be retained across multiple tasks.

```python Code
from crewai import Agent

analyst = Agent(
    role="Data Analyst",
    goal="Analyze and remember complex data patterns",
    backstory="An experienced analyst with a sharp memory for recurring patterns",
    memory=True,  # Enable memory
    verbose=True
)
```

When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks.

## Important Considerations and Best Practices

### Security and Code Execution

* When using `allow_code_execution`, be cautious with user input and always validate it
* Use `code_execution_mode: "safe"` (Docker) in production environments
* Consider setting appropriate `max_execution_time` limits to prevent infinite loops

### Performance Optimization

* Use `respect_context_window: true` to prevent token limit issues
* Set appropriate `max_rpm` to avoid rate limiting
* Enable `cache: true` to improve performance for repetitive tasks
* Adjust `max_iter` and `max_retry_limit` based on task complexity

### Memory and Context Management

* Use `memory: true` for tasks requiring historical context
* Leverage `knowledge_sources` for domain-specific information
* Configure `embedder_config` when using custom embedding models
* Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior

### Agent Collaboration

* Enable `allow_delegation: true` when agents need to work together
* Use `step_callback` to monitor and log agent interactions
* Consider using different LLMs for different purposes:
  * Main `llm` for complex reasoning
  * `function_calling_llm` for efficient tool usage

### Model Compatibility

* Set `use_system_prompt: false` for older models that don't support system messages
* Ensure your chosen `llm` supports the features you need (like function calling)
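To tie several of these recommendations together, here is a minimal sketch of an agent that combines memory, a knowledge source, and rate limiting. It assumes `StringKnowledgeSource` is available at `crewai.knowledge.source.string_knowledge_source` in your CrewAI version; the role, policy text, and limits are illustrative.

```python Code
from crewai import Agent
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Assumption: StringKnowledgeSource ships with your CrewAI version;
# any BaseKnowledgeSource works here. The policy text is illustrative.
policy_knowledge = StringKnowledgeSource(
    content="Refunds are processed within 5 business days of approval."
)

support_agent = Agent(
    role="Support Specialist",
    goal="Answer customer questions accurately using company policies",
    backstory="A patient support specialist who always cites policy.",
    memory=True,                           # keep historical context
    knowledge_sources=[policy_knowledge],  # domain-specific information
    respect_context_window=True,           # summarize to avoid token-limit issues
    max_rpm=10,                            # stay below provider rate limits
    cache=True,                            # reuse repeated tool results
)
```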
## Troubleshooting Common Issues

1. **Rate Limiting**: If you're hitting API rate limits:
   * Implement appropriate `max_rpm`
   * Use caching for repetitive operations
   * Consider batching requests

2. **Context Window Errors**: If you're exceeding context limits:
   * Enable `respect_context_window`
   * Use more efficient prompts
   * Clear agent memory periodically

3. **Code Execution Issues**: If code execution fails:
   * Verify Docker is installed for safe mode
   * Check execution permissions
   * Review code sandbox settings

4. **Memory Issues**: If agent responses seem inconsistent:
   * Verify memory is enabled
   * Check knowledge source configuration
   * Review conversation history management

Remember that agents are most effective when configured according to their specific use case. Take time to understand your requirements and adjust these parameters accordingly.

# CLI

Learn how to use the CrewAI CLI to interact with CrewAI.

# CrewAI CLI Documentation

The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.

## Installation

To use the CrewAI CLI, make sure you have CrewAI installed:

```shell
pip install crewai
```

## Basic Usage

The basic structure of a CrewAI CLI command is:

```shell
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```

## Available Commands

### 1. Create

Create a new crew or flow.

```shell
crewai create [OPTIONS] TYPE NAME
```

* `TYPE`: Choose between "crew" or "flow"
* `NAME`: Name of the crew or flow

Example:

```shell
crewai create crew my_new_crew
crewai create flow my_new_flow
```

### 2. Version

Show the installed version of CrewAI.

```shell
crewai version [OPTIONS]
```

* `--tools`: (Optional) Show the installed version of CrewAI tools

Example:

```shell
crewai version
crewai version --tools
```

### 3. Train

Train the crew for a specified number of iterations.

```shell
crewai train [OPTIONS]
```

* `-n, --n_iterations INTEGER`: Number of iterations to train the crew (default: 5)
* `-f, --filename TEXT`: Path to a custom file for training (default: "trained\_agents\_data.pkl")

Example:

```shell
crewai train -n 10 -f my_training_data.pkl
```

### 4. Replay

Replay the crew execution from a specific task.

```shell
crewai replay [OPTIONS]
```

* `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks

Example:

```shell
crewai replay -t task_123456
```

### 5. Log-tasks-outputs

Retrieve your latest crew\.kickoff() task outputs.

```shell
crewai log-tasks-outputs
```

### 6. Reset-memories

Reset the crew memories (long, short, entity, latest\_crew\_kickoff\_outputs).

```shell
crewai reset-memories [OPTIONS]
```

* `-l, --long`: Reset LONG TERM memory
* `-s, --short`: Reset SHORT TERM memory
* `-e, --entities`: Reset ENTITIES memory
* `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS
* `-a, --all`: Reset ALL memories

Example:

```shell
crewai reset-memories --long --short
crewai reset-memories --all
```

### 7. Test

Test the crew and evaluate the results.

```shell
crewai test [OPTIONS]
```

* `-n, --n_iterations INTEGER`: Number of iterations to test the crew (default: 3)
* `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")

Example:

```shell
crewai test -n 5 -m gpt-3.5-turbo
```

### 8. Run

Run the crew.

```shell
crewai run
```

Make sure to run these commands from the directory where your CrewAI project is set up. Some commands may require additional configuration or setup within your project structure.

### 9. API Keys

When running the `crewai create crew` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one. Once you've selected an LLM provider, you will be prompted for API keys.
#### Initial API key providers The CLI will initially prompt for API keys for the following services: * OpenAI * Groq * Anthropic * Google Gemini * SambaNova When you select a provider, the CLI will prompt you to enter your API key. #### Other Options If you select option 6, you will be able to select from a list of LiteLLM supported providers. When you select a provider, the CLI will prompt you to enter the Key name and the API key. See the following link for each provider's key name: * [LiteLLM Providers](https://docs.litellm.ai/docs/providers) # Collaboration Exploring the dynamics of agent collaboration within the CrewAI framework, focusing on the newly integrated features for enhanced functionality. ## Collaboration Fundamentals Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem. * **Information Sharing**: Ensures all agents are well-informed and can contribute effectively by sharing data and findings. * **Task Assistance**: Allows agents to seek help from peers with the required expertise for specific tasks. * **Resource Allocation**: Optimizes task execution through the efficient distribution and sharing of resources among agents. ## Enhanced Attributes for Improved Collaboration The `Crew` class has been enriched with several attributes to support advanced functionalities: | Feature | Description | | :-------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Language Model Management** (`manager_llm`, `function_calling_llm`) | Manages language models for executing tasks and tools. `manager_llm` is required for hierarchical processes, while `function_calling_llm` is optional with a default value for streamlined interactions. | | **Custom Manager Agent** (`manager_agent`) | Specifies a custom agent as the manager, replacing the default CrewAI manager. | | **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. | | **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. | | **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. | | **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) | | **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. | | **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. | | **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. | | **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. | | **Memory Usage** (`memory`) | Enables memory for storing execution history, aiding in agent learning and task efficiency. 
| **Embedder Configuration** (`embedder`) | Configures the embedder for language understanding and generation, with support for provider customization. |
| **Cache Management** (`cache`) | Specifies whether to cache tool execution results, enhancing performance. |
| **Output Logging** (`output_log_file`) | Defines the file path for logging crew execution output. |
| **Planning Mode** (`planning`) | Enables action planning before task execution. Set `planning=True` to activate. |
| **Replay Feature** (`replay`) | Provides CLI for listing tasks from the last run and replaying from specific tasks, aiding in task management and troubleshooting. |

## Delegation (Dividing to Conquer)

Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall capability.

## Implementing Collaboration and Delegation

Setting up a crew involves defining the roles and capabilities of each agent. CrewAI seamlessly manages their interactions, ensuring efficient collaboration and delegation, with enhanced customization and monitoring features to adapt to various operational needs.

## Example Scenario

Consider a crew with a researcher agent tasked with data gathering and a writer agent responsible for compiling reports. The integration of advanced language model management and process flow attributes allows for more sophisticated interactions, such as the writer delegating complex research tasks to the researcher or querying specific information, thereby facilitating a seamless workflow. A sketch of this setup follows.
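Here is a minimal sketch of that scenario. It assumes `allow_delegation=True` on the writer, which gives it CrewAI's built-in delegation tools for handing questions to co-workers; the roles, prompts, and task text are illustrative.

```python Code
from crewai import Agent, Crew, Process, Task

researcher = Agent(
    role="Researcher",
    goal="Gather accurate, up-to-date information on the assigned topic",
    backstory="A thorough investigator who verifies every source.",
)

writer = Agent(
    role="Writer",
    goal="Compile research findings into a clear, concise report",
    backstory="A technical writer who favors plain language.",
    allow_delegation=True,  # lets the writer delegate questions to the researcher
)

report_task = Task(
    description=(
        "Write a short report on recent AI trends, delegating any open "
        "research questions to the researcher."
    ),
    expected_output="A one-page report summarizing recent AI trends.",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[report_task],
    process=Process.sequential,
)

result = crew.kickoff()
print(result)
```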
## Conclusion

The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.

# Crews

Understanding and utilizing crews in the crewAI framework with comprehensive attributes and functionalities.

## What is a Crew?

A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.

## Crew Attributes

| Attribute | Parameters | Description |
| :--- | :--- | :--- |
| **Tasks** | `tasks` | A list of tasks assigned to the crew. |
| **Agents** | `agents` | A list of agents that are part of the crew. |
| **Process** *(optional)* | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. Default is `sequential`. |
| **Verbose** *(optional)* | `verbose` | The verbosity level for logging during execution. Defaults to `False`. |
| **Manager LLM** *(optional)* | `manager_llm` | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** *(optional)* | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** *(optional)* | `config` | Optional configuration settings for the crew, in `JSON` or `Dict[str, Any]` format. |
| **Max RPM** *(optional)* | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Language** *(optional)* | `language` | Language used for the crew, defaults to English. |
| **Language File** *(optional)* | `language_file` | Path to the language file to be used for the crew. |
| **Memory** *(optional)* | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Config** *(optional)* | `memory_config` | Configuration for the memory provider to be used by the crew. |
| **Cache** *(optional)* | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** *(optional)* | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Full Output** *(optional)* | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** *(optional)* | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** *(optional)* | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** *(optional)* | `output_log_file` | Whether to save a file with the complete crew output and execution. Set it to `True` to write `logs.txt` in the current folder, or pass a string with the full path and name of the file. |
| **Manager Agent** *(optional)* | `manager_agent` | Sets a custom agent that will be used as a manager. |
| **Prompt File** *(optional)* | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated, before each Crew iteration all Crew data is sent to an AgentPlanner that plans the tasks, and this plan is added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |

**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.

## Creating Crews

There are two ways to create crews in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.

### YAML Configuration (Recommended)

Using YAML configuration provides a cleaner, more maintainable way to define crews and is consistent with how agents and tasks are defined in CrewAI projects.

After creating your CrewAI project as outlined in the [Installation](/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.
#### Example Crew Class with Decorators ```python code from crewai import Agent, Crew, Task, Process from crewai.project import CrewBase, agent, task, crew, before_kickoff, after_kickoff @CrewBase class YourCrewName: """Description of your crew""" # Paths to your YAML configuration files # To see an example agent and task defined in YAML, checkout the following: # - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended # - Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended agents_config = 'config/agents.yaml' tasks_config = 'config/tasks.yaml' @before_kickoff def prepare_inputs(self, inputs): # Modify inputs before the crew starts inputs['additional_data'] = "Some extra information" return inputs @after_kickoff def process_output(self, output): # Modify output after the crew finishes output.raw += "\nProcessed after kickoff." return output @agent def agent_one(self) -> Agent: return Agent( config=self.agents_config['agent_one'], verbose=True ) @agent def agent_two(self) -> Agent: return Agent( config=self.agents_config['agent_two'], verbose=True ) @task def task_one(self) -> Task: return Task( config=self.tasks_config['task_one'] ) @task def task_two(self) -> Task: return Task( config=self.tasks_config['task_two'] ) @crew def crew(self) -> Crew: return Crew( agents=self.agents, # Automatically collected by the @agent decorator tasks=self.tasks, # Automatically collected by the @task decorator. process=Process.sequential, verbose=True, ) ``` Tasks will be executed in the order they are defined. The `CrewBase` class, along with these decorators, automates the collection of agents and tasks, reducing the need for manual management. #### Decorators overview from `annotations.py` CrewAI provides several decorators in the `annotations.py` file that are used to mark methods within your crew class for special handling: * `@CrewBase`: Marks the class as a crew base class. * `@agent`: Denotes a method that returns an `Agent` object. * `@task`: Denotes a method that returns a `Task` object. * `@crew`: Denotes the method that returns the `Crew` object. * `@before_kickoff`: (Optional) Marks a method to be executed before the crew starts. * `@after_kickoff`: (Optional) Marks a method to be executed after the crew finishes. These decorators help in organizing your crew's structure and automatically collecting agents and tasks without manually listing them. ### Direct Code Definition (Alternative) Alternatively, you can define the crew directly in code without using YAML configuration files. 
```python code
from crewai import Agent, Crew, Task, Process
from crewai_tools import YourCustomTool

class YourCrewName:
    def agent_one(self) -> Agent:
        return Agent(
            role="Data Analyst",
            goal="Analyze data trends in the market",
            backstory="An experienced data analyst with a background in economics",
            verbose=True,
            tools=[YourCustomTool()]
        )

    def agent_two(self) -> Agent:
        return Agent(
            role="Market Researcher",
            goal="Gather information on market dynamics",
            backstory="A diligent researcher with a keen eye for detail",
            verbose=True
        )

    def task_one(self) -> Task:
        return Task(
            description="Collect recent market data and identify trends.",
            expected_output="A report summarizing key trends in the market.",
            agent=self.agent_one()
        )

    def task_two(self) -> Task:
        return Task(
            description="Research factors affecting market dynamics.",
            expected_output="An analysis of factors influencing the market.",
            agent=self.agent_two()
        )

    def crew(self) -> Crew:
        return Crew(
            agents=[self.agent_one(), self.agent_two()],
            tasks=[self.task_one(), self.task_two()],
            process=Process.sequential,
            verbose=True
        )
```

In this example:

* Agents and tasks are defined directly within the class without decorators.
* We manually create and manage the list of agents and tasks.
* This approach provides more control but can be less maintainable for larger projects.

## Crew Output

The output of a crew in the CrewAI framework is encapsulated within the `CrewOutput` class. This class provides a structured way to access results of the crew's execution, including various formats such as raw strings, JSON, and Pydantic models. The `CrewOutput` includes the results from the final task output, token usage, and individual task outputs.

### Crew Output Attributes

| Attribute | Parameters | Type | Description |
| :--- | :--- | :--- | :--- |
| **Raw** | `raw` | `str` | The raw output of the crew. This is the default format for the output. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the crew. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the crew. |
| **Tasks Output** | `tasks_output` | `List[TaskOutput]` | A list of `TaskOutput` objects, each representing the output of a task in the crew. |
| **Token Usage** | `token_usage` | `Dict[str, Any]` | A summary of token usage, providing insights into the language model's performance during execution. |

### Crew Output Methods and Properties

| Method/Property | Description |
| :--- | :--- |
| **json** | Returns the JSON string representation of the crew output if the output format is JSON. |
| **to\_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| **\_\_str\_\_** | Returns the string representation of the crew output, prioritizing Pydantic, then JSON, then raw. |

### Accessing Crew Outputs

Once a crew has been executed, its output can be accessed through the `output` attribute of the `Crew` object. The `CrewOutput` class provides various ways to interact with and present this output.
#### Example

```python Code
import json

# Example crew execution
crew = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, write_article_task],
    verbose=True
)

crew_output = crew.kickoff()

# Accessing the crew output
print(f"Raw Output: {crew_output.raw}")
if crew_output.json_dict:
    print(f"JSON Output: {json.dumps(crew_output.json_dict, indent=2)}")
if crew_output.pydantic:
    print(f"Pydantic Output: {crew_output.pydantic}")
print(f"Tasks Output: {crew_output.tasks_output}")
print(f"Token Usage: {crew_output.token_usage}")
```

## Memory Utilization

Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.

## Cache Utilization

Caches can be employed to store the results of tools' execution, making the process more efficient by reducing the need to re-execute identical tasks.

## Crew Usage Metrics

After the crew execution, you can access the `usage_metrics` attribute to view the language model (LLM) usage metrics for all tasks executed by the crew. This provides insights into operational efficiency and areas for improvement.

```python Code
# Access the crew's usage metrics
crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew.kickoff()
print(crew.usage_metrics)
```

## Crew Execution Process

* **Sequential Process**: Tasks are executed one after another, allowing for a linear flow of work.
* **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding.

**Note**: A `manager_llm` or `manager_agent` is required for the hierarchical process and is essential for validating the process flow.

### Kicking Off a Crew

Once your crew is assembled, initiate the workflow with the `kickoff()` method. This starts the execution process according to the defined process flow.

```python Code
# Start the crew's task execution
result = my_crew.kickoff()
print(result)
```

### Different Ways to Kick Off a Crew

Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.

* `kickoff()`: Starts the execution process according to the defined process flow.
* `kickoff_for_each()`: Runs the crew once for each set of inputs in a provided list.
* `kickoff_async()`: Initiates the workflow asynchronously.
* `kickoff_for_each_async()`: Runs the crew once for each set of inputs in a provided list, executing each asynchronously.

```python Code
# Start the crew's task execution
result = my_crew.kickoff()
print(result)

# Example of using kickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
    print(result)

# Example of using kickoff_async (returns a coroutine; await it inside an async function)
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)

# Example of using kickoff_for_each_async (also a coroutine; await it inside an async function)
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
    print(async_result)
```

These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.
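Because `kickoff_async()` and `kickoff_for_each_async()` return coroutines, they need to be awaited. A minimal sketch of driving them with Python's `asyncio`, assuming `my_crew` is an already assembled `Crew`:

```python Code
import asyncio

async def main():
    # Await a single asynchronous kickoff
    result = await my_crew.kickoff_async(inputs={'topic': 'AI in healthcare'})
    print(result)

    # Await a kickoff for each input set in the list
    inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
    results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
    for result in results:
        print(result)

asyncio.run(main())
```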
### Replaying from a Specific Task

You can replay from a specific task using the CLI command `replay`. By running `crewai replay -t <task_id>`, you specify the `task_id` for the replay process. Kickoffs now save the task outputs of the latest run locally, so you can replay from them.

### Replaying from a Specific Task Using the CLI

To use the replay feature, follow these steps:

1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following commands.

To view the latest kickoff task IDs, use:

```shell
crewai log-tasks-outputs
```

Then, to replay from a specific task, use:

```shell
crewai replay -t <task_id>
```

These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks.

# Flows

Learn how to create and manage AI workflows using CrewAI Flows.

## Introduction

CrewAI Flows is a powerful feature designed to streamline the creation and management of AI workflows. Flows allow developers to combine and coordinate coding tasks and Crews efficiently, providing a robust framework for building sophisticated AI automations.

Flows allow you to create structured, event-driven workflows. They provide a seamless way to connect multiple tasks, manage state, and control the flow of execution in your AI applications. With Flows, you can easily design and implement multi-step processes that leverage the full potential of CrewAI's capabilities.

1. **Simplified Workflow Creation**: Easily chain together multiple Crews and tasks to create complex AI workflows.
2. **State Management**: Flows make it super easy to manage and share state between different tasks in your workflow.
3. **Event-Driven Architecture**: Built on an event-driven model, allowing for dynamic and responsive workflows.
4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.

## Getting Started

Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.

```python Code
from crewai.flow.flow import Flow, listen, start
from dotenv import load_dotenv
from litellm import completion

load_dotenv()  # load OPENAI_API_KEY from your .env file


class ExampleFlow(Flow):
    model = "gpt-4o-mini"

    @start()
    def generate_city(self):
        print("Starting flow")

        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": "Return the name of a random city in the world.",
                },
            ],
        )

        random_city = response["choices"][0]["message"]["content"]
        print(f"Random City: {random_city}")

        return random_city

    @listen(generate_city)
    def generate_fun_fact(self, random_city):
        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": f"Tell me a fun fact about {random_city}",
                },
            ],
        )

        fun_fact = response["choices"][0]["message"]["content"]
        return fun_fact


flow = ExampleFlow()
result = flow.kickoff()

print(f"Generated fun fact: {result}")
```

In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.

When you run the Flow, it will generate a random city and then generate a fun fact about that city.
The output will be printed to the console. **Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API. ### @start() The `@start()` decorator is used to mark a method as the starting point of a Flow. When a Flow is started, all the methods decorated with `@start()` are executed in parallel. You can have multiple start methods in a Flow, and they will all be executed when the Flow is started. ### @listen() The `@listen()` decorator is used to mark a method as a listener for the output of another task in the Flow. The method decorated with `@listen()` will be executed when the specified task emits an output. The method can access the output of the task it is listening to as an argument. #### Usage The `@listen()` decorator can be used in several ways: 1. **Listening to a Method by Name**: You can pass the name of the method you want to listen to as a string. When that method completes, the listener method will be triggered. ```python Code @listen("generate_city") def generate_fun_fact(self, random_city): # Implementation ``` 2. **Listening to a Method Directly**: You can pass the method itself. When that method completes, the listener method will be triggered. ```python Code @listen(generate_city) def generate_fun_fact(self, random_city): # Implementation ``` ### Flow Output Accessing and handling the output of a Flow is essential for integrating your AI workflows into larger applications or systems. CrewAI Flows provide straightforward mechanisms to retrieve the final output, access intermediate results, and manage the overall state of your Flow. #### Retrieving the Final Output When you run a Flow, the final output is determined by the last method that completes. The `kickoff()` method returns the output of this final method. Here's how you can access the final output: ```python Code from crewai.flow.flow import Flow, listen, start class OutputExampleFlow(Flow): @start() def first_method(self): return "Output from first_method" @listen(first_method) def second_method(self, first_output): return f"Second method received: {first_output}" flow = OutputExampleFlow() final_output = flow.kickoff() print("---- Final Output ----") print(final_output) ``` ```text Output ---- Final Output ---- Second method received: Output from first_method ``` In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow. The `kickoff()` method will return the final output, which is then printed to the console. #### Accessing and Updating State In addition to retrieving the final output, you can also access and update the state within your Flow. The state can be used to store and share data between different methods in the Flow. After the Flow has run, you can access the state to retrieve any information that was added or updated during the execution. 
Here's an example of how to update and access the state:

```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    counter: int = 0
    message: str = ""

class StateExampleFlow(Flow[ExampleState]):

    @start()
    def first_method(self):
        self.state.message = "Hello from first_method"
        self.state.counter += 1

    @listen(first_method)
    def second_method(self):
        self.state.message += " - updated by second_method"
        self.state.counter += 1
        return self.state.message

flow = StateExampleFlow()
final_output = flow.kickoff()
print(f"Final Output: {final_output}")
print("Final State:")
print(flow.state)
```

```text Output
Final Output: Hello from first_method - updated by second_method
Final State:
counter=2 message='Hello from first_method - updated by second_method'
```

In this example, the state is updated by both `first_method` and `second_method`. After the Flow has run, you can access the final state to see the updates made by these methods.

By ensuring that the final method's output is returned and providing access to the state, CrewAI Flows make it easy to integrate the results of your AI workflows into larger applications or systems, while also maintaining and accessing the state throughout the Flow's execution.

## Flow State Management

Managing state effectively is crucial for building reliable and maintainable AI workflows. CrewAI Flows provides robust mechanisms for both unstructured and structured state management, allowing developers to choose the approach that best fits their application's needs.

### Unstructured State Management

In unstructured state management, all state is stored in the `state` attribute of the `Flow` class. This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.

```python Code
from crewai.flow.flow import Flow, listen, start

class UnstructuredExampleFlow(Flow):

    @start()
    def first_method(self):
        self.state.message = "Hello from unstructured flow"
        self.state.counter = 0

    @listen(first_method)
    def second_method(self):
        self.state.counter += 1
        self.state.message += " - updated"

    @listen(second_method)
    def third_method(self):
        self.state.counter += 1
        self.state.message += " - updated again"

        print(f"State after third_method: {self.state}")

flow = UnstructuredExampleFlow()
flow.kickoff()
```

**Key Points:**

* **Flexibility:** You can dynamically add attributes to `self.state` without predefined constraints.
* **Simplicity:** Ideal for straightforward workflows where state structure is minimal or varies significantly.

### Structured State Management

Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow. By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.
```python Code from crewai.flow.flow import Flow, listen, start from pydantic import BaseModel class ExampleState(BaseModel): counter: int = 0 message: str = "" class StructuredExampleFlow(Flow[ExampleState]): @start() def first_method(self): self.state.message = "Hello from structured flow" @listen(first_method) def second_method(self): self.state.counter += 1 self.state.message += " - updated" @listen(second_method) def third_method(self): self.state.counter += 1 self.state.message += " - updated again" print(f"State after third_method: {self.state}") flow = StructuredExampleFlow() flow.kickoff() ``` **Key Points:** * **Defined Schema:** `ExampleState` clearly outlines the state structure, enhancing code readability and maintainability. * **Type Safety:** Leveraging Pydantic ensures that state attributes adhere to the specified types, reducing runtime errors. * **Auto-Completion:** IDEs can provide better auto-completion and error checking based on the defined state model. ### Choosing Between Unstructured and Structured State Management * **Use Unstructured State Management when:** * The workflow's state is simple or highly dynamic. * Flexibility is prioritized over strict state definitions. * Rapid prototyping is required without the overhead of defining schemas. * **Use Structured State Management when:** * The workflow requires a well-defined and consistent state structure. * Type safety and validation are important for your application's reliability. * You want to leverage IDE features like auto-completion and type checking for better developer experience. By providing both unstructured and structured state management options, CrewAI Flows empowers developers to build AI workflows that are both flexible and robust, catering to a wide range of application requirements. ## Flow Control ### Conditional Logic: `or` The `or_` function in Flows allows you to listen to multiple methods and trigger the listener method when any of the specified methods emit an output. ```python Code from crewai.flow.flow import Flow, listen, or_, start class OrExampleFlow(Flow): @start() def start_method(self): return "Hello from the start method" @listen(start_method) def second_method(self): return "Hello from the second method" @listen(or_(start_method, second_method)) def logger(self, result): print(f"Logger: {result}") flow = OrExampleFlow() flow.kickoff() ``` ```text Output Logger: Hello from the start method Logger: Hello from the second method ``` When you run this Flow, the `logger` method will be triggered by the output of either the `start_method` or the `second_method`. The `or_` function is used to listen to multiple methods and trigger the listener method when any of the specified methods emit an output. ### Conditional Logic: `and` The `and_` function in Flows allows you to listen to multiple methods and trigger the listener method only when all the specified methods emit an output. ```python Code from crewai.flow.flow import Flow, and_, listen, start class AndExampleFlow(Flow): @start() def start_method(self): self.state["greeting"] = "Hello from the start method" @listen(start_method) def second_method(self): self.state["joke"] = "What do computers eat? Microchips." @listen(and_(start_method, second_method)) def logger(self): print("---- Logger ----") print(self.state) flow = AndExampleFlow() flow.kickoff() ``` ```text Output ---- Logger ---- {'greeting': 'Hello from the start method', 'joke': 'What do computers eat? 
Microchips.'}
```

When you run this Flow, the `logger` method will be triggered only when both the `start_method` and the `second_method` emit an output. The `and_` function is used to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.

### Router

The `@router()` decorator in Flows allows you to define conditional routing logic based on the output of a method. You can specify different routes based on the output of the method, allowing you to control the flow of execution dynamically.

```python Code
import random
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    success_flag: bool = False

class RouterFlow(Flow[ExampleState]):

    @start()
    def start_method(self):
        print("Starting the structured flow")
        random_boolean = random.choice([True, False])
        self.state.success_flag = random_boolean

    @router(start_method)
    def second_method(self):
        if self.state.success_flag:
            return "success"
        else:
            return "failed"

    @listen("success")
    def third_method(self):
        print("Third method running")

    @listen("failed")
    def fourth_method(self):
        print("Fourth method running")

flow = RouterFlow()
flow.kickoff()
```

```text Output
Starting the structured flow
Third method running
```

In the above example, the `start_method` generates a random boolean value and sets it in the state. The `second_method` uses the `@router()` decorator to define conditional routing logic based on the value of the boolean. If the boolean is `True`, the method returns `"success"`, and if it is `False`, the method returns `"failed"`. The `third_method` and `fourth_method` listen to the output of the `second_method` and execute based on the returned value. Because only one route is taken per run, the output shows either `Third method running` or `Fourth method running`, depending on the random boolean generated by the `start_method`.

## Adding Crews to Flows

Creating a flow with multiple crews in CrewAI is straightforward. You can generate a new CrewAI project that includes all the scaffolding needed to create a flow with multiple crews by running the following command:

```bash
crewai create flow name_of_flow
```

This command will generate a new CrewAI project with the necessary folder structure. The generated project includes a prebuilt crew called `poem_crew` that is already working. You can use this crew as a template by copying, pasting, and editing it to create other crews.

### Folder Structure

After running the `crewai create flow name_of_flow` command, you will see a folder structure similar to the following:

| Directory/File | Description |
| :--------------------- | :------------------------------------------------------------------ |
| `name_of_flow/` | Root directory for the flow. |
| ├── `crews/` | Contains directories for specific crews. |
| │ └── `poem_crew/` | Directory for the "poem\_crew" with its configurations and scripts. |
| │ ├── `config/` | Configuration files directory for the "poem\_crew". |
| │ │ ├── `agents.yaml` | YAML file defining the agents for "poem\_crew". |
| │ │ └── `tasks.yaml` | YAML file defining the tasks for "poem\_crew". |
| │ ├── `poem_crew.py` | Script for "poem\_crew" functionality. |
| ├── `tools/` | Directory for additional tools used in the flow. |
| │ └── `custom_tool.py` | Custom tool implementation. |
| ├── `main.py` | Main script for running the flow. |
| ├── `README.md` | Project description and instructions. |
| ├── `pyproject.toml` | Configuration file for project dependencies and settings.
| | └── `.gitignore` | Specifies files and directories to ignore in version control. | ### Building Your Crews In the `crews` folder, you can define multiple crews. Each crew will have its own folder containing configuration files and the crew definition file. For example, the `poem_crew` folder contains: * `config/agents.yaml`: Defines the agents for the crew. * `config/tasks.yaml`: Defines the tasks for the crew. * `poem_crew.py`: Contains the crew definition, including agents, tasks, and the crew itself. You can copy, paste, and edit the `poem_crew` to create other crews. ### Connecting Crews in `main.py` The `main.py` file is where you create your flow and connect the crews together. You can define your flow by using the `Flow` class and the decorators `@start` and `@listen` to specify the flow of execution. Here's an example of how you can connect the `poem_crew` in the `main.py` file: ```python Code #!/usr/bin/env python from random import randint from pydantic import BaseModel from crewai.flow.flow import Flow, listen, start from .crews.poem_crew.poem_crew import PoemCrew class PoemState(BaseModel): sentence_count: int = 1 poem: str = "" class PoemFlow(Flow[PoemState]): @start() def generate_sentence_count(self): print("Generating sentence count") self.state.sentence_count = randint(1, 5) @listen(generate_sentence_count) def generate_poem(self): print("Generating poem") result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count}) print("Poem generated", result.raw) self.state.poem = result.raw @listen(generate_poem) def save_poem(self): print("Saving poem") with open("poem.txt", "w") as f: f.write(self.state.poem) def kickoff(): poem_flow = PoemFlow() poem_flow.kickoff() def plot(): poem_flow = PoemFlow() poem_flow.plot() if __name__ == "__main__": kickoff() ``` In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method. ### Running the Flow (Optional) Before running the flow, you can install the dependencies by running: ```bash crewai install ``` Once all of the dependencies are installed, you need to activate the virtual environment by running: ```bash source .venv/bin/activate ``` After activating the virtual environment, you can run the flow by executing one of the following commands: ```bash crewai flow kickoff ``` or ```bash uv run kickoff ``` The flow will execute, and you should see the output in the console. ## Plot Flows Visualizing your AI workflows can provide valuable insights into the structure and execution paths of your flows. CrewAI offers a powerful visualization tool that allows you to generate interactive plots of your flows, making it easier to understand and optimize your AI workflows. ### What are Plots? Plots in CrewAI are graphical representations of your AI workflows. They display the various tasks, their connections, and the flow of data between them. This visualization helps in understanding the sequence of operations, identifying bottlenecks, and ensuring that the workflow logic aligns with your expectations. ### How to Generate a Plot CrewAI provides two convenient methods to generate plots of your flows: #### Option 1: Using the `plot()` Method If you are working directly with a flow instance, you can generate a plot by calling the `plot()` method on your flow object. This method will create an HTML file containing the interactive plot of your flow. 
```python Code # Assuming you have a flow instance flow.plot("my_flow_plot") ``` This will generate a file named `my_flow_plot.html` in your current directory. You can open this file in a web browser to view the interactive plot. #### Option 2: Using the Command Line If you are working within a structured CrewAI project, you can generate a plot using the command line. This is particularly useful for larger projects where you want to visualize the entire flow setup. ```bash crewai flow plot ``` This command will generate an HTML file with the plot of your flow, similar to the `plot()` method. The file will be saved in your project directory, and you can open it in a web browser to explore the flow. ### Understanding the Plot The generated plot will display nodes representing the tasks in your flow, with directed edges indicating the flow of execution. The plot is interactive, allowing you to zoom in and out, and hover over nodes to see additional details. By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others. ### Conclusion Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you choose to use the `plot()` method or the command line, generating plots will provide you with a visual representation of your workflows, aiding in both development and presentation. ## Next Steps If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are four specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example: 1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow) 2. **Lead Score Flow**: This flow showcases adding human-in-the-loop feedback and handling different conditional branches using the router. It's an excellent example of how to incorporate dynamic decision-making and human oversight into your workflows. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/lead-score-flow) 3. **Write a Book Flow**: This example excels at chaining multiple crews together, where the output of one crew is used by another. Specifically, one crew outlines an entire book, and another crew generates chapters based on the outline. Eventually, everything is connected to produce a complete book. This flow is perfect for complex, multi-step processes that require coordination between different tasks. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/write_a_book_with_flows) 4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. 
[View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)

By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.

Also, check out our YouTube video on how to use flows in CrewAI!