# GET /inputs

Source: https://docs.crewai.com/en/api-reference/inputs

enterprise-api.en.yaml get /inputs

Get required inputs for your crew

# Introduction

Source: https://docs.crewai.com/en/api-reference/introduction

Complete reference for the CrewAI Enterprise REST API

# CrewAI Enterprise API

Welcome to the CrewAI Enterprise API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services.

## Quick Start

1. Navigate to your crew's detail page in the CrewAI Enterprise dashboard and copy your Bearer Token from the Status tab.
2. Use the `GET /inputs` endpoint to see what parameters your crew expects.
3. Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
4. Use `GET /status/{kickoff_id}` to check execution status and retrieve results.

## Authentication

All API requests require authentication using a Bearer token. Include your token in the `Authorization` header:

```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
  https://your-crew-url.crewai.com/inputs
```

### Token Types

| Token Type            | Scope                     | Use Case                                                      |
| :-------------------- | :------------------------ | :------------------------------------------------------------ |
| **Bearer Token**      | Organization-level access | Full crew operations, ideal for server-to-server integration  |
| **User Bearer Token** | User-scoped access        | Limited permissions, suitable for user-specific operations    |

You can find both token types in the Status tab of your crew's detail page in the CrewAI Enterprise dashboard.

## Base URL

Each deployed crew has its own unique API endpoint:

```
https://your-crew-name.crewai.com
```

Replace `your-crew-name` with your actual crew's URL from the dashboard.

## Typical Workflow

1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
4. **Results**: Extract the final output from the completed response

## Error Handling

The API uses standard HTTP status codes:

| Code  | Meaning                                    |
| ----- | :----------------------------------------- |
| `200` | Success                                    |
| `400` | Bad Request - Invalid input format         |
| `401` | Unauthorized - Invalid bearer token        |
| `404` | Not Found - Resource doesn't exist         |
| `422` | Validation Error - Missing required inputs |
| `500` | Server Error - Contact support             |

## Interactive Testing

**Why no "Send" button?** Since each CrewAI Enterprise user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.

Each endpoint page shows you:

* ✅ **Exact request format** with all parameters
* ✅ **Response examples** for success and error cases
* ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
* ✅ **Authentication examples** with proper Bearer token format

### **To Test Your Actual API:**

* Copy the cURL examples and replace the URL + token with your real values
* Import the examples into your preferred API testing tool

**Example workflow:**

1. **Copy the cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
4. **Run the request** in your terminal or API client
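As a quick sanity check, the four steps above can also be scripted end to end. The sketch below is illustrative rather than taken from the API spec: it assumes the kickoff body wraps your values under an `inputs` key and that the status response exposes a `state` field, so verify the exact schemas on the endpoint pages and substitute your real crew URL, token, and input names.

```python
# Minimal end-to-end sketch of the discovery -> kickoff -> poll workflow.
# CREW_URL, TOKEN, the {"inputs": ...} body shape, and the "state" field
# are placeholders/assumptions -- confirm them against your crew's endpoints.
import time

import requests

CREW_URL = "https://your-crew-name.crewai.com"  # your crew's base URL
TOKEN = "YOUR_CREW_TOKEN"                       # Bearer token from the Status tab
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Discovery: ask the crew which inputs it expects
inputs_resp = requests.get(f"{CREW_URL}/inputs", headers=HEADERS)
inputs_resp.raise_for_status()
print("Required inputs:", inputs_resp.json())

# 2. Execution: start a run and capture the kickoff_id
kickoff_resp = requests.post(
    f"{CREW_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI Agents"}},  # assumed request shape
)
kickoff_resp.raise_for_status()
kickoff_id = kickoff_resp.json()["kickoff_id"]

# 3. Monitoring: poll the status endpoint until the run finishes
while True:
    status_resp = requests.get(f"{CREW_URL}/status/{kickoff_id}", headers=HEADERS)
    status_resp.raise_for_status()
    status = status_resp.json()
    if status.get("state") in ("SUCCESS", "FAILED"):  # assumed state values
        break
    time.sleep(5)

# 4. Results: the completed response carries the final output
print(status)
```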
## Need Help?

Get help with API integration and troubleshooting

Manage your crews and view execution logs

# POST /kickoff

Source: https://docs.crewai.com/en/api-reference/kickoff

enterprise-api.en.yaml post /kickoff

Start a crew execution

# GET /status/{kickoff_id}

Source: https://docs.crewai.com/en/api-reference/status

enterprise-api.en.yaml get /status/{kickoff_id}

Get execution status

# Changelog

Source: https://docs.crewai.com/en/changelog

Product updates, improvements, and bug fixes for CrewAI

## v0.177.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.177.0)

## Core Improvements & Fixes

* Achieved parity between `rag` package and current implementation
* Enhanced LLM event handling with task and agent metadata
* Fixed mutable default arguments by replacing them with `None`
* Suppressed Pydantic deprecation warnings during initialization
* Fixed broken example link in `README.md`
* Removed Python 3.12+ only Ruff rules for compatibility
* Migrated CI workflows to use `uv` and updated dev tooling

## New Features & Enhancements

* Added tracing improvements and cleanup
* Centralized event logic by moving `events` module to `crewai.events`

## Documentation & Guides

* Updated Enterprise Action Auth Token section documentation
* Published documentation updates for `v0.175.0` release

## Cleanup & Refactoring

* Refactored parser into modular functions for better structure

## v0.175.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.175.0)

## Core Improvements & Fixes

* Fixed migration of the `tool` section during `crewai update`
* Reverted OpenAI pin: now requires `openai >=1.13.3` due to fixed import issues
* Fixed flaky tests and improved test stability
* Improved `Flow` listener resumability for HITL and cyclic flows
* Enhanced timeout handling in `PlusAPI` and `TraceBatchManager`
* Batched entity memory items to reduce redundant operations

## New Features & Enhancements

* Added support for additional parameters in `Flow.start()` methods
* Displayed task names in verbose CLI output
* Added centralized embedding types and introduced a base embedding client
* Introduced generic clients for ChromaDB and Qdrant
* Added support for `crewai config reset` to clear tokens
* Enabled `crewai_trigger_payload` auto-injection
* Simplified RAG client initialization and introduced RAG configuration system
* Added Qdrant RAG provider support
* Improved tracing with better event data
* Added support to remove Auth0 and email entry on `crewai login`

## Documentation & Guides

* Added documentation for automation triggers
* Fixed API Reference OpenAPI sources and redirects
* Added hybrid search alpha parameter to the docs

## Cleanup & Deprecations

* Added deprecation notice for `Task.max_retries`
* Removed Auth0 dependency from login flow

## v0.165.1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.165.1)

## Core Improvements & Fixes

* Fixed compatibility in `XMLSearchTool` by converting config values to strings for `configparser`
* Fixed flaky Pytest test involving `PytestUnraisableExceptionWarning`
* Mocked telemetry in test suite for more stable CI runs
* Moved Chroma lockfile handling to `db_storage_path`
* Ignored deprecation warnings from `chromadb`
* Pinned OpenAI version `<1.100.0` due to `ResponseTextConfigParam` import issue

## New Features & Enhancements

* Included exchanged agent messages into `ExternalMemory` metadata
* Automatically injected `crewai_trigger_payload`
* Renamed internal flag `inject_trigger_input` to
`allow_crewai_trigger_context` * Continued tracing improvements and ephemeral tracing logic * Consolidated tracing logic conditions * Added support for `agent_id`-linked memory entries in `Mem0` ## Documentation & Guides * Added example to Tool Repository docs * Updated Mem0 documentation for Short-Term and Entity Memory integration * Revised Korean translations and improved sentence structures ## Cleanup & Chores * Removed deprecated AgentOps integration ## v0.165.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.165.0) ## Core Improvements & Fixes * Fixed compatibility in `XMLSearchTool` by converting config values to strings for `configparser` * Fixed flaky Pytest test involving `PytestUnraisableExceptionWarning` * Mocked telemetry in test suite for more stable CI runs * Moved Chroma lockfile handling to `db_storage_path` * Ignored deprecation warnings from `chromadb` * Pinned OpenAI version `<1.100.0` due to `ResponseTextConfigParam` import issue ## New Features & Enhancements * Included exchanged agent messages into `ExternalMemory` metadata * Automatically injected `crewai_trigger_payload` * Renamed internal flag `inject_trigger_input` to `allow_crewai_trigger_context` * Continued tracing improvements and ephemeral tracing logic * Consolidated tracing logic conditions * Added support for `agent_id`-linked memory entries in `Mem0` ## Documentation & Guides * Added example to Tool Repository docs * Updated Mem0 documentation for Short-Term and Entity Memory integration * Revised Korean translations and improved sentence structures ## Cleanup & Chores * Removed deprecated AgentOps integration ## v0.159.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.159.0) ## Core Improvements & Fixes * Improved LLM message formatting performance for better runtime efficiency * Fixed use of incorrect endpoint in enterprise configuration auth/parameters * Commented out listener resumability check for stability during partial flow resumption ## New Features & Enhancements * Added `enterprise configure` command to CLI for streamlined enterprise setup * Introduced partial flow resumability support ## Documentation & Guides * Added documentation for new tools * Added Korean translations * Updated documentation with TrueFoundry integration details * Added RBAC documentation and general cleanup * Fixed API reference and revamped examples/cookbooks across EN, PT-BR, and KO ## v0.157.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.157.0) ## v0.157.0 What's Changed ## Core Improvements & Fixes * Enabled word wrapping for long input tool * Allowed persisting Flow state with `BaseModel` entries * Optimized string operations using `partition()` for performance * Dropped support for deprecated User Memory system * Bumped LiteLLM version to `1.74.9` * Fixed CLI to show missing modules more clearly during import * Supported device authorization with Okta ## New Features & Enhancements * Added `crewai config` CLI command group with tests * Added default value support for `crew.name` * Introduced initial tracing capabilities * Added support for LangDB integration * Added support for CLI configuration documentation ## Documentation & Guides * Updated MCP documentation with `connect_timeout` attribute * Added LangDB integration documentation * Added CLI config documentation * General feature doc updates and cleanup ## v0.152.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.152.0) ## Core Improvements & Fixes 
* Removed `crewai signup` references and replaced them with `crewai login` * Fixed support for adding memories to Mem0 using `agent_id` * Changed the default value in Mem0 configuration * Updated import error to show missing module files clearly * Added timezone support to event timestamps ## New Features & Enhancements * Enhanced `Flow` class to support custom flow names * Refactored RAG components into a dedicated top-level module ## Documentation & Guides * Fixed incorrect model naming in Google Vertex AI documentation ## v0.150.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.150.0) ## Core Improvements & Fixes * Used file lock around Chroma client initialization * Removed workaround related to SQLite without FTS5 * Dropped unsupported `stop` parameter for LLM models automatically * Fixed `save` method and updated related test cases * Fixed message handling for Ollama models when last message is from assistant * Removed duplicate print on LLM call error * Added deprecation notice to `UserMemory` * Upgraded LiteLLM to version 1.74.3 ## New Features & Enhancements * Added support for ad-hoc tool calling via internal LLM class * Updated Mem0 Storage from v1.1 to v2 ## Documentation & Guides * Fixed neatlogs documentation * Added Tavily Search & Extractor tools to the Search-Research suite * Added documentation for `SerperScrapeWebsiteTool` and reorganized Serper section * General documentation updates and improvements ## crewai-tools v0.58.0 ### New Tools / Enhancements * **SerperScrapeWebsiteTool**: Added a tool for extracting clean content from URLs * **Bedrock AgentCore**: Integrated browser and code interpreter toolkits for Bedrock agents * **Stagehand Update**: Refactored and updated Stagehand integration ### Fixes & Cleanup * **FTS5 Support**: Enabled SQLite FTS5 for improved text search in test workflows * **Test Speedups**: Parallelized GitHub Actions test suite for faster CI runs * **Cleanup**: Removed SQLite workaround due to FTS5 support being available\ **MongoDBVectorSearchTool**: Fixed serialization and schema handling ## v0.148.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.148.0) ## Core Improvements & Fixes * Used production WorkOS environment ID * Added SQLite FTS5 support to test workflow * Fixed agent knowledge handling * Compared using `BaseLLM` class instead of `LLM` * Fixed missing `create_directory` parameter in `Task` class ## New Features & Enhancements * Introduced Agent evaluation functionality * Added Evaluator experiment and regression testing methods * Implemented thread-safe `AgentEvaluator` * Enabled event emission for Agent evaluation * Supported evaluation of single `Agent` and `LiteAgent` * Added integration with `neatlogs` * Added crew context tracking for LLM guardrail events ## Documentation & Guides * Added documentation for `guardrail` attributes and usage examples * Added integration guide for `neatlogs` * Updated documentation for Agent repository and `Agent.kickoff` usage ## v0.141.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.141.0) ## Core Improvements & Fixes * Sped up GitHub Actions tests through parallelization ## New Features & Enhancements * Added crew context tracking for LLM guardrail events ## Documentation & Guides * Added documentation for Agent repository usage * Added documentation for `Agent.kickoff` method ## v0.140.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.140.0) ## Core Improvements & Fixes * Fixed 
typo in test prompts * Fixed project name normalization by stripping trailing slashes during crew creation * Ensured environment variables are written in uppercase * Updated LiteLLM dependency * Refactored collection handling in `RAGStorage` * Implemented PEP 621 dynamic versioning ## New Features & Enhancements * Added capability to track LLM calls by task and agent * Introduced `MemoryEvents` to monitor memory usage * Added console logging for memory system and LLM guardrail events * Improved data training support for models up to 7B parameters * Added Scarf and Reo.dev analytics tracking * CLI workos login ## Documentation & Guides * Updated CLI LLM documentation * Added Nebius integration to the docs * Corrected typos in installation and pt-BR documentation * Added docs about `MemoryEvents` * Implemented docs redirects and included development tools ## v0.134.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.134.0) ## Core Improvements & Fixes * Fixed tools parameter syntax * Fixed type annotation in `Task` * Fixed SSL error when retrieving LLM data from GitHub * Ensured compatibility with Pydantic 2.7.x * Removed `mkdocs` from project dependencies * Upgraded Langfuse code examples to use Python SDK v3 * Added sanitize role feature in `mem0` storage * Improved Crew search during memory reset * Improved console printer output ## New Features & Enhancements * Added support for initializing a tool from defined `Tool` attributes * Added official way to use MCP Tools within a `CrewBase` * Enhanced MCP tools support to allow selecting multiple tools per agent in `CrewBase` * Added Oxylabs Web Scraping tools ## Documentation & Guides * Updated `quickstart.mdx` * Added docs on `LLMGuardrail` events * Updated documentation with comprehensive service integration details * Updated recommendation filters for MCP and Enterprise tools * Updated docs for Maxim observability * Added pt-BR documentation translation * General documentation improvements ## v0.130.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.130.0) ## Core Improvements & Fixes * Removed duplicated message related to Tool result output * Fixed missing `manager_agent` tokens in `usage_metrics` from kickoff * Fixed telemetry singleton to respect dynamic environment variables * Fixed issue where Flow status logs could hide human input * Increased default X-axis spacing for flow plotting ## New Features & Enhancements * Added support for multi-org actions in the CLI * Enabled async tool executions for more efficient workflows * Introduced `LiteAgent` with Guardrail integration * Upgraded `LiteLLM` to support latest OpenAI version ## Documentation & Guides * Documented minimum `UV` version for Tool repository * Improved examples for Hallucination Guardrail * Updated planning docs for LLM usage * Added documentation for Maxim support in Agent observability * Expanded integrations documentation with images for enterprise features * Fixed guide on persistence * Updated Python version support to support python 3.13.x ## v0.126.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.126.0) ### What’s Changed #### Core Improvements & Fixes * Added support for Python 3.13 * Fixed agent knowledge sources issue * Persisted available tools from a Tool repository * Enabled tools to be loaded from Agent repository via their own module * Logged usage of tools when called by an LLM #### New Features & Enhancements * Added streamable-http transport support in MCP integration * 
Added support for community analytics * Expanded OpenAI-compatible section with a Gemini example * Introduced transparency features for prompts and memory systems * Minor enhancements for Tool publishing #### Documentation & Guides * Major restructuring of docs for better navigation * Expanded MCP integration documentation * Updated memory docs and README visuals * Fixed missing await keywords in async kickoff examples * Updated Portkey and Azure embeddings documentation * Added enterprise testing image to the LLM guide * General updates to the README ## v0.121.1 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.121.1) Bug fixes and better docs ## v0.121.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.121.0) # What’s Changed ## Core Improvements & Fixes * Fixed encoding error when creating tools * Fixed failing llama test * Updated logging configuration for consistency * Enhanced telemetry initialization and event handling ## New Features & Enhancements * Added markdown attribute to the Task class * Added reasoning attribute to the Agent class * Added inject\_date flag to Agent for automatic date injection * Implemented HallucinationGuardrail (no-op with test coverage) ## Documentation & Guides * Added documentation for StagehandTool and improved MDX structure * Added documentation for MCP integration and updated enterprise docs * Documented knowledge events and updated reasoning docs * Added stop parameter documentation * Fixed import references in doc examples (before\_kickoff, after\_kickoff) * General docs updates and restructuring for clarity ## v0.120.1 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.120.1) ## Whats New * Fixes Interpolation with hyphens ## v0.120.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.120.0) ### Core Improvements & Fixes • Enabled full Ruff rule set by default for stricter linting • Addressed race condition in FilteredStream using context managers • Fixed agent knowledge reset issue • Refactored agent fetching logic into utility module ### New Features & Enhancements • Added support for loading an Agent directly from a repository • Enabled setting an empty context for Task • Enhanced Agent repository feedback and fixed Tool auto-import behavior • Introduced direct initialization of knowledge (bypassing knowledge\_sources) ### Documentation & Guides • Updated security.md for current security practices • Cleaned up Google setup section for clarity • Added link to AI Studio when entering Gemini key • Updated Arize Phoenix observability guide • Refreshed flow documentation ## v0.119.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.119.0) What’s Changed ## Core Improvements & Fixes * Improved test reliability by enhancing pytest handling for flaky tests * Fixed memory reset crash when embedding dimensions mismatch * Enabled parent flow identification for Crew and LiteAgent * Prevented telemetry-related crashes when unavailable * Upgraded LiteLLM version for better compatibility * Fixed llama converter tests by removing skip\_external\_api ## New Features & Enhancements * Introduced knowledge retrieval prompt re-writting in Agent for improved tracking and debugging * Made LLM setup and quickstart guides model-agnostic ## Documentation & Guides * Added advanced configuration docs for the RAG tool * Updated Windows troubleshooting guide * Refined documentation examples for better clarity * Fixed typos across docs and config 
files ## v0.118.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.118.0) ### Core Improvements & Fixes * Fixed issues with missing prompt or system templates. * Removed global logging configuration to avoid unintended overrides. * Renamed TaskGuardrail to LLMGuardrail for improved clarity. * Downgraded litellm to version 1.167.1 for compatibility. * Added missing **init**.py files to ensure proper module initialization. ### New Features & Enhancements * Added support for no-code Guardrail creation to simplify AI behavior controls. ### Documentation & Guides * Removed CrewStructuredTool from public documentation to reflect internal usage. * Updated enterprise documentation and YouTube embed for improved onboarding experience. ## v0.117.1 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.117.1) * build: upgrade crewai-tools * upgrade liteLLM to latest version * Fix Mem0 OSS ## v0.117.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.117.0) # What's Changed ## New Features & Enhancements * Added `result_as_answer` parameter support in `@tool` decorator. * Introduced support for new language models: GPT-4.1, Gemini-2.0, and Gemini-2.5 Pro. * Enhanced knowledge management capabilities. * Added Huggingface provider option in CLI. * Improved compatibility and CI support for Python 3.10+. ## Core Improvements & Fixes * Fixed issues with incorrect template parameters and missing inputs. * Improved asynchronous flow handling with coroutine condition checks. * Enhanced memory management with isolated configuration and correct memory object copying. * Fixed initialization of lite agents with correct references. * Addressed Python type hint issues and removed redundant imports. * Updated event placement for improved tool usage tracking. * Raised explicit exceptions when flows fail. * Removed unused code and redundant comments from various modules. * Updated GitHub App token action to v2. ## Documentation & Guides * Enhanced documentation structure, including enterprise deployment instructions. * Automatically create output folders for documentation generation. * Fixed broken link in `WeaviateVectorSearchTool` documentation. * Fixed guardrail documentation usage and import paths for JSON search tools. * Updated documentation for `CodeInterpreterTool`. * Improved SEO, contextual navigation, and error handling for documentation pages. ## v0.114.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.114.0) # What's Changed ## New Features & Enhancements * Agents as an atomic unit. (`Agent(...).kickoff()`) * Support to Custom LLM implementations. * Integrated External Memory and Opik observability. * Enhanced YAML extraction. * Multimodal agent validation. * Added Secure fingerprints for agents and crews. ## Core Improvements & Fixes * Improved serialization, agent copying, and Python compatibility. * Added wildcard support to emit() * Added support for additional router calls and context window adjustments. * Fixed typing issues, validation, and import statements. * Improved method performance. * Enhanced agent task handling, event emissions, and memory management. * Fixed CLI issues, conditional tasks, cloning behavior, and tool outputs. ## Documentation & Guides * Improved documentation structure, theme, and organization. * Added guides for Local NVIDIA NIM with WSL2, W\&B Weave, and Arize Phoenix. * Updated tool configuration examples, prompts, and observability docs. 
* Guide on using singular agents within Flows ## v0.108.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.108.0) # Features * Converted tabs to spaces in crew\.py template in PR #2190 * Enhanced LLM Streaming Response Handling and Event System in PR #2266 * Included model\_name in PR #2310 * Enhanced Event Listener with rich visualization and improved logging in PR #2321 * Added fingerprints in PR #2332 # Bug Fixes * Fixed Mistral issues in PR #2308 * Fixed a bug in documentation in PR #2370 * Fixed type check error in fingerprint property in PR #2369 # Documentation Updates * Improved tool documentation in PR #2259 * Updated installation guide for the uv tool package in PR #2196 * Added instructions for upgrading crewAI with the uv tool in PR #2363 * Added documentation for ApifyActorsTool in PR #2254 ## v0.105.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.105.0) **Core Improvements & Fixes** * Fixed issues with missing template variables and user memory configuration. * Improved async flow support and addressed agent response formatting. * Enhanced memory reset functionality and fixed CLI memory commands. * Fixed type issues, tool calling properties, and telemetry decoupling. **New Features & Enhancements** * Added Flow state export and improved state utilities. * Enhanced agent knowledge setup with optional crew embedder. * Introduced event emitter for better observability and LLM call tracking. * Added support for Python 3.10 and ChatOllama from langchain\_ollama. * Integrated context window size support for the o3-mini model. * Added support for multiple router calls. **Documentation & Guides** * Improved documentation layout and hierarchical structure. * Added QdrantVectorSearchTool guide and clarified event listener usage. * Fixed typos in prompts and updated Amazon Bedrock model listings. ## v0.102.0 [View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.102.0) ### Core Improvements & Fixes * Enhanced LLM Support: Improved structured LLM output, parameter handling, and formatting for Anthropic models. * Crew & Agent Stability: Fixed issues with cloning agents/crews using knowledge sources, multiple task outputs in conditional tasks, and ignored Crew task callbacks. * Memory & Storage Fixes: Fixed short-term memory handling with Bedrock, ensured correct embedder initialization, and added a reset memories function in the crew class. * Training & Execution Reliability: Fixed broken training and interpolation issues with dict and list input types. ### New Features & Enhancements * Advanced Knowledge Management: Improved naming conventions and enhanced embedding configuration with custom embedder support. * Expanded Logging & Observability: Added JSON format support for logging and integrated MLflow tracing documentation. * Data Handling Improvements: Updated excel\_knowledge\_source.py to process multi-tab files. * General Performance & Codebase Clean-Up: Streamlined enterprise code alignment and resolved linting issues. * Adding new tool QdrantVectorSearchTool ### Documentation & Guides * Updated AI & Memory Docs: Improved Bedrock, Google AI, and long-term memory documentation. * Task & Workflow Clarity: Added "Human Input" row to Task Attributes, Langfuse guide, and FileWriterTool documentation. * Fixed Various Typos & Formatting Issues. ### Maintenance & Miscellaneous * Refined Google Docs integrations and task handling for the current year. 
## v0.100.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.100.0)

* Feat: Add Composio docs
* Feat: Add SageMaker as an LLM provider
* Fix: Overall LLM connection issues
* Fix: Using safe accessors on training
* Fix: Add version check to crew\_chat.py
* Docs: New docs for crewai chat
* Docs: Improve formatting and clarity in CLI and Composio Tool docs

## v0.98.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.98.0)

* Feat: Conversation crew v1
* Feat: Add unique ID to flow states
* Feat: Add @persist decorator with FlowPersistence interface
* Integration: Add SambaNova integration
* Integration: Add NVIDIA NIM provider in cli
* Integration: Introducing VoyageAI
* Chore: Update date to current year in template
* Fix: Fix API Key Behavior and Entity Handling in Mem0 Integration
* Fix: Fixed core invoke loop logic and relevant tests
* Fix: Make tool inputs actual objects and not strings
* Fix: Add important missing parts to creating tools
* Fix: Drop litellm version to prevent windows issue
* Fix: Before kickoff if inputs are none
* Fix: TYPOS
* Fix: Nested pydantic model issue
* Fix: Docling issues
* Fix: union issue
* Docs updates

## v0.95.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.95.0)

* Feat: Adding Multimodal Abilities to Crew
* Feat: Programmatic Guardrails
* Feat: HITL multiple rounds
* Feat: Gemini 2.0 Support
* Feat: CrewAI Flows Improvements
* Feat: Add Workflow Permissions
* Feat: Add support for langfuse with litellm
* Feat: Portkey Integration with CrewAI
* Feat: Add interpolate\_only method and improve error handling
* Feat: Docling Support
* Feat: Weaviate Support
* Fix: output\_file not respecting system path
* Fix: Disk I/O error when resetting short-term memory.
* Fix: CrewJSONEncoder now accepts enums
* Fix: Python max version
* Fix: Interpolation for output\_file in Task
* Fix: Handle coworker role name case/whitespace properly
* Fix: Add tiktoken as explicit dependency and document Rust requirement
* Fix: Include agent knowledge in planning process
* Fix: Change storage initialization to None for KnowledgeStorage
* Fix: Fix optional storage checks
* Fix: include event emitter in flows
* Fix: Docstring, Error Handling, and Type Hints Improvements
* Fix: Suppressed userWarnings from litellm pydantic issues

## v0.86.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.86.0)

* remove all references to pipeline and pipeline router
* docs: Add Nvidia NIM as provider in Custom LLM
* add knowledge demo + improve knowledge docs
* Brandon/cre 509 hitl multiple rounds of followup
* New docs about yaml crew with decorators. Simplify template crew
## v0.85.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.85.0)

* Added knowledge to agent level
* Feat/remove langchain
* Improve typed task outputs
* Log in to Tool Repository on `crewai login`
* Fixes issues with result as answer not properly exiting LLM loop
* fix: missing key name when running with ollama provider
* fix spelling issue found
* Update readme for running mypy
* Add knowledge to mint.json
* Update Github actions
* Docs: Update Agents docs to include two approaches for creating an agent
* Documentation Improvements: LLM Configuration and Usage

## v0.83.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.83.0)

* New `before_kickoff` and `after_kickoff` crew callbacks
* Support to pre-seed agents with Knowledge
* Add support for retrieving user preferences and memories using Mem0
* Fix Async Execution
* Upgrade chroma and adjust embedder function generator
* Update CLI Watson supported models + docs
* Reduce level for Bandit
* Fixing all tests
* Update Docs

## v0.80.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.80.0)

* Fixing Tokens callback replacement bug
* Fixing Step callback issue
* Add cached prompt tokens info on usage metrics
* Fix crew\_train\_success test

## v0.79.4

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.79.4)

Series of small bug fixes around LLM support

## v0.79.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.79.0)

* Add inputs to flows
* Enhance log storage to support more data types
* Add support to IBM memory
* Add Watson as an option in CLI
* Add security.md file
* Replace .netrc with uv environment variables
* Move BaseTool to main package and centralize tool description generation
* Raise an error if an LLM doesn't return a response
* Fix flows to support cycles and added in test
* Update how we name crews and fix missing config
* Update docs

## v0.76.9

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.76.9)

* Update plot command for flow to crewai flow plot
* Add tomli so we can support 3.10
* Forward install command options to `uv sync`
* Improve tool text description and args
* Improve tooling and flow docs
* Update flows cli to allow you to easily add additional crews to a flow with crewai flow add-crew
* Fixed flows bug when using multiple start and listen(and\_(..., ..., ...))

## v0.76.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.76.2)

Updating crewai create command

## v0.76.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.76.0)

* fix/fixed missing API prompt + CLI docs update
* chore(readme): fixing step for 'running tests' in the contribution
* support unsafe code execution; add in docker install and running checks
* Fix memory imports for embedding functions

## v0.75.1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.75.1)

new `--provider` option on crewai create

## v0.75.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.75.0)

* Fixing test post training
* Simplify flows
* Adapt `crewai tool install `
* Ensure original embedding config works
* Fix bugs
* Update docs - Including adding Cerebras LLM example configuration to LLM docs
* Drop unnecessary tests

## v0.74.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.74.2)

* feat: add poetry.lock to uv migration
* fix tool calling issue

## v0.74.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.74.0)

* UV migration
* Adapt Tools CLI to UV
* Add warning from Poetry -> UV
* CLI to allow for model selection & submitting API keys
* New Memory Base
* Fix Linting and Warnings
* Update Docs
* Bug fixes

## v0.70.1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.70.1)

* New Flow feature
* Flow visualizer
* Create `crewai create flow` command
* Create `crewai tool create ` command
* Add Git validations for publishing tools
* fix: JSON encoding date objects
* New Docs
* Update README
* Bug fixes

## v0.65.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.65.2)

* Adding experimental Flows feature
* Fixing order of tasks bug
* Updating templates

## v0.64.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.64.0)

* Ordering tasks properly
* Fixing summarization logic
* Fixing stop words logic
* Increases default max iterations to 20
* Fix crew's key after input interpolation
* Fixing Training Feature
* Adding initial tools API
* TYPOS
* Updating Docs

Fixes: #1359 #1355 #1353 #1356 and others

## v0.63.6

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.63.6)

* Updating projects templates

## v0.63.5

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.63.5)

* Bringing support to the o1 family back, and any model that doesn't support stop words
* Updating dependencies
* Updating logs
* Updating docs

## v0.63.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.63.2)

* Adding OPENAI\_BASE\_URL as fallback
* Adding proper LLM import
* Updating docs

## v0.63.1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.63.1)

* Small bug fix to support future CrewAI deploy

## v0.63.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.63.0)

* New LLM class to interact with LLMs (leveraging LiteLLM)
* Adding support to custom memory interfaces
* Bringing GPT-4o-mini as the default model
* Updates Docs
* Updating dependencies
* Bug fixes
* Remove redundant task creation in `kickoff_for_each_async`

## v0.61.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.61.0)

* Updating dependencies
* Printing max rpm message in different color
* Updating all cassettes for tests
* Always ending on a user message - to better support certain models like bedrock ones
* Overall small bug fixes

## v0.60.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.60.0)

* Removing LangChain and Rebuilding Executor
* Get all of our tests back to green
* Adds the ability to not use system prompt use\_system\_prompt on the Agent
* Adds the ability to not use stop words (to support o1 models) use\_stop\_words on the Agent
* Sliding context window gets renamed to respect\_context\_window, and enabled by default
* Delegation is now disabled by default
* Inner prompts were slightly changed as well
* Overall reliability and quality of results
* New support for:
* Number of max requests per minute
* A maximum number of iterations before giving a final answer
* Properly taking advantage of system prompts
* Token calculation flow
* New logging of the crew and agent execution

## v0.55.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.55.2)

* Adding ability for autocomplete
* Add name and expected\_output to TaskOutput
* New `crewai install` CLI
* New `crewai deploy` CLI
* Cleaning up of Pipeline feature
* Updated docs
* Dev experience improvements like bandit CI pipeline
* Fix bugs:
* Ability to use `planning_llm`
* Fix YAML based projects
* Fix Azure support
* Add support to Python 3.10
* Moving away from Pydantic v1

## v0.51.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.51.0)

* crewAI Testing / Evaluation - [https://docs.crewai.com/core-concepts/Testing/](https://docs.crewai.com/core-concepts/Testing/)
* Adding new sliding context window
* Allowing all attributes on YAML - [https://docs.crewai.com/getting-started/Start-a-New-CrewAI-Project-Template-Method/#customizing-your-project](https://docs.crewai.com/getting-started/Start-a-New-CrewAI-Project-Template-Method/#customizing-your-project)
* Adding initial Pipeline Structure - [https://docs.crewai.com/core-concepts/Pipeline/](https://docs.crewai.com/core-concepts/Pipeline/)
* Ability to set LLM for planning step - [https://docs.crewai.com/core-concepts/Planning/](https://docs.crewai.com/core-concepts/Planning/)
* New crew run command - [https://docs.crewai.com/getting-started/Start-a-New-CrewAI-Project-Template-Method/#running-your-project](https://docs.crewai.com/getting-started/Start-a-New-CrewAI-Project-Template-Method/#running-your-project)
* Saving file now dumps dict into JSON - [https://docs.crewai.com/core-concepts/Tasks/#creating-directories-when-saving-files](https://docs.crewai.com/core-concepts/Tasks/#creating-directories-when-saving-files)
* Using verbose settings for tool outputs
* Added new Github Templates
* New Vision tool - [https://docs.crewai.com/tools/VisionTool/](https://docs.crewai.com/tools/VisionTool/)
* New DALL-E Tool - [https://docs.crewai.com/tools/DALL-ETool/](https://docs.crewai.com/tools/DALL-ETool/)
* New MySQL tool - [https://docs.crewai.com/tools/MySQLTool/](https://docs.crewai.com/tools/MySQLTool/)
* New NL2SQL Tool - [https://docs.crewai.com/tools/NL2SQLTool.md](https://docs.crewai.com/tools/NL2SQLTool.md)
* Bug Fixes:
* Bug with planning feature output
* Async tasks for hierarchical process
* Better pydantic output for non OAI models
* JSON truncation issues
* Fix logging types
* Only import AgentOps if the Env Key is set
* Sanitize agent roles to ensure valid directory names (Windows)
* Tool names shouldn't contain spaces for OpenAI
* A bunch of minor issues

## v0.41.1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.41.1)

* Fix bug with planning feature

## v0.41.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.41.0)

* **\[Breaking Change]** Type Safe output
* All crews and tasks now return a proper object TaskOutput and CrewOutput
* **\[Feature]** New planning feature for crews (plan before act)
* by adding planning=True to the Crew instance
* **\[Feature]** Introduced Replay Feature
* New CLI that allows you to list the tasks from the last run and replay from a specific one
* **\[Feature]** Ability to reset memory
* You can clean your crew memory before running it again
* **\[Feature]** Add retry feature for LLM calls
* You can retry llm calls and not stop the crew execution
* **\[Feature]** Added ability to customize converter
* **\[Tool]** Enhanced tools with type hinting and new attributes
* **\[Tool]** Added MultiON Tool
* **\[Tool]** Fixed filecrawl tools
* **\[Tool]** Fixed bug in Scraping tool
* **\[Tools]** Bumped crewAI-tools dependency to version
* **\[Bugs]** General bug fixes and improvements
* **\[Bugs]** Telemetry fixes
* **\[Bugs]** Spell check corrections
* **\[Docs]** Updated documentation

## v0.36.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.36.0)

* Bug fix
* Updating Docs
* Updating native prompts
* Fixing TYPOs on the prompts
* Adding AgentOps native support
* Adding Firecrawl Tools
* Adding new ability to return a tool result as an agent result
* Improving Code Interpreter tool
* Adding new option to create your own converter class (docs pending)

## v0.35.8

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.35.8)

* fixing embedchain dependency issue

## v0.35.7

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.35.7)

* New @composiohq integration is out
* Documentation update
* Custom GPT Updated
* Adjusting manager verbosity level
* Bug fixes

## v0.35.5

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.35.5)

* Fix embedchain dependency

## v0.35.4

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.35.4)

* Updating crewai create CLI to use the new version

## v0.35.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.35.3)

* Code Execution Bug fixed
* Updating overall docs
* Bumping version of crewai-tools
* Bumping versions of many dependencies
* Overall bugfixes

## v0.35.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.35.0)

* Your agents can now execute code
* Bring any 3rd-party agent: LlamaIndex, LangChain and Autogen agents can all be part of your crew now!
* Train your crew before you execute it and get consistent results! New CLI `crewai train -n X`
* Bug fixes and docs updates (still missing some new docs updates coming soon)

## v0.32.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.32.2)

* Updating `crewai create` CLI to use the new version
* Fixing delegation agent matching

## v0.32.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.32.0)

* New `kickoff_for_each`, `kickoff_async` and `kickoff_for_each_async` methods for better control over the kickoff process
* Adding support for all LlamaIndex hub integrations
* Adding `usage_metrics` to the full output of a crew
* Adding support to multiple crews on the new YAML format
* Updating dependencies
* Fixed Bugs and TYPOs
* Documentation updated
* Added search in docs
* Making gpt-4o the default model
* Adding new docs for LangTrace, Browserbase and Exa Search
* Adding timestamp to logging

## v0.30.11

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.30.11)

* Updating project generation template

## v0.30.8

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.30.8)

* Updating dependencies
* Small bug fixes on crewAI project structure
* Removing custom YAML parser for now

## v0.30.5

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.30.5)

* Making agent delegation more versatile for smaller models

## v0.30.4

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.30.4)

**Docs Update will follow** sorry about that and thank you for bearing with me, we are launching new docs soon!

➿ Fixing task callback
🧙 Ability to set a specific agent as manager instead of having the crew create one for you
📄 Ability to set system, prompt and response templates, so it works more reliably with open-source models (works better with smaller models)
👨‍💻 Improving json and pydantic output (works better with smaller models)
🔎 Improving tool name recognition (works better with smaller models)
🧰 Improvements for tool usage (works better with smaller models)
📃 Initial support to bring your own prompts
2️⃣ Fixing duplicating token calculator metrics
🪚 Adding a couple of new tools, Browserbase and Exa Search
📁 Ability to create a directory when saving as a file
🔁 Updating dependencies - double check tools
📄 Overall small documentation improvements
🐛 Smaller bug fixes (typos and such)
👬 Fixing co-worker / coworker issues
👀 Smaller Readme Updates

## v0.28.8

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.28.8)

* updating version used on crewai CLI

## v0.28.7

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/0.28.7)

* Bug fixes
* Updating crewAI tool version with bug fixes

## v0.28.5

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.28.5)

* Major Long term memory interpolation issue
* Updating tools package dependency with fixes
* Removing unnecessary certificate

## v0.28.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.28.2)

* Major long term memory fix

## v0.28.1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.28.1)

* Updating crewai-tools to 0.1.15

## v0.28.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.28.0)

* Not overriding LLM callbacks
* Adding `max_execution_time` support
* Adding specific memory docs
* Moving tool usage logging color to purple from yellow
* Updating Docs

## v0.27.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.27.0)

* 🧠 **Memory (shared crew memory)** - To enable it, just add `memory=True` to your crew; it will work transparently and make outcomes better and more reliable. It's disabled by default for now
* 🤚🏼 **Native Human Input Support:** [docs](https://docs.crewai.com/how-to/Human-Input-on-Execution/)
* 🌐 **Universal RAG Tools Support:** Any models, beyond just OpenAI. [Example](https://docs.crewai.com/tools/DirectorySearchTool/#custom-model-and-embeddings)
* 🔍 **Enhanced Cache Control:** Meet the ingenious cache\_function attribute: [docs](https://docs.crewai.com/core-concepts/Tools/#custom-caching-mechanism)
* 🔁 **Updated crewai-tools Dependency:** Always in sync with the latest and greatest.
* ⛓️ **Cross Agent Delegation:** Smoother cooperation between agents.
* 💠 **Inner Prompt Improvements:** A finer conversational flow.
* 📝 **Improving tool usage with better parsing**
* 🔒 **Security improvements and updating dependencies**
* 📄 **Documentation improved**
* 🐛 **Bug fixes**

## v0.22.5

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.22.5)

* Other minor import issues on the new templates

## v0.22.4

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.22.4)

Fixing template issues

## v0.22.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.22.2)

* Fixing bug on the new cli template
* Guaranteeing tasks order on new cli template

## v0.22.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.22.0)

* Adding initial CLI `crewai create` command
* Adding ability for agents and tasks to be defined using dictionaries
* Adding more clear agent logging
* Fixing "Exceed maximum recursion depth" bug
* Fixing docs
* Updating README

## v0.19.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.19.0)

* Efficiency in tool usage +1023.21%
* Mean tools used +276%
* Tool errors slashed by 67%, more reliable than ever.
* Delegation capabilities enhanced
* Ability to fall back to function calling by setting `function_calling_llm` to Agent or Crew
* Ability to get crew execution metrics after `kickoff` with `crew.usage_metrics`
* Adding ability for inputs to be passed into kickoff, now `crew.kickoff(inputs={'key': 'value'})`
* Updating Docs

## v0.16.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.16.3)

* Fixing overall bugs
* Making sure code is backwards compatible

## v0.16.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.16.0)

* Removing lingering `crewai_tools` dependency
* Adding initial support for inputs interpolation (missing docs)
* Adding ability to track tools usage, tools error, formatting errors, tokens usage
* Updating README

## v0.14.4

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.14.4)

* Updating timeouts
* Updating docs
* Removing crewai\_tools as a mandatory dependency
* Making agents memory-less by default for token count reduction (breaking change for people counting on this previously)

## v0.14.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.14.3)

* Fixing broken docs link
* Adding support for agents without tools
* Avoid empty task outputs

## v0.14.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.14.0)

All improvements from the v0.14.0rc.
* Support to export JSON and Pydantic from open-source models

## v0.14.0rc

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.14.0rc0)

* Adding support to crewai-tools
* Adding support to format task output as Pydantic objects or JSON
* Adding support to save task output to a file
* Improved reliability for inter agent delegation
* Revamp tools usage logic to properly use function calling
* Updating internal prompts
* Supporting tools with no arguments
* Bug fixes

## v0.11.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.11.2)

* Adding further error logging so users understand what is happening if a tool fails

## v0.11.1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.11.1)

* It fixes a bug in the tool usage logic that was caching the result too early even if there was an error in the usage, preventing the tool from being used again.
* It will also print any error message in red, allowing the user to understand what the problem with the tool was.

## v0.11.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.11.0)

* Ability to set `function_calling_llm` on both the entire crew and individual agents
* Some early attempts on cost reduction
* Improving function calling for tools
* Updates docs

## v0.10.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.10.0)

* Ability to get `full_output` from crew kickoff with all tasks outputs
* Ability to set `step_callback` function for both Agents and Crews so you can get all intermediate steps
* Reminding the Agent of the expected format after a certain number of tool usages.
* New tool usage internals now using json, unlocking tools with multiple arguments
* Refactoring overall delegation logic, now way more reliable
* Fixed `max_iter` bug, now properly forcing the llm to answer as it gets to that
* Rebuilt caching structure, making sure multiple agents can use the same cache
* Refactoring Task repeated usage prevention logic
* Removing now unnecessary `CrewAgentOutputParser`
* Opt-in to share complete crew related data with the crewAI team
* Overall Docs update

## v0.5.5

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.5.5)

* Overall doc + readme improvements
* Fixing RPM controller being set unnecessarily
* Adding early stage anonymous telemetry for lib improvement

## v0.5.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.5.3)

* Quick fix for hierarchical manager

## v0.5.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.5.2)

* Adding `manager_llm` for hierarchical process
* Improving `max_iter` and `max_rpm` logic
* Updating README and Docs

## v0.5.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.5.0)

This new version brings a lot of new features and improvements to the library.

## Features

* Adding Task Callbacks.
* Adding support for Hierarchical process.
* Adding ability to reference specific tasks in another task.
* Adding ability for parallel task execution.

## Improvements

* Revamping Max Iterations and Max Requests per Minute.
* Developer experience improvements, docstrings and such.
* Small improvements and TYPOs.
* Fix static typing errors.
* Updated README and Docs.
## v0.1.32

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.1.32)

* Moving to LangChain 0.1.0
* Improving Prompts
* Adding ability to limit the maximum number of iterations for an agent
* Adding Request Per Minute throttling for both Agents and Crews
* Adding initial support for translations
* Adding Greek translation
* Improve code readability
* Starting new documentation with mkdocs

## v0.1.23

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.1.23)

* Many Reliability improvements
* Prompt changes
* Initial changes for supporting multiple languages
* Fixing bug on task repeated execution
* Better execution error handling
* Updating README

## v0.1.14

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.1.14)

* Adding tool caching and loop execution prevention. (@joaomdmoura)
* Adding more guidelines for Agent delegation. (@joaomdmoura)
* Updating to use new openai lib version. (@joaomdmoura)
* Adding verbose levels to the logger. (@joaomdmoura)
* Removing WIP code. (@joaomdmoura)
* A lot of developer quality of life improvements (Special thanks to @greysonlalonde).
* Updating to pydantic v2 (Special thanks to @greysonlalonde as well).

## v0.1.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.1.2)

* Adding ability to use other LLMs, not OpenAI

## v0.1.1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.1.1)

# CrewAI v0.1.1 Release Notes

## What's New

* **Crew Verbose Mode**: Now allowing you to inspect how the tasks are being executed.
* **README and Docs Updates**: A series of minor updates on the docs

## v0.1.0

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/v0.1.0)

# CrewAI v0.1.0 Release Notes

We are thrilled to announce the initial release of CrewAI, version 0.1.0! CrewAI is a framework designed to facilitate the orchestration of autonomous AI agents capable of role-playing and collaboration to accomplish complex tasks more efficiently.

## What's New

* **Initial Launch**: CrewAI is now officially in the wild! This foundational release lays the groundwork for AI agents to work in tandem, each with its own specialized role and objectives.
* **Role-Based Agent Design**: Define and customize agents with specific roles, goals, and the tools they need to succeed.
* **Inter-Agent Delegation**: Agents are now equipped to autonomously delegate tasks, enabling dynamic distribution of workload among the team.
* **Task Management**: Create and assign tasks dynamically with the flexibility to specify the tools needed for each task.
* **Sequential Processes**: Set up your agents to tackle tasks one after the other, ensuring organized and predictable workflows.
* **Documentation**: Start exploring CrewAI with our initial documentation that guides you through the setup and use of the framework.

## Enhancements

* Detailed API documentation for the `Agent`, `Task`, `Crew`, and `Process` classes.
* Examples and tutorials to help you build your first CrewAI application.
* Basic setup for collaborative and delegation mechanisms among agents.

## Known Issues

* As this is the first release, there may be undiscovered bugs and areas for optimization. We encourage the community to report any issues found during use.

## Upcoming Features

* **Advanced Process Management**: In future releases, we will introduce more complex processes for task management including consensual and hierarchical workflows.
# Agents Source: https://docs.crewai.com/en/concepts/agents Detailed guide on creating and managing agents within the CrewAI framework. ## Overview of an Agent In the CrewAI framework, an `Agent` is an autonomous unit that can: * Perform specific tasks * Make decisions based on its role and goal * Use tools to accomplish objectives * Communicate and collaborate with other agents * Maintain memory of interactions * Delegate tasks when allowed Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content. CrewAI Enterprise includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time. Visual Agent Builder Screenshot The Visual Agent Builder enables: * Intuitive agent configuration with form-based interfaces * Real-time testing and validation * Template library with pre-configured agent types * Easy customization of agent attributes and behaviors ## Agent Attributes | Attribute | Parameter | Type | Description | | :-------------------------------------- | :----------------------- | :------------------------------------ | :------------------------------------------------------------------------------------------------------- | | **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. | | **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. | | **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. | | **LLM** *(optional)* | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". | | **Tools** *(optional)* | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. | | **Function Calling LLM** *(optional)* | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. | | **Max Iterations** *(optional)* | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. | | **Max RPM** *(optional)* | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. | | **Max Execution Time** *(optional)* | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. | | **Verbose** *(optional)* | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. | | **Allow Delegation** *(optional)* | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. | | **Step Callback** *(optional)* | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. | | **Cache** *(optional)* | `cache` | `bool` | Enable caching for tool usage. Default is True. | | **System Template** *(optional)* | `system_template` | `Optional[str]` | Custom system prompt template for the agent. | | **Prompt Template** *(optional)* | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. | | **Response Template** *(optional)* | `response_template` | `Optional[str]` | Custom response template for the agent. | | **Allow Code Execution** *(optional)* | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. 
| | **Max Retry Limit** *(optional)* | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. | | **Respect Context Window** *(optional)* | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. | | **Code Execution Mode** *(optional)* | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. | | **Multimodal** *(optional)* | `multimodal` | `bool` | Whether the agent supports multimodal capabilities. Default is False. | | **Inject Date** *(optional)* | `inject_date` | `bool` | Whether to automatically inject the current date into tasks. Default is False. | | **Date Format** *(optional)* | `date_format` | `str` | Format string for date when inject\_date is enabled. Default is "%Y-%m-%d" (ISO format). | | **Reasoning** *(optional)* | `reasoning` | `bool` | Whether the agent should reflect and create a plan before executing a task. Default is False. | | **Max Reasoning Attempts** *(optional)* | `max_reasoning_attempts` | `Optional[int]` | Maximum number of reasoning attempts before executing the task. If None, will try until ready. | | **Embedder** *(optional)* | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. | | **Knowledge Sources** *(optional)* | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. | | **Use System Prompt** *(optional)* | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. | ## Creating Agents There are two ways to create agents in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**. ### YAML Configuration (Recommended) Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects. After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements. Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew: ```python Code crew.kickoff(inputs={'topic': 'AI Agents'}) ``` Here's an example of how to configure agents using YAML: ```yaml agents.yaml # src/latest_ai_development/config/agents.yaml researcher: role: > {topic} Senior Data Researcher goal: > Uncover cutting-edge developments in {topic} backstory: > You're a seasoned researcher with a knack for uncovering the latest developments in {topic}. Known for your ability to find the most relevant information and present it in a clear and concise manner. reporting_analyst: role: > {topic} Reporting Analyst goal: > Create detailed reports based on {topic} data analysis and research findings backstory: > You're a meticulous analyst with a keen eye for detail. You're known for your ability to turn complex data into clear and concise reports, making it easy for others to understand and act on the information you provide. 
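# Note (illustrative comment, not part of the original template): placeholders such as
# {topic} above are filled from the values passed to crew.kickoff(inputs={...}) at runtime,
# as described before this example.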
``` To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`: ```python Code # src/latest_ai_development/crew.py from crewai import Agent, Crew, Process from crewai.project import CrewBase, agent, crew from crewai_tools import SerperDevTool @CrewBase class LatestAiDevelopmentCrew(): """LatestAiDevelopment crew""" agents_config = "config/agents.yaml" @agent def researcher(self) -> Agent: return Agent( config=self.agents_config['researcher'], # type: ignore[index] verbose=True, tools=[SerperDevTool()] ) @agent def reporting_analyst(self) -> Agent: return Agent( config=self.agents_config['reporting_analyst'], # type: ignore[index] verbose=True ) ``` The names you use in your YAML files (`agents.yaml`) should match the method names in your Python code. ### Direct Code Definition You can create agents directly in code by instantiating the `Agent` class. Here's a comprehensive example showing all available parameters: ```python Code from crewai import Agent from crewai_tools import SerperDevTool # Create an agent with all available parameters agent = Agent( role="Senior Data Scientist", goal="Analyze and interpret complex datasets to provide actionable insights", backstory="With over 10 years of experience in data science and machine learning, " "you excel at finding patterns in complex datasets.", llm="gpt-4", # Default: OPENAI_MODEL_NAME or "gpt-4" function_calling_llm=None, # Optional: Separate LLM for tool calling verbose=False, # Default: False allow_delegation=False, # Default: False max_iter=20, # Default: 20 iterations max_rpm=None, # Optional: Rate limit for API calls max_execution_time=None, # Optional: Maximum execution time in seconds max_retry_limit=2, # Default: 2 retries on error allow_code_execution=False, # Default: False code_execution_mode="safe", # Default: "safe" (options: "safe", "unsafe") respect_context_window=True, # Default: True use_system_prompt=True, # Default: True multimodal=False, # Default: False inject_date=False, # Default: False date_format="%Y-%m-%d", # Default: ISO format reasoning=False, # Default: False max_reasoning_attempts=None, # Default: None tools=[SerperDevTool()], # Optional: List of tools knowledge_sources=None, # Optional: List of knowledge sources embedder=None, # Optional: Custom embedder configuration system_template=None, # Optional: Custom system prompt template prompt_template=None, # Optional: Custom prompt template response_template=None, # Optional: Custom response template step_callback=None, # Optional: Callback function for monitoring ) ``` Let's break down some key parameter combinations for common use cases: #### Basic Research Agent ```python Code research_agent = Agent( role="Research Analyst", goal="Find and summarize information about specific topics", backstory="You are an experienced researcher with attention to detail", tools=[SerperDevTool()], verbose=True # Enable logging for debugging ) ``` #### Code Development Agent ```python Code dev_agent = Agent( role="Senior Python Developer", goal="Write and debug Python code", backstory="Expert Python developer with 10 years of experience", allow_code_execution=True, code_execution_mode="safe", # Uses Docker for safety max_execution_time=300, # 5-minute timeout max_retry_limit=3 # More retries for complex code tasks ) ``` #### Long-Running Analysis Agent ```python Code analysis_agent = Agent( role="Data Analyst", goal="Perform deep analysis of large datasets", backstory="Specialized in big data analysis and pattern recognition", memory=True, 
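    # memory keeps what the agent has learned available across later tasks
    # (see the "Agent Memory and Context" section below)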
    respect_context_window=True,
    max_rpm=10,  # Limit API calls
    function_calling_llm="gpt-4o-mini"  # Cheaper model for tool calls
)
```

#### Custom Template Agent

```python Code
custom_agent = Agent(
    role="Customer Service Representative",
    goal="Assist customers with their inquiries",
    backstory="Experienced in customer support with a focus on satisfaction",
    system_template="""<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>""",
    prompt_template="""<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>""",
    response_template="""<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>""",
)
```

#### Date-Aware Agent with Reasoning

```python Code
strategic_agent = Agent(
    role="Market Analyst",
    goal="Track market movements with precise date references and strategic planning",
    backstory="Expert in time-sensitive financial analysis and strategic reporting",
    inject_date=True,  # Automatically inject current date into tasks
    date_format="%B %d, %Y",  # Format as "May 21, 2025"
    reasoning=True,  # Enable strategic planning
    max_reasoning_attempts=2,  # Limit planning iterations
    verbose=True
)
```

#### Reasoning Agent

```python Code
reasoning_agent = Agent(
    role="Strategic Planner",
    goal="Analyze complex problems and create detailed execution plans",
    backstory="Expert strategic planner who methodically breaks down complex challenges",
    reasoning=True,  # Enable reasoning and planning
    max_reasoning_attempts=3,  # Limit reasoning attempts
    max_iter=30,  # Allow more iterations for complex planning
    verbose=True
)
```

#### Multimodal Agent

```python Code
multimodal_agent = Agent(
    role="Visual Content Analyst",
    goal="Analyze and process both text and visual content",
    backstory="Specialized in multimodal analysis combining text and image understanding",
    multimodal=True,  # Enable multimodal capabilities
    verbose=True
)
```

### Parameter Details

#### Critical Parameters

* `role`, `goal`, and `backstory` are required and shape the agent's behavior
* `llm` determines the language model used (default: OpenAI's GPT-4)

#### Memory and Context

* `memory`: Enable to maintain conversation history
* `respect_context_window`: Prevents token limit issues
* `knowledge_sources`: Add domain-specific knowledge bases

#### Execution Control

* `max_iter`: Maximum attempts before giving the best answer
* `max_execution_time`: Timeout in seconds
* `max_rpm`: Rate limiting for API calls
* `max_retry_limit`: Retries on error

#### Code Execution

* `allow_code_execution`: Must be True to run code
* `code_execution_mode`:
  * `"safe"`: Uses Docker (recommended for production)
  * `"unsafe"`: Direct execution (use only in trusted environments)

Safe mode runs code in a default Docker image. If you want to configure the Docker image, check out the Code Interpreter Tool in the Tools section and pass that tool to the agent via its `tools` parameter.

#### Advanced Features

* `multimodal`: Enable multimodal capabilities for processing text and visual content
* `reasoning`: Enable agent to reflect and create plans before executing tasks
* `inject_date`: Automatically inject current date into task descriptions

#### Templates

* `system_template`: Defines the agent's core behavior
* `prompt_template`: Structures the input format
* `response_template`: Formats agent responses

When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
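To make the template parameters more concrete, here is a minimal sketch (not taken from the official examples) that defines all three templates and references the `{role}`, `{goal}`, and `{backstory}` variables described in the tip below. The agent name and template wording are illustrative assumptions:

```python Code
from crewai import Agent

# Minimal sketch: both system_template and prompt_template are defined,
# and response_template is added for consistent output formatting.
# {role}, {goal}, and {backstory} are populated automatically at execution time.
templated_agent = Agent(
    role="Support Engineer",
    goal="Resolve customer tickets with clear, actionable answers",
    backstory="Support specialist with deep product knowledge",
    system_template="""You are {role}. {backstory}
Your goal: {goal}.
{{ .System }}""",
    prompt_template="""User request:
{{ .Prompt }}""",
    response_template="""Answer:
{{ .Response }}""",
)
```

Keeping the `{{ .System }}`, `{{ .Prompt }}`, and `{{ .Response }}` placeholders mirrors the Custom Template Agent example above, so the content CrewAI generates is still injected into your custom wrapper.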
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution. ## Agent Tools Agents can be equipped with various tools to enhance their capabilities. CrewAI supports tools from: * [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) * [LangChain Tools](https://python.langchain.com/docs/integrations/tools) Here's how to add tools to an agent: ```python Code from crewai import Agent from crewai_tools import SerperDevTool, WikipediaTools # Create tools search_tool = SerperDevTool() wiki_tool = WikipediaTools() # Add tools to agent researcher = Agent( role="AI Technology Researcher", goal="Research the latest AI developments", tools=[search_tool, wiki_tool], verbose=True ) ``` ## Agent Memory and Context Agents can maintain memory of their interactions and use context from previous tasks. This is particularly useful for complex workflows where information needs to be retained across multiple tasks. ```python Code from crewai import Agent analyst = Agent( role="Data Analyst", goal="Analyze and remember complex data patterns", memory=True, # Enable memory verbose=True ) ``` When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks. ## Context Window Management CrewAI includes sophisticated automatic context window management to handle situations where conversations exceed the language model's token limits. This powerful feature is controlled by the `respect_context_window` parameter. ### How Context Window Management Works When an agent's conversation history grows too large for the LLM's context window, CrewAI automatically detects this situation and can either: 1. **Automatically summarize content** (when `respect_context_window=True`) 2. **Stop execution with an error** (when `respect_context_window=False`) ### Automatic Context Handling (`respect_context_window=True`) This is the **default and recommended setting** for most use cases. When enabled, CrewAI will: ```python Code # Agent with automatic context management (default) smart_agent = Agent( role="Research Analyst", goal="Analyze large documents and datasets", backstory="Expert at processing extensive information", respect_context_window=True, # 🔑 Default: auto-handle context limits verbose=True ) ``` **What happens when context limits are exceeded:** * ⚠️ **Warning message**: `"Context length exceeded. Summarizing content to fit the model context window."` * 🔄 **Automatic summarization**: CrewAI intelligently summarizes the conversation history * ✅ **Continued execution**: Task execution continues seamlessly with the summarized context * 📝 **Preserved information**: Key information is retained while reducing token count ### Strict Context Limits (`respect_context_window=False`) When you need precise control and prefer execution to stop rather than lose any information: ```python Code # Agent with strict context limits strict_agent = Agent( role="Legal Document Reviewer", goal="Provide precise legal analysis without information loss", backstory="Legal expert requiring complete context for accurate analysis", respect_context_window=False, # ❌ Stop execution on context limit verbose=True ) ``` **What happens when context limits are exceeded:** * ❌ **Error message**: `"Context length exceeded. 
Consider using smaller text or RAG tools from crewai_tools."` * 🛑 **Execution stops**: Task execution halts immediately * 🔧 **Manual intervention required**: You need to modify your approach ### Choosing the Right Setting #### Use `respect_context_window=True` (Default) when: * **Processing large documents** that might exceed context limits * **Long-running conversations** where some summarization is acceptable * **Research tasks** where general context is more important than exact details * **Prototyping and development** where you want robust execution ```python Code # Perfect for document processing document_processor = Agent( role="Document Analyst", goal="Extract insights from large research papers", backstory="Expert at analyzing extensive documentation", respect_context_window=True, # Handle large documents gracefully max_iter=50, # Allow more iterations for complex analysis verbose=True ) ``` #### Use `respect_context_window=False` when: * **Precision is critical** and information loss is unacceptable * **Legal or medical tasks** requiring complete context * **Code review** where missing details could introduce bugs * **Financial analysis** where accuracy is paramount ```python Code # Perfect for precision tasks precision_agent = Agent( role="Code Security Auditor", goal="Identify security vulnerabilities in code", backstory="Security expert requiring complete code context", respect_context_window=False, # Prefer failure over incomplete analysis max_retry_limit=1, # Fail fast on context issues verbose=True ) ``` ### Alternative Approaches for Large Data When dealing with very large datasets, consider these strategies: #### 1. Use RAG Tools ```python Code from crewai_tools import RagTool # Create RAG tool for large document processing rag_tool = RagTool() rag_agent = Agent( role="Research Assistant", goal="Query large knowledge bases efficiently", backstory="Expert at using RAG tools for information retrieval", tools=[rag_tool], # Use RAG instead of large context windows respect_context_window=True, verbose=True ) ``` #### 2. Use Knowledge Sources ```python Code # Use knowledge sources instead of large prompts knowledge_agent = Agent( role="Knowledge Expert", goal="Answer questions using curated knowledge", backstory="Expert at leveraging structured knowledge sources", knowledge_sources=[your_knowledge_sources], # Pre-processed knowledge respect_context_window=True, verbose=True ) ``` ### Context Window Best Practices 1. **Monitor Context Usage**: Enable `verbose=True` to see context management in action 2. **Design for Efficiency**: Structure tasks to minimize context accumulation 3. **Use Appropriate Models**: Choose LLMs with context windows suitable for your tasks 4. **Test Both Settings**: Try both `True` and `False` to see which works better for your use case 5. 
**Combine with RAG**: Use RAG tools for very large datasets instead of relying solely on context windows ### Troubleshooting Context Issues **If you're getting context limit errors:** ```python Code # Quick fix: Enable automatic handling agent.respect_context_window = True # Better solution: Use RAG tools for large data from crewai_tools import RagTool agent.tools = [RagTool()] # Alternative: Break tasks into smaller pieces # Or use knowledge sources instead of large prompts ``` **If automatic summarization loses important information:** ```python Code # Disable auto-summarization and use RAG instead agent = Agent( role="Detailed Analyst", goal="Maintain complete information accuracy", backstory="Expert requiring full context", respect_context_window=False, # No summarization tools=[RagTool()], # Use RAG for large data verbose=True ) ``` The context window management feature works automatically in the background. You don't need to call any special functions - just set `respect_context_window` to your preferred behavior and CrewAI handles the rest! ## Direct Agent Interaction with `kickoff()` Agents can be used directly without going through a task or crew workflow using the `kickoff()` method. This provides a simpler way to interact with an agent when you don't need the full crew orchestration capabilities. ### How `kickoff()` Works The `kickoff()` method allows you to send messages directly to an agent and get a response, similar to how you would interact with an LLM but with all the agent's capabilities (tools, reasoning, etc.). ```python Code from crewai import Agent from crewai_tools import SerperDevTool # Create an agent researcher = Agent( role="AI Technology Researcher", goal="Research the latest AI developments", tools=[SerperDevTool()], verbose=True ) # Use kickoff() to interact directly with the agent result = researcher.kickoff("What are the latest developments in language models?") # Access the raw response print(result.raw) ``` ### Parameters and Return Values | Parameter | Type | Description | | :---------------- | :--------------------------------- | :------------------------------------------------------------------------ | | `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content | | `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output | The method returns a `LiteAgentOutput` object with the following properties: * `raw`: String containing the raw output text * `pydantic`: Parsed Pydantic model (if a `response_format` was provided) * `agent_role`: Role of the agent that produced the output * `usage_metrics`: Token usage metrics for the execution ### Structured Output You can get structured output by providing a Pydantic model as the `response_format`: ```python Code from pydantic import BaseModel from typing import List class ResearchFindings(BaseModel): main_points: List[str] key_technologies: List[str] future_predictions: str # Get structured output result = researcher.kickoff( "Summarize the latest developments in AI for 2025", response_format=ResearchFindings ) # Access structured data print(result.pydantic.main_points) print(result.pydantic.future_predictions) ``` ### Multiple Messages You can also provide a conversation history as a list of message dictionaries: ```python Code messages = [ {"role": "user", "content": "I need information about large language models"}, {"role": "assistant", "content": "I'd be happy to help with that! 
What specifically would you like to know?"}, {"role": "user", "content": "What are the latest developments in 2025?"} ] result = researcher.kickoff(messages) ``` ### Async Support An asynchronous version is available via `kickoff_async()` with the same parameters: ```python Code import asyncio async def main(): result = await researcher.kickoff_async("What are the latest developments in AI?") print(result.raw) asyncio.run(main()) ``` The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler execution flow while preserving all of the agent's configuration (role, goal, backstory, tools, etc.). ## Important Considerations and Best Practices ### Security and Code Execution * When using `allow_code_execution`, be cautious with user input and always validate it * Use `code_execution_mode: "safe"` (Docker) in production environments * Consider setting appropriate `max_execution_time` limits to prevent infinite loops ### Performance Optimization * Use `respect_context_window: true` to prevent token limit issues * Set appropriate `max_rpm` to avoid rate limiting * Enable `cache: true` to improve performance for repetitive tasks * Adjust `max_iter` and `max_retry_limit` based on task complexity ### Memory and Context Management * Leverage `knowledge_sources` for domain-specific information * Configure `embedder` when using custom embedding models * Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior ### Advanced Features * Enable `reasoning: true` for agents that need to plan and reflect before executing complex tasks * Set appropriate `max_reasoning_attempts` to control planning iterations (None for unlimited attempts) * Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks * Customize the date format with `date_format` using standard Python datetime format codes * Enable `multimodal: true` for agents that need to process both text and visual content ### Agent Collaboration * Enable `allow_delegation: true` when agents need to work together * Use `step_callback` to monitor and log agent interactions * Consider using different LLMs for different purposes: * Main `llm` for complex reasoning * `function_calling_llm` for efficient tool usage ### Date Awareness and Reasoning * Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks * Customize the date format with `date_format` using standard Python datetime format codes * Valid format codes include: %Y (year), %m (month), %d (day), %B (full month name), etc. * Invalid date formats will be logged as warnings and will not modify the task description * Enable `reasoning: true` for complex tasks that benefit from upfront planning and reflection ### Model Compatibility * Set `use_system_prompt: false` for older models that don't support system messages * Ensure your chosen `llm` supports the features you need (like function calling) ## Troubleshooting Common Issues 1. **Rate Limiting**: If you're hitting API rate limits: * Implement appropriate `max_rpm` * Use caching for repetitive operations * Consider batching requests 2. **Context Window Errors**: If you're exceeding context limits: * Enable `respect_context_window` * Use more efficient prompts * Clear agent memory periodically 3. **Code Execution Issues**: If code execution fails: * Verify Docker is installed for safe mode * Check execution permissions * Review code sandbox settings 4. 
**Memory Issues**: If agent responses seem inconsistent: * Check knowledge source configuration * Review conversation history management Remember that agents are most effective when configured according to their specific use case. Take time to understand your requirements and adjust these parameters accordingly. # CLI Source: https://docs.crewai.com/en/concepts/cli Learn how to use the CrewAI CLI to interact with CrewAI. Since release 0.140.0, CrewAI Enterprise started a process of migrating their login provider. As such, the authentication flow via CLI was updated. Users that use Google to login, or that created their account after July 3rd, 2025 will be unable to log in with older versions of the `crewai` library. ## Overview The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows. ## Installation To use the CrewAI CLI, make sure you have CrewAI installed: ```shell Terminal pip install crewai ``` ## Basic Usage The basic structure of a CrewAI CLI command is: ```shell Terminal crewai [COMMAND] [OPTIONS] [ARGUMENTS] ``` ## Available Commands ### 1. Create Create a new crew or flow. ```shell Terminal crewai create [OPTIONS] TYPE NAME ``` * `TYPE`: Choose between "crew" or "flow" * `NAME`: Name of the crew or flow Example: ```shell Terminal crewai create crew my_new_crew crewai create flow my_new_flow ``` ### 2. Version Show the installed version of CrewAI. ```shell Terminal crewai version [OPTIONS] ``` * `--tools`: (Optional) Show the installed version of CrewAI tools Example: ```shell Terminal crewai version crewai version --tools ``` ### 3. Train Train the crew for a specified number of iterations. ```shell Terminal crewai train [OPTIONS] ``` * `-n, --n_iterations INTEGER`: Number of iterations to train the crew (default: 5) * `-f, --filename TEXT`: Path to a custom file for training (default: "trained\_agents\_data.pkl") Example: ```shell Terminal crewai train -n 10 -f my_training_data.pkl ``` ### 4. Replay Replay the crew execution from a specific task. ```shell Terminal crewai replay [OPTIONS] ``` * `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks Example: ```shell Terminal crewai replay -t task_123456 ``` ### 5. Log-tasks-outputs Retrieve your latest crew\.kickoff() task outputs. ```shell Terminal crewai log-tasks-outputs ``` ### 6. Reset-memories Reset the crew memories (long, short, entity, latest\_crew\_kickoff\_outputs). ```shell Terminal crewai reset-memories [OPTIONS] ``` * `-l, --long`: Reset LONG TERM memory * `-s, --short`: Reset SHORT TERM memory * `-e, --entities`: Reset ENTITIES memory * `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS * `-kn, --knowledge`: Reset KNOWLEDGE storage * `-akn, --agent-knowledge`: Reset AGENT KNOWLEDGE storage * `-a, --all`: Reset ALL memories Example: ```shell Terminal crewai reset-memories --long --short crewai reset-memories --all ``` ### 7. Test Test the crew and evaluate the results. ```shell Terminal crewai test [OPTIONS] ``` * `-n, --n_iterations INTEGER`: Number of iterations to test the crew (default: 3) * `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini") Example: ```shell Terminal crewai test -n 5 -m gpt-3.5-turbo ``` ### 8. Run Run the crew or flow. ```shell Terminal crewai run ``` Starting from version 0.103.0, the `crewai run` command can be used to run both standard crews and flows. 
For flows, it automatically detects the type from pyproject.toml and runs the appropriate command. This is now the recommended way to run both crews and flows. Make sure to run these commands from the directory where your CrewAI project is set up. Some commands may require additional configuration or setup within your project structure.

### 9. Chat

Starting in version `0.98.0`, when you run the `crewai chat` command, you start an interactive session with your crew. The AI assistant will guide you by asking for necessary inputs to execute the crew. Once all inputs are provided, the crew will execute its tasks. After receiving the results, you can continue interacting with the assistant for further instructions or questions.

```shell Terminal
crewai chat
```

Ensure you execute these commands from your CrewAI project's root directory.

IMPORTANT: Set the `chat_llm` property in your `crew.py` file to enable this command.

```python
@crew
def crew(self) -> Crew:
    return Crew(
        agents=self.agents,
        tasks=self.tasks,
        process=Process.sequential,
        verbose=True,
        chat_llm="gpt-4o",  # LLM for chat orchestration
    )
```

### 10. Deploy

Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).

* **Authentication**: You need to be authenticated to deploy to CrewAI Enterprise. You can log in or create an account with:

  ```shell Terminal
  crewai login
  ```

* **Create a deployment**: Once you are authenticated, you can create a deployment for your crew or flow from the root of your local project.

  ```shell Terminal
  crewai deploy create
  ```

  * Reads your local project configuration.
  * Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.

### 11. Organization Management

Manage your CrewAI Enterprise organizations.

```shell Terminal
crewai org [COMMAND] [OPTIONS]
```

#### Commands:

* `list`: List all organizations you belong to

  ```shell Terminal
  crewai org list
  ```

* `current`: Display your currently active organization

  ```shell Terminal
  crewai org current
  ```

* `switch`: Switch to a specific organization

  ```shell Terminal
  crewai org switch
  ```

You must be authenticated to CrewAI Enterprise to use these organization management commands.

* **Create a deployment** (continued):
  * Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).

* **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.

  ```shell Terminal
  crewai deploy push
  ```

  * Initiates the deployment process on the CrewAI Enterprise platform.
  * Upon successful initiation, it will output the `Deployment created successfully!` message along with the Deployment Name and a unique Deployment ID (UUID).

* **Deployment Status**: You can check the status of your deployment with:

  ```shell Terminal
  crewai deploy status
  ```

  This fetches the latest deployment status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).

* **Deployment Logs**: You can check the logs of your deployment with:

  ```shell Terminal
  crewai deploy logs
  ```

  This streams the deployment logs to your terminal.

* **List deployments**: You can list all your deployments with:

  ```shell Terminal
  crewai deploy list
  ```

  This lists all your deployments.
* **Delete a deployment**: You can delete a deployment with: ```shell Terminal crewai deploy remove ``` This deletes the deployment from the CrewAI Enterprise platform. * **Help Command**: You can get help with the CLI with: ```shell Terminal crewai deploy --help ``` This shows the help message for the CrewAI Deploy CLI. Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.