Product updates, improvements, and bug fixes for CrewAI
- Deprecated the `rag` package and current implementation
- Fixed `None` handling
- Updated `README.md`
- Migrated to `uv` and updated dev tooling
- Moved the `events` module to `crewai.events` (see the import sketch below)
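A minimal sketch of the new import path after the move. The changelog only confirms the `crewai.events` location; the specific names (`crewai_event_bus`, `CrewKickoffCompletedEvent`) are assumptions based on CrewAI's documented event-listener pattern.

```python
# Assumed re-exports at the new location; these previously lived under
# crewai.utilities.events.
from crewai.events import CrewKickoffCompletedEvent, crewai_event_bus

@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_crew_completed(source, event):
    # Fires once per finished crew run.
    print(f"Crew finished: {event.output}")
```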
## v0.175.0

- Fixed the `tool` section during `crewai update`
- Unpinned `openai>=1.13.3` due to fixed import issues
- Improved `Flow` listener resumability for HITL and cyclic flows
- Reworked `PlusAPI` and `TraceBatchManager`
- Improved handling of `Flow.start()` methods
- Added `crewai config reset` to clear tokens
- Added `crewai_trigger_payload` auto-injection
- Improved `crewai login`
- Added `Task.max_retries` (see the guardrail sketch below)
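A hedged sketch of `Task.max_retries`. The assumption here, not stated in the entry itself, is that it caps how many times a task is re-run when its guardrail rejects the output; the `(passed, data)` guardrail contract follows CrewAI's documented task guardrails.

```python
from crewai import Task
from crewai.tasks.task_output import TaskOutput

def under_100_words(output: TaskOutput):
    # Guardrails return (passed, data): the validated result on success,
    # or an error message fed back to the agent on failure.
    words = output.raw.split()
    if len(words) <= 100:
        return (True, output.raw)
    return (False, f"Summary is {len(words)} words; keep it under 100.")

summary_task = Task(
    description="Summarize the attached report.",
    expected_output="A summary under 100 words.",
    guardrail=under_100_words,
    max_retries=2,  # assumed: re-run the task at most twice on guardrail failure
)
```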
- Fixed `XMLSearchTool` by converting config values to strings for configparser
- Fixed a `PytestUnraisableExceptionWarning`
- Fixed `db_storage_path` handling for `chromadb`
- Pinned `openai<1.100.0` due to a `ResponseTextConfigParam` import issue
- Fixed `ExternalMemory` metadata
- Renamed `inject_trigger_input` to `allow_crewai_trigger_context` for `crewai_trigger_payload` injection
- Fixed `agent_id`-linked memory entries in Mem0
- Added the `enterprise configure` command to the CLI for streamlined enterprise setup
- Improved handling of `BaseModel` entries
- Switched to `partition()` for performance
- Bumped a pinned dependency to `1.74.9`
- Added the `crewai config` CLI command group with tests
- Fixed `crew.name` handling
- Added a `connect_timeout` attribute
- Removed `crewai signup` references and replaced them with `crewai login`
- Fixed `agent_id` handling
- Updated the `Flow` class to support custom flow names
- Handled the `stop` parameter for LLM models automatically
- Improved the `save` method and updated related test cases
- Deprecated `UserMemory`
- Added `SerperScrapeWebsiteTool` and reorganized the Serper section
- Switched to the `BaseLLM` class instead of `LLM`
- Added the `create_directory` parameter in the `Task` class (see the sketch after this list)
- Added `AgentEvaluator` for `Agent` and `LiteAgent`
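For `create_directory`, a small sketch under the assumption that the flag makes the task create the parent folder of `output_file` when it does not already exist.

```python
from crewai import Task

report = Task(
    description="Write the quarterly summary.",
    expected_output="A markdown summary.",
    output_file="reports/q3/summary.md",
    create_directory=True,  # assumed: creates reports/q3/ if missing
)
```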
- Added `neatlogs` integration
- Documented `guardrail` attributes and usage examples
- Updated `neatlogs` documentation
- Documented `Agent.kickoff` usage
- Improved the `Agent.kickoff` method
- Refactored `RAGStorage`
- Added `MemoryEvents` to monitor memory usage (see the listener sketch after this list)
- Extended `MemoryEvents` to `Task`
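A sketch of monitoring memory usage through `MemoryEvents`. The concrete event class names (`MemorySaveCompletedEvent`, `MemoryQueryCompletedEvent`) and the `crewai.events` import path are assumptions based on CrewAI's event-listener docs, not on this entry.

```python
from crewai.events import (
    BaseEventListener,
    MemoryQueryCompletedEvent,
    MemorySaveCompletedEvent,
)

class MemoryUsageListener(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemorySaveCompletedEvent)
        def on_save(source, event):
            print("memory saved")

        @crewai_event_bus.on(MemoryQueryCompletedEvent)
        def on_query(source, event):
            print("memory queried")

# Instantiating the listener registers its handlers on the event bus.
memory_listener = MemoryUsageListener()
```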
- Removed `mkdocs` from project dependencies
- Fixed `mem0` storage
- Updated `Tool` attributes
- Fixed `CrewBase` issues
- Updated `CrewBase` usage in `quickstart.mdx`
- Added `LLMGuardrail` events
- Included `manager_agent` tokens in `usage_metrics` from kickoff
- Added `LiteAgent` with Guardrail integration
- Upgraded `LiteLLM` to support the latest OpenAI version
- Updated the `UV` version for the Tool repository
- Added `result_as_answer` parameter support in the `@tool` decorator (see the sketch after this list)
- Added `WeaviateVectorSearchTool` documentation
- Fixed `CodeInterpreterTool`
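For `result_as_answer`, a sketch assuming the documented behavior: the tool's raw return value becomes the agent's final answer instead of being passed back through the LLM. The import path shown matches current releases.

```python
from crewai.tools import tool

@tool("Price lookup", result_as_answer=True)
def price_lookup(ticker: str) -> str:
    """Return the latest price for a ticker (stubbed for the example)."""
    return f"{ticker}: 123.45 USD"
```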
- Added direct agent execution (`Agent(...).kickoff()`) — see the sketch below
- Improved `crewai login`
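Direct agent execution in one step. The shape of the returned object (a `.raw` string) is an assumption based on current `Agent.kickoff` docs.

```python
from crewai import Agent

researcher = Agent(
    role="Researcher",
    goal="Answer questions concisely",
    backstory="A careful, well-read analyst.",
)

# Run the agent standalone, without building a crew.
result = researcher.kickoff("Summarize the main tradeoffs of RAG vs fine-tuning.")
print(result.raw)
```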
- Added `before_kickoff` and `after_kickoff` crew callbacks (see the hook sketch after this list)
- Ran `uv sync` as part of `crewai tool install <tool>`
- Added the `crewai create flow` command
- Added the `crewai tool create <tool>` command
- Fixed `kickoff_for_each_async`
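A sketch of the crew callbacks using the `@CrewBase` decorator pattern from `crewai.project`. The hook names match the entry; the surrounding class scaffolding is assumed for illustration.

```python
from crewai.project import CrewBase, after_kickoff, before_kickoff

@CrewBase
class ReportCrew:
    @before_kickoff
    def prepare_inputs(self, inputs):
        # Runs before the crew starts; may enrich or validate inputs.
        inputs["company"] = inputs.get("company", "Acme")
        return inputs

    @after_kickoff
    def log_results(self, result):
        # Runs after the crew finishes; may post-process the result.
        print(f"Crew finished with: {result.raw}")
        return result
```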
- Added the `crewai install` CLI
- Added the `crewai deploy` CLI
- Added `planning_llm` support
- Added crew training via `crewai train -n X`
- Updated the `crewai create` CLI to use the new version
- Added `kickoff_for_each`, `kickoff_async`, and `kickoff_for_each_async` methods for better control over the kickoff process (sketch below)
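A sketch of the kickoff variants. The agent/task/crew construction is boilerplate assumed for the example; the method signatures follow the documented API.

```python
import asyncio

from crewai import Agent, Crew, Task

writer = Agent(role="Writer", goal="Write notes", backstory="Brief and clear.")
note = Task(
    description="Write a short note about {topic}.",
    expected_output="One paragraph.",
    agent=writer,
)
crew = Crew(agents=[writer], tasks=[note])

# Synchronous fan-out: one run per input dict.
results = crew.kickoff_for_each(inputs=[{"topic": "AI"}, {"topic": "Rust"}])

async def main():
    # Single async run, then an async fan-out over many inputs.
    single = await crew.kickoff_async(inputs={"topic": "Go"})
    many = await crew.kickoff_for_each_async(inputs=[{"topic": "C"}, {"topic": "Zig"}])
    print(single.raw, len(many))

asyncio.run(main())
```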
- Added `usage_metrics` to the full output of a crew
- Added `max_execution_time` support
- Added memory to crews: pass `memory=True` to your crew and it works transparently, making outcomes better and more reliable; it is disabled by default for now
- Added the `crewai create` command
- Added `function_calling_llm` to Agent or Crew
- Check LLM usage after `kickoff` with `crew.usage_metrics`
- Support for passing inputs into kickoff: `crew.kickoff(inputs={'key': 'value'})` (see the combined sketch after this list)
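A combined sketch of three entries above: enabling crew memory, passing inputs into kickoff (with the corrected keyword syntax), and reading usage metrics afterwards. The agent/task scaffolding is assumed.

```python
from crewai import Agent, Crew, Task

analyst = Agent(role="Analyst", goal="Explain {key}", backstory="Terse.")
explain = Task(
    description="Explain {key} in two sentences.",
    expected_output="Two sentences.",
    agent=analyst,
)

crew = Crew(agents=[analyst], tasks=[explain], memory=True)  # memory is opt-in
result = crew.kickoff(inputs={"key": "value"})  # inputs interpolate into {key}
print(crew.usage_metrics)  # aggregate LLM token usage for the run
```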
- Made `crewai_tools` an optional dependency
- Support for `function_calling_llm` on both the entire crew and individual agents
- Get `full_output` from crew kickoff with all tasks' outputs
- Added a `step_callback` function for both Agents and Crews so you can get all intermediate steps (see the sketch after this list)
- Fixed a `max_iter` bug: the LLM is now properly forced to answer when it reaches the iteration limit
- Improved `CrewAgentOutputParser`
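A sketch of `step_callback` on an agent; the exact payload type passed to the callback varies by version, so it is printed generically here.

```python
from crewai import Agent

def log_step(step):
    # Called once per intermediate agent step (thought, tool call, or output).
    print(f"step: {type(step).__name__}")

agent = Agent(
    role="Analyst",
    goal="Analyze data",
    backstory="Methodical.",
    step_callback=log_step,  # also accepted on Crew for crew-wide steps
)
```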
- Added `manager_llm` for the hierarchical process
- Improved `max_iter` and `max_rpm` logic
- Initial release with the `Agent`, `Task`, `Crew`, and `Process` classes.