Learn how to use the CrewAI CLI to interact with CrewAI.
The CLI is installed with the crewai library.

Arguments for `crewai create`:

- `TYPE`: Choose between "crew" or "flow"
- `NAME`: Name of the crew or flow

Option for `crewai version`:

- `--tools`: (Optional) Show the installed version of CrewAI tools
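For example, a typical invocation might look like the following (the project names `my_crew` and `my_flow` are placeholders):

```bash
# Scaffold a new crew project (the name "my_crew" is a placeholder)
crewai create crew my_crew

# Scaffold a new flow project (the name "my_flow" is a placeholder)
crewai create flow my_flow

# Show the installed CrewAI version, including the tools package
crewai version --tools
```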
Options for `crewai train`:

- `-n, --n_iterations INTEGER`: Number of iterations to train the crew (default: 5)
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
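For instance, to train for more iterations and write the results to a custom file (the filename below is illustrative):

```bash
# Train the crew for 10 iterations instead of the default 5,
# saving the trained data to a custom pickle file
crewai train -n 10 -f my_training_data.pkl
```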
Option for `crewai replay`:

- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
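A replay invocation looks like this, where the placeholder must be replaced with a task ID from a previous run:

```bash
# Replay from a specific task onward; replace <task_id> with a real task ID
crewai replay -t <task_id>
```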
Options for `crewai reset-memories`:

- `-l, --long`: Reset LONG TERM memory
- `-s, --short`: Reset SHORT TERM memory
- `-e, --entities`: Reset ENTITIES memory
- `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS
- `-kn, --knowledge`: Reset KNOWLEDGE storage
- `-akn, --agent-knowledge`: Reset AGENT KNOWLEDGE storage
- `-a, --all`: Reset ALL memories
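The flags can be combined, for example:

```bash
# Clear only short-term and entity memories
crewai reset-memories --short --entities

# Wipe every memory store at once
crewai reset-memories --all
```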
Options for `crewai test`:

- `-n, --n_iterations INTEGER`: Number of iterations to test the crew (default: 3)
- `-m, --model TEXT`: LLM model to run the tests on the crew (default: "gpt-4o-mini")
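Using the options listed above, a test run against a specific model might look like:

```bash
# Run 5 test iterations of the crew against gpt-4o-mini
crewai test -n 5 -m gpt-4o-mini
```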
The `crewai run` command can be used to run both standard crews and flows. For flows, it automatically detects the type from pyproject.toml and runs the appropriate command. This is now the recommended way to run both crews and flows.
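From the project root, that is simply:

```bash
# Run the crew or flow defined in the current project directory
crewai run
```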
Starting in version 0.98.0, when you run the `crewai chat` command, you start an interactive session with your crew. The AI assistant will guide you by asking for necessary inputs to execute the crew. Once all inputs are provided, the crew will execute its tasks.
After receiving the results, you can continue interacting with the assistant for further instructions or questions.
You must set the `chat_llm` property in your `crew.py` file to enable this command.
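Assuming `chat_llm` is already configured in your project, the session is started from the project root:

```bash
# Start an interactive chat session with your crew
# (requires the chat_llm property to be set in crew.py)
crewai chat
```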
When you create a deployment, the CLI reads the environment variables (such as OPENAI_API_KEY, SERPER_API_KEY) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a .env file) before running this.
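A minimal sketch of that step, assuming the Enterprise deploy subcommands are available in your CLI version, would be:

```bash
# Create a new deployment; local environment variables such as
# OPENAI_API_KEY and SERPER_API_KEY are picked up and stored securely
crewai deploy create
```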
Organization management subcommands:

- `list`: List all organizations you belong to
- `current`: Display your currently active organization
- `switch`: Switch to a specific organization
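Assuming these subcommands live under `crewai org`, usage looks like the following (the organization ID is a placeholder):

```bash
# List the organizations your account belongs to
crewai org list

# Show which organization is currently active
crewai org current

# Switch to another organization; replace <organization_id> with a real ID
crewai org switch <organization_id>
```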
You can monitor the deployment as its status moves through stages (e.g., Building Images for Crew, Deploy Enqueued, Online).
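To watch the deployment move through those stages, you can poll its status (a sketch assuming a `crewai deploy status` subcommand in your CLI version):

```bash
# Check the current state of the deployment
crewai deploy status
```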
When you run the `crewai create crew` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
Once you’ve selected an LLM provider and model, you will be prompted for API keys.