Introduction

Testing is a crucial part of the development process: you need to verify that your crew performs as expected. With crewAI, you can easily test your crew and evaluate its performance using the built-in testing capabilities.

Using the Testing Feature

We added the CLI command crewai test to make it easy to test your crew. This command runs your crew for a specified number of iterations and reports detailed performance metrics. It accepts two optional parameters: n_iterations, which defaults to 2, and model, which defaults to gpt-4o-mini. For now, OpenAI is the only supported provider.

crewai test

If you want to run more iterations or use a different model, you can specify the parameters like this:

crewai test --n_iterations 5 --model gpt-4o

or using the short forms:

crewai test -n 5 -m gpt-4o

When you run the crewai test command, the crew will be executed for the specified number of iterations, and the performance metrics will be displayed at the end of the run.
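The same evaluation can also be triggered programmatically: crews expose a test method that mirrors the CLI. The sketch below is a minimal example, assuming a crew assembled in a hypothetical my_project.crew module and an openai_model_name keyword (the exact parameter names may differ across crewAI versions):

```python
from my_project.crew import LatestAiDevelopmentCrew  # hypothetical project module

# Build the crew exactly as you would before calling kickoff().
crew = LatestAiDevelopmentCrew().crew()

# Run the same evaluation the CLI performs: execute the crew
# n_iterations times and score each task with the judge model.
crew.test(
    n_iterations=5,
    openai_model_name="gpt-4o",  # assumed keyword; OpenAI is the only provider for now
)
```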

A table of scores will show the performance of the crew in terms of the following metrics:

| Tasks/Crew/Agents  | Run 1 | Run 2 | Avg. Total | Agents                           | Additional Info                |
|--------------------|-------|-------|------------|----------------------------------|--------------------------------|
| Task 1             | 9.0   | 9.5   | 9.2        | Professional Insights Researcher |                                |
| Task 2             | 9.0   | 10.0  | 9.5        | Company Profile Investigator     |                                |
| Task 3             | 9.0   | 9.0   | 9.0        | Automation Insights Specialist   |                                |
| Task 4             | 9.0   | 9.0   | 9.0        | Final Report Compiler            | Automation Insights Specialist |
| Crew               | 9.00  | 9.38  | 9.2        |                                  |                                |
| Execution Time (s) | 126   | 145   | 135        |                                  |                                |

The example above shows the test results for two runs of a crew with four tasks, with the average total score for each task and for the crew as a whole.
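If you want to reproduce the Avg. Total column yourself, it is simply the mean of the per-run scores. The snippet below is illustrative only (it is not part of crewAI) and uses the numbers from the table above:

```python
# Illustrative only: reproduce the "Avg. Total" column from the table above.
run_scores = {
    "Task 1": [9.0, 9.5],
    "Task 2": [9.0, 10.0],
    "Task 3": [9.0, 9.0],
    "Task 4": [9.0, 9.0],
    "Crew":   [9.00, 9.38],
}

for name, scores in run_scores.items():
    avg = sum(scores) / len(scores)
    print(f"{name}: {avg:.1f}")  # e.g. Task 1 -> 9.2, Crew -> 9.2, as displayed
```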