# Connect CrewAI to LLMs
CrewAI uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers with a simple, unified interface.

By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to `"gpt-4o-mini"` if not set.
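For example, a minimal sketch of overriding that default from Python before any agents are created (an `export` in your shell works just as well); `gpt-4o` is only an illustrative choice:

```python
import os

# Must be set before agents are created; "gpt-4o" is an illustrative model name.
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o"
```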
You can easily configure your agents to use a different model or provider, as described in this guide.

## Supported Providers
LiteLLM supports a wide range of providers, including but not limited to:

- OpenAI
- Anthropic
- Google (Vertex AI, Gemini)
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- VoyageAI
- Hugging Face
- Ollama
- Mistral AI
- Replicate
- Together AI
- AI21
- Cloudflare Workers AI
- DeepInfra
- Groq
- SambaNova
- Nebius AI Studio
- NVIDIA NIMs
- And many more!
## Changing the LLM
To use a different LLM with your CrewAI agents, you have two main options:

- Using a String Identifier
- Using the LLM Class
Pass the model name as a string when initializing the agent:
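For example, a minimal sketch; the `role`, `goal`, and `backstory` values are illustrative, and any LiteLLM-supported model identifier can stand in for `gpt-4o`:

```python
from crewai import Agent

# The model string is passed through to LiteLLM, which routes it to the right provider.
agent = Agent(
    role="Research Analyst",             # illustrative values
    goal="Summarize recent AI research",
    backstory="A meticulous analyst.",
    llm="gpt-4o",                        # any LiteLLM-supported model identifier
)
```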
## Configuration Options
When configuring an LLM for your agent, you have access to a wide range of parameters (a usage sketch follows the table):

| Parameter | Type | Description |
|---|---|---|
| model | str | The name of the model to use (e.g., "gpt-4", "claude-2") |
| temperature | float | Controls randomness in output (0.0 to 1.0) |
| max_tokens | int | Maximum number of tokens to generate |
| top_p | float | Controls diversity of output (0.0 to 1.0) |
| frequency_penalty | float | Penalizes new tokens based on their frequency in the text so far |
| presence_penalty | float | Penalizes new tokens based on their presence in the text so far |
| stop | str, List[str] | Sequence(s) to stop generation |
| base_url | str | The base URL for the API endpoint |
| api_key | str | Your API key for authentication |
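Putting these together, a usage sketch of the `LLM` class; the specific values are illustrative, not recommendations:

```python
from crewai import Agent, LLM

llm = LLM(
    model="gpt-4",           # model name, as in the table above
    temperature=0.7,         # higher values -> more random output
    max_tokens=1024,         # cap on generated tokens
    top_p=0.9,               # nucleus-sampling cutoff
    frequency_penalty=0.1,   # discourage tokens that already appear often
    presence_penalty=0.1,    # discourage tokens that appear at all
    stop=["END"],            # stop generation at this sequence
)

agent = Agent(
    role="Research Analyst",             # illustrative values
    goal="Summarize recent AI research",
    backstory="A meticulous analyst.",
    llm=llm,
)
```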
## Connecting to OpenAI-Compatible LLMs
You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the `LLM` class (sketches of both follow this list):

- Using Environment Variables
- Using LLM Class Attributes
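With environment variables, a minimal sketch; the key, URL, and model name are placeholders for your provider's actual values:

```python
import os

# Placeholders: substitute your provider's endpoint, key, and model name.
os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
os.environ["OPENAI_MODEL_NAME"] = "your-model-name"
```

With `LLM` class attributes, the same placeholders apply:

```python
from crewai import LLM

llm = LLM(
    model="openai/your-model-name",  # "openai/" routes via LiteLLM's OpenAI-compatible path
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key",
)
```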
## Using Local Models with Ollama
For local models like those provided by Ollama:

1. Download and install Ollama
2. Pull the model you want to use (for example, `ollama pull llama3.1`)
3. Configure your agent to use the local model, as sketched below
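A minimal sketch of step 3, assuming the model pulled above and Ollama's default local endpoint:

```python
from crewai import Agent, LLM

# The "ollama/" prefix tells LiteLLM to use the Ollama provider;
# port 11434 is Ollama's default.
llm = LLM(
    model="ollama/llama3.1",
    base_url="http://localhost:11434",
)

agent = Agent(
    role="Local Assistant",            # illustrative values
    goal="Answer questions offline",
    backstory="Runs entirely on local hardware.",
    llm=llm,
)
```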
## Changing the Base API URL
You can change the base API URL for any LLM provider by setting the `base_url` parameter:
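For example, a sketch that routes requests through a hypothetical gateway; the URL is a placeholder and `openai/gpt-4o` is just one illustrative model choice:

```python
from crewai import LLM

llm = LLM(
    model="openai/gpt-4o",                    # provider prefix controls LiteLLM routing
    base_url="https://proxy.example.com/v1",  # placeholder: your gateway or proxy URL
)
```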
