# Connect to any LLM

Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs) using LiteLLM, including supported providers and configuration options.

## Connect CrewAI to LLMs
CrewAI uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers with a simple, unified interface.
By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to `"gpt-4o-mini"` if not set. You can easily configure your agents to use a different model or provider as described in this guide.
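For example, you could override the default for every agent that does not specify its own model by setting the environment variable before the crew runs. A minimal sketch (the model name is just an illustration; any LiteLLM-supported identifier works):

```python
import os

# Override the default model used by agents that don't set one explicitly.
# "gpt-4o" is an example value, not a recommendation.
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o"
```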
## Supported Providers
LiteLLM supports a wide range of providers, including but not limited to:
- OpenAI
- Anthropic
- Google (Vertex AI, Gemini)
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- Hugging Face
- Ollama
- Mistral AI
- Replicate
- Together AI
- AI21
- Cloudflare Workers AI
- DeepInfra
- Groq
- And many more!
For a complete and up-to-date list of supported providers, please refer to the LiteLLM Providers documentation.
## Changing the LLM
To use a different LLM with your CrewAI agents, you have several options:
Pass the model name as a string when initializing the agent:
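A minimal sketch of this option (the role, goal, backstory, and model string below are placeholders; any model identifier LiteLLM supports will work):

```python
from crewai import Agent

# The llm parameter accepts a plain model string, which is routed through LiteLLM.
agent = Agent(
    role="Research Analyst",
    goal="Summarize recent findings on a given topic",
    backstory="An analyst with a knack for distilling dense papers.",
    llm="anthropic/claude-3-5-sonnet-20240620",  # example provider/model string
)
```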
## Configuration Options
When configuring an LLM for your agent, you have access to a wide range of parameters:
| Parameter | Type | Description |
|---|---|---|
| `model` | `str` | The name of the model to use (e.g., `"gpt-4"`, `"claude-2"`) |
| `temperature` | `float` | Controls randomness in output (0.0 to 1.0) |
| `max_tokens` | `int` | Maximum number of tokens to generate |
| `top_p` | `float` | Controls diversity of output (0.0 to 1.0) |
| `frequency_penalty` | `float` | Penalizes new tokens based on their frequency in the text so far |
| `presence_penalty` | `float` | Penalizes new tokens based on their presence in the text so far |
| `stop` | `str`, `List[str]` | Sequence(s) at which to stop generation |
| `base_url` | `str` | The base URL for the API endpoint |
| `api_key` | `str` | Your API key for authentication |
For a complete list of parameters and their descriptions, refer to the LLM class documentation.
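As an illustration, here is how several of these parameters might be combined on the `LLM` class and attached to an agent (the values are arbitrary, chosen only to show the shape of the call):

```python
from crewai import Agent, LLM

# An LLM configured with explicit generation parameters from the table above.
llm = LLM(
    model="gpt-4",
    temperature=0.7,
    max_tokens=4096,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
)

agent = Agent(
    role="Technical Writer",
    goal="Draft concise, accurate reports",
    backstory="A writer who favors clarity over flourish.",
    llm=llm,
)
```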
## Connecting to OpenAI-Compatible LLMs
You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:
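A sketch of both approaches, assuming a hypothetical OpenAI-compatible endpoint (the URL, key, and model name are placeholders you would replace with your provider's values):

```python
import os
from crewai import LLM

# Option 1: environment variables, picked up automatically.
os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
os.environ["OPENAI_MODEL_NAME"] = "your-model-name"

# Option 2: explicit attributes on the LLM class.
llm = LLM(
    model="openai/your-model-name",  # the "openai/" prefix routes via the OpenAI-compatible client
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key",
)
```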
## Using Local Models with Ollama
For local models like those provided by Ollama:
1. Download and install Ollama.
2. Pull the desired model. For example, run `ollama pull llama3.2` to download the model.
3. Configure your agent, as shown in the sketch below.
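A minimal sketch of step 3, assuming Ollama is serving on its default local port:

```python
from crewai import Agent, LLM

# Point the LLM at the locally running Ollama server.
llm = LLM(
    model="ollama/llama3.2",
    base_url="http://localhost:11434",
)

agent = Agent(
    role="Local Assistant",
    goal="Answer questions entirely on local hardware",
    backstory="An assistant that never sends data off the machine.",
    llm=llm,
)
```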
## Changing the Base API URL
You can change the base API URL for any LLM provider by setting the `base_url` parameter:
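For example (the endpoint URL and model name below are placeholders):

```python
from crewai import LLM

# Route requests to a custom or proxy endpoint via base_url.
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key",
)
```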
This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.
## Conclusion
By leveraging LiteLLM, CrewAI offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the LiteLLM documentation for the most up-to-date information on supported models and configuration options.