Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs) using LiteLLM, including supported providers and configuration options.
By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to "gpt-4o-mini" if not set. You can easily configure your agents to use a different model or provider as described in this guide.
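For example, the default can be overridden by setting the environment variable before any agents are created. A minimal sketch (the model name and API key are placeholder values):

```python
import os

# Override the default model; any LiteLLM-supported model name can be used here.
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o"
# The OpenAI provider also expects an API key in the environment.
os.environ["OPENAI_API_KEY"] = "your-api-key"
```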
The following parameters are available when configuring an LLM:

| Parameter | Type | Description |
|---|---|---|
| `model` | `str` | The name of the model to use (e.g., "gpt-4", "claude-2") |
| `temperature` | `float` | Controls randomness in output (0.0 to 1.0) |
| `max_tokens` | `int` | Maximum number of tokens to generate |
| `top_p` | `float` | Controls diversity of output (0.0 to 1.0) |
| `frequency_penalty` | `float` | Penalizes new tokens based on their frequency in the text so far |
| `presence_penalty` | `float` | Penalizes new tokens based on their presence in the text so far |
| `stop` | `str`, `List[str]` | Sequence(s) at which to stop generation |
| `base_url` | `str` | The base URL for the API endpoint |
| `api_key` | `str` | Your API key for authentication |
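As a rough sketch of how these parameters are passed, CrewAI's `LLM` class (which wraps LiteLLM) accepts them as keyword arguments; the specific values and agent fields below are illustrative only:

```python
from crewai import Agent, LLM

# Configure the model with a subset of the parameters listed above.
llm = LLM(
    model="gpt-4",           # model name
    temperature=0.7,         # randomness of the output (0.0 to 1.0)
    max_tokens=1024,         # cap on generated tokens
    top_p=0.9,               # nucleus-sampling diversity (0.0 to 1.0)
    stop=["END"],            # stop generation at this sequence
    api_key="your-api-key",  # or rely on the provider's environment variable
)

# Attach the configured LLM to an agent.
analyst = Agent(
    role="Senior Data Analyst",
    goal="Analyze datasets and summarize findings",
    backstory="An experienced analyst who writes concise reports.",
    llm=llm,
)
```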
To run models locally with Ollama:

1. Download and install Ollama.
2. Pull the desired model: for example, run `ollama pull llama3.2` to download the model.
3. Configure your agent to use the local model by setting the `base_url` parameter, as shown in the sketch below:
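A minimal sketch, assuming CrewAI's `LLM` class and LiteLLM's `ollama/` model-name prefix; the agent fields are placeholders, and 11434 is Ollama's default port:

```python
from crewai import Agent, LLM

# Point the LLM at the locally running Ollama server.
ollama_llm = LLM(
    model="ollama/llama3.2",            # LiteLLM provider prefix + pulled model name
    base_url="http://localhost:11434",  # Ollama's default local endpoint
)

# Attach the local model to an agent; role/goal/backstory are illustrative.
agent = Agent(
    role="Local Research Assistant",
    goal="Answer questions using the locally hosted model",
    backstory="A helpful assistant that runs entirely on local hardware.",
    llm=ollama_llm,
)
```

Note that a local Ollama instance typically does not require an API key.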