Large Language Models (LLMs) in crewAI¶
Introduction¶
Large Language Models (LLMs) are the backbone of intelligent agents in the crewAI framework. This guide will help you understand, configure, and optimize LLM usage for your crewAI projects.
Table of Contents¶
- Key Concepts
- Configuring LLMs for Agents
- 1. Default Configuration
- 2. String Identifier
- 3. LLM Instance
- 4. Custom LLM Objects
- Connecting to OpenAI-Compatible LLMs
- LLM Configuration Options
- Using Ollama (Local LLMs)
- Changing the Base API URL
- Best Practices
- Troubleshooting
Key Concepts¶
- LLM: Large Language Model, the AI powering agent intelligence
- Agent: A crewAI entity that uses an LLM to perform tasks
- Provider: A service that offers LLM capabilities (e.g., OpenAI, Anthropic, Ollama, and more)
Configuring LLMs for Agents¶
crewAI offers flexible options for setting up LLMs:
1. Default Configuration¶
By default, crewAI uses the gpt-4o-mini model. If no LLM is specified, it falls back to the following environment variables:
- OPENAI_MODEL_NAME (defaults to "gpt-4o-mini" if not set)
- OPENAI_API_BASE
- OPENAI_API_KEY
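For example, these defaults can be set from Python before creating an agent; a minimal sketch where the key and the agent fields are placeholders:

```python
import os

from crewai import Agent

# Placeholder credentials; substitute your own.
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o-mini"
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# With no explicit llm argument, the agent uses the defaults above.
agent = Agent(
    role="Researcher",
    goal="Summarize recent findings",
    backstory="A diligent analyst.",
)
```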
2. String Identifier¶
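You can specify a model by passing its name as a string directly to the agent. A minimal sketch (the model name and agent fields are illustrative):

```python
from crewai import Agent

# The llm argument accepts a plain model-name string.
agent = Agent(
    role="Researcher",
    goal="Summarize recent findings",
    backstory="A diligent analyst.",
    llm="gpt-4o-mini",
)
```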
3. LLM Instance¶
See the documentation for a full list of supported providers.
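For finer control, instantiate the LLM class and pass it to the agent. A minimal sketch (parameter values are illustrative):

```python
from crewai import LLM, Agent

llm = LLM(model="gpt-4o-mini", temperature=0.7)

agent = Agent(
    role="Researcher",
    goal="Summarize recent findings",
    backstory="A diligent analyst.",
    llm=llm,
)
```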
4. Custom LLM Objects¶
Pass a custom LLM implementation or object from another library.
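A hypothetical sketch of such an object; the class name and the `call` method signature below are assumptions for illustration, not a documented crewAI contract:

```python
from crewai import Agent

class MyCustomLLM:
    """Hypothetical adapter around another library's client."""

    def __init__(self, model: str):
        self.model = model

    def call(self, messages, **kwargs) -> str:
        # Delegate to your own client or another library here;
        # this stub just returns a fixed string.
        return "stubbed response"

agent = Agent(
    role="Researcher",
    goal="Summarize recent findings",
    backstory="A diligent analyst.",
    llm=MyCustomLLM(model="my-model"),
)
```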
Connecting to OpenAI-Compatible LLMs¶
You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class; both approaches are sketched below:
- Using environment variables
- Using LLM class attributes
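Sketches of both approaches, with placeholder URLs and keys:

```python
import os

# Option 1: environment variables (names as listed earlier in this guide).
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
```

```python
from crewai import LLM

# Option 2: attributes on the LLM class.
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key-here",
)
```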
LLM Configuration Options¶
When configuring an LLM for your agent, you have access to a wide range of parameters:
| Parameter | Type | Description |
|---|---|---|
| model | str | The name of the model to use (e.g., "gpt-4", "gpt-3.5-turbo", "ollama/llama3.1") |
| timeout | float, int | Maximum time (in seconds) to wait for a response |
| temperature | float | Controls randomness in output (0.0 to 1.0) |
| top_p | float | Controls diversity of output (0.0 to 1.0) |
| n | int | Number of completions to generate |
| stop | str, List[str] | Sequence(s) at which to stop generation |
| max_tokens | int | Maximum number of tokens to generate |
| presence_penalty | float | Penalizes new tokens based on their presence in the text so far |
| frequency_penalty | float | Penalizes new tokens based on their frequency in the text so far |
| logit_bias | Dict[int, float] | Modifies the likelihood of specified tokens appearing in the completion |
| response_format | Dict[str, Any] | Specifies the format of the response (e.g., {"type": "json_object"}) |
| seed | int | Sets a random seed for deterministic results |
| logprobs | bool | Whether to return log probabilities of the output tokens |
| top_logprobs | int | Number of most likely tokens for which to return log probabilities |
| base_url | str | The base URL for the API endpoint |
| api_version | str | The version of the API to use |
| api_key | str | Your API key for authentication |
Example:
```python
from crewai import LLM, Agent

llm = LLM(
    model="gpt-4",
    temperature=0.8,
    max_tokens=150,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42,
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
Using Ollama (Local LLMs)¶
crewAI supports using Ollama for running open-source models locally:
- Install Ollama: ollama.ai
- Run a model: ollama run llama2
- Configure the agent, as sketched below:
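A minimal sketch; the model name and port are assumptions (11434 is Ollama's default port), so adjust them to whichever model you pulled:

```python
from crewai import LLM, Agent

# The "ollama/<model>" prefix routes requests to the local Ollama server.
llm = LLM(
    model="ollama/llama2",
    base_url="http://localhost:11434",
)

agent = Agent(
    role="Researcher",
    goal="Summarize recent findings",
    backstory="A diligent analyst.",
    llm=llm,
)
```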
Changing the Base API URL¶
You can change the base API URL for any LLM provider by setting the base_url parameter:
```python
from crewai import LLM, Agent

llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```
This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.
Best Practices¶
- Choose the right model: Balance capability and cost.
- Optimize prompts: Clear, concise instructions improve output.
- Manage tokens: Monitor and limit token usage for efficiency.
- Use appropriate temperature: Lower for factual tasks, higher for creative ones.
- Implement error handling: Gracefully manage API errors and rate limits.
Troubleshooting¶
- API Errors: Check your API key, network connection, and rate limits.
- Unexpected Outputs: Refine your prompts and adjust temperature or top_p.
- Performance Issues: Consider using a more powerful model or optimizing your queries.
- Timeout Errors: Increase the timeout parameter or optimize your input.
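For example, a more generous timeout can be set when constructing the LLM; the 120-second value here is illustrative:

```python
from crewai import LLM

# Allow slower providers up to two minutes to respond.
llm = LLM(model="gpt-4o-mini", timeout=120)
```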