Large Language Models (LLMs) in CrewAI

Large Language Models (LLMs) are the backbone of intelligent agents in the CrewAI framework. This guide will help you understand, configure, and optimize LLM usage for your CrewAI projects.

Key Concepts

  • LLM: Large Language Model, the AI powering agent intelligence
  • Agent: A CrewAI entity that uses an LLM to perform tasks
  • Provider: A service that offers LLM capabilities (e.g., OpenAI, Anthropic, Ollama, and many others)

Configuring LLMs for Agents

CrewAI offers flexible options for setting up LLMs:

1. Default Configuration

If no LLM is specified, CrewAI defaults to the gpt-4o-mini model, configured through the following environment variables:

  • OPENAI_MODEL_NAME (defaults to "gpt-4o-mini" if not set)
  • OPENAI_API_BASE
  • OPENAI_API_KEY
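
For example, you can set these from Python before your crew runs, matching the pattern used later in this guide (the values are placeholders):

Code
import os

# Placeholder credentials: substitute your real key and preferred model
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o-mini"
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"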

2. Updating YAML Files

You can update the agents.yaml file to specify which LLM each agent should use:

Code
researcher:
    role: Research Specialist
    goal: Conduct comprehensive research and analysis to gather relevant information,
        synthesize findings, and produce well-documented insights.
    backstory: A dedicated research professional with years of experience in academic
        investigation, literature review, and data analysis, known for thorough and
        methodical approaches to complex research questions.
    verbose: true
    llm: openai/gpt-4o
    # llm: azure/gpt-4o-mini
    # llm: gemini/gemini-pro
    # llm: anthropic/claude-3-5-sonnet-20240620
    # llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
    # llm: mistral/mistral-large-latest
    # llm: ollama/llama3:70b
    # llm: groq/llama-3.2-90b-vision-preview
    # llm: watsonx/meta-llama/llama-3-1-70b-instruct
    # llm: nvidia_nim/meta/llama3-70b-instruct
    # llm: sambanova/Meta-Llama-3.1-8B-Instruct
    # ...

Keep in mind that, depending on the model you use, you will either need to set the credential environment variables that provider expects or pass a custom LLM object as described below. As an illustration, here are some of the credential variables commonly required:
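
CrewAI routes model calls through LiteLLM, so credential variable names follow LiteLLM's conventions; check each provider's documentation for the exact set. The keys below are placeholders:

Code
import os

# Illustrative credential variables for a few common providers
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"
os.environ["GEMINI_API_KEY"] = "your-gemini-key"
os.environ["GROQ_API_KEY"] = "your-groq-key"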

3. Custom LLM Objects

You can also pass the LLM directly to the agent, either as a model string or as a configured LLM object (including a compatible implementation from another library). The simplest form is a model string:

Code
agent = Agent(llm="gpt-4o", ...)
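
To control parameters such as temperature, construct an LLM object instead and hand it to the agent (the temperature value here is an illustrative choice):

Code
from crewai import Agent, LLM

# A configured LLM object gives you control over generation parameters
llm = LLM(model="gpt-4o", temperature=0.7)
agent = Agent(llm=llm, ...)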

Connecting to OpenAI-Compatible LLMs

You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:

Code
import os

os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"

LLM Configuration Options

When configuring an LLM for your agent, you have access to a wide range of parameters:

Parameter         | Type             | Description
------------------|------------------|------------------------------------------------------------
model             | str              | Name of the model to use (e.g., "gpt-4", "gpt-3.5-turbo", "ollama/llama3.1"). For more options, visit the providers documentation.
timeout           | float, int      | Maximum time (in seconds) to wait for a response.
temperature       | float            | Controls randomness in output (0.0 to 1.0).
top_p             | float            | Controls diversity of output (0.0 to 1.0).
n                 | int              | Number of completions to generate.
stop              | str, List[str]   | Sequence(s) where generation should stop.
max_tokens        | int              | Maximum number of tokens to generate.
presence_penalty  | float            | Penalizes new tokens based on their presence in prior text.
frequency_penalty | float            | Penalizes new tokens based on their frequency in prior text.
logit_bias        | Dict[int, float] | Modifies likelihood of specified tokens appearing.
response_format   | Dict[str, Any]   | Specifies the format of the response (e.g., JSON object).
seed              | int              | Sets a random seed for deterministic results.
logprobs          | bool             | Returns log probabilities of output tokens if enabled.
top_logprobs      | int              | Number of most likely tokens for which to return log probabilities.
base_url          | str              | The base URL for the API endpoint.
api_version       | str              | Version of the API to use.
api_key           | str              | Your API key for authentication.

The examples below show how to put these parameters to use when configuring LLMs for your agents.
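
For instance, you can combine several parameters on a single LLM object (the specific values are illustrative, not recommendations):

Code
from crewai import LLM

llm = LLM(
    model="gpt-4o",
    temperature=0.2,   # lower temperature for more deterministic output
    max_tokens=1024,   # cap the length of each response
    timeout=120,       # seconds to wait before the request times out
    seed=42            # request reproducible results where supported
)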

Changing the Base API URL

You can change the base API URL for any LLM provider by setting the base_url parameter:

Code
from crewai import Agent, LLM

llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)

This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.

Best Practices

  1. Choose the right model: Balance capability and cost.
  2. Optimize prompts: Clear, concise instructions improve output.
  3. Manage tokens: Monitor and limit token usage for efficiency.
  4. Use appropriate temperature: Lower for factual tasks, higher for creative ones.
  5. Implement error handling: Gracefully manage API errors and rate limits (see the retry sketch below).
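
As a minimal sketch of point 5, assuming your entry point is a standard crew.kickoff() call, you could wrap it with retries and exponential backoff (the helper name and retry policy are hypothetical):

Code
import time

def kickoff_with_retries(crew, inputs, max_retries=3):
    # Hypothetical helper: retry transient API failures with exponential backoff
    for attempt in range(max_retries):
        try:
            return crew.kickoff(inputs=inputs)
        except Exception:  # narrow this to your provider's error types
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...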

Troubleshooting

  • API Errors: Check your API key, network connection, and rate limits.
  • Unexpected Outputs: Refine your prompts and adjust temperature or top_p.
  • Performance Issues: Consider using a more powerful model or optimizing your queries.
  • Timeout Errors: Increase the timeout parameter or optimize your input.