CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.

What are LLMs?

Large Language Models (LLMs) are the core intelligence behind CrewAI agents. They enable agents to understand context, make decisions, and generate human-like responses. Here’s what you need to know:

LLM Basics

Large Language Models are AI systems trained on vast amounts of text data. They power the intelligence of your CrewAI agents, enabling them to understand and generate human-like text.

Context Window

The context window determines how much text an LLM can process at once. Larger windows (e.g., 128K tokens) allow for more context but may be more expensive and slower.

Temperature

Temperature (0.0 to 1.0) controls response randomness. Lower values (e.g., 0.2) produce more focused, deterministic outputs, while higher values (e.g., 0.8) increase creativity and variability.
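Under the hood, temperature works by scaling the model's raw output scores before sampling. This is an illustrative sketch of that mechanism, not CrewAI's or any provider's internal code:

```python
import math

def temperature_softmax(logits, temperature):
    """Convert raw scores (logits) to sampling probabilities, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic output);
    higher temperature flattens it (more varied, "creative" output).
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Same scores, two temperatures:
focused = temperature_softmax([2.0, 1.0, 0.1], temperature=0.2)
creative = temperature_softmax([2.0, 1.0, 0.1], temperature=0.8)
# `focused` puts nearly all probability on the top score,
# while `creative` spreads probability across the options.
```

This is why a 0.2 setting reads as consistent and a 0.8 setting reads as varied: the underlying probabilities are the same scores, just sharpened or flattened.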

Provider Selection

Each LLM provider (e.g., OpenAI, Anthropic, Google) offers different models with varying capabilities, pricing, and features. Choose based on your needs for accuracy, speed, and cost.

Available Models and Their Capabilities

Here’s a detailed breakdown of supported models and their capabilities. You can compare performance at lmarena.ai and artificialanalysis.ai:

| Model | Context Window | Best For |
| --- | --- | --- |
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |

1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
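The 4-characters-per-token rule of thumb is easy to turn into a quick pre-flight check before sending text to a model. A minimal sketch (the helper names are our own, and the estimate is rough, not a real tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_context(text: str, context_window: int = 8_192) -> bool:
    """Check whether text likely fits within a model's context window."""
    return estimate_tokens(text) <= context_window

# 32,768 characters ≈ 8,192 tokens -- right at GPT-4's limit.
print(estimate_tokens("a" * 32_768))  # 8192
```

For exact counts, use the provider's tokenizer (e.g. the tiktoken package for OpenAI models); this heuristic is only for quick sizing.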

Setting Up Your LLM

There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:

Environment Variables

The simplest way to get started. Set these variables in your environment:

# Required: Your API key for authentication
OPENAI_API_KEY=<your-api-key>

# Optional: Default model selection
OPENAI_MODEL_NAME=gpt-4o-mini  # Default if not set

# Optional: Organization ID (if applicable)
OPENAI_ORGANIZATION_ID=<your-org-id>
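CrewAI and LiteLLM pick these variables up automatically, but it can help to verify them at startup. A small standard-library sketch of reading the same variables (the fallback mirrors the documented default):

```python
import os

# Fall back to the documented default when OPENAI_MODEL_NAME is unset.
model = os.getenv("OPENAI_MODEL_NAME", "gpt-4o-mini")

# Fail fast with a clear message if the required key is missing.
api_key = os.getenv("OPENAI_API_KEY")
if api_key is None:
    print("OPENAI_API_KEY is not set; API calls will fail to authenticate.")
```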

Never commit API keys to version control. Use environment files (.env) or your system’s secret management.
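To see what a .env file involves, here is a minimal loader sketch. In real projects, prefer the python-dotenv package; this illustrative version only handles simple KEY=VALUE lines:

```python
import os
from pathlib import Path

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines; '#' starts a comment.

    Existing environment variables are never overwritten, so real
    environment settings always win over file contents.
    """
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

Remember to add .env to your .gitignore so the file never reaches version control.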

Advanced Features and Optimization

Learn how to get the most out of your LLM configuration:

Provider Configuration Examples

Common Issues and Solutions

Most authentication issues can be resolved by checking API key format and environment variable names.

# OpenAI
OPENAI_API_KEY=sk-...

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
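Since most authentication failures come down to the wrong key in the wrong variable, a cheap prefix check at startup can catch copy-paste mistakes early. A sketch using the prefixes shown above (it does not prove a key is valid, only that its shape looks right):

```python
KEY_PREFIXES = {
    "openai": "sk-",
    "anthropic": "sk-ant-",
}

def plausible_key(provider: str, key: str) -> bool:
    """Cheap sanity check on a key's documented prefix.

    Catches swapped or truncated keys; only the provider's API
    can confirm the key actually authenticates.
    """
    prefix = KEY_PREFIXES.get(provider.lower())
    return prefix is not None and key.startswith(prefix) and len(key) > len(prefix)
```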

Getting Help

If you need assistance, these resources are available:

Best Practices for API Key Security:

  • Use environment variables or secure vaults
  • Never commit keys to version control
  • Rotate keys regularly
  • Use separate keys for development and production
  • Monitor key usage for unusual patterns
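One practical habit that supports several of the points above: never write a raw key to logs or error messages. A small masking helper sketch (the function name is our own):

```python
def mask_key(key: str, visible: int = 4) -> str:
    """Mask an API key for safe logging, keeping only the last few characters.

    The visible suffix is enough to tell keys apart (e.g. when rotating
    or comparing dev vs. production keys) without exposing the secret.
    """
    if len(key) <= visible:
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]

print(mask_key("sk-abcdef1234"))  # *********1234
```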