Custom LLM Implementation
Learn how to create custom LLM implementations in CrewAI.
Overview
CrewAI supports custom LLM implementations through the BaseLLM abstract base class. This allows you to integrate any LLM provider that doesn’t have built-in support in LiteLLM, or to implement custom authentication mechanisms.
Quick Start
Here’s a minimal custom LLM implementation:
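The sketch below assumes an OpenAI-compatible chat completions endpoint; the endpoint URL, model name, and response shape are placeholders you would adapt to your provider:

```python
from typing import Any, Dict, List, Optional, Union

import requests

from crewai import BaseLLM


class CustomLLM(BaseLLM):
    def __init__(
        self,
        model: str,
        api_key: str,
        endpoint: str,
        temperature: Optional[float] = None,
    ):
        # Required: initialize the base class with model and temperature
        super().__init__(model=model, temperature=temperature)
        self.api_key = api_key
        self.endpoint = endpoint

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> str:
        # Accept both message shapes CrewAI may pass in
        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]

        payload: Dict[str, Any] = {"model": self.model, "messages": messages}
        if self.temperature is not None:
            payload["temperature"] = self.temperature

        response = requests.post(
            self.endpoint,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json=payload,
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]
```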
Using Your Custom LLM
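Pass an instance to the agent's llm parameter like any built-in model; the role, goal, and endpoint values here are illustrative:

```python
from crewai import Agent

llm = CustomLLM(
    model="my-model",  # placeholder model name
    api_key="your-api-key",
    endpoint="https://api.example.com/v1/chat/completions",  # placeholder URL
)

agent = Agent(
    role="Research Analyst",
    goal="Summarize findings clearly",
    backstory="An experienced analyst.",
    llm=llm,
)
```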
Required Methods
Constructor: __init__()
Critical: You must call super().__init__(model, temperature) with the required parameters.
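A minimal constructor sketch (the api_key parameter is illustrative):

```python
def __init__(self, model: str, api_key: str, temperature: Optional[float] = None):
    # Must run first so BaseLLM can store model and temperature
    # and set up the internal state CrewAI relies on
    super().__init__(model=model, temperature=temperature)
    self.api_key = api_key  # provider-specific state comes after
```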
Abstract Method: call()
The call() method is the heart of your LLM implementation. It must:
- Accept messages (string or list of dicts with ‘role’ and ‘content’)
- Return a string response
- Handle tools and function calling if supported
- Raise appropriate exceptions for errors
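A skeleton matching that contract (the signature mirrors the Quick Start sketch above):

```python
def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> str:
    # messages: a plain string, or a chat-style list of role/content dicts
    # tools / available_functions: only relevant if you support function calling
    if isinstance(messages, str):
        messages = [{"role": "user", "content": messages}]
    # ... provider-specific request handling goes here ...
    ...
```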
Optional Methods
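BaseLLM also exposes hooks you can override to describe your model's capabilities; the return values below are illustrative:

```python
def supports_function_calling(self) -> bool:
    # Return True only if call() actually handles tools/tool_calls
    return False

def supports_stop_words(self) -> bool:
    # Return True if the provider accepts a stop-sequences parameter
    return True

def get_context_window_size(self) -> int:
    # Lets CrewAI manage conversation length for your model
    return 8192
```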
Common Patterns
Error Handling
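A common pattern inside call() is to bound every request with a timeout and convert transport failures into clear exceptions, for example:

```python
try:
    response = requests.post(
        self.endpoint,
        headers={"Authorization": f"Bearer {self.api_key}"},
        json=payload,
        timeout=30,  # always bound the request time
    )
    response.raise_for_status()  # surface HTTP 4xx/5xx as exceptions
except requests.Timeout:
    raise TimeoutError(f"Request to {self.endpoint} timed out")
except requests.RequestException as e:
    raise RuntimeError(f"LLM request failed: {e}") from e
```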
Custom Authentication
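If your provider doesn't use plain Bearer tokens, encapsulate the scheme in one place. This sketch assumes a hypothetical provider that expects a raw JWT header (call() is omitted; build requests with the helper):

```python
class JWTAuthLLM(BaseLLM):
    def __init__(self, model: str, jwt_token: str, endpoint: str,
                 temperature: Optional[float] = None):
        super().__init__(model=model, temperature=temperature)
        self.jwt_token = jwt_token
        self.endpoint = endpoint

    def _auth_headers(self) -> Dict[str, str]:
        # Hypothetical scheme: the provider expects "JWT <token>", not "Bearer <key>"
        return {"Authorization": f"JWT {self.jwt_token}"}
```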
Stop Words Support
CrewAI automatically adds "\nObservation:" as a stop word to control agent behavior. If your LLM supports stop words:
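Forward the stop list CrewAI stores on the instance with each request, assuming the provider accepts a stop parameter:

```python
payload = {"model": self.model, "messages": messages}
if self.stop:
    payload["stop"] = self.stop  # includes "\nObservation:" when set by CrewAI
```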
If your LLM doesn’t support stop words natively:
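Truncate the completed response yourself before returning it, for example with a helper like this:

```python
def _apply_stop_words(self, text: str) -> str:
    # Fallback: cut the response at the first occurrence of any stop sequence
    for stop_word in self.stop or []:
        if stop_word in text:
            text = text.split(stop_word)[0]
    return text
```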
Function Calling
If your LLM supports function calling, implement the complete flow:
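The sketch below assumes an OpenAI-style response where tool calls arrive as message.tool_calls with JSON-encoded arguments:

```python
import json

def call(self, messages, tools=None, callbacks=None, available_functions=None) -> str:
    if isinstance(messages, str):
        messages = [{"role": "user", "content": messages}]

    payload = {"model": self.model, "messages": messages}
    if tools and self.supports_function_calling():
        payload["tools"] = tools

    response = requests.post(
        self.endpoint,
        headers={"Authorization": f"Bearer {self.api_key}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    message = response.json()["choices"][0]["message"]

    tool_calls = message.get("tool_calls")
    if tool_calls and available_functions:
        # Execute the first requested function and send the result back
        spec = tool_calls[0]["function"]
        func = available_functions.get(spec["name"])
        if func is not None:
            result = func(**json.loads(spec["arguments"]))
            messages.append(message)
            messages.append({
                "role": "tool",
                "tool_call_id": tool_calls[0]["id"],
                "content": str(result),
            })
            # Second round-trip so the model can use the tool result
            return self.call(messages, tools, callbacks, available_functions)

    return message.get("content") or ""
```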
Troubleshooting
Common Issues
Constructor Errors
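The most common cause is skipping the base-class initializer:

```python
# Wrong: BaseLLM is never initialized, so attributes CrewAI relies on are missing
def __init__(self, model: str):
    self.model = model

# Right: delegate to the base class first
def __init__(self, model: str, temperature: Optional[float] = None):
    super().__init__(model=model, temperature=temperature)
```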
Function Calling Not Working
- Ensure supports_function_calling() returns True
- Check that you handle tool_calls in the response
- Verify the available_functions parameter is used correctly
Authentication Failures
- Verify API key format and permissions
- Check authentication header format
- Ensure endpoint URLs are correct
Response Parsing Errors
- Validate response structure before accessing nested fields
- Handle cases where content might be None
- Add proper error handling for malformed responses
Testing Your Custom LLM
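A minimal sanity check exercising both supported message shapes (the endpoint and model values are placeholders):

```python
def test_custom_llm_call():
    llm = CustomLLM(
        model="my-model",
        api_key="test-key",
        endpoint="https://api.example.com/v1/chat/completions",
    )

    # Both input shapes should yield a string response
    assert isinstance(llm.call("Say hello"), str)
    assert isinstance(llm.call([{"role": "user", "content": "Say hello"}]), str)
```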
This guide covers the essentials of implementing custom LLMs in CrewAI.