Introduction
LangDB AI Gateway provides OpenAI-compatible APIs for connecting to multiple Large Language Models and serves as an observability platform that makes it effortless to trace CrewAI workflows end-to-end while providing access to 350+ language models. With a single `init()` call, all agent interactions, task executions, and LLM calls are captured, giving you comprehensive observability and production-ready AI infrastructure for your applications.

LangDB CrewAI Trace Example
Features
AI Gateway Capabilities
- Access to 350+ LLMs: Connect to all major language models through a single integration
- Virtual Models: Create custom model configurations with specific parameters and routing rules
- Virtual MCP: Integrate with MCP (Model Context Protocol) systems for enhanced agent communication
- Guardrails: Implement safety measures and compliance controls for agent behavior
Observability & Tracing
- Automatic Tracing: A single `init()` call captures all CrewAI interactions
- End-to-End Visibility: Monitor agent workflows from start to finish
- Tool Usage Tracking: Track which tools agents use and their outcomes
- Model Call Monitoring: Detailed insights into LLM interactions
- Performance Analytics: Monitor latency, token usage, and costs
- Debugging Support: Step-through execution for troubleshooting
- Real-time Monitoring: Live traces and metrics dashboard
Setup Instructions
1. Install LangDB
Install the LangDB client with the CrewAI feature flag:
2. Set Environment Variables
Configure your LangDB credentials:
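A minimal sketch of the credentials setup, assuming the client reads `LANGDB_API_KEY` and `LANGDB_PROJECT_ID` from the environment (check your LangDB project settings for the exact variable names and values):

```python
import os

# Assumed environment variable names; copy the actual values from your
# LangDB project settings page.
os.environ["LANGDB_API_KEY"] = "<your-langdb-api-key>"
os.environ["LANGDB_PROJECT_ID"] = "<your-langdb-project-id>"
```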
3. Initialize Tracing
Import and initialize LangDB before importing or configuring your CrewAI code:
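A hedged sketch of the initialization step, assuming the CrewAI integration exposes an `init()` function under a `pylangdb.crewai` module (verify the exact import path against the LangDB client you installed):

```python
# Assumed module path for the CrewAI integration; this must run before any
# CrewAI imports so agent, task, and LLM calls are instrumented.
from pylangdb.crewai import init

init()

# Only import CrewAI after tracing has been initialized.
from crewai import Agent, Crew, LLM, Task
```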
4. Configure CrewAI with LangDB
Set up your LLM with LangDB headers:
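A sketch of the LLM configuration, assuming LangDB's hosted OpenAI-compatible endpoint and an `x-project-id` header for project routing. The base URL, header name, and model identifier are illustrative, so substitute the values shown in your LangDB dashboard:

```python
import os
from crewai import LLM

# Illustrative values; match them to what your LangDB dashboard shows.
# If your CrewAI version does not forward extra_headers to the underlying
# completion call, include the project ID in the base URL instead.
llm = LLM(
    model="openai/gpt-4o",
    api_key=os.environ["LANGDB_API_KEY"],
    base_url="https://api.us-east-1.langdb.ai",
    extra_headers={"x-project-id": os.environ["LANGDB_PROJECT_ID"]},
)
```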
Quick Start Example
Here’s a simple example to get you started with LangDB and CrewAI:
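The quick-start sketch below wires the setup steps together. It assumes the `pylangdb.crewai` import path, the `LANGDB_API_KEY` and `LANGDB_PROJECT_ID` environment variables, and the illustrative gateway URL and model name used above:

```python
import os

# Initialize LangDB tracing before importing CrewAI (assumed import path).
from pylangdb.crewai import init

init()

from crewai import Agent, Crew, LLM, Task

# Illustrative gateway URL and model; use the values from your LangDB dashboard.
llm = LLM(
    model="openai/gpt-4o",
    api_key=os.environ["LANGDB_API_KEY"],
    base_url="https://api.us-east-1.langdb.ai",
    extra_headers={"x-project-id": os.environ["LANGDB_PROJECT_ID"]},
)

writer = Agent(
    role="Technical Writer",
    goal="Explain AI concepts in plain language",
    backstory="You turn complex topics into short, clear summaries.",
    llm=llm,
)

task = Task(
    description="Write a three-sentence summary of what an AI gateway does.",
    expected_output="A three-sentence plain-language summary.",
    agent=writer,
)

crew = Crew(agents=[writer], tasks=[task])
result = crew.kickoff()
print(result)
```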
Complete Example: Research and Planning Agent
This comprehensive example demonstrates a multi-agent workflow with research and planning capabilities.
Prerequisites
Environment Setup
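A minimal sketch of the environment setup, assuming the same `LANGDB_API_KEY` and `LANGDB_PROJECT_ID` variables as above and failing fast if either is missing:

```python
import os

# Assumed variable names; export them in your shell or a .env file
# before running the example.
REQUIRED_VARS = ("LANGDB_API_KEY", "LANGDB_PROJECT_ID")

missing = [name for name in REQUIRED_VARS if not os.getenv(name)]
if missing:
    raise RuntimeError(f"Missing LangDB credentials: {', '.join(missing)}")
```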
Complete Implementation
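A sketch of the full workflow under the same assumptions (the import path, environment variables, gateway URL, and model name are illustrative): a researcher agent gathers findings and a planner agent turns them into an action plan, with both traced through LangDB.

```python
import os

# Initialize LangDB tracing before any CrewAI imports (assumed import path).
from pylangdb.crewai import init

init()

from crewai import Agent, Crew, LLM, Process, Task

# Shared LLM routed through the LangDB gateway (illustrative URL and model).
llm = LLM(
    model="openai/gpt-4o",
    api_key=os.environ["LANGDB_API_KEY"],
    base_url="https://api.us-east-1.langdb.ai",
    extra_headers={"x-project-id": os.environ["LANGDB_PROJECT_ID"]},
)

researcher = Agent(
    role="Research Analyst",
    goal="Gather accurate, well-sourced findings on the given topic",
    backstory="You specialize in quickly distilling reliable information.",
    llm=llm,
    verbose=True,
)

planner = Agent(
    role="Strategic Planner",
    goal="Turn research findings into a concrete, prioritized action plan",
    backstory="You translate analysis into clear next steps.",
    llm=llm,
    verbose=True,
)

research_task = Task(
    description="Research the current state of AI observability tooling and summarize the key trends.",
    expected_output="A bullet-point summary of the main trends with short explanations.",
    agent=researcher,
)

planning_task = Task(
    description="Using the research summary, draft a prioritized 30-day adoption plan.",
    expected_output="A numbered 30-day plan with milestones.",
    agent=planner,
    context=[research_task],
)

crew = Crew(
    agents=[researcher, planner],
    tasks=[research_task, planning_task],
    process=Process.sequential,
    verbose=True,
)

if __name__ == "__main__":
    result = crew.kickoff()
    print(result)
```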
Running the Example
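Run the script with Python from the environment where the LangDB client and CrewAI are installed. Once `kickoff()` finishes, both agents, both tasks, and every underlying model call should appear as a single trace in your LangDB project, as described in the next section.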
Viewing Traces in LangDB
After running your CrewAI application, you can view detailed traces in the LangDB dashboard:
LangDB Trace Dashboard
What You’ll See
- Agent Interactions: Complete flow of agent conversations and task handoffs
- Tool Usage: Which tools were called, their inputs, and outputs
- Model Calls: Detailed LLM interactions with prompts and responses
- Performance Metrics: Latency, token usage, and cost tracking
- Execution Timeline: Step-by-step view of the entire workflow
Troubleshooting
Common Issues
- No traces appearing: Ensure `init()` is called before any CrewAI imports, as shown in the sketch below
- Authentication errors: Verify your LangDB API key and project ID
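A minimal ordering sketch for the first issue, again assuming the `pylangdb.crewai` import path:

```python
# Correct order: initialize tracing first...
from pylangdb.crewai import init

init()

# ...then import and use CrewAI.
from crewai import Agent, Crew, Task
```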
Resources
- LangDB Documentation: Official LangDB documentation and guides
- LangDB Guides: Step-by-step tutorials for building AI agents
- GitHub Examples: Complete CrewAI integration examples
- LangDB Dashboard: Access your traces and analytics
- Model Catalog: Browse 350+ available language models
- Enterprise Features: Self-hosted options and enterprise capabilities
Next Steps
This guide covered the basics of integrating LangDB AI Gateway with CrewAI. To further enhance your AI workflows, explore:
- Virtual Models: Create custom model configurations with routing strategies
- Guardrails & Safety: Implement content filtering and compliance controls
- Production Deployment: Configure fallbacks, retries, and load balancing