- Unified API Access: Connect to 250+ LLMs (OpenAI, Claude, Gemini, Groq, Mistral) through one API
- Low Latency: Sub-3ms internal latency with intelligent routing and load balancing
- Enterprise Security: SOC 2, HIPAA, GDPR compliance with RBAC and audit logging
- Quota and Cost Management: Token-based quotas, rate limiting, and comprehensive usage tracking
- Observability: Full request/response logging, metrics, and traces with customizable retention
How TrueFoundry Integrates with CrewAI
Installation & Setup
1. Install CrewAI
2. Get a TrueFoundry Access Token
   - Sign up for a TrueFoundry account
   - Follow the steps in the Quick start guide
3. Configure CrewAI with TrueFoundry (see the configuration sketch below)
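A minimal configuration sketch is shown below. It assumes the TrueFoundry AI Gateway exposes an OpenAI-compatible endpoint and that CrewAI's `LLM` class is pointed at it via `base_url`; the gateway URL, model id, and environment variable name are placeholders to replace with the values from your TrueFoundry account:

```python
# pip install crewai
import os
from crewai import LLM

# Point CrewAI's LLM at the TrueFoundry AI Gateway.
# The base_url and model id below are placeholders; copy the exact
# values from the code snippet shown for your model in TrueFoundry.
truefoundry_llm = LLM(
    model="openai/gpt-4o",  # model id as configured in your gateway
    base_url="https://your-org.truefoundry.cloud/api/llm/v1",  # hypothetical gateway URL
    api_key=os.environ["TRUEFOUNDRY_API_KEY"],  # your TrueFoundry access token
)
```

Any agent that is given this `llm` object sends its completions through the gateway, so the gateway's quotas, logging, and rate limits apply to those calls.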

Complete CrewAI Example
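The full example from the original page is not reproduced in this extract; the following is a self-contained sketch under the same assumptions as above (placeholder gateway URL, model id, and environment variable):

```python
import os
from crewai import Agent, Task, Crew, LLM

# Route all agent calls through the TrueFoundry gateway (placeholder values).
llm = LLM(
    model="openai/gpt-4o",
    base_url="https://your-org.truefoundry.cloud/api/llm/v1",  # hypothetical gateway URL
    api_key=os.environ["TRUEFOUNDRY_API_KEY"],
)

# A single agent that uses the gateway-backed LLM.
researcher = Agent(
    role="Research Analyst",
    goal="Summarize recent developments in LLM gateways",
    backstory="You are a concise, accurate analyst.",
    llm=llm,
)

# One task assigned to that agent.
task = Task(
    description="Write a three-bullet summary of why teams put an AI gateway "
                "in front of their LLM providers.",
    expected_output="Three short bullet points.",
    agent=researcher,
)

# Assemble the crew and run it; every LLM call goes through TrueFoundry.
crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)
```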
Observability and Governance
Monitor your CrewAI agents through TrueFoundry’s metrics tab:
- Performance Metrics: Track key latency metrics like Request Latency, Time to First Token (TTFT), and Inter-Token Latency (ITL) with P99, P90, and P50 percentiles
- Cost and Token Usage: Gain visibility into your application’s costs with detailed breakdowns of input/output tokens and the associated expenses for each model
- Usage Patterns: Understand how your application is being used with detailed analytics on user activity, model distribution, and team-based usage
- Rate Limiting and Load Balancing: Set up rate limiting, load balancing, and fallbacks for your models
Tracing
For a more detailed guide to tracing, see getting-started-tracing. To enable tracing, add the Traceloop SDK:
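A minimal sketch using the Traceloop SDK is shown below; the app name, tracing endpoint, and authorization header are assumptions for illustration, so take the exact values from the getting-started-tracing guide:

```python
# pip install traceloop-sdk
import os
from traceloop.sdk import Traceloop

# Initialize the Traceloop SDK before creating your Crew so that
# LLM calls made by CrewAI agents are captured as traces.
# The endpoint and header below are placeholders; use the values
# from your TrueFoundry tracing setup.
Traceloop.init(
    app_name="crewai-truefoundry-demo",
    api_endpoint="https://your-org.truefoundry.cloud/api/tracing",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {os.environ['TRUEFOUNDRY_API_KEY']}"},
)
```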