
Key Features
Unified API Interface
Call 1,000+ LLMs through a single endpoint with a unified API interface (a minimal example follows this feature list)
API Key Management
Generate and manage API keys for users/applications
Multimodal Inputs
Support for text, image, and audio inputs across compatible models
Access Control
Fine-grained access control and permissions management
Rate Limiting
Control model usage with flexible rate-limiting policies per user, model, or application
Load Balancing
Use virtual models to spread traffic across targets by weight, latency, or priority, with retries and fallbacks.
Budget Limiting
Control spending and enforce cost limits for users, teams, and models
Guardrails
Content filtering and safety checks to keep requests and responses safe and compliant
Observability & Metrics
OpenTelemetry-compliant metrics and logging for all requests
Prompt Playground
Centralized prompt playground with versioning and management system
Batch Predictions
Process multiple requests efficiently with batch processing
MCP Registry
Deploy and manage your own MCP servers with TrueFoundry AI Gateway.
Centralized Authn/Authz for all MCP Servers
One API key to access all MCP servers and their tools.
Virtual MCP Servers
Create virtual MCP servers combining specific tools from multiple MCP servers.
Agent Playground
Test agents by adding tools and models from the Playground
Build Agents with unified API for all MCP servers
Connect to MCP Servers with a single API in the gateway.
Rate Limiting and Observability for Tools
Coming Soon
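As an illustration of the unified API and virtual models, here is a minimal sketch using the OpenAI Python SDK pointed at the gateway. The base URL, API key, and model identifiers (`openai-main/gpt-4o`, `my-virtual-model`) are placeholders rather than real values; copy the actual ones from your gateway's code-snippet panel.

```python
from openai import OpenAI

# Point the standard OpenAI client at the gateway.
# Base URL and API key below are placeholders; use the values
# shown for your account in the TrueFoundry console.
client = OpenAI(
    base_url="https://your-gateway.example.com/v1",
    api_key="tfy-xxxx",
)

# Same client, different targets: the model string selects the provider.
# "openai-main/gpt-4o" and "my-virtual-model" are hypothetical names;
# a virtual model can load-balance across several underlying targets.
for model in ["openai-main/gpt-4o", "my-virtual-model"]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(model, "->", resp.choices[0].message.content)
```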
Supported Model Providers
We integrate with 1,000+ LLMs through providers including OpenAI, Azure OpenAI, Anthropic, AWS Bedrock, Google Vertex AI, Gemini, Cohere, Groq, Together AI, xAI, DeepInfra, and more (see the per-API support tables below).
Supported APIs
The following accordions summarize provider support for each gateway endpoint. Each section links to the full guide for that API (same order as Supported APIs in the sidebar).

Legend:
- ✅ Supported by the provider and TrueFoundry
- Provided by the provider, but not yet supported by TrueFoundry
- Blank: the provider does not support this feature
Chat Completion (/chat/completions)
Documentation: Chat Completions API
| Provider | Stream | Non Stream | Tools | JSON Mode | Schema Mode | Prompt Caching | Reasoning | Structured Output |
|---|---|---|---|---|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
| Azure OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
| Anthropic | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ||
| Bedrock | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ||
| Vertex | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ||
| Cohere | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ||
| Gemini | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
| Groq | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
| AI21 | ✅ | ✅ | ✅ | |||||
| Cerebras | ✅ | ✅ | ✅ | ✅ | ||||
| SambaNova | ✅ | ✅ | ✅ | ✅ | ||||
| Perplexity-AI | ✅ | ✅ | ✅ | ✅ | ✅ | |||
| Together-AI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
| xAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| DeepInfra | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
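For reference, here is a hedged sketch of a streaming chat completion with a tool definition, matching the Stream and Tools columns above and assuming the OpenAI chat completions schema. The connection details, model name, and tool are illustrative placeholders.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder gateway URL
    api_key="tfy-xxxx",                              # placeholder gateway API key
)

# Hypothetical tool definition in OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Streaming request; the model name is a placeholder.
stream = client.chat.completions.create(
    model="openai-main/gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```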
Embedding (/embeddings)
Documentation: Embeddings API
| Provider | String | List of String |
|---|---|---|
| OpenAI | ✅ | ✅ |
| Azure OpenAI | ✅ | ✅ |
| Anthropic | ||
| Bedrock | ✅ | ✅ |
| Vertex | ✅ | ✅ |
| Cohere | ✅ | ✅ |
| Gemini | ||
| Groq | ||
| SambaNova | ||
| Together-AI | ✅ | ✅ |
| xAI | ||
| DeepInfra |
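A minimal sketch of both input shapes in the table (a single string and a list of strings), using the OpenAI SDK against the gateway; the model name and connection details are placeholders.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder
    api_key="tfy-xxxx",                              # placeholder
)

# Single string input; the model name is a placeholder.
single = client.embeddings.create(
    model="openai-main/text-embedding-3-small",
    input="The AI Gateway routes requests to many providers.",
)
print(len(single.data[0].embedding))

# List-of-strings input; one embedding is returned per item.
batch = client.embeddings.create(
    model="openai-main/text-embedding-3-small",
    input=["first document", "second document"],
)
print([len(item.embedding) for item in batch.data])
```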
Batch (/batches)
Documentation: Batch API
| Provider | Batch |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | ✅ |
| Anthropic | |
| Bedrock | ✅ |
| Vertex | ✅ |
| Cohere | |
| Gemini | |
| Groq | |
| Cerebras | |
| Together-AI | |
| xAI | |
| DeepInfra |
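A hedged sketch of the OpenAI-style batch flow through the gateway: upload a JSONL file of requests, then create a batch against /chat/completions. The file contents, model name, and connection details are placeholders; consult the Batch API guide for the exact format expected per provider.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder
    api_key="tfy-xxxx",                              # placeholder
)

# requests.jsonl holds one request per line, e.g.:
# {"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "openai-main/gpt-4o-mini",
#           "messages": [{"role": "user", "content": "Hi"}]}}
batch_input = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

batch = client.batches.create(
    input_file_id=batch_input.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)

# Poll later and download the output file once the batch completes:
# completed = client.batches.retrieve(batch.id)
```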
Fine Tune
Documentation: Finetune API
| Provider | Fine Tune |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | |
| Anthropic | |
| Bedrock | |
| Vertex | ✅ |
| Cohere | |
| Gemini | |
| Groq | |
| Cerebras | |
| Together-AI | |
| xAI | |
| DeepInfra |
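A hedged sketch of creating a fine-tuning job through the gateway, assuming OpenAI-style semantics; the training file ID and base model name are placeholders, and the models available for fine-tuning depend on the provider.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder
    api_key="tfy-xxxx",                              # placeholder
)

# The training file is uploaded beforehand with purpose="fine-tune";
# the file ID and base model name below are placeholders.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)
```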
Model Response (/responses)
Documentation: Responses API
| Provider | Model Response |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | ✅ |
| Anthropic | |
| Bedrock | |
| Vertex | |
| Cohere | |
| Gemini | |
| Groq | |
| Cerebras | |
| Together-AI | |
| xAI | |
| DeepInfra |
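A minimal sketch of the Responses API via the gateway, assuming the OpenAI responses schema; the model name and connection details are placeholders.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder
    api_key="tfy-xxxx",                              # placeholder
)

# Model name is a placeholder.
response = client.responses.create(
    model="openai-main/gpt-4o",
    input="Summarize what an AI gateway does in one sentence.",
)
print(response.output_text)
```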
Image Generation (/images/generations)
Documentation: Image Generation API
| Provider | Generate |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | ✅ |
| Bedrock | ✅ |
| Vertex | ✅ |
| Anthropic | |
| Cohere | |
| Gemini | |
| Groq | |
| Together-AI | |
| xAI | |
| DeepInfra |
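A hedged sketch of image generation through the gateway using the OpenAI SDK; the model name, size, and connection details are placeholders, and supported sizes and response formats vary by provider.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder
    api_key="tfy-xxxx",                              # placeholder
)

# Model name is a placeholder; size support varies by provider.
image = client.images.generate(
    model="openai-main/dall-e-3",
    prompt="A watercolor painting of a lighthouse at dawn",
    size="1024x1024",
    n=1,
)
print(image.data[0].url)
```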
Image Edit (/images/edits)
Documentation: Image Edit API
| Provider | Edit |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | ✅ |
| Bedrock | ✅ |
| Vertex | ✅ |
| Anthropic | |
| Cohere | |
| Gemini | |
| Groq | |
| Together-AI | |
| xAI | |
| DeepInfra |
Image Variation (/images/variations)
Documentation: Image Variation API
| Provider | Variation |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | |
| Bedrock | ✅ |
| Vertex | |
| Anthropic | |
| Cohere | |
| Gemini | |
| Groq | |
| Together-AI | |
| xAI | |
| DeepInfra |
Text To Speech
Documentation: Text to Speech API
| Provider | Text To Speech |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | ✅ |
| Azure AI Foundry | ✅ |
| Anthropic | |
| Bedrock | |
| Vertex | ✅ |
| Cohere | |
| Gemini | ✅ |
| Groq | ✅ |
| Together-AI | |
| xAI | |
| DeepInfra | |
| DeepGram | ✅ |
| Cartesia | ✅ |
| ElevenLabs | ✅ |
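A hedged sketch of text-to-speech through the gateway using the OpenAI SDK; the model and voice names are placeholders, and available voices vary by provider.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder
    api_key="tfy-xxxx",                              # placeholder
)

# Model and voice names are placeholders.
speech = client.audio.speech.create(
    model="openai-main/tts-1",
    voice="alloy",
    input="Welcome to the AI Gateway.",
)
with open("speech.mp3", "wb") as f:
    f.write(speech.content)
```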
Audio Translation
Documentation: Audio Translation API
| Provider | Translation |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | ✅ |
| Azure AI Foundry | ✅ |
| Anthropic | |
| Bedrock | |
| Vertex | |
| Cohere | |
| Gemini | |
| Groq | ✅ |
| Together-AI | |
| xAI | |
| DeepInfra |
Speech to Text
Documentation: Speech to Text API
| Provider | Transcription |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | ✅ |
| Azure AI Foundry | ✅ |
| Anthropic | |
| Bedrock | |
| Vertex | |
| Cohere | |
| Gemini | |
| Groq | ✅ |
| Together-AI | |
| xAI | |
| DeepInfra | |
| DeepGram | ✅ |
| Cartesia | ✅ |
| ElevenLabs | ✅ |
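A minimal transcription sketch through the gateway using the OpenAI SDK; the model name and connection details are placeholders, and "audio.mp3" stands in for a local file to transcribe.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder
    api_key="tfy-xxxx",                              # placeholder
)

# Model name is a placeholder; "audio.mp3" is a local audio file.
with open("audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="openai-main/whisper-1",
        file=audio_file,
    )
print(transcript.text)
```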
Live / Realtime API
Documentation: Live / Realtime API
| Provider | Live / Realtime API |
|---|---|
| Gemini | ✅ |
| Vertex | ✅ |
| OpenAI | ✅ |
| Azure AI Foundry | ✅ |
Files (/files)
Documentation: Files API
| Provider | Files |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | |
| Anthropic | ✅ |
| Bedrock | ✅ |
| Vertex | ✅ |
| Cohere | |
| Gemini | |
| Groq | ✅ |
| Cerebras | |
| Together-AI | |
| xAI | |
| DeepInfra |
Rerank (/rerank)
Documentation: Rerank API
| Provider | Rerank |
|---|---|
| OpenAI | |
| Azure OpenAI | |
| Anthropic | |
| Bedrock | ✅ |
| Vertex | |
| Cohere | ✅ |
| Gemini | |
| Groq | |
| Together-AI | |
| xAI | |
| DeepInfra |
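Rerank is not part of the OpenAI SDK, so the sketch below calls the gateway's /rerank endpoint directly over HTTP with a Cohere-style payload (query plus documents). The URL, headers, model name, and body shape are assumptions for illustration only; see the Rerank API guide for the exact schema.

```python
import requests

# All values below are placeholders; the payload mirrors the common
# Cohere-style rerank shape and is an assumption, not the exact gateway schema.
resp = requests.post(
    "https://your-gateway.example.com/v1/rerank",
    headers={"Authorization": "Bearer tfy-xxxx"},
    json={
        "model": "cohere/rerank-english-v3.0",
        "query": "How do I rotate an API key?",
        "documents": [
            "API keys can be rotated from the settings page.",
            "The gateway supports 1000+ models.",
        ],
        "top_n": 1,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```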
Moderation (/moderations)
Documentation: Moderation API
| Provider | Moderation |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | |
| Anthropic | |
| Bedrock | |
| Vertex | |
| Cohere | |
| Gemini | |
| Groq | |
| Cerebras | |
| Together-AI | |
| xAI | |
| DeepInfra |
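A minimal moderation sketch through the gateway using the OpenAI SDK; the model name and connection details are placeholders.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder
    api_key="tfy-xxxx",                              # placeholder
)

# Model name is a placeholder.
result = client.moderations.create(
    model="openai-main/omni-moderation-latest",
    input="Some user-generated text to screen before sending to a model.",
)
print(result.results[0].flagged)
```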
Compaction API
Documentation: Compaction API
| Provider | Compaction API |
|---|---|
| OpenAI | ✅ |
Messages API
Documentation: Messages API
| Provider | Messages API |
|---|---|
| Anthropic | ✅ |
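Since the Messages API follows Anthropic's schema, the hedged sketch below uses the Anthropic Python SDK pointed at the gateway. The base URL, API key, and model name are placeholders, and routing the Anthropic SDK through the gateway this way is an assumption for illustration; see the Messages API guide for the supported setup.

```python
from anthropic import Anthropic

# Placeholders: use your gateway URL and gateway API key.
client = Anthropic(
    base_url="https://your-gateway.example.com",
    api_key="tfy-xxxx",
)

# Model name is a placeholder.
message = client.messages.create(
    model="anthropic/claude-sonnet-4-20250514",
    max_tokens=256,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(message.content[0].text)
```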
Proxy API (/proxy)
Documentation: Proxy API
Forward provider-native requests through the gateway while keeping logging, rate limiting, and budget controls. See the guide for setup, headers, and examples by provider.
Deployment Options
You can run the AI Gateway as fully managed SaaS, keep LLM request–response data in your own object storage while TrueFoundry operates the gateway, or host the gateway plane (and optionally more of the stack) in your cloud or on-prem for stricter data residency and control. Each option differs in who hosts the infrastructure, where traffic flows, and the pricing tier. Read the full comparison, including a scenario table, diagrams, and operational notes, in AI Gateway deployment options. For background on how the gateway fits into the platform, see gateway plane architecture. To start on managed SaaS, follow the quick start.

Frequently Asked Questions
What's the performance impact of using the gateway?
The latency overhead is minimal, typically less than 5 ms. Our benchmarks show enterprise-grade performance that scales with your needs. Our SaaS offering is hosted in multiple regions across the world to ensure low latency and high availability, and you can also deploy the gateway on-premise or on any cloud provider in a region closer to your users.

Can I deploy the gateway on-premise?
Yes, the AI Gateway supports on-premise deployments on any infrastructure or cloud provider, giving you complete control over your AI operations.
How do I integrate my self-hosted models?
You can easily integrate any OpenAI-compatible self-hosted model. Check our self-hosted models guide for detailed instructions.
Can I use the gateway without the full MLOps platform?
Yes, the AI Gateway can be used as a standalone solution. You can adopt the full MLOps platform if you also need features like model deployment (traditional models and LLMs), model training, LLM fine-tuning, or training/data-processing workflows.