TrueFoundry AI Gateway is the proxy layer that sits between your applications and the LLM providers and MCP servers. It is an enterprise-grade platform that lets users access 1000+ LLMs through a unified interface while taking care of observability and governance.

[Architecture diagram: the gateway as a proxy between applications and multiple LLM providers]

Key Features

Unified API Interface

Call 1000+ LLMs using a single endpoint with unified API interface
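The unified interface means every provider is called with the same OpenAI-style request shape; only the model string changes. A minimal stdlib sketch, assuming an OpenAI-compatible `/chat/completions` endpoint on the gateway — the host, API key, and model IDs below are placeholders, so substitute the values from your own gateway dashboard:

```python
# Sketch: one request shape for every provider behind the gateway.
# GATEWAY_URL, API_KEY, and the model names are placeholders.
import json
import urllib.request

GATEWAY_URL = "https://<your-gateway-host>/api/llm/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    # The body is identical regardless of the backing provider; the
    # `model` string alone selects which LLM handles the call.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Same call shape, two different providers:
req_a = chat_request("openai-main/gpt-4o", "Summarize HTTP/1.1 in one line.")
req_b = chat_request("bedrock/claude-3-sonnet", "Summarize HTTP/1.1 in one line.")
# urllib.request.urlopen(req_a)  # uncomment once real credentials are set
```

Swapping providers is therefore a one-string change, which is what makes gateway-level load balancing and fallback possible without touching application code.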

API Keys Management

Generate and manage API keys for users/applications

Multimodal Inputs

Support for text, image, and audio inputs across compatible models
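Multimodal inputs are expressed as typed content parts inside a single message, following the OpenAI-style format that compatible models accept. A small sketch (the image URL is a placeholder):

```python
# Sketch of a multimodal user message in the OpenAI-style
# content-parts format; the image URL is a placeholder.
def multimodal_message(text: str, image_url: str) -> dict:
    # One user message can mix text and image parts; models without
    # image support will reject the image part.
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = multimodal_message("What is in this picture?", "https://example.com/cat.png")
```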

Access Control

Fine-grained access control and permissions management

Rate Limiting

Control model usage with flexible rate-limiting policies per user, model, or application

Load Balancing

Distribute requests across multiple model instances based on weight, latency or cost metrics.
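To illustrate the simplest of these strategies, here is a weight-based selection sketch — this is an illustrative toy, not TrueFoundry's implementation; the model names and weights are made up:

```python
# Toy illustration of weight-based routing (not TrueFoundry's code):
# each target model gets a share of traffic proportional to its weight.
import random

def pick_target(targets: dict[str, float], rng: random.Random) -> str:
    # targets maps model name -> routing weight.
    names, weights = zip(*targets.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded for reproducibility
picks = [pick_target({"gpt-4o": 0.8, "claude-3-sonnet": 0.2}, rng)
         for _ in range(1000)]
# Roughly 80% of picks go to gpt-4o, 20% to claude-3-sonnet.
```

Latency- and cost-based strategies replace the static weights with live metrics, but the selection step looks the same.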

Budget Limiting

Control spending and enforce cost limits for users, teams, and models

Guardrails

Content filtering and safety checks to keep model inputs and outputs safe and compliant

Observability & Metrics

OpenTelemetry-compliant metrics and logging for all requests.

Prompt Playground

Centralized prompt playground with versioning and management system

Batch Predictions

Process multiple requests efficiently with batch processing
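Batch endpoints generally take a JSONL input file with one request per line, in the OpenAI-style batch format. A sketch of preparing such a file — the model name and custom IDs are placeholders:

```python
# Sketch: building an OpenAI-style batch input file (JSONL, one
# request per line). Model name and custom IDs are placeholders.
import json

def batch_line(custom_id: str, model: str, prompt: str) -> str:
    # Each line is a self-contained request with an ID used to
    # match results back to inputs when the batch completes.
    return json.dumps({
        "custom_id": custom_id,
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    })

prompts = ["Classify: great product!", "Classify: arrived broken."]
lines = [batch_line(f"req-{i}", "openai-main/gpt-4o", p)
         for i, p in enumerate(prompts)]
jsonl = "\n".join(lines)  # write this to a file and upload it
```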

MCP Registry

Deploy and manage your own MCP servers with TrueFoundry AI Gateway.

Centralized Authn/Authz for all MCP Servers

One API key to access all MCP servers and their tools.

Virtual MCP Servers

Create virtual MCP servers combining specific tools from multiple MCP servers.

Agent Playground

Test agents by combining tools and models in the Playground

Build Agents with unified API for all MCP servers

Connect to MCP Servers with a single API in the gateway.

Rate Limiting and Observability for Tools

Coming Soon

Supported Model Providers

We integrate with 1000+ LLMs through the following providers.
If you don’t see the provider you need, there is a high chance it will still work through the Self Hosted or OpenAI provider integrations. Please reach out to us at support@truefoundry.com and we will be happy to guide you.

Gemini & Vertex AI

AWS Bedrock

Azure OpenAI

Azure AI Foundry

OpenAI

Cohere

Databricks

AI21

Anthropic

Together AI

xAI

DeepInfra

Perplexity AI

Mistral AI

Groq

Self Hosted

OpenRouter

SambaNova

Cerebras

Supported APIs

The following tables show which features are supported for each provider across different endpoints.
Legend:
  • Supported by the provider and TrueFoundry
  • Supported by the provider, but not by TrueFoundry
  • - : the provider does not support this feature

Chat Completion (/chat/completions)

Provider | Stream | Non Stream | Tools | JSON Mode | Schema Mode | Prompt Caching | Reasoning | Structured Output
OpenAI-
Azure OpenAI-
Anthropic--
Bedrock--
Vertex--
Cohere--
Gemini-
Groq-
AI21-----
Cerebras----
SambaNova----
Perplexity-AI---
Together-AI-
xAI
DeepInfra--
Embeddings (/embeddings)

Provider | String | List of String
OpenAI
Azure OpenAI
Anthropic--
Bedrock
Vertex
Cohere
Gemini--
Groq--
SambaNova
Together-AI
xAI--
DeepInfra
Image Generation (/images/generations)

Provider | Generate
OpenAI
Azure OpenAI
Bedrock
Vertex
Anthropic-
Cohere-
Gemini
Groq-
Together-AI
xAI-
DeepInfra
Image Edit (/images/edits)

Provider | Edit
OpenAI
Azure OpenAI
Bedrock
Vertex
Anthropic-
Cohere-
Gemini
Groq-
Together-AI
xAI-
DeepInfra
Image Variation (/images/variations)

Provider | Variation
OpenAI
Azure OpenAI-
Bedrock
Vertex-
Anthropic-
Cohere-
Gemini
Groq-
Together-AI
xAI-
DeepInfra
Audio Transcription (/audio/transcriptions)

Provider | Transcription
OpenAI
Azure OpenAI
Anthropic-
Bedrock-
Vertex
Cohere-
Gemini
Groq
Together-AI
xAI-
DeepInfra
DeepGram
Cartesia
ElevenLabs
Audio Translation (/audio/translations)

Provider | Translation
OpenAI
Azure OpenAI
Anthropic-
Bedrock-
Vertex
Cohere-
Gemini
Groq
Together-AI
xAI-
DeepInfra
Text to Speech (/audio/speech)

Provider | Text To Speech
OpenAI
Azure OpenAI
Anthropic-
Bedrock-
Vertex
Cohere-
Gemini
Groq
Together-AI
xAI-
DeepInfra
DeepGram
Cartesia
ElevenLabs
Rerank (/rerank)

Provider | Rerank
OpenAI-
Azure OpenAI-
Anthropic-
Bedrock
Vertex-
Cohere
Gemini-
Groq-
Together-AI
xAI-
DeepInfra
Batch (/batches)

Provider | Batch
OpenAI
Azure OpenAI
Anthropic
Bedrock
Vertex
Cohere
Gemini
Groq-
Cerebras-
Together-AI
xAI-
DeepInfra
Fine-Tuning (/fine_tuning/jobs)

Provider | Fine Tune
OpenAI
Azure OpenAI-
Anthropic-
Bedrock
Vertex
Cohere
Gemini-
Groq
Cerebras-
Together-AI
xAI-
DeepInfra
Files (/files)

Provider | Files
OpenAI
Azure OpenAI
Anthropic
Bedrock
Vertex
Cohere
Gemini
Groq
Cerebras-
Together-AI
xAI-
DeepInfra
Moderation (/moderations)

Provider | Moderation
OpenAI
Azure OpenAI-
Anthropic-
Bedrock-
Vertex-
Cohere
Gemini-
Groq-
Cerebras-
Together-AI
xAI-
DeepInfra
Model Response (/responses)

Provider | Model Response
OpenAI
Azure OpenAI
Anthropic-
Bedrock-
Vertex-
Cohere-
Gemini-
Groq
Cerebras-
Together-AI-
xAI-
DeepInfra-
Completion (/completions)

Provider | Completion
OpenAI-
Azure OpenAI-
Anthropic-
Bedrock-
Vertex-
Cohere-
Gemini-
Groq-
Cerebras
Together-AI
xAI-
DeepInfra
Provider | Live / Realtime API
Gemini
Vertex
OpenAI
Azure AI Foundry
Provider | Compaction API
OpenAI
Provider | Messages API
Anthropic

Ecosystem & Integrations

Discover how TrueFoundry connects with your favorite AI frameworks and tools to streamline your ML development workflow.

CrewAI

OpenAI Swarm

OpenAI Agents SDK

Phidata

Pydantic AI

LangChain

DSPy

Strands Agents

Deployment Options

The TrueFoundry AI Gateway can either be used as a SaaS offering or deployed on-premise.
  • SaaS Offering: Use the gateway directly as a SaaS offering by signing up on our website; you can find the instructions here.
  • Enterprise Deployment: For enterprise security and control, deploy the gateway in your own cloud or on-premise. You can find the architecture and deployment instructions here.

Frequently Asked Questions

What latency overhead does the gateway add?
The latency overhead is minimal, typically less than 5 ms. Our benchmarks show enterprise-grade performance that scales with your needs. Our SaaS offering is hosted in multiple regions across the world to ensure low latency and high availability. You can also deploy the gateway on-premise or on any cloud provider in a region closer to your users.

Can the gateway be deployed on-premise?
Yes, the AI Gateway supports on-premise deployment on any infrastructure or cloud provider, giving you complete control over your AI operations.

Can I use self-hosted models with the gateway?
You can easily integrate any OpenAI-compatible self-hosted model. Check our self-hosted models guide for detailed instructions.

Can the gateway be used without the rest of the TrueFoundry platform?
Yes, the AI Gateway can be used as a standalone solution. The full MLOps platform is useful if you need features like model deployment (traditional models and LLMs), model training, LLM fine-tuning, or training/data-processing workflows.