AutoDeploy: An LLM Agent for GenAI Deployments
Deploying applications is often time-consuming, requiring developers and data scientists to navigate complex tooling before they begin their work. For example, a data scientist who wants to experiment with Redis may need to talk to the platform team to provision ElastiCache on AWS, which can introduce delays and dependencies. While deploying a Helm chart on Kubernetes is a flexible alternative, it requires domain expertise many data scientists may not have. TrueFoundry's Auto Deploy feature eliminates these challenges, enabling rapid deployment without requiring deep infrastructure knowledge. Whether you need to deploy a specific codebase, an open-source project, or a broader technology solution, TrueFoundry streamlines the process so you can focus on what truly matters—building and experimenting.
Deploy the Way You Want
TrueFoundry's Auto Deploy is designed to cater to different developer needs, ensuring a fast and efficient deployment process at every level.

Foundational Layer: Core Deployment Options
The foundational layer of TrueFoundry's Auto Deploy consists of three primary deployment options that are the basis for all other deployment types.
Code Base Deployment: Deploy a Git Repository
If you have a specific codebase, TrueFoundry automates the deployment by identifying entry points, generating a Dockerfile if one is not present, detecting the necessary environment variables and configuration, and then handling manifest generation and deployment on TrueFoundry.
Example:
"I want to deploy GitHub - simonqian/react-helloworld: react.js hello world "
Provide the repository URL, and TrueFoundry will take care of the rest—ensuring a smooth and rapid deployment with minimal effort.
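To make the code-base flow concrete, here is a minimal sketch of the kind of entry-point detection and Dockerfile generation described above, assuming a Node.js project like the hello-world repository; the heuristics and the generated Dockerfile contents are illustrative assumptions, not TrueFoundry's actual implementation.

```python
import json
from pathlib import Path

def ensure_dockerfile(repo_dir: str) -> str:
    """Return the repo's Dockerfile, generating a minimal one for a Node.js
    project if none exists. Purely illustrative of the Auto Deploy idea."""
    repo = Path(repo_dir)
    dockerfile = repo / "Dockerfile"
    if dockerfile.exists():
        return dockerfile.read_text()

    package_json = repo / "package.json"
    if not package_json.exists():
        raise ValueError("No Dockerfile and no recognised entry point found")

    # Pick an entry point from package.json scripts (hypothetical heuristic).
    scripts = json.loads(package_json.read_text()).get("scripts", {})
    start_cmd = "npm start" if "start" in scripts else "node index.js"

    content = "\n".join([
        "FROM node:20-alpine",
        "WORKDIR /app",
        "COPY package*.json ./",
        "RUN npm ci",
        "COPY . .",
        f'CMD ["sh", "-c", "{start_cmd}"]',
    ])
    dockerfile.write_text(content)
    return content
```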

Helm Chart Deployment: Deploy a Helm Chart
For applications packaged as Helm charts, TrueFoundry streamlines deployment by analyzing the chart's values file and documentation and asking the user targeted questions to generate a customized values file. After deployment, it generates contextual documentation to help developers connect to and use the deployed software effectively.
Example:
"I want to deploy oci://registry-1.docker.io/bitnamicharts/redis."
Provide the Helm chart URL, and TrueFoundry ensures a reliable and efficient deployment.
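As an illustration of that question-and-answer step, the sketch below collects a couple of answers and turns them into a values override for the Redis chart. The specific questions are assumptions, not the agent's actual prompts, though the keys shown (auth.enabled, replica.replicaCount) do appear in the chart's default values.

```python
import yaml  # pip install pyyaml

def build_redis_overrides() -> str:
    """Ask a few targeted questions and write a values override file
    (illustrative only; not TrueFoundry's actual question flow)."""
    answers = {
        "auth_enabled": input("Enable Redis authentication? [y/N] ").lower() == "y",
        "replicas": int(input("How many read replicas? [default 3] ") or 3),
    }
    overrides = {
        "auth": {"enabled": answers["auth_enabled"]},
        "replica": {"replicaCount": answers["replicas"]},
    }
    values_yaml = yaml.safe_dump(overrides, sort_keys=False)
    with open("redis-values.yaml", "w") as f:
        f.write(values_yaml)
    return values_yaml
```

The resulting redis-values.yaml can then be supplied at install time, for example with helm install redis oci://registry-1.docker.io/bitnamicharts/redis -f redis-values.yaml.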

ML Model Deployment: Deploy a Model from Hugging Face
For AI/ML workloads, TrueFoundry enables seamless deployment of models directly from Hugging Face, either by generating a FastAPI code base to serve the model or by using an off-the-shelf model server such as vLLM.
Example:
"I want to deploy mistralai/Mistral-7B-Instruct-v0.3 · Hugging Face "
Provide the model link, and TrueFoundry will handle the rest of the deployment.
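For a sense of what a generated service might look like, here is a minimal sketch of a FastAPI code base wrapping the model above via the Transformers pipeline API; the endpoint shape and loading details are illustrative assumptions, not TrueFoundry's generated code.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.3"

app = FastAPI()
# Load the model once at startup; device_map="auto" places it on available GPUs.
generator = pipeline("text-generation", model=MODEL_ID, device_map="auto")

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 128

@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    outputs = generator(req.prompt, max_new_tokens=req.max_new_tokens)
    return {"completion": outputs[0]["generated_text"]}
```

For models that vLLM supports, the same result can be achieved with its OpenAI-compatible server instead of custom FastAPI code.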
Project Deployment
Building on the foundational layers of code and Helm deployments, TrueFoundry allows developers to deploy specific infrastructure components like Redis and Qdrant or full application stacks like Langfuse.
Example:
"I want to deploy Qdrant."
Specify the project, and TrueFoundry will deploy it with best-practice configurations.
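Once the project is deployed, connecting to it is ordinary client usage. The sketch below uses the standard qdrant-client package against a placeholder endpoint to create a collection and list existing ones; the URL stands in for whatever endpoint your deployment exposes.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

# Placeholder endpoint; substitute the URL of your deployed Qdrant instance.
client = QdrantClient(url="https://qdrant.example.internal:6333")

# Create a collection sized for 768-dimensional embeddings.
client.create_collection(
    collection_name="documents",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
)

print(client.get_collections())
```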

Use Case Deployment
For developers who require a specific type of technology but have not selected a particular project, TrueFoundry builds upon the foundational layers to deploy the most appropriate solution based on the requirement.
Example:
"I want to deploy a vector database."
"I want to deploy an OCR model."
TrueFoundry streamlines the selection and deployment of the right tools, reducing setup time and ensuring a tailored solution for your use case.
Auto-Debugging: Closing the Loop on Auto Deploy
TrueFoundry is closing the loop on Auto Deploy with an integrated auto-debugger that monitors deployment logs, metrics, and events. If an issue is detected, the system can iteratively diagnose and apply corrective actions, ensuring the deployment is operational with minimal manual intervention.
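Conceptually, the auto-debugger behaves like the loop sketched below. Every function here is a hypothetical stand-in for the platform's log, metric, and event APIs and for the LLM-driven diagnosis step, so treat it as a description of the control flow rather than the actual implementation.

```python
import time

def is_healthy(deployment_id: str) -> bool:
    # Stand-in: query deployment status (e.g. readiness of all replicas).
    ...

def fetch_signals(deployment_id: str) -> str:
    # Stand-in: collect recent logs, metrics, and events for diagnosis.
    ...

def propose_fix(signals: str) -> dict:
    # Stand-in: ask an LLM to map symptoms (OOMKilled, CrashLoopBackOff,
    # missing env var) to a manifest patch.
    ...

def apply_fix(deployment_id: str, patch: dict) -> None:
    # Stand-in: patch the manifest and trigger a redeploy.
    ...

def auto_debug(deployment_id: str, max_attempts: int = 3) -> bool:
    """Iteratively diagnose and correct a failing deployment."""
    for _ in range(max_attempts):
        if is_healthy(deployment_id):
            return True
        patch = propose_fix(fetch_signals(deployment_id))
        apply_fix(deployment_id, patch)
        time.sleep(30)  # give the redeploy time to settle
    return is_healthy(deployment_id)
```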
Why Choose TrueFoundry's Auto Deploy?
✅ Speed – Deploy applications in minutes, not hours
✅ Simplicity – No need for extensive infrastructure knowledge
✅ Flexibility – Deploy from code, Helm charts, ML models, specific projects, or broader use cases
With TrueFoundry's Auto Deploy, you can focus on writing code and delivering features while the platform manages the deployment complexities. Whether deploying a GitHub project, an open-source tool like Redis or Qdrant, or a vector search or OCR model, TrueFoundry streamlines the deployment process.
