Provider capabilities
The table below summarizes gateway support for this endpoint by provider.
Legend:
- ✅ Supported by the provider and Truefoundry
- Supported by the provider, but not by Truefoundry
- Provider does not support this feature
| Provider | Fine Tune |
|---|---|
| OpenAI | ✅ |
| Azure OpenAI | |
| Anthropic | |
| Bedrock | |
| Vertex | ✅ |
| Cohere | |
| Gemini | |
| Groq | |
| Cerebras | |
| Together-AI | |
| xAI | |
| DeepInfra | |
Client Setup
All providers use the OpenAI SDK with provider-specific headers. Choose your provider to get started.
Training File Format
Create a JSONL file with one JSON object per line. Each line represents a conversation pair for training.
Workflow Steps
The finetuning process follows these steps for all providers:
- Upload: Upload training file → Get file ID
- Create: Create finetune job → Get job ID
- Monitor: Check status until complete
- Use: Use the fine-tuned model for inference
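As a concrete starting point, the sketch below writes a two-example training file in the JSONL format described above. The conversation contents are illustrative, and the `messages` schema follows OpenAI's chat fine-tuning format; other providers may expect a different shape.

```python
import json

# Two toy conversation pairs; real training data would be much larger.
examples = [
    {
        "messages": [
            {"role": "user", "content": "What is 2 + 2?"},
            {"role": "assistant", "content": "2 + 2 = 4."},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "Name the capital of France."},
            {"role": "assistant", "content": "Paris."},
        ]
    },
]

# Write one JSON object per line (JSONL).
with open("training.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```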
Step-by-Step Examples
1. Upload Training File
2. Create Finetune Job
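The create step takes the file ID returned by the upload. The request-body sketch below uses illustrative placeholders for the file ID, base model name, and hyperparameter values; use values your provider accepts.

```python
# Request-body sketch for creating a finetune job. The file ID, model
# name, and hyperparameter values are illustrative placeholders.
create_request = {
    "training_file": "file-abc123",      # ID returned by the upload step
    "model": "gpt-4o-mini",              # base model to fine-tune
    "hyperparameters": {"n_epochs": 3},  # optional; see Hyperparameters below
}

# With the client from the upload step (needs a reachable gateway):
# job = client.fine_tuning.jobs.create(**create_request)
# job_id = job.id
```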
3. Monitor Job Status
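A polling sketch for the monitor step. The terminal statuses match the Status Reference below; the retrieve call itself is commented out because it requires a live gateway, and `job_id` is the ID from the create step.

```python
# Terminal states from the Status Reference; queued/running mean
# the job is still in progress.
TERMINAL_STATUSES = {"succeeded", "failed"}

def is_terminal(status: str) -> bool:
    """Return True once the job has finished, successfully or not."""
    return status in TERMINAL_STATUSES

# Polling sketch, with the client and job ID from the earlier steps
# (commented out because it needs a reachable gateway):
# import time
# while True:
#     job = client.fine_tuning.jobs.retrieve(job_id)
#     print("status:", job.status)
#     if is_terminal(job.status):
#         break
#     time.sleep(30)
```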
Status Reference
- `queued`: Job is queued and waiting to start
- `running`: Fine-tuning is in progress
- `succeeded`: Fine-tuning completed successfully
- `failed`: Fine-tuning failed
Hyperparameters
You can configure the following hyperparameters in the `hyperparameters` object:
- `n_epochs`: Number of training epochs
- `batch_size`: Number of training examples per batch
- `learning_rate_multiplier`: Multiplier applied to the provider's base learning rate
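For example, a `hyperparameters` object might look like the sketch below. The values are illustrative, and providers may restrict which fields they accept.

```python
# Illustrative values -- providers may restrict which fields they accept.
hyperparameters = {
    "n_epochs": 3,                    # passes over the training data
    "batch_size": 8,                  # examples per gradient update
    "learning_rate_multiplier": 0.1,  # scales the provider's base learning rate
}

# Passed in the job-creation request, e.g.:
# client.fine_tuning.jobs.create(..., hyperparameters=hyperparameters)
```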