This guide provides instructions for integrating AnythingLLM with the TrueFoundry AI Gateway.

What is AnythingLLM?

AnythingLLM is an all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more. It provides a comprehensive platform for running AI applications locally or in the cloud with complete control over your data and models.

Key Features of AnythingLLM

  • Custom Model Support: Stay fully local with the built-in LLM provider and run any model you want, or use enterprise models from OpenAI, Azure, AWS, and more, with flexible model switching
  • Universal Document Support: Work with virtually any document type, including PDFs, Word documents, CSV files, codebases, and much more. Import documents from online sources to make all your business data accessible
  • Complete Privacy Control: AnythingLLM ships with sensible, locally running defaults for your LLM, embedder, vector database, and storage. Nothing is shared unless you explicitly allow it

Prerequisites

Before integrating AnythingLLM with TrueFoundry, ensure you have:
  1. TrueFoundry Account: Create a TrueFoundry account and follow the instructions in our Gateway Quick Start Guide
  2. AnythingLLM Installation: Set up AnythingLLM using either the Desktop application or Docker deployment

Integration Steps

This guide assumes you have AnythingLLM installed and running, and have obtained your TrueFoundry AI Gateway base URL.

Step 1: Access AnythingLLM LLM Settings

  1. Launch your AnythingLLM application (Desktop or Docker).
  2. Navigate to Settings and go to LLM Preference:
(Screenshot: AnythingLLM settings page showing the LLM provider selection interface)

Step 2: Configure Generic OpenAI Provider

  1. In the LLM provider search box, type “Generic OpenAI” and select it from the available options.
  2. Configure the TrueFoundry connection with the following settings:
    • Base URL: Enter your TrueFoundry Gateway base URL
    • Chat Model Name: Enter the model name from the unified code snippet (e.g., openai-main/gpt-4o)
    • Token Context Window: Set based on your model’s limits (e.g., 16000, 128000)
    • Max Tokens: Configure according to your needs (e.g., 1024, 2048)
You will get your base URL and model name directly from the unified code snippet:
(Screenshot: TrueFoundry playground showing the unified code snippet with base URL and model name)
Copy the base URL and model ID and paste them into AnythingLLM’s configuration fields.
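Before saving these values in AnythingLLM, you can optionally confirm that they work by calling the gateway directly. The sketch below is a minimal example, assuming the gateway exposes an OpenAI-compatible chat completions endpoint, that you have the openai Python package installed, and that your unified code snippet also provides an API key or token; the base URL, key, and model name shown are placeholders to be replaced with the values from your own snippet.

```python
# Minimal sketch: verify the base URL and model name copied from the
# TrueFoundry unified code snippet before pasting them into AnythingLLM.
# The base URL, API key, and model name below are placeholders -- use the
# values from your own playground snippet.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-truefoundry-gateway-base-url",  # placeholder base URL
    api_key="your-truefoundry-api-key",                    # placeholder key/token
)

response = client.chat.completions.create(
    model="openai-main/gpt-4o",  # model name exactly as shown in the snippet
    messages=[{"role": "user", "content": "Hello from my AnythingLLM setup check"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```

If this call returns a reply, the same base URL and model name should work in AnythingLLM's Generic OpenAI fields.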

Step 3: Test Your Integration

  1. Save your configuration in AnythingLLM.
  2. Create a new workspace or open an existing one to test the integration:
(Screenshot: AnythingLLM chat interface showing a successful test message via the TrueFoundry integration)
  3. Send a test message to verify that AnythingLLM is successfully communicating with TrueFoundry’s AI Gateway.
Your AnythingLLM application is now integrated with TrueFoundry’s AI Gateway and ready for AI chat, RAG, and agent operations.
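If the test message fails, it can help to separate gateway problems from AnythingLLM configuration problems. One way to do this, again assuming an OpenAI-compatible endpoint and using placeholder values, is to call the gateway directly with the same settings you entered; if the direct call succeeds but the workspace chat does not, revisit the LLM Preference fields (base URL, model name, token limits) in AnythingLLM.

```python
# Troubleshooting sketch: call the gateway directly with the same values
# you configured in AnythingLLM.  All values below are placeholders.
from openai import OpenAI, APIError

client = OpenAI(
    base_url="https://your-truefoundry-gateway-base-url",  # placeholder base URL
    api_key="your-truefoundry-api-key",                    # placeholder key/token
)

try:
    reply = client.chat.completions.create(
        model="openai-main/gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": "Connectivity check"}],
        max_tokens=32,
    )
    print("Gateway reachable:", reply.choices[0].message.content)
except APIError as exc:
    # Authentication and model-name problems typically surface here.
    print("Gateway call failed:", exc)
```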