Benchmarking Popular Open-Source LLMs: Llama2, Falcon, and Mistral

November 23, 2023

In this blog, we summarize the open-source LLMs that we have benchmarked. We benchmarked these models from a latency, cost, and requests-per-second perspective, which should help you evaluate whether a given model is a good fit for your business requirements. Please note that we don't cover qualitative performance in this article; there are different methods to compare LLMs, which can be found here.

Use Cases Benchmarked

The key use cases across which we benchmarked are:

  1. 1500 Input tokens, 100 output tokens (Similar to Retrieval Augmented Generation use cases)
  2. 50 Input tokens, 500 output tokens (Generation Heavy use cases)

Benchmarking Setup

For benchmarking, we used Locust, an open-source load-testing tool. Locust works by spawning users/workers that send requests in parallel. At the beginning of each test, we set the Number of Users and the Spawn Rate: the Number of Users is the maximum number of users that can run concurrently, while the Spawn Rate is how many new users are spawned per second.

In each benchmarking test for a deployment config, we started with 1 user and gradually increased the Number of Users for as long as the RPS kept increasing steadily. During each test, we also plotted the response times (in ms) and the total requests per second.
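As a minimal sketch of this setup, a locustfile for the 1500-input / 100-output case might look like the following (the endpoint path and payload shape assume text-generation-inference's /generate API; the prompt is a placeholder, not the actual benchmark prompt):

```python
# locustfile.py -- minimal load test against a text-generation-inference server
from locust import HttpUser, constant, task

class TGIUser(HttpUser):
    # No think time between requests: each simulated user fires the next
    # request as soon as the previous one completes.
    wait_time = constant(0)

    @task
    def generate(self):
        self.client.post(
            "/generate",
            json={
                "inputs": "<~1500-token prompt goes here>",  # placeholder
                "parameters": {"max_new_tokens": 100},
            },
        )
```

A run can then be started with, for example, `locust -f locustfile.py --host http://<tgi-host>:8080 --users 20 --spawn-rate 1`, raising `--users` between runs as described above.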

In each of the two deployment configurations (a single A100 40 GB for the 7B models, and 4x A100 40 GB for the 40B and 70B models), we used the Hugging Face text-generation-inference model server, version 0.9.4. The following parameters were passed to the text-generation-inference image for the different model configurations:
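As an illustrative sketch only (the exact flags used in the benchmark are not reproduced here; the model ID and token limits below are assumptions), a single-GPU launch could look like:

```python
# Sketch: launching text-generation-inference 0.9.4 via Docker from Python.
# Model ID and token limits are assumed values, not the benchmark's exact flags.
import subprocess

subprocess.run([
    "docker", "run", "--gpus", "all", "-p", "8080:80",
    "ghcr.io/huggingface/text-generation-inference:0.9.4",
    "--model-id", "meta-llama/Llama-2-7b-chat-hf",  # assumed model
    "--num-shard", "1",            # 1 GPU here; the 40B/70B configs used 4 shards
    "--max-input-length", "1500",  # matches the RAG-style use case
    "--max-total-tokens", "1600",  # combined input + output token budget
], check=True)
```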

LLMs Benchmarked

The 5 open-source LLMs benchmarked are as follows:

  1. Mistral-7B-Instruct
  2. Llama2-7B
  3. Llama2-13B
  4. Llama2-70B
  5. Falcon-40B-Instruct

The following table summarizes the benchmarking results:

| Model | Input / Output Tokens | Max Concurrent Users / Throughput (RPS) | GPU Type | AWS Machine Type (Cost/hr), us-east-1 | GCP Machine Type (Cost/hr), us-east4 | Azure Machine Type (Cost/hr), East US (Virginia) | SageMaker Instance Type (Cost/hr), us-east-1 |
|---|---|---|---|---|---|---|---|
| Mistral 7B | 1500 / 100 | 7 users / 2.8 | A100 40 GB (count: 1) | p4d.24xlarge (Spot: $7.79, On-Demand: $32.77) | a2-highgpu-1g (Spot: $1.21, On-Demand: $3.93) | Standard_NC24ads_A100_v4 (Spot: $0.95, On-Demand: $3.67) | ml.p4d.24xlarge (On-Demand: $37.68) |
| Mistral 7B | 50 / 500 | 40 users / 1.5 | A100 40 GB (count: 1) | p4d.24xlarge (Spot: $7.79, On-Demand: $32.77) | a2-highgpu-1g (Spot: $1.21, On-Demand: $3.93) | Standard_NC24ads_A100_v4 (Spot: $0.95, On-Demand: $3.67) | ml.p4d.24xlarge (On-Demand: $37.68) |
| Llama 2 7B | 1500 / 100 | 20 users / 3.6 | A100 40 GB (count: 1) | p4d.24xlarge (Spot: $7.79, On-Demand: $32.77) | a2-highgpu-1g (Spot: $1.21, On-Demand: $3.93) | Standard_NC24ads_A100_v4 (Spot: $0.95, On-Demand: $3.67) | ml.p4d.24xlarge (On-Demand: $37.68) |
| Llama 2 7B | 50 / 500 | 62 users / 3.5 | A100 40 GB (count: 1) | p4d.24xlarge (Spot: $7.79, On-Demand: $32.77) | a2-highgpu-1g (Spot: $1.21, On-Demand: $3.93) | Standard_NC24ads_A100_v4 (Spot: $0.95, On-Demand: $3.67) | ml.p4d.24xlarge (On-Demand: $37.68) |
| Llama 2 13B | 1500 / 100 | 7 users / 1.4 | A100 40 GB (count: 1) | p4d.24xlarge (Spot: $7.79, On-Demand: $32.77) | a2-highgpu-1g (Spot: $1.21, On-Demand: $3.93) | Standard_NC24ads_A100_v4 (Spot: $0.95, On-Demand: $3.67) | ml.p4d.24xlarge (On-Demand: $37.68) |
| Llama 2 13B | 50 / 500 | 23 users / 1.5 | A100 40 GB (count: 1) | p4d.24xlarge (Spot: $7.79, On-Demand: $32.77) | a2-highgpu-1g (Spot: $1.21, On-Demand: $3.93) | Standard_NC24ads_A100_v4 (Spot: $0.95, On-Demand: $3.67) | ml.p4d.24xlarge (On-Demand: $37.68) |
| Llama 2 70B | 1500 / 100 | 15 users / 1.1 | A100 40 GB (count: 4) | p4d.24xlarge (Spot: $7.79, On-Demand: $32.77) | a2-highgpu-4g (Spot: $4.85, On-Demand: $15.73) | Standard_NC96ads_A100_v4 (Spot: $3.82, On-Demand: $14.69) | ml.p4d.24xlarge (On-Demand: $37.68) |
| Llama 2 70B | 50 / 500 | 38 users / 0.8 | A100 40 GB (count: 4) | p4d.24xlarge (Spot: $7.79, On-Demand: $32.77) | a2-highgpu-4g (Spot: $4.85, On-Demand: $15.73) | Standard_NC96ads_A100_v4 (Spot: $3.82, On-Demand: $14.69) | ml.p4d.24xlarge (On-Demand: $37.68) |
| Falcon 40B | 1500 / 100 | 16 users / 2.0 | A100 40 GB (count: 4) | p4d.24xlarge (Spot: $7.79, On-Demand: $32.77) | a2-highgpu-4g (Spot: $4.85, On-Demand: $15.73) | Standard_NC96ads_A100_v4 (Spot: $3.82, On-Demand: $14.69) | ml.p4d.24xlarge (On-Demand: $37.68) |
| Falcon 40B | 50 / 500 | 75 users / 2.5 | A100 40 GB (count: 4) | p4d.24xlarge (Spot: $7.79, On-Demand: $32.77) | a2-highgpu-4g (Spot: $4.85, On-Demand: $15.73) | Standard_NC96ads_A100_v4 (Spot: $3.82, On-Demand: $14.69) | ml.p4d.24xlarge (On-Demand: $37.68) |
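One way to read this table is to combine throughput with machine cost to get a cost per request. As a quick sketch using numbers from the table (Llama 2 7B at 3.6 RPS on an Azure spot instance; swap in your own cloud and model row):

```python
# Cost per 1,000 requests = hourly machine cost / requests served per hour.
# Numbers taken from the table above: Llama 2 7B, 1500 in / 100 out,
# Standard_NC24ads_A100_v4 spot pricing.
cost_per_hour = 0.95        # USD/hr, Azure spot
throughput_rps = 3.6        # requests per second at 20 concurrent users
requests_per_hour = throughput_rps * 3600      # 12,960 requests/hr
cost_per_1k = cost_per_hour / requests_per_hour * 1000
print(f"${cost_per_1k:.4f} per 1,000 requests")  # ~ $0.0733
```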

Detailed Benchmarking Blogs for Each LLM

For each of the models mentioned above, refer to the corresponding detailed benchmarking blog.
