Understanding LLAMA 2 Model Benchmarks for Performance Evaluation

September 12, 2023

We benchmark the performance of Llama-2-7B in this article from a latency, cost, and requests-per-second perspective. This will help evaluate whether it is a good choice given the business requirements. Please note that we don't cover qualitative performance in this article; there are separate methods for comparing the output quality of LLMs.

Model: Llama-2-7B

In this blog, we have benchmarked the Llama-2-7B model from NousResearch. This is a pre-trained version of Llama-2 with 7 billion parameters.

Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pre-trained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.

Metrics Benchmarked with LLAMA 2 Model: Assessing Key Performance Indicators

  1. Requests per second (RPS): the number of requests the model handles per second. As RPS goes up, latency usually goes up as well.
  2. Latency: the time taken to complete a single inference request.
  3. Economics: the cost of deploying and serving the LLM (a back-of-the-envelope cost sketch follows this list).
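To make the economics concrete, a back-of-the-envelope calculation can convert a GPU's hourly price and the sustained RPS into a cost per request and per 1,000 output tokens. The sketch below uses purely illustrative placeholder numbers, not quoted cloud prices or the exact figures from our tests.

```python
# Back-of-the-envelope cost estimate: the hourly GPU price and sustained RPS
# are illustrative placeholders -- substitute your cloud provider's pricing
# and the throughput measured in your own tests.

def cost_per_request(gpu_cost_per_hour: float, sustained_rps: float) -> float:
    """Dollar cost of a single request at a given sustained throughput."""
    requests_per_hour = sustained_rps * 3600
    return gpu_cost_per_hour / requests_per_hour

def cost_per_1k_output_tokens(gpu_cost_per_hour: float,
                              sustained_rps: float,
                              output_tokens_per_request: int) -> float:
    """Dollar cost per 1,000 generated tokens."""
    tokens_per_hour = sustained_rps * 3600 * output_tokens_per_request
    return 1000 * gpu_cost_per_hour / tokens_per_hour

# Example with hypothetical numbers ($1.50/hr GPU, 0.9 RPS sustained,
# 100 output tokens per request, as in the RAG-style scenario):
print(cost_per_request(1.50, 0.9))                # ~$0.00046 per request
print(cost_per_1k_output_tokens(1.50, 0.9, 100))  # ~$0.0046 per 1K output tokens
```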

Use Cases & Deployment Modes with LLAMA 2: Evaluating Scenarios

The key factors across which we benchmarked are:

GPU Type:

  1. A100 40GB GPU
  2. A10  24GB GPU

Prompt Length:

  1. 1500 input tokens, 100 output tokens (similar to Retrieval-Augmented Generation use cases)
  2. 50 input tokens, 500 output tokens (generation-heavy use cases). A sketch of how prompts are padded to these token counts follows this list.
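Input tokens are counted with the model's own tokenizer, so benchmark prompts are typically padded or trimmed to the target length programmatically. Below is a hedged sketch of how that could be done; the Hugging Face repo id is assumed to be NousResearch/Llama-2-7b-hf and may differ from the exact checkpoint you serve.

```python
# Sketch: build a benchmark prompt of (approximately) a target token length.
# The repo id below is an assumption about the NousResearch checkpoint; swap
# in whichever Llama-2-7B weights you are actually serving.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

def make_prompt(target_tokens: int, filler: str = "benchmarking ") -> str:
    """Repeat a filler phrase, then truncate to roughly the target token count."""
    text = filler * target_tokens  # over-generate, then trim
    token_ids = tokenizer(text, truncation=True, max_length=target_tokens)["input_ids"]
    return tokenizer.decode(token_ids, skip_special_tokens=True)

rag_prompt = make_prompt(1500)       # RAG-style scenario
generation_prompt = make_prompt(50)  # generation-heavy scenario
print(len(tokenizer(rag_prompt)["input_ids"]))  # ~1500
```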

Benchmarking Setup with LLAMA 2: Configuring Test Environments

For benchmarking, we have used Locust, an open-source load-testing tool. Locust works by spawning users/workers that send requests in parallel. At the beginning of each test, we can set the Number of Users and the Spawn Rate. The Number of Users signifies the maximum number of users that can run concurrently, whereas the Spawn Rate signifies how many users are spawned per second.
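As a rough illustration, a minimal locustfile for this kind of test might look like the sketch below. The host, prompt, and endpoint path are placeholders rather than our exact setup; the payload follows the text-generation-inference /generate API, with max_new_tokens set to 100 to mirror the RAG-style scenario.

```python
# locustfile.py -- minimal sketch of a Locust load test against a
# text-generation-inference endpoint. Host, prompt, and token counts
# are illustrative placeholders, not the exact values used in this benchmark.
from locust import HttpUser, task, constant

# Stand-in prompt; in practice this would be padded to ~1500 input tokens
# for the RAG-style scenario or ~50 tokens for the generation-heavy one.
PROMPT = "Summarize the following document: ..."

class LlamaUser(HttpUser):
    # No think-time between requests: each simulated user keeps exactly
    # one request in flight at all times.
    wait_time = constant(0)

    @task
    def generate(self):
        # text-generation-inference exposes a /generate endpoint that takes
        # the prompt under "inputs" and decoding options under "parameters".
        self.client.post(
            "/generate",
            json={
                "inputs": PROMPT,
                "parameters": {"max_new_tokens": 100},
            },
        )
```

It could then be launched with something like `locust -f locustfile.py --host http://<tgi-host> --users 10 --spawn-rate 1`, adjusting the Number of Users and Spawn Rate per run.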

In each benchmarking test for a deployment configuration, we started with 1 user and kept increasing the Number of Users gradually for as long as the RPS kept rising steadily. During the test, we also plotted the response times (in ms) and the total requests per second.

In each of the two deployment configurations, we have used the Hugging Face text-generation-inference model server (version 0.9.4). The following are the parameters passed to the text-generation-inference image for the different model configurations:

Parameter                   Llama-2-7B on A100    Llama-2-7B on A10G
Max Batch Prefill Tokens    6100                  10000
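As a rough intuition for what this parameter controls: text-generation-inference caps the total number of prompt tokens it will batch together during the prefill phase, so it effectively bounds how many long prompts can be prefilled concurrently. The small calculation below is illustrative only and assumes the 1500-input-token RAG-style prompts.

```python
# Rough intuition only: max_batch_prefill_tokens caps the total prompt tokens
# in one prefill batch, so it bounds how many long prompts can be prefilled
# at once. Values taken from the table above; prompt length from the
# RAG-style scenario.
PROMPT_TOKENS = 1500

for gpu, max_batch_prefill_tokens in [("A100", 6100), ("A10G", 10000)]:
    concurrent_prefills = max_batch_prefill_tokens // PROMPT_TOKENS
    print(f"{gpu}: up to ~{concurrent_prefills} prompts of "
          f"{PROMPT_TOKENS} tokens per prefill batch")
# A100: up to ~4 prompts of 1500 tokens per prefill batch
# A10G: up to ~6 prompts of 1500 tokens per prefill batch
```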

Benchmarking Results Summary: Summarizing LLAMA 2 Findings

Latency, RPS, and Cost

We calculate the best latency by sending only one request at a time. To increase throughput, we send requests to the LLM in parallel. The max throughput is the highest request rate at which the model can still process incoming requests without a significant deterioration in latency.
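For readers who want to reproduce the single-request latency numbers outside of Locust, a minimal sketch along the following lines times sequential calls against the same /generate endpoint; the host, prompt, and token budget are placeholders.

```python
# Minimal sketch for measuring best-case latency: send one request at a time
# to the text-generation-inference /generate endpoint and time each call.
# Host, prompt, and token budget are placeholders.
import time
import requests

TGI_URL = "http://localhost:8080/generate"  # placeholder host/port
PROMPT = "Summarize the following document: ..."

def timed_request(max_new_tokens: int = 100) -> float:
    """Return the wall-clock latency (seconds) of a single generate call."""
    payload = {"inputs": PROMPT, "parameters": {"max_new_tokens": max_new_tokens}}
    start = time.perf_counter()
    resp = requests.post(TGI_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return time.perf_counter() - start

# Best latency: strictly one request in flight at a time.
latencies = [timed_request() for _ in range(10)]
print(f"best latency: {min(latencies):.2f}s, mean: {sum(latencies)/len(latencies):.2f}s")
```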

Benchmarking Results for Llama-2-7B

Tokens Per Second

LLMs process input tokens and output tokens differently: the input prompt is processed in a single parallel prefill pass, while output tokens are generated sequentially, one at a time. Hence we have calculated the input-token and output-token processing rates separately.
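Concretely, one hedged way to split the two rates is to attribute the time-to-first-token to the input (prefill) phase and the remaining time to the output (decode) phase. The sketch below is an approximation of how such numbers can be derived, with hypothetical timings rather than measurements from this post.

```python
# Approximate split of input vs. output processing rates, assuming the
# time-to-first-token is dominated by prefill and the remainder by decoding.
# The example numbers are hypothetical, not measurements from this post.

def token_rates(input_tokens: int, output_tokens: int,
                time_to_first_token: float, total_time: float) -> tuple[float, float]:
    """Return (input tokens/sec during prefill, output tokens/sec during decode)."""
    prefill_rate = input_tokens / time_to_first_token
    decode_rate = output_tokens / (total_time - time_to_first_token)
    return prefill_rate, decode_rate

# Hypothetical example: 1500-token prompt, 100 generated tokens,
# 0.5s to first token, 4.0s total.
prefill_rate, decode_rate = token_rates(1500, 100, 0.5, 4.0)
print(f"prefill: ~{prefill_rate:.0f} tok/s, decode: ~{decode_rate:.0f} tok/s")
# prefill: ~3000 tok/s, decode: ~29 tok/s
```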

Detailed Results: In-Depth LLAMA 2 Analysis

A10 24GB GPU (1500 input + 100 output tokens)

We can observe in the above graphs that the Best Response Time (at 1 user) is 4.1 seconds. As we increase the number of users to send more traffic to the model, throughput increases up to 0.9 RPS without a significant increase in latency. Beyond 0.9 RPS, latency rises drastically, which means requests are being queued up.

A10 24GB GPU (50 input + 500 output tokens)

We can observe in the above graphs that the Best Response Time (at 1 user) is 15 seconds. As we increase the number of users to send more traffic to the model, throughput increases up to 0.9 RPS without a significant increase in latency. Beyond 0.9 RPS, latency rises drastically, which means requests are being queued up.

A100 40GB GPU (1500 input + 100 output tokens)

We can observe in the above graphs that the Best Response Time (at 1 user) is 2 seconds. As we increase the number of users to send more traffic to the model, throughput increases up to 3.6 RPS without a significant increase in latency. Beyond 3.6 RPS, latency rises drastically, which means requests are being queued up.

A100 40GB GPU (50 input + 500 output tokens)

We can observe in the above graphs that the Best Response Time (at 1 user) is 8.5 seconds. As we increase the number of users to send more traffic to the model, throughput increases up to 3.5 RPS without a significant increase in latency. Beyond 3.5 RPS, latency rises drastically, which means requests are being queued up.

Hopefully, this will be useful for deciding whether Llama-2-7B suits your use case and for estimating the costs you can expect to incur while hosting it.
