In this tutorial, we will deploy a Text Generation Inference (TGI) endpoint hosting the meta-llama/Llama-3.2-3B-Instruct large language model. TGI is one of the leading libraries for LLM serving and inference, supporting a wide range of model architectures.
For this example you need a Python environment on your local machine, a Hugging Face account (to create a Hugging Face token that is used to fetch the model weights), and a DataCrunch AI cloud account to create the deployment.
Model Weights
TGI deployment fetches the model weights from Hugging Face.
In this tutorial we are loading the meta-llama/Llama-3.2-3B-Instruct model, which requires you to agree to the model provider's policy.
Some models on Hugging Face require the user to accept their usage policy, so please verify this for any model you are deploying.
You will also need a User Access Token in order to fetch the weights. You can create one in your Hugging Face account by clicking the profile icon (top right corner) and selecting Access Tokens.
For deploying the TGI endpoint, a token with READ permissions is sufficient.
Please store the obtained token safely. You will need it for the next steps!
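If you want to verify that the token works before using it in a deployment, below is a minimal, optional sketch using the huggingface_hub Python package (an assumption of ours: the package is installed with pip install huggingface_hub):
# Optional sanity check: confirm the User Access Token is valid.
from huggingface_hub import whoami

# Replace the placeholder with your own token.
print(whoami(token="<YOUR_HF_TOKEN>"))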
Create the deployment
In this example, we will deploy meta-llama/Llama-3.2-3B-Instruct on a General Compute (24 GB VRAM) GPU type. For larger models, you may need to choose one of the other GPU types we offer.
Create a new project or use an existing one, then open the project.
On the left you'll see a navigation menu. Go to Containers -> New deployment. Name your deployment and select the Compute Type.
We will be using the official TGI Docker image, so set Container Image to ghcr.io/huggingface/text-generation-inference:3.0.2. You can select another version from the list if you prefer, or leave the version tag out of the URL and pick the one you wish to use. For this example we use 3.0.2.
Toggle on the Public location for your image. You can use Private if you have a private registry paired with credentials; for this example we use the public registry.
Make sure your preferred tag is selected
Set the Exposed HTTP port to 80
Set the Healthcheck port to 80
Set the Healthcheck path to /health
Toggle Start Command on
Add the following parameters to CMD: --model-id meta-llama/Llama-3.2-3B-Instruct
Add your Hugging Face User Access Token to the Environment Variables as HUGGING_FACE_HUB_TOKEN. Note that in some examples you might see the HF_TOKEN environment variable used instead; TGI accepts both names, so either works.
Deploy container
You can leave the Scaling options at their default values. However, if you wish to enable LLM batching, set the Concurrent requests per replica option to a value greater than 1; this number controls how many concurrent requests the deployment accepts.
That's it! You have now created a deployment. You can check the logs of the deployment from the Logs tab. When the deployment starts, it will download the model weights from Hugging Face and start the TGI server. This will take a few minutes to complete.
For production use, we recommend authenticating/using private registries to avoid potential rate limits imposed by public container registries.
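For reference, the container settings above map closely onto TGI's standard Docker invocation. The following is only an illustrative sketch of a rough equivalent (assuming a machine with Docker and an NVIDIA GPU), not how the platform launches the container internally:
# TGI listens on port 80 inside the container by default, which is why the
# Exposed HTTP and Healthcheck ports above are set to 80.
docker run --gpus all --shm-size 1g -p 80:80 \
  -e HUGGING_FACE_HUB_TOKEN=<YOUR_HF_TOKEN> \
  ghcr.io/huggingface/text-generation-inference:3.0.2 \
  --model-id meta-llama/Llama-3.2-3B-Instruct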
Accessing the deployment
Before you can connect to the endpoint, you will need to generate an authentication token by going to Keys -> Inference API Keys and clicking Create.
The base endpoint URL for your deployment is shown in the Containers API section in the top left of the screen. It will be in the form: https://containers.datacrunch.io/<NAME-OF-YOUR-DEPLOYMENT>/
Test Deployment
Once the deployment has been created and is ready to accept requests, you can test that it responds correctly by sending a List Models request to the endpoint.
TGI can be deployed as a server that implements the OpenAI API protocol, which allows it to be used as a drop-in replacement for applications using the OpenAI API. More information about TGI and its available endpoints can be found in the official TGI documentation.
Below is an example cURL command for testing your deployment (replace the placeholders with your deployment name and Inference API key):
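# List Models request; replace the placeholders with your deployment name and Inference API key.
curl https://containers.datacrunch.io/<NAME-OF-YOUR-DEPLOYMENT>/v1/models \
  -H 'Authorization: Bearer <YOUR_INFERENCE_API_KEY>'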
Notice the subpath /v1/models added to the base endpoint URL.
As the List Models response shows meta-llama/Llama-3.2-3B-Instruct, we are ready to send inference requests to the model.
Generate API
The Generate API /generate offers a quick way to get completions for a given prompt.
Synchronous request
Below is a Python script that calls the completions endpoint /generate with a prompt and returns the completion. Save it to a file named test_request.py and run it with python test_request.py. Remember to replace <YOUR_CONTAINERS_API_URL> and <YOUR_INFERENCE_API_KEY> with the values from your deployment.
import requests
import sys
import signal


def graceful_shutdown(signum, frame) -> None:
    # Exit cleanly on SIGINT/SIGTERM.
    print(f"\nSignal {signum} received at line {frame.f_lineno} in {frame.f_code.co_filename}")
    sys.exit(0)


def do_test_request() -> None:
    url = '<YOUR_CONTAINERS_API_URL>/generate'
    headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer <YOUR_INFERENCE_API_KEY>',
    }
    # Prompt and sampling parameters for the TGI /generate endpoint.
    data = {
        "inputs": "Solar wind is a curious phenomenon. By studying its origins",
        "parameters": {
            "max_new_tokens": 128,
            "temperature": 0.7,
            "top_p": 0.9
        }
    }
    response = requests.post(url, headers=headers, json=data)
    if response.status_code == 200:
        try:
            print(response.json())
        except ValueError:
            print("Response content is not valid JSON.", file=sys.stderr)
            print("Response body:", file=sys.stderr)
            print(response.text, file=sys.stderr)
    else:
        print(f"Request failed with status code {response.status_code}", file=sys.stderr)
        print("Response body:", file=sys.stderr)
        print(response.text, file=sys.stderr)


if __name__ == "__main__":
    signal.signal(signal.SIGINT, graceful_shutdown)
    signal.signal(signal.SIGTERM, graceful_shutdown)
    do_test_request()
Response
This returns a synchronous response with the completion of the prompt:
{
"generated_text":" and its impact on the Earth we can learn a lot about the Sun and our own planet. The Solar wind is a stream of charged particles that are constantly emitted from the Sun. This stream moves at high speeds and can be measured in the solar system as far as the Voyager spacecrafts, which are now leaving the solar system.\n\nThe solar wind is made up of electrons, protons, and heavier ions. These particles are moving at speeds of hundreds of kilometers per second. As they move away from the Sun, they carry with them the Sun's magnetic field. This field is important"
}
Streaming request
The same example as above, but streaming the response using the Generate API streaming endpoint /generate_stream. Save it to a file named test_request.py and run it with python test_request.py.
import requests
import sys
import signal


def graceful_shutdown(signum, frame) -> None:
    # Exit cleanly on SIGINT/SIGTERM.
    print(f"\nSignal {signum} received at line {frame.f_lineno} in {frame.f_code.co_filename}")
    sys.exit(0)


def do_test_request() -> None:
    url = '<YOUR_CONTAINERS_API_URL>/generate_stream'
    headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer <YOUR_INFERENCE_API_KEY>',
        'Accept': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
    }
    data = {
        "inputs": "Solar wind is a curious phenomenon. By studying its origins",
        "parameters": {
            "max_new_tokens": 128,
            "temperature": 0.7,
            "top_p": 0.9
        }
    }
    try:
        # stream=True keeps the connection open and yields server-sent events line by line.
        with requests.post(url, headers=headers, json=data, stream=True) as response:
            if response.status_code == 200:
                print("Stream started. Receiving events...\n")
                for line in response.iter_lines(decode_unicode=True):
                    if line:
                        print(line)
            else:
                print(f"Request failed with status code {response.status_code}", file=sys.stderr)
                print("Response body:", file=sys.stderr)
                print(response.text, file=sys.stderr)
    except requests.RequestException as e:
        print(f"An error occurred: {e}", file=sys.stderr)


if __name__ == "__main__":
    signal.signal(signal.SIGINT, graceful_shutdown)
    signal.signal(signal.SIGTERM, graceful_shutdown)
    do_test_request()
Response
This returns a streaming response: the completion is delivered incrementally as server-sent events, one data: line per generated token.
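Each streamed event arrives as a line of the form data:{...}, whose JSON payload contains the newly generated token (and, in the final event, the full generated_text). If you only want to print the text as it arrives, below is a minimal sketch of a helper you could call from the loop in the script above; extract_token_text is our own illustrative name, and it assumes the default TGI stream payload with a token.text field:
import json

def extract_token_text(line: str):
    # Parse one server-sent event line from /generate_stream and return the
    # token text, or None for lines that are not data events.
    if not line.startswith("data:"):
        return None
    payload = json.loads(line[len("data:"):].strip())
    token = payload.get("token") or {}
    return token.get("text")

In the streaming script, you could then replace print(line) with text = extract_token_text(line) and print(text, end="", flush=True) whenever text is not None.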
Chat Completions API
The Chat Completions API /v1/chat/completions is a more dynamic, interactive way to communicate with the model, allowing back-and-forth exchanges that can be stored in the chat history. Notice that the prompt format is different from the Generate API.
Synchronous request
Below is a Python script that calls the chat completions endpoint /v1/chat/completions with a prompt and returns the completion. Save it to a file named test_request.py and run it with python test_request.py. Remember to replace <YOUR_CONTAINERS_API_URL> and <YOUR_INFERENCE_API_KEY> with the values from your deployment.
import requests
import sys
import signal


def graceful_shutdown(signum, frame) -> None:
    # Exit cleanly on SIGINT/SIGTERM.
    print(f"\nSignal {signum} received at line {frame.f_lineno} in {frame.f_code.co_filename}")
    sys.exit(0)


def do_test_request() -> None:
    url = '<YOUR_CONTAINERS_API_URL>/v1/chat/completions'
    headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer <YOUR_INFERENCE_API_KEY>',
    }
    # OpenAI-compatible chat payload: a list of role/content messages.
    data = {
        "model": "meta-llama/Llama-3.2-3B-Instruct",
        "messages": [
            {"role": "user", "content": "What is deep learning?"}
        ],
        "stream": False
    }
    try:
        response = requests.post(url, headers=headers, json=data)
        if response.status_code == 200:
            print(response.json())
        else:
            print(f"Request failed with status code {response.status_code}", file=sys.stderr)
            print("Response body:", file=sys.stderr)
            print(response.text, file=sys.stderr)
    except requests.RequestException as e:
        print(f"An error occurred: {e}", file=sys.stderr)


if __name__ == "__main__":
    signal.signal(signal.SIGINT, graceful_shutdown)
    signal.signal(signal.SIGTERM, graceful_shutdown)
    do_test_request()
Response
This returns a synchronous response with the completion of the prompt.
{
  "object": "chat.completion",
  "id": "",
  "created": 1737406192,
  "model": "meta-llama/Llama-3.2-3B-Instruct",
  "system_fingerprint": "3.0.2-sha-b70f29d",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Deep learning is a subset of machine learning which is based on artificial neural networks. It's a type of algorithm inspired by the structure and function of the brain. These neural networks are designed to simulate the learning process that occurs in a human brain, using layers of interconnected 'neurons' to process and learn from large amounts of data.\n\nDeep learning algorithms can be used in a variety of contexts, such as image and speech recognition, natural language processing, and even in industries like healthcare, finance, and autonomous vehicles. The term \"deep\" refers to the deep neural networks with many layers, allowing them to learn increasingly complex patterns and abstractions from the data."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 8,
    "completion_tokens": 141,
    "total_tokens": 149
  }
}
Streaming request
The same example as above, but streaming the response. Save it to a file named test_request.py and run it with python test_request.py.
import requests
import sys
import signal


def graceful_shutdown(signum, frame) -> None:
    # Exit cleanly on SIGINT/SIGTERM.
    print(f"\nSignal {signum} received at line {frame.f_lineno} in {frame.f_code.co_filename}")
    sys.exit(0)


def do_test_request() -> None:
    url = '<YOUR_CONTAINERS_API_URL>/v1/chat/completions'
    headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer <YOUR_INFERENCE_API_KEY>',
        'Accept': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
    }
    # OpenAI-compatible chat payload; "stream": True requests server-sent events.
    data = {
        "model": "meta-llama/Llama-3.2-3B-Instruct",
        "messages": [
            {"role": "user", "content": "What is deep learning?"}
        ],
        "stream": True,
        "stream_options": {
            "include_usage": True
        },
        "temperature": 0.8,
        "top_p": 0.95
    }
    try:
        with requests.post(url, headers=headers, json=data, stream=True) as response:
            if response.status_code == 200:
                print("Stream started. Receiving events...\n")
                for line in response.iter_lines(decode_unicode=True):
                    if line:
                        print(line)
            else:
                print(f"Request failed with status code {response.status_code}", file=sys.stderr)
                print("Response body:", file=sys.stderr)
                print(response.text, file=sys.stderr)
    except requests.RequestException as e:
        print(f"An error occurred: {e}", file=sys.stderr)


if __name__ == "__main__":
    signal.signal(signal.SIGINT, graceful_shutdown)
    signal.signal(signal.SIGTERM, graceful_shutdown)
    do_test_request()
Response
This returns a streaming response: the completion is delivered incrementally as server-sent events containing OpenAI-compatible chat.completion.chunk objects.
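Because TGI exposes an OpenAI-compatible Chat Completions API, you can also use the official openai Python client instead of the raw requests calls above. Below is a minimal sketch, assuming the openai package is installed (pip install openai) and the placeholders are replaced with your deployment values; the Inference API key is passed as the client's api_key, so it is sent as a standard Bearer token:
from openai import OpenAI

# Point the client at the deployment's OpenAI-compatible base URL.
client = OpenAI(
    base_url="<YOUR_CONTAINERS_API_URL>/v1",
    api_key="<YOUR_INFERENCE_API_KEY>",
)

# Stream a chat completion and print the tokens as they arrive.
response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "What is deep learning?"}],
    stream=True,  # set to False for a single synchronous response
)

for chunk in response:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)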
This concludes our tutorial on how to deploy and call a TGI endpoint with the meta-llama/Llama-3.2-3B-Instruct model. You can now use the TGI endpoint to generate completions for your prompts.
Also check out the other standard TGI endpoints, such as /health, /info, or /metrics, to monitor the health of the deployment.
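For example, a quick way to look at the deployment metadata (model id, TGI version, token limits) is the /info endpoint; a sketch, assuming the same Inference API key is required as for the other endpoints:
# Query TGI's /info endpoint on your deployment.
curl https://containers.datacrunch.io/<NAME-OF-YOUR-DEPLOYMENT>/info \
  -H 'Authorization: Bearer <YOUR_INFERENCE_API_KEY>'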