Sentence Transformers on AWS Inferentia with Optimum Neuron
Text Models
There is a notebook version of this tutorial here.
This guide explains how to compile, load, and use Sentence Transformers (SBERT) models on AWS Inferentia2 with Optimum Neuron, enabling efficient calculation of embeddings. Sentence Transformers are powerful models for generating sentence embeddings. You can use them to compute sentence / text embeddings for more than 100 languages. These embeddings can then be compared, e.g. with cosine similarity, to find sentences with a similar meaning, which is useful for semantic textual similarity, semantic search, or paraphrase mining.
Convert Sentence Transformers model to AWS Inferentia2
First, you need to convert your Sentence Transformers model to a format compatible with AWS Inferentia2. You can compile Sentence Transformers models with Optimum Neuron using either the optimum-cli or the NeuronSentenceTransformers class. Below you will find an example for both approaches. Make sure sentence-transformers is installed; it is only needed for exporting the model.
pip install sentence-transformers
Here we will use the NeuronSentenceTransformers class, which can convert any Sentence Transformers model to a format compatible with AWS Inferentia2 or load already converted models. When exporting a model with NeuronSentenceTransformers you need to set export=True and define the input shapes: the sequence length is set with sequence_length and the batch size with batch_size.
from optimum.neuron import NeuronSentenceTransformers
# Sentence Transformers model from HuggingFace
model_id = "BAAI/bge-small-en-v1.5"
input_shapes = {"batch_size": 1, "sequence_length": 384} # mandatory shapes
# Load Transformers model and export it to AWS Inferentia2
model = NeuronSentenceTransformers.from_pretrained(model_id, export=True, **input_shapes)
# Save model to disk
model.save_pretrained("bge_emb_inf2/")
Here we will use the optimum-cli to convert the model. As with NeuronSentenceTransformers, we need to define the input shapes with sequence_length and batch_size. The optimum-cli will automatically convert the model to a format compatible with AWS Inferentia2 and save it to the specified output directory.
optimum-cli export neuron -m BAAI/bge-small-en-v1.5 --sequence_length 384 --batch_size 1 --task feature-extraction bge_emb_inf2/
Load compiled Sentence Transformers model and run inference
Once we have a compiled Sentence Transformers model, either exported ourselves or available on the Hugging Face Hub, we can load it and run inference. To load the model we use the NeuronSentenceTransformers class, which is an abstraction layer over the SentenceTransformer class. It automatically pads the input to the specified sequence_length and runs inference on AWS Inferentia2.
from optimum.neuron import NeuronSentenceTransformers
model_id_or_path = "bge_emb_inf2/"
# Load model and tokenizer
model = NeuronSentenceTransformers.from_pretrained(model_id_or_path)
# Run inference
prompt = "I like to eat apples"  # example input sentence
token_embeddings = model.encode(prompt, output_value="token_embeddings")
sentence_embedding = model.encode(prompt, output_value="sentence_embedding")
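The resulting sentence embeddings can then be compared with cosine similarity, just like embeddings from the vanilla SentenceTransformer class. Below is a minimal sketch, assuming encode mirrors the sentence-transformers encode API and returns embeddings accepted by sentence_transformers.util.cos_sim; the sentences are only illustrative.
from sentence_transformers import util
# Encode two example sentences with the compiled model and compare them
emb_1 = model.encode("I like to eat apples", output_value="sentence_embedding")
emb_2 = model.encode("Apples are my favourite fruit", output_value="sentence_embedding")
print(util.cos_sim(emb_1, emb_2))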
Production Usage
For deploying these models in a production environment, refer to the Amazon SageMaker Blog.
CLIP
Compile CLIP for AWS Inferentia2
You can compile CLIP models with Optimum Neuron using either the optimum-cli or the NeuronSentenceTransformers class. Choose whichever approach you prefer:
- With the Optimum CLI
optimum-cli export neuron -m sentence-transformers/clip-ViT-B-32 --sequence_length 64 --text_batch_size 3 --image_batch_size 1 --num_channels 3 --height 224 --width 224 --task feature-extraction --subfolder 0_CLIPModel clip_emb/
- With the NeuronSentenceTransformers class
from optimum.neuron import NeuronSentenceTransformers
model_id = "sentence-transformers/clip-ViT-B-32"
# configs for compiling model
input_shapes = {
"num_channels": 3,
"height": 224,
"width": 224,
"text_batch_size": 3,
"image_batch_size": 1,
"sequence_length": 64,
}
emb_model = NeuronSentenceTransformers.from_pretrained(
model_id, subfolder="0_CLIPModel", export=True, library_name="sentence_transformers", dynamic_batch_size=False, **input_shapes
)
# Save locally or upload to the HuggingFace Hub
save_directory = "clip_emb/"
emb_model.save_pretrained(save_directory)
Load compiled Sentence Transformers model and run inference
from PIL import Image
from sentence_transformers import util
from transformers import CLIPProcessor
from optimum.neuron import NeuronSentenceTransformers
save_directory = "clip_emb"
emb_model = NeuronSentenceTransformers.from_pretrained(save_directory)
processor = CLIPProcessor.from_pretrained(save_directory)
inputs = processor(
text=["Two dogs in the snow", 'A cat on a table', 'A picture of London at night'], images=Image.open("two_dogs_in_snow.jpg"), return_tensors="pt", padding=True
)
outputs = emb_model(**inputs)
# Compute cosine similarities
cos_scores = util.cos_sim(outputs.image_embeds, outputs.text_embeds)
print(cos_scores)
# tensor([[0.3072, 0.1016, 0.1095]])
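If you just want the best-matching caption for the image, take the index of the highest score; a small sketch building on the cos_scores tensor above:
# The caption with the highest score is the best match for the image
best_idx = int(cos_scores.argmax())
print(best_idx)  # 0 -> "Two dogs in the snow"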
Caveat
Since compiled models with dynamic batching enabled only accept input tensors with the same batch size, we cannot set dynamic_batch_size=True if the input texts and images have different batch sizes. And since the NeuronSentenceTransformers class pads the inputs to the batch sizes used during compilation (text_batch_size and image_batch_size), you can use relatively large batch sizes during compilation for flexibility, at the cost of extra compute. E.g. if you want to encode 3, 4, or 5 texts and 1 image, you can set text_batch_size = 5 = max(3, 4, 5) and image_batch_size = 1 during the compilation.
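Concretely, here is a minimal sketch of choosing the compilation shapes for that scenario; the expected request sizes are only an illustrative assumption.
# We expect requests with 3, 4, or 5 texts and always 1 image (illustrative assumption)
expected_text_counts = [3, 4, 5]
input_shapes = {
    "num_channels": 3,
    "height": 224,
    "width": 224,
    "text_batch_size": max(expected_text_counts),  # 5, large enough for every expected request
    "image_batch_size": 1,
    "sequence_length": 64,
}
# Compile once with these shapes; smaller requests are padded up to text_batch_size at inference time.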