Setup
The model that we’ll experiment with is Alibaba-NLP/gte-Qwen2-7B-instruct, loaded via Hugging Face Transformers. The model card is here.
To carry out this experiment, I used Python 3.10.8 and installed the following packages:
torch==2.3.0
transformers==4.41.2
xformers==0.0.26.post1
flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu122torch2.3cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
accelerate==0.31.0
I ran into some difficulty installing flash-attn, which is required to run this model, and so had to install the exact version listed above. If anyone has a better workaround, please let me know!
The Amazon SageMaker instance I used for this experiment is the ml.g5.2xlarge. It has a 24GB NVIDIA A10G GPU and 32GB of CPU memory, and it costs $1.69/hour. The screenshot from AWS below shows all the details of the instance.
Actually, to be precise, if you run nvidia-smi you will see that the instance only has 23GB of GPU memory, which is slightly less than advertised. The CUDA version on this GPU is 12.2.
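If you prefer to check this from Python rather than nvidia-smi, you can also query the device directly (a small sketch of my own, not part of the original write-up):
import torch

# Report the GPU name and its total memory as PyTorch sees it.
props = torch.cuda.get_device_properties(0)
print(props.name, f"{props.total_memory / 1e9:.1f} GB")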
How to Run — In Detail
If you look at the model card, one of the suggested ways to use this model is via the sentence-transformers library, as shown below:
from sentence_transformers import SentenceTransformer

# This will not run on our 24GB GPU!
model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)
embeddings = model.encode(list_of_examples)
Sentence-transformers is an extension of the Transformers package for computing embeddings and is very useful because you can get things working with two lines of code. The downside is that you have less control over how the model is loaded, since it hides away the tokenisation and pooling details. The above code will not run on our GPU instance because it attempts to load the model in full float32 precision, which would take 28GB of memory. When the sentence-transformers model is initialised, it checks for available devices (cuda for the GPU) and automatically moves the PyTorch model onto the device. As a result, it gets stuck after loading 5/7ths of the model and crashes.
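A quick back-of-the-envelope calculation (my own sketch, using the figures quoted in this post) shows why float32 cannot fit on the 24GB card but float16 can:
n_params = 7e9  # roughly 7 billion parameters
print(f"float32: ~{n_params * 4 / 1e9:.0f} GB")  # ~28 GB, more than the 24GB A10G
print(f"float16: ~{n_params * 2 / 1e9:.0f} GB")  # ~14 GB, fits on the GPU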
Instead, we need to be able to load the model in float16 precision before we move it onto the GPU. As such, we need to use the lower-level Transformers library. (I’m not sure of a way to do it with sentence-transformers, but let me know if one exists!) We do this as follows:
import transformers
import torch

model_path = "Alibaba-NLP/gte-Qwen2-7B-instruct"
model = transformers.AutoModel.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.float16).to("cuda")
With the torch_dtype parameter we specify that the model should be loaded in float16 precision directly, so it only requires 14GB of memory. We then need to move the model onto the GPU device, which is achieved with the to method. Using the above code, the model takes almost 2 minutes to load!
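If you want to verify this, you can check the parameter count and the memory actually allocated on the GPU once the model has loaded (a small check of my own, not part of the original recipe):
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.1f}B parameters")
print(f"{torch.cuda.memory_allocated() / 1e9:.1f} GB allocated on the GPU")  # roughly 14GB in float16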
Since we are using transformers, we need to separately load the tokeniser to tokenise the input texts, as follows:
tokenizer = transformers.AutoTokenizer.from_pretrained(model_path)
The next step is to tokenise the input texts, which is done as follows:
texts = ["example text 1", "example text 2 of different length"]
max_length = 32768
batch_dict = tokenizer(texts, max_length=max_length, padding=True, truncation=True, return_tensors="pt").to("cuda")
The maximum length of the Qwen2 model is 32,768; however, as we will see later, we are unable to run it with such a long sequence on our 24GB GPU due to the additional memory requirements. I would recommend reducing this to no more than 24,000 to avoid out-of-memory errors. Padding ensures that all the inputs in the batch have the same length, whilst truncation ensures that any inputs longer than the maximum length are truncated. For more information please see the docs. Lastly, we make sure that we return PyTorch tensors (the default would be lists instead) and move these tensors onto the GPU so they are available to pass to the model.
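Before settling on a max_length, it can be handy to check how many tokens your texts actually produce; this is a small check of my own rather than part of the original recipe:
# Tokenise without padding or truncation to inspect the raw token count per text.
token_counts = [len(ids) for ids in tokenizer(texts)["input_ids"]]
print(token_counts)  # [4, 7] for the two example texts used below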
The next step is to pass the inputs through our model and perform pooling. This is done as follows:
with torch.no_grad():
    outputs = model(**batch_dict)
    embeddings = last_token_pool(outputs.last_hidden_state, batch_dict["attention_mask"])
using the last_token_pool function, which looks as follows:
def last_token_pool(last_hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # checks whether there is any padding (where attention mask = 0 for a given text)
    no_padding = attention_mask[:, -1].sum() == attention_mask.shape[0]
    # if no padding - only happens if the batch size is 1 or all sequences have the same length - take the last tokens as the embeddings
    if no_padding:
        return last_hidden_states[:, -1]
    # otherwise use the last non-padding token for each text in the batch
    sequence_lengths = attention_mask.sum(dim=1) - 1
    batch_size = last_hidden_states.shape[0]
    return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
Let’s break down what happened in the above code snippets!
- The torch.no_grad() context manager is used to disable gradient calculation, since we are not training the model, and hence to speed up inference.
- We then pass the tokenised inputs into the transformer model.
- We retrieve the outputs from the last layer of the model with the last_hidden_state attribute. This is a tensor of shape (batch_size, max_sequence_length, embedding_dimension). Essentially, for each example in the batch, the transformer outputs embeddings for all the tokens in the sequence.
- We now need some way of combining all the token embeddings into a single embedding to represent the input text. This is called pooling and it is done in the same way as during training of the model.
- In older BERT-based models the first token was typically used (which represented the special classification [CLS] token). However, the Qwen2 model is LLM-based, i.e. transformer-decoder based. In the decoder, the tokens are generated autoregressively (one after another), and so the last token contains all the information encoded about the sentence.
- The goal of the last_token_pool function is therefore to select the embedding of the last generated token (which was not a padding token) for each example in the batch.
- It uses the attention_mask, which tells the model which of the tokens are padding tokens for each example in the batch (see the docs). A small toy check of this function follows below.
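As a quick standalone sanity check (a toy sketch of my own with made-up values, which runs on the CPU without loading the 7B model), we can verify that last_token_pool picks out the intended token for each example:
import torch

# Dummy batch: 2 sequences, max length 4, embedding dimension 3.
hidden = torch.arange(2 * 4 * 3, dtype=torch.float32).reshape(2, 4, 3)
# The first sequence has 2 real tokens and 2 padding tokens; the second has 4 real tokens.
mask = torch.tensor([[1, 1, 0, 0],
                     [1, 1, 1, 1]])
pooled = last_token_pool(hidden, mask)
print(pooled.shape)                          # torch.Size([2, 3])
print(torch.equal(pooled[0], hidden[0, 1]))  # True: last non-padding token of example 1
print(torch.equal(pooled[1], hidden[1, 3]))  # True: last token of example 2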
Annotated Example
Let’s look at an example to understand this in a bit more detail. Let’s say we want to embed two examples in a single batch:
texts = ["example text 1", "example text 2 of different length"]
The outputs of the tokeniser (the batch_dict
) will look as follows:
>>> batch_dict
{'input_ids': tensor([[ 8687, 1467, 220, 16, 151643, 151643, 151643],
[ 8687, 1467, 220, 17, 315, 2155, 3084]],
device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1]], device='cuda:0')}
From this you can see that the first sentence gets split into four tokens (8687, 1467, 220, 16), whereas the second sentence gets split into seven tokens. As a result, the first sentence is padded (with three padding tokens of id 151643) up to length seven, the maximum in the batch. The attention mask reflects this: it has three zeros for the first example, corresponding to the positions of the padding tokens. Both tensors have the same shape:
>>> batch_dict.input_ids.shape
torch.Size([2, 7])
>>> batch_dict.attention_mask.shape
torch.Size([2, 7])
Now, passing the batch_dict through the model, we can retrieve the model’s last hidden state, which has shape:
>>> outputs.last_hidden_state.shape
torch.Size([2, 7, 3584])
We can see that this is of shape (batch_size, max_sequence_length, embedding_dimension). Qwen2 has an embedding dimension of 3584!
Now we are inside the last_token_pool function. The first line checks whether padding exists; it does this by summing the last “column” of the attention_mask and comparing it to the batch_size (given by attention_mask.shape[0]). This will only evaluate to true if the last position of every attention mask is a 1, i.e. if all the examples have the same length or if we only have one example.
>>> attention_mask.form[0]
2
>>> attention_mask[:, -1]
tensor([0, 1], device='cuda:0')
If there was indeed no padding, we would simply select the last token embedding for each of the examples with last_hidden_states[:, -1]. However, since we have padding, we need to select the last non-padding token embedding for each example in the batch. In order to select this embedding, we need to get its index for each example. This is achieved via:
>>> sequence_lengths = attention_mask.sum(dim=1) - 1
>>> sequence_lengths
tensor([3, 6], device='cuda:0')
So now we simply need to index into the tensor with the correct indices in the first two dimensions. To get the indices for all the examples in the batch we can use torch.arange as follows:
>>> torch.arange(batch_size, device=last_hidden_states.device)
tensor([0, 1], device='cuda:0')
Then we can pluck out the correct token embeddings for each example using this together with the indices of the last non-padding tokens:
>>> embeddings = last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
>>> embeddings.form
torch.Size([2, 3584])
And we get two embeddings for the two examples passed in!
How to Run — TLDR
The full code, separated out into functions, looks like:
import numpy as np
import numpy.typing as npt
import torch
import transformers

DEVICE = torch.device("cuda")


def last_token_pool(last_hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # checks whether there is any padding (where attention mask = 0 for a given text)
    no_padding = attention_mask[:, -1].sum() == attention_mask.shape[0]
    # if no padding - only happens if the batch size is 1 or all sequences have the same length - take the last tokens as the embeddings
    if no_padding:
        return last_hidden_states[:, -1]
    # otherwise use the last non-padding token for each text in the batch
    sequence_lengths = attention_mask.sum(dim=1) - 1
    batch_size = last_hidden_states.shape[0]
    return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


def encode_with_qwen_model(
    model: transformers.PreTrainedModel,
    tokenizer: transformers.tokenization_utils.PreTrainedTokenizer | transformers.tokenization_utils_fast.PreTrainedTokenizerFast,
    texts: list[str],
    max_length: int = 32768,
) -> npt.NDArray[np.float16]:
    batch_dict = tokenizer(texts, max_length=max_length, padding=True, truncation=True, return_tensors="pt").to(DEVICE)
    with torch.no_grad():
        outputs = model(**batch_dict)
        embeddings = last_token_pool(outputs.last_hidden_state, batch_dict["attention_mask"])
    return embeddings.cpu().numpy()


def main() -> None:
    model_path = "Alibaba-NLP/gte-Qwen2-7B-instruct"
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_path)
    model = transformers.AutoModel.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.float16).to(DEVICE)
    print("Loaded tokeniser and model")

    texts_to_encode = ["example text 1", "example text 2 of different length"]
    embeddings = encode_with_qwen_model(model, tokenizer, texts_to_encode)
    print(embeddings.shape)


if __name__ == "__main__":
    main()
The encode_with_qwen_model function returns a numpy array. In order to convert a PyTorch tensor to a numpy array, we first need to move it off the GPU and back onto the CPU, which is achieved with the cpu() method. Please note that if you are planning to run long texts, you should reduce the batch size to 1 and only embed one example at a time (thus reducing the list texts_to_encode to length 1).
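For example, a minimal way to do that (my own sketch; the helper below is hypothetical and not part of the original code) is to call encode_with_qwen_model on one text at a time and stack the results:
def encode_one_by_one(
    model: transformers.PreTrainedModel,
    tokenizer: transformers.tokenization_utils.PreTrainedTokenizer | transformers.tokenization_utils_fast.PreTrainedTokenizerFast,
    texts: list[str],
    max_length: int = 24000,
) -> npt.NDArray[np.float16]:
    # Embed each text in its own batch of size 1 to keep peak GPU memory low.
    embeddings = [encode_with_qwen_model(model, tokenizer, [text], max_length=max_length) for text in texts]
    return np.concatenate(embeddings, axis=0)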