The transformer architecture is arguably one of the most impactful developments in modern deep learning. Proposed in the famous 2017 paper "Attention Is All You Need," it has become the go-to approach for most language-related modeling, including all Large Language Models (LLMs) such as the GPT family, as well as many computer vision tasks.
As the complexity and size of these models grow, so does the need to optimize their inference speed, especially in chat applications where users expect quick replies. Key-value (KV) caching is a clever trick to do exactly that. Let's see how it works and when to use it.
Before we dive into KV caching, we need to take a short detour to the attention mechanism used in transformers. Understanding how it works is required to spot and appreciate how KV caching optimizes transformer inference.
We will focus on autoregressive models used to generate text. These so-called decoder models include the GPT family, Gemini, Claude, and GitHub Copilot. They are trained on a simple task: predicting the next token in a sequence. During inference, the model is provided with some text, and its task is…
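To make this autoregressive loop concrete, here is a minimal greedy-decoding sketch in PyTorch. The `generate` helper, its parameters, and the assumption that `model` maps a `(1, seq_len)` tensor of token ids to `(1, seq_len, vocab_size)` logits are all illustrative, not any particular library's API:

```python
import torch

def generate(model, input_ids, max_new_tokens=20, eos_id=None):
    """Greedy autoregressive decoding: repeatedly predict the next token
    and append it to the sequence.

    Assumes `model` maps a (1, seq_len) tensor of token ids to
    (1, seq_len, vocab_size) logits.
    """
    tokens = input_ids.clone()
    for _ in range(max_new_tokens):
        logits = model(tokens)                                    # (1, seq_len, vocab_size)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)   # most likely next token, shape (1, 1)
        tokens = torch.cat([tokens, next_id], dim=1)              # append it and repeat
        if eos_id is not None and next_id.item() == eos_id:
            break
    return tokens
```

Note that each iteration of this naive loop re-runs the model over the entire sequence produced so far. Removing most of that redundant computation is exactly what KV caching is about.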