In this article, we focus on the privacy risks of large language models (LLMs) with respect to their scaled deployment in enterprises.
We also see a growing (and worrisome) trend where enterprises are applying the privacy frameworks and controls that they had designed for their data science / predictive analytics pipelines, as-is, to Gen AI / LLM use-cases.
This is clearly inefficient (and risky), and we need to adapt enterprise privacy frameworks, checklists, and tooling to take into account the novel and differentiating privacy aspects of LLMs.
Let us first consider the privacy attack scenarios in a traditional supervised ML context [1, 2]. This covers the majority of the AI/ML world today, which mostly consists of machine learning (ML) / deep learning (DL) models developed with the goal of solving a prediction or classification task.
There are primarily two broad categories of inference attacks: membership inference and property inference.
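To make the membership inference scenario concrete, here is a minimal sketch (not from the article; the data, model, and threshold are all hypothetical) of the classic confidence-thresholding baseline: the attacker queries the target model and guesses "member of the training set" whenever the prediction confidence is unusually high, exploiting the fact that overfit models are more confident on the examples they were trained on.

```python
# Minimal membership-inference sketch: confidence-threshold baseline.
# Assumptions (not from the article): synthetic data, a scikit-learn
# classifier, and a fixed threshold; a real attack would calibrate the
# threshold, e.g. via shadow models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Target model, deliberately prone to overfitting (unpruned deep trees).
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
model.fit(X_train, y_train)

def confidence(clf, data):
    """Max predicted class probability per sample -- the attacker's signal."""
    return clf.predict_proba(data).max(axis=1)

# Attacker guesses "member" when confidence exceeds a (hypothetical) threshold.
threshold = 0.9
guess_in = confidence(model, X_train) > threshold   # true members
guess_out = confidence(model, X_out) > threshold    # true non-members

# A gap between these two rates means the attack leaks membership information.
print(f"members flagged:     {guess_in.mean():.2f}")
print(f"non-members flagged: {guess_out.mean():.2f}")
```

Property inference works analogously at the dataset level: rather than asking whether a specific record was in the training data, the attacker infers aggregate properties of it (e.g., the demographic mix of the training population).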