Large Language Models
Reducing Memorization in LLMs: A technique that masks tokens in large language models, protecting data privacy
Studies have established that large language models can memorize text passages they were trained on repeatedly and regurgitate them verbatim, most often under adversarial prompting but occasionally in benign interactions as well.
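One plausible reading of "masking tokens" in this setting is to exclude a random subset of target tokens from the training loss, so the model is never explicitly trained to reproduce those tokens verbatim. The sketch below illustrates that idea in plain Python; the names `mask_targets` and `IGNORE_INDEX` are hypothetical (the `-100` sentinel mirrors the ignore-index convention used by common training frameworks), and this is an assumption about the technique, not its actual implementation.

```python
import random

IGNORE_INDEX = -100  # sentinel: labels with this value are skipped by the loss


def mask_targets(token_ids, mask_prob, seed=None):
    """Randomly replace a fraction of target token ids with IGNORE_INDEX.

    Masked positions contribute nothing to the training loss, so the
    model receives no gradient pushing it to memorize those tokens.
    """
    rng = random.Random(seed)
    return [IGNORE_INDEX if rng.random() < mask_prob else t for t in token_ids]


# Toy example: mask roughly 30% of the targets in a 10-token sequence.
targets = list(range(10))
masked = mask_targets(targets, mask_prob=0.3, seed=0)
```

Each surviving position keeps its original label, while masked positions are dropped from the loss entirely; the sequence itself is unchanged, so the model still conditions on the full context.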