This content originally appeared on NN/g latest articles and announcements and was authored by Tanner Kohler
Summary: Training modern LLMs is a costly process that shapes the model’s outputs and involves unsupervised, supervised, and reinforcement learning.
By this point, you've undoubtedly heard that the large language model (LLM) behind your favorite AI tool has been “trained on the whole internet.” To some extent, that’s true. However, after training hundreds of UX professionals on how to use AI in their work, it’s clear that many don’t understand how the AI itself is trained. That understanding is crucial for forming an accurate mental model of how these LLMs work, including their capabilities and limitations.
This article discusses four basic types of training, when each is performed during an LLM’s development, and how they affect the role of AI in user experience.
1. The Pretraining Phase: Unsupervised Learning
When people say that large language models have been “trained on the whole internet,” they are typically talking about the pretraining phase, which involves unsupervised learning.
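The key idea behind unsupervised (more precisely, self-supervised) pretraining is that the training labels come from the raw text itself: the model learns to predict each next token from the tokens before it, so no human annotation is required. The sketch below is illustrative only; real LLMs use subword tokenizers and large neural networks, not whitespace splitting, and `next_token_pairs` is a hypothetical helper, not part of any actual training library.

```python
# Illustrative sketch of the self-supervised objective used in pretraining:
# every position in the raw text yields a (context, next-token) example,
# so the "labels" are free — they are just the text itself.

def next_token_pairs(text):
    """Turn raw text into (context, next-token) training examples."""
    tokens = text.split()  # stand-in for a real subword tokenizer
    return [(tuple(tokens[:i]), tokens[i]) for i in range(1, len(tokens))]

corpus = "the model predicts the next token"
for context, target in next_token_pairs(corpus):
    print(" ".join(context), "->", target)
```

Run over a web-scale corpus instead of one sentence, this is conceptually what “trained on the whole internet” means: billions of such examples, with no person labeling any of them.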

Tanner Kohler | Sciencx (2025-05-02T21:00:00+00:00) How AI Models Are Trained. Retrieved from https://www.scien.cx/2025/05/02/how-ai-models-are-trained/