At the core of most AI tools you’re using (ChatGPT, Claude, Gumloop) is a model. This is what processes the text, image, or audio you send and gives you a response.
How these models work is actually pretty simple to understand. To be clear: they’re extremely hard to build and vastly more complicated than what I’m about to explain. But fundamentally, LLMs are next-word predictors.
You write something like “Who was the first president of the United States?”
The model takes that, maps it against the vast amount of data it has analyzed, and tries to predict the right next word, one word at a time.
Every time it picks a word, it looks at your prompt plus what it’s written so far and predicts the next word.
And it does that until it’s answered your question.
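To make that loop concrete, here’s a minimal Python sketch. The `predict_next_word` function is a hypothetical stand-in for the model itself (the genuinely hard part, a neural network trained on vast amounts of text); the point is the shape of the loop: your prompt plus everything written so far goes in, one word comes out.

```python
def predict_next_word(text: str) -> str:
    """Stand-in for the model. In reality this is a neural network;
    here it's a toy lookup so the sketch runs on its own."""
    canned = {
        "Who was the first president of the United States?": "George",
        "Who was the first president of the United States? George": "Washington",
    }
    return canned.get(text, "<end>")

prompt = "Who was the first president of the United States?"
answer_so_far = ""

while True:
    # The model sees the prompt PLUS everything it has written so far.
    next_word = predict_next_word((prompt + " " + answer_so_far).strip())
    if next_word == "<end>":  # the model decides it's done
        break
    answer_so_far = (answer_so_far + " " + next_word).strip()

print(answer_so_far)  # -> "George Washington"
```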
This is how chatbots are built: you prompt them, and they respond to you word by word.
How do they go from one answer to a conversation? It’s actually just more of the same: you feed the whole
conversation—every message so far—back to the model and it predicts the next word. When you start a new
conversation, the model starts fresh. It has no memory of what came before.
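In code, “feeding the whole conversation back” usually looks like a growing list of messages. Here’s a sketch of that pattern; `generate` is a hypothetical placeholder for a real model call.

```python
# A chatbot is just an LLM plus this pattern: on every turn, the FULL
# conversation so far is sent back to the model. The model itself is
# stateless; delete this list and the "memory" is gone.

def generate(messages: list[dict]) -> str:
    """Stand-in for a real model call (in practice, an API request).
    Returns a canned reply so the sketch runs on its own."""
    return f"(reply based on all {len(messages)} messages so far)"

messages = []

def chat(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    reply = generate(messages)  # the full history goes in every time
    messages.append({"role": "assistant", "content": reply})
    return reply

print(chat("Who was the first president of the United States?"))
print(chat("When was he born?"))  # "he" resolves only because turn one is re-sent
```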
Now in Gumloop, you can pick from most of the models out there. How are they different?
Models sit on a spectrum where intelligence and speed are inversely related. The more capable the model, the slower you should expect a response, but with fewer mistakes. And the further up the curve you go, the more you should expect to pay.
Anthropic, the creator of the Claude models, has three options. Opus sits at one end: thinks deeply, responds
slowly, like your grandpa. Then there’s Sonnet in the middle—your capable coworker. And finally Haiku, your eager
teenager ready with a quick answer.
OpenAI and Google’s Gemini models sit in similar places.
What should you pick in Gumloop? That depends on your task, but what I always recommend is to start with an advanced model and then move down to simpler models as long as you’re happy with the results. Keep going until you find the right balance: the fastest, cheapest model that still gives you quality you’re satisfied with.
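One way to run that experiment systematically, as a sketch: the model names are illustrative, and `run_workflow` and `quality_is_acceptable` are hypothetical placeholders for your actual task and your own judgment of the output.

```python
# Ordered from most capable (slow, expensive) to simplest (fast, cheap).
# Names are illustrative; use whatever models your tool exposes.
models = ["most-capable-model", "mid-tier-model", "fastest-model"]

def run_workflow(model: str) -> str:
    """Placeholder: run your actual task with the given model."""
    return f"output from {model}"

def quality_is_acceptable(output: str) -> bool:
    """Placeholder: your own check, manual or automated."""
    return True

best = models[0]  # start at the advanced end
for model in models:
    if quality_is_acceptable(run_workflow(model)):
        best = model  # cheaper and faster, still good: keep stepping down
    else:
        break         # quality dropped: stick with the previous model

print(f"Use {best}: the simplest model whose quality you're happy with.")
```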
So now we understand what models are: next-word predictors we can go back and forth with. And chatbots are simply LLMs that keep the conversation going by being fed the whole history each time.
But how do we give AI access to our day-to-day tools so these LLMs can actually do things for us, instead of just sending text back? That’s in the next lesson.