25 essential AI terms explained in plain English
LLM (Large Language Model): A type of AI model trained on massive amounts of text data to understand and generate human language. LLMs …
Prompt: The input text you give to an AI model. A well-crafted prompt provides context, instructions, constraints, …
Prompt Engineering: The practice of designing and refining prompts to get better, more reliable outputs from an AI model. Techn…
Tokens: The basic units of text that LLMs process. Tokens are roughly 3–4 characters or about ¾ of a word. LLMs hav…
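The 3–4-characters-per-token rule of thumb above can be sketched as a quick estimator. This is a rough heuristic only; real models use subword tokenizers (e.g. byte-pair encoding), so exact counts vary by model, and the function name here is illustrative:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters-per-token rule of thumb.

    Real LLMs use subword tokenizers, so exact counts differ per model;
    this heuristic is only useful for quick budgeting.
    """
    return max(1, round(len(text) / 4))

sentence = "The quick brown fox jumps over the lazy dog"
print(estimate_tokens(sentence))  # 43 characters -> about 11 tokens
```

For exact counts against a specific model, a model-specific tokenizer library would be used instead of this approximation.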
Context Window: The maximum amount of text (measured in tokens) an LLM can process in a single interaction — including both…
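One common way applications cope with a fixed context window is a sliding-window trim: drop the oldest messages until the rest fit. A minimal sketch, assuming a character-based token estimate (the function and its 4-characters-per-token default are illustrative):

```python
def trim_history(messages, max_tokens, estimate=lambda m: max(1, len(m) // 4)):
    """Keep the most recent messages whose combined estimated token count
    fits within max_tokens. A naive sliding-window strategy; real systems
    often also summarise old turns or pin the system prompt."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = estimate(msg)
        if used + cost > max_tokens:
            break                           # budget exhausted: drop the rest
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

history = ["a" * 40, "b" * 40, "c" * 40]    # ~10 estimated tokens each
print(trim_history(history, 25))            # keeps only the two newest
```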
Hallucination: When an AI model generates information that sounds plausible but is factually incorrect or entirely fabrica…
Temperature: A parameter (usually 0–2) that controls the randomness of an LLM's output. Lower temperature (closer to 0) …
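Under the hood, temperature typically divides the model's raw scores (logits) before they are turned into sampling probabilities. A self-contained sketch of that scaling, with made-up logit values:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into probabilities. Dividing by a low temperature
    sharpens the distribution (the top choice dominates); a high temperature
    flattens it (more randomness when sampling)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                    # illustrative scores for 3 tokens
cold = softmax(logits, temperature=0.5)     # peaked: top token very likely
hot = softmax(logits, temperature=1.5)      # flatter: more varied sampling
```

With temperature near 0 the distribution collapses onto the highest-scoring token, which is why low temperatures give near-deterministic output.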
System Prompt: A hidden instruction given to an LLM before the user's message. Used to set the model's persona, tone, outp…
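Many chat-style APIs express this as a role-tagged message list, with the system message placed before the user's turn. A sketch of that shape (the content strings are invented for illustration):

```python
# A role-tagged message list in the shape used by many chat-completion APIs.
# The "system" message is injected before the user's turn to fix persona,
# tone, and output rules; end users normally never see it.
messages = [
    {"role": "system",
     "content": "You are a concise support assistant. Answer in at most two sentences."},
    {"role": "user",
     "content": "How do I reset my password?"},
]
```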
RAG (Retrieval-Augmented Generation): A technique that combines an LLM with a search system. When a user asks a question, relevant documents are …
Embedding: A numerical representation (a vector of numbers) that captures the semantic meaning of text. Text that mean…
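Similarity between embeddings is most often measured with cosine similarity. A minimal sketch using toy 3-dimensional vectors (real embedding models produce hundreds or thousands of dimensions; the values here are invented):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: values near 1.0 mean the
    vectors point the same way (similar meaning); near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" chosen so related words point in similar directions.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
invoice = [0.0, 0.2, 0.95]
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, invoice))  # True
```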
Vector Database: A database optimised for storing and searching vector embeddings. Unlike traditional databases that match e…
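Conceptually, a vector database stores (id, vector) pairs and returns the nearest vectors to a query. This toy in-memory stand-in shows the idea with brute-force search; real vector databases use approximate-nearest-neighbour indexes (e.g. HNSW) to make the search fast over millions of vectors:

```python
class TinyVectorStore:
    """Minimal illustrative stand-in for a vector database: stores
    (doc_id, vector) pairs and ranks them by cosine similarity to a query."""

    def __init__(self):
        self.items = []  # list of (doc_id, vector)

    def add(self, doc_id, vector):
        self.items.append((doc_id, vector))

    def search(self, query, k=1):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = sum(x * x for x in a) ** 0.5
            nb = sum(y * y for y in b) ** 0.5
            return dot / (na * nb)
        ranked = sorted(self.items, key=lambda item: cos(query, item[1]),
                        reverse=True)
        return [doc_id for doc_id, _ in ranked[:k]]

store = TinyVectorStore()
store.add("refund-policy", [0.9, 0.1])      # toy 2-d embeddings
store.add("shipping-times", [0.1, 0.9])
print(store.search([0.8, 0.2]))             # ['refund-policy']
```

In a RAG pipeline, the top results of this search are what gets pasted into the LLM's prompt as context.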
Fine-Tuning: Taking a pre-trained foundation model and continuing to train it on a smaller, task-specific dataset. Fine-…
Foundation Model: A large AI model trained on broad, general-purpose data that serves as a base for many downstream tasks. GP…
RLHF (Reinforcement Learning from Human Feedback): A training technique used to align LLMs with human preferences. Human raters compare model outputs and rank…
AI Agent: An AI system that can take actions autonomously to achieve a goal — not just generate text. Agents use LLMs…
Function Calling (Tool Use): An LLM's ability to request the execution of external functions or APIs. The model decides when to ca…
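The application-side half of function calling is a dispatch step: parse the model's structured tool request, run the matching function, and return the result. A sketch in which the model's JSON output is hard-coded (in a real system it would come from the API response), and the tool itself is a stub:

```python
import json

# The functions we expose to the model as "tools".
def get_weather(city: str) -> str:
    return f"Sunny in {city}"   # stub; a real tool would call a weather API

TOOLS = {"get_weather": get_weather}

# Simulated model output: in a live system the model emits this JSON when it
# decides a tool is needed.
model_output = '{"tool": "get_weather", "arguments": {"city": "Lisbon"}}'

call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)  # Sunny in Lisbon
# The result would then be fed back to the model so it can compose its answer.
```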
MLOps: Machine Learning Operations — the practice of reliably deploying, monitoring, and maintaining ML models in …
Model Drift: The degradation of a model's performance over time as the real-world data it encounters diverges from the d…
Inference: The process of running a trained model to generate predictions or outputs. Inference is what happens every …
Multimodal AI: AI models that can process and generate multiple types of data — text, images, audio, and video — in a sing…
Zero-Shot and Few-Shot Prompting: Zero-shot means asking a model to perform a task with no examples — relying entirely on its training. Few-s…
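The difference is easiest to see in the prompt itself: few-shot simply prepends a handful of worked input/output pairs. A sketch with an invented helper and a toy sentiment task:

```python
def make_prompt(task, examples=None):
    """Build a prompt string: zero-shot when examples is empty, few-shot
    when input/output pairs are prepended to steer the model."""
    parts = [task]
    for inp, out in examples or []:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append("Input: {user_input}\nOutput:")  # slot for the real query
    return "\n\n".join(parts)

zero_shot = make_prompt("Classify the sentiment as positive or negative.")
few_shot = make_prompt(
    "Classify the sentiment as positive or negative.",
    examples=[("I loved it", "positive"), ("Waste of money", "negative")],
)
print(few_shot)
```

Even two or three well-chosen examples often nudge the model toward the desired format and labels far more reliably than instructions alone.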
Grounding: The process of connecting an LLM's outputs to verified, external information sources — reducing hallucinati…
AI Alignment: The challenge of ensuring that an AI system's goals and behaviours are consistent with human values and int…
Guardrails: Rules, filters, or classifiers applied to an AI system's inputs or outputs to prevent harmful, off-topic, o…
Transformer: The neural network architecture that powers virtually all modern LLMs. Introduced in the 2017 paper "Attent…