Large Language Model (LLM): A type of AI model trained on massive text datasets to understand and generate human language. Examples include GPT-4, Claude, and Llama.
Transformer: A neural network architecture that uses self-attention mechanisms to process sequential data. It is the foundation of modern language models.
Token: The basic unit of text that language models process. A token can be a word, part of a word, or a single character; English text averages roughly four characters per token.
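The four-characters-per-token average gives a quick way to estimate token counts without running a real tokenizer. A minimal sketch, assuming a hypothetical helper name (`estimate_tokens`) and the rough heuristic above; real tokenizers (e.g. BPE) split on learned subword boundaries, so this is only a ballpark:

```python
def estimate_tokens(text, chars_per_token=4):
    # Rough estimate based on the ~4 characters/token average for
    # English text; always at least 1 token for non-trivial input.
    return max(1, round(len(text) / chars_per_token))

print(estimate_tokens("Hello, world!"))  # 13 characters -> ~3 tokens
```

Such estimates are useful for budgeting prompts against a model's context window before sending a request.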
Prompt: The input text or instructions given to an AI model to generate a response. Prompt engineering is the practice of crafting effective prompts.
Fine-tuning: The process of further training a pre-trained model on a specific dataset to adapt it to a particular task or domain.
Retrieval-Augmented Generation (RAG): A technique that combines information retrieval with text generation, allowing AI models to consult external knowledge bases and produce more accurate responses.
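The retrieve-then-generate flow can be sketched in a few lines. This is a toy illustration, not a production pipeline: word overlap stands in for the embedding-based semantic search a real RAG system would use, and the function names (`retrieve`, `build_prompt`) are hypothetical:

```python
def retrieve(query, documents, k=1):
    # Score documents by how many query words they share
    # (a crude stand-in for vector similarity search).
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    # Paste the retrieved passages into the prompt so the model
    # can ground its answer in external knowledge.
    context = "\n".join(retrieve(query, documents, k=2))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."
```

The assembled prompt is then sent to the language model as usual; the retrieval step is what distinguishes RAG from plain generation.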
Hallucination: When an AI model generates information that sounds plausible but is factually incorrect or fabricated.
Context window: The maximum amount of text (measured in tokens) that a language model can process in a single interaction.
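A practical consequence of the context window is that long conversations must be truncated to fit the token budget. A minimal sketch, assuming a hypothetical `fit_to_context` helper and a crude per-message token estimate (a real system would use the model's own tokenizer):

```python
def fit_to_context(messages, max_tokens, count_tokens=lambda m: len(m) // 4 + 1):
    # Walk the conversation from newest to oldest, keeping messages
    # until the token budget is exhausted, then restore order.
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

Dropping the oldest messages first is the simplest policy; real chat applications often also summarize the truncated history instead of discarding it outright.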
Temperature: A parameter that controls the randomness of a model's outputs. Lower values produce more deterministic responses; higher values produce more varied, creative ones.
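Mechanically, temperature divides the model's logits before the softmax that turns them into a probability distribution over next tokens. A self-contained sketch (the function name is illustrative):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # T < 1 sharpens the distribution (more deterministic sampling);
    # T > 1 flattens it (more varied, "creative" sampling).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, temperature=0.2)  # peaked
hot = softmax_with_temperature(logits, temperature=2.0)   # flatter
```

At very low temperature the highest-logit token gets almost all the probability mass, which is why low-temperature sampling approaches greedy decoding.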
Embedding: A numerical representation of text in a high-dimensional vector space, used for semantic search and similarity comparisons.
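Similarity between embeddings is most commonly measured with cosine similarity, the cosine of the angle between two vectors. A minimal pure-Python version (real systems apply this to vectors with hundreds or thousands of dimensions produced by an embedding model):

```python
import math

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way (maximally similar),
    # 0.0 means they are orthogonal (unrelated).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Semantic search ranks documents by the cosine similarity between the query's embedding and each document's embedding.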
Artificial General Intelligence (AGI): A hypothetical AI system that can understand, learn, and apply knowledge across any intellectual task a human can perform.
Neural network: A computing system inspired by biological neural networks, consisting of interconnected nodes that process information in layers.
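One layer of such a network is just a weighted sum per node followed by a nonlinearity. A toy forward pass for a single fully connected layer, assuming the logistic sigmoid as the activation (the function name is illustrative):

```python
import math

def dense_layer(inputs, weights, biases):
    # Each output node computes a weighted sum of all inputs plus a
    # bias, then passes it through the sigmoid nonlinearity.
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1 / (1 + math.exp(-z)))
    return outputs

# Two inputs feeding one output node; zero weights give sigmoid(0) = 0.5.
print(dense_layer([1.0, 2.0], [[0.0, 0.0]], [0.0]))
```

Stacking such layers, and learning the weights and biases from data, is what gives deep networks their expressive power.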
Diffusion model: A type of generative model that creates data by gradually removing noise from a random signal. Used in image generators such as DALL-E, Stable Diffusion, and Midjourney.
Attention mechanism: A component of neural networks that lets the model focus on the most relevant parts of the input when producing each part of the output.
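The standard formulation is scaled dot-product attention: scores are dot products between a query and each key, scaled by the square root of the dimension, softmaxed into weights, and used to mix the value vectors. A pure-Python sketch for a single query (real implementations batch this over matrices):

```python
import math

def attention(query, keys, values):
    # score_i = (query . key_i) / sqrt(d), then softmax over scores,
    # then output = sum_i weight_i * value_i.
    d = len(query)
    scores = [
        sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
        for key in keys
    ]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
```

In self-attention, the queries, keys, and values are all derived from the same input sequence, which is how transformers relate every token to every other token.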
Zero-shot learning: The ability of an AI model to perform a task it was not explicitly trained on, using only its general knowledge and a description of the task.
Few-shot learning: A technique where an AI model learns to perform a task from just a few examples provided in the prompt.
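In practice this means formatting a handful of input/output demonstrations ahead of the new input. A minimal prompt-builder sketch, assuming a hypothetical `few_shot_prompt` helper and a simple "Input:/Output:" template (any consistent template works):

```python
def few_shot_prompt(examples, query):
    # Demonstrations teach the pattern in-context; the trailing
    # "Output:" invites the model to complete the new case.
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

prompt = few_shot_prompt([("2+2", "4"), ("5+1", "6")], "3+3")
```

No weights are updated; the model infers the task purely from the examples in its context window, which is why this is also called in-context learning.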
RLHF (Reinforcement Learning from Human Feedback): A training technique in which human preference judgments are used to fine-tune AI models toward more helpful and safer responses.
Multimodal model: An AI model that can process and generate multiple types of data, such as text, images, audio, and video.
Inference: The process of using a trained AI model to generate predictions or outputs from new input data.
Latent space: A compressed representation of data learned by a model, in which similar items are positioned close together in the mathematical space.