Your friendly guide to AI terms — short, plain-English definitions so you can learn fast and use them with confidence.

Artificial General Intelligence (AGI)
A hypothetical AI that can learn and reason across any subject as well as a human — still a goal, not a reality today.
AI Agent
An autonomous system that perceives its environment, makes decisions, and takes actions to achieve goals — the next frontier beyond chatbots.
Algorithm
A clear set of steps or rules a computer follows to solve a problem. Think of it as a recipe for a task.
Alignment
Making sure an AI's goals and behavior match human values and intentions — so it does what people want in safe ways.
Annotation
Adding labels to data (like tagging images or marking text) so AI models can learn what things are.
Attention
A technique that lets models focus on the most relevant parts of their input, enabling breakthroughs in translation, generation, and understanding.
Backpropagation
A method neural networks use to learn from errors: the model adjusts its internal numbers so future guesses get closer to the right answer.
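The "adjust the internal numbers" idea can be sketched on a one-parameter model. The input, target, and learning rate below are made-up illustration values, not from any real system:

```python
# Backpropagation sketch for a single linear "neuron": y_pred = w * x.
# The gradient of the squared-error loss tells us which way to nudge w.

def train_step(w, x, target, lr=0.1):
    y_pred = w * x              # forward pass: make a guess
    error = y_pred - target
    loss = error ** 2           # how wrong the guess was
    grad_w = 2 * error * x      # chain rule: d(loss)/d(w)
    w = w - lr * grad_w         # adjust the internal number
    return w, loss

w = 0.0
for _ in range(50):
    w, loss = train_step(w, x=2.0, target=6.0)
```

After enough steps, `w` settles near 3.0, the value that maps the input 2.0 onto the target 6.0.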
Bias
When an AI makes unfair or skewed decisions because the data it learned from was unbalanced or flawed.
Black Box
A system that's hard to inspect or explain — you can see inputs and outputs, but not the internal reasoning easily.
Chatbot
A program that chats with users in text or voice — from simple scripted bots to advanced conversational AIs.
Classification
Assigning inputs into categories, like sorting emails into "spam" or "not spam."
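A toy version of the spam example makes the idea concrete. The keyword list here is a hypothetical stand-in; a real classifier learns its cues from labeled data rather than a hand-written list:

```python
# Toy spam classifier: assign an email to "spam" or "not spam"
# based on a hand-picked (hypothetical) set of trigger words.

SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def classify(email: str) -> str:
    words = set(email.lower().split())
    return "spam" if words & SPAM_WORDS else "not spam"
```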
Computer Vision
Teaching computers to see and interpret images or videos — used for faces, objects, driving, and more.
Data
Raw facts (text, images, numbers) we feed to models so they can learn — quality data matters a lot.
Deep Learning
A type of machine learning that uses layered neural networks to learn complex patterns from large datasets.
Diffusion Model
A generative technique that makes images by gradually turning random noise into a clear picture guided by learned patterns.
Embedding
Numbers that represent words, sentences, or images so a model can compare meaning and find similarities easily.
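"Comparing meaning" usually means measuring the angle between vectors. The 3-number vectors below are invented for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import math

# Cosine similarity: vectors pointing in similar directions score near 1,
# unrelated directions score near 0.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up embeddings: "cat" and "kitten" should land close together.
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.2]
car = [0.1, 0.2, 0.9]
```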
Encoder
A part of a model that converts input (like words or pixels) into a compact internal representation the AI can use.
Evaluation
Testing a model to see how well it performs — usually using held-back data not seen during training.
Feature
An individual measurable property or input (like age or pixel brightness) used by a model to make predictions.
Fine-Tuning
Taking a pre-trained model and training it further on specific data so it performs better for a certain task.
Foundation Model
A very large pre-trained model that can be adapted to many tasks — like a reusable base model.
Generative Adversarial Network (GAN)
Two networks compete — one makes images, the other checks them — improving realism over time.
Generative AI
AI that creates new content — like text, images, or music — rather than just analyzing what's already there.
GPU (Graphics Processing Unit)
Fast processors used to train and run AI models, especially where many calculations happen in parallel.
Hallucination
When an AI confidently produces incorrect or made-up information — a common issue with generative models.
Hidden Layers
Intermediate layers in a neural network where features are transformed between input and output.
Inference
Using a trained model to make predictions or generate outputs from new input data.
Interpretability
How easy it is for humans to understand why a model made a certain decision.
Iteration
A single pass of training or improvement — models often need many iterations to learn well.
Jaro-Winkler Distance
A simple method to measure how similar two strings are — useful for fuzzy matching names or text.
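Python's standard library offers one simple similarity score out of the box; dedicated metrics such as Jaro-Winkler or Levenshtein distance build on the same idea of counting matches and edits. A minimal sketch:

```python
from difflib import SequenceMatcher

# Score how similar two strings are, from 0.0 (nothing shared)
# to 1.0 (identical), ignoring letter case.

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()
```

Useful for fuzzy matching, e.g. spotting that "Jon Smith" and "John Smith" are probably the same person.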
Jitter
Small random changes applied to inputs (like images) during training to help models generalize better.
Kernel
In some models, a function that measures similarity between data points (common in methods like SVMs).
K-Fold Cross-Validation
A way to test model performance by splitting data into K parts and rotating which part is used for testing.
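The split-and-rotate step can be sketched in a few lines; the 10-item dataset and K=5 are illustrative, and the actual model training is left out:

```python
# K-fold cross-validation splits: divide the data into k folds, then
# rotate which fold is held back for testing while the rest train.

def k_fold_splits(data, k):
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

data = list(range(10))
splits = list(k_fold_splits(data, k=5))
```

Every sample gets exactly one turn in the test set, so the performance estimate uses all of the data.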
Large Language Model (LLM)
A model trained on huge amounts of text to understand and generate natural language (e.g., writing or summarizing).
Learning Rate
A setting that controls how big each update is during training — too big or too small can cause problems.
Loss Function
A formula that measures how wrong a model's predictions are — training tries to make this number small.
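One widely used example is mean squared error; the numbers in the test below are invented to show the arithmetic:

```python
# Mean squared error: the average squared gap between predictions
# and true values. A perfect model scores 0.

def mse(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
```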
Machine Learning
Building systems that learn from data to make predictions or decisions without being explicitly programmed for every case.
Model
The trained result of a learning process — the thing you use to make predictions or generate outputs.
Multimodal AI
Models that can understand and combine different kinds of data — like text, images, and audio — at the same time.
Natural Language Processing (NLP)
The area of AI focused on making computers understand and generate human language.
Neural Network
A set of connected layers that pass information and learn patterns — loosely inspired by the brain's neurons.
Optimizer
An algorithm (like Adam or SGD) that updates a model's parameters to reduce the loss during training.
Overfitting
When a model learns training details too closely and performs poorly on new, unseen data.
Pre-Training
Training a model on broad data first so it learns general patterns before you fine-tune it for a specific job.
Prompt
The instruction or question you give to a generative model to guide its output.
Prompt Engineering
Crafting prompts in a way that helps models give better, more useful answers.
Q-Learning
A reinforcement learning method where an agent learns which actions give the best future rewards.
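The core of the method is a single update rule: nudge the score of a (state, action) pair toward the reward plus the discounted best future value. The two-state world, learning rate, and discount below are illustrative choices:

```python
# One Q-learning update. alpha is the learning rate, gamma the discount
# that makes sooner rewards worth more than later ones.

def q_update(Q, s, a, reward, s_next, alpha=0.5, gamma=0.9):
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])

# Tiny made-up world: from "start", moving "right" reaches the goal.
Q = {"start": {"left": 0.0, "right": 0.0}, "goal": {}}
q_update(Q, "start", "right", reward=1.0, s_next="goal")
```

Repeated over many episodes, the highest-scoring action in each state becomes the agent's policy.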
Quantization
Making a model smaller and faster by using fewer bits for its numbers — useful for running models on phones or edge devices.
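A minimal sketch of the round trip, assuming simple symmetric int8 quantization (real toolkits offer several more sophisticated schemes):

```python
# Map floats to 8-bit integers with a shared scale, then back.
# Values survive approximately, at a quarter of 32-bit float memory.

def quantize(values, scale=127.0):
    max_abs = max(abs(v) for v in values)
    step = max_abs / scale
    return [round(v / step) for v in values], step

def dequantize(ints, step):
    return [i * step for i in ints]

weights = [0.52, -1.30, 0.07]   # made-up model weights
q, step = quantize(weights)
approx = dequantize(q, step)
```

The recovered values differ from the originals only by small rounding error, which well-trained models usually tolerate.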
Regression
A prediction task where the model estimates continuous values, like predicting house prices.
Regularization
Methods to prevent overfitting, such as adding penalties or randomly dropping parts of the model during training.
Reinforcement Learning
Training agents by reward and punishment so they learn to make sequences of good decisions over time.
RLHF (Reinforcement Learning from Human Feedback)
A training technique where human preferences guide an AI model's behavior, making outputs more helpful, harmless, and honest.
Singularity
A speculative idea where AI growth accelerates to a point of dramatic change — debated and uncertain.
Stochastic Gradient Descent (SGD)
A basic optimization method that updates model parameters using small batches of data at a time.
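A sketch of the batch-at-a-time loop, fitting the toy model y = w * x; the dataset, batch size, and learning rate are illustrative assumptions:

```python
import random

# SGD sketch: each step samples a small batch, computes the gradient of
# the squared error on that batch only, and nudges w downhill.

def sgd(data, w=0.0, lr=0.05, batch_size=2, steps=200, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        batch = rng.sample(data, batch_size)
        grad = sum(2 * (w * x - y) * x for x, y in batch) / batch_size
        w -= lr * grad
    return w

# Made-up data generated by the rule y = 3x; SGD should recover w near 3.
data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]
w = sgd(data)
```

Using small batches makes each step cheap and noisy; over many steps the noise averages out and w converges.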
Supervised Learning
Training models on labeled examples so they learn to map inputs to known outputs.
Token
A piece of text the model processes (often a word or part of a word). Models read tokens one by one.
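A toy tokenizer shows the idea. Real models use learned subword vocabularies (for example, byte pair encoding), but either way text becomes a sequence of small pieces:

```python
# Toy tokenizer: split on whitespace, then peel trailing punctuation
# off into its own token.

def tokenize(text):
    tokens = []
    for word in text.split():
        if len(word) > 1 and word[-1] in ".,!?":
            tokens.extend([word[:-1], word[-1]])
        else:
            tokens.append(word)
    return tokens
```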
Training Data
The examples used to teach a model — better, more diverse data usually means a better model.
Transfer Learning
Reusing a model trained on one task as the starting point for a different task — saving time and data.
Transformer
A neural structure great at handling sequences (like text) using attention — it's behind most modern language models.
Unsupervised Learning
Letting models find patterns in unlabeled data on their own, like clustering similar items together.
Utility Function
A measure used in some AI approaches (notably reinforcement learning) to score how desirable outcomes are.
Validation Set
A slice of data used during training to check how well the model is learning and to tune settings.
Vector Database
A specialized database designed to store and quickly search high-dimensional vectors (embeddings), powering semantic search and RAG systems.
Vision Transformer (ViT)
A transformer adapted for image tasks — it treats patches of an image like tokens of text.
Weights
Numbers inside a model that get updated during training — they determine how inputs map to outputs.
Word Embedding
A vector that represents a word's meaning so similar words are close together in math space.
XAI (Explainable AI)
Tools and methods that help people understand how AI makes decisions — important for trust and verification.
XGBoost
A fast, powerful machine learning library for structured data — often used in competitions and industry.
YAML
A human-friendly text format often used to store configuration for ML experiments and deployments.
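A small hypothetical training config shows the format; every key here (model, training, augmentations) is illustrative rather than from any particular framework:

```yaml
# Hypothetical ML experiment config (keys are illustrative)
model: resnet50
dataset: ./data/train
training:
  epochs: 10
  batch_size: 32
  learning_rate: 0.001
augmentations:
  - random_crop
  - horizontal_flip
```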
YOLO (You Only Look Once)
A real-time approach to identify objects in images quickly — popular for fast vision tasks.
Zero-Shot Learning
When a model performs a task it wasn't explicitly trained on by generalizing from related knowledge.
Zeitgeist
In AI, refers to the current trends, tools, and public discussion shaping how AI is understood and used.