A model that learns how to de-noise images. Images in the training set have noise added to them and the model learns how to remove that noise. New images are then created by starting from random noise and running it through the de-noising process to produce a stable image.
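A heavily simplified conceptual sketch of that sampling loop, where `trained_denoiser` is a hypothetical stand-in for the learned de-noising network (here it just nudges pixel values toward a fixed target):

```python
import numpy as np

def trained_denoiser(noisy_image, step):
    # Hypothetical stand-in for a trained network that returns a slightly
    # less noisy version of its input at each step. A real model would have
    # learned this mapping from noised training images.
    target = np.full_like(noisy_image, 0.5)
    return noisy_image + 0.1 * (target - noisy_image)

# Start from pure random noise and repeatedly apply the de-noising step.
image = np.random.randn(64, 64)
for step in reversed(range(50)):
    image = trained_denoiser(image, step)
# After enough steps, `image` has settled into a stable output.
```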
discriminative AI
A class of machine learning algorithm that focuses on finding a boundary that separates different classes in the data.
emergence
A property of foundation models in which the model exhibits behaviors that were not explicitly constructed.
emergent behavior
A behavior exhibited by a foundation model that was not explicitly constructed.
few-shot prompting
A prompting technique in which multiple examples are provided to the model to demonstrate how to complete the task.
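For illustration, a few-shot prompt can be assembled as a plain string of worked examples followed by the new input; the reviews and task below are invented for this sketch:

```python
# Two worked examples ("shots") followed by the new input the model should complete.
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: positive

Review: It stopped working after a week and support never replied.
Sentiment: negative

Review: Setup took five minutes and everything just worked.
Sentiment:"""

# The string is then sent to a language model, which is expected to
# continue the pattern and answer "positive".
print(few_shot_prompt)
```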
fine-tuning
The process of conducting additional training on a pre-trained model with a smaller dataset, focused on a specific task.
foundation model
An AI model that can be adapted to a wide range of downstream tasks. Foundation models are typically large-scale generative models that are trained on unlabeled data using self-supervision.
generative AI
A class of AI algorithms that can produce various types of content including text, imagery, audio and synthetic data.
generative variability
The characteristic of generative models to produce varied outputs, even when the input to the model is held constant.
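One source of this variability is that generative models typically sample from a probability distribution over possible next tokens rather than always taking the most likely one. A toy sketch with an invented vocabulary and distribution:

```python
import numpy as np

rng = np.random.default_rng()

# Invented next-token distribution for the prompt "The weather today is".
vocab = ["sunny", "cloudy", "rainy", "windy"]
probs = [0.5, 0.25, 0.15, 0.10]

# Sampling (rather than always picking "sunny") produces varied outputs
# even though the prompt never changes.
for _ in range(3):
    print(rng.choice(vocab, p=probs))
```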
homogenization
The trend in machine learning research in which a small number of deep neural net architectures, such as the transformer, are achieving state-of-the-art results across a wide variety of tasks.
inferencing
The process of using a trained machine learning model to make predictions on new data.
large language model
A language model with a large number of parameters, trained on a large quantity of text.
mental model
An individual’s understanding of how a system works and how their actions affect system outcomes. These expectations often do not match the actual capabilities of a system, which can lead to frustration, abandonment or misuse.
one-shot prompting
A prompting technique in which a single example is provided to the model to demonstrate how to complete the task.
pretraining
The process of training a machine learning model on a large dataset before fine-tuning it for a specific task.
prompt
Data, such as text or an image, that prepares, instructs, or conditions a foundation model’s output.
prompt engineering
The process of designing prompts for a language model to elicit specific kinds of outputs.
prompt tuning
Adding AI-generated vectors ("soft prompts") to a prompt to influence the model’s output or adapt it to new tasks, without changing the model’s own weights.
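A minimal PyTorch sketch of the idea, assuming a frozen model that consumes token embeddings; the learned soft-prompt vectors are the only trainable parameters, and all sizes and names here are illustrative:

```python
import torch
import torch.nn as nn

hidden_size, num_soft_tokens, vocab_size = 64, 8, 1000

# Learnable "soft prompt" vectors that will be prepended to every input.
soft_prompt = nn.Parameter(torch.randn(num_soft_tokens, hidden_size))

# Stand-in for a frozen pretrained model's embedding table.
embedding = nn.Embedding(vocab_size, hidden_size)
embedding.weight.requires_grad = False

input_ids = torch.tensor([[5, 42, 7]])               # one tokenized example
token_embeds = embedding(input_ids)                   # (1, 3, hidden_size)
prompt = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
inputs = torch.cat([prompt, token_embeds], dim=1)     # (1, 8 + 3, hidden_size)

# `inputs` would be fed to the frozen model; only `soft_prompt` is updated
# during training, adapting the model to a new task without retraining it.
print(inputs.shape)
```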
reinforcement learning
A learning paradigm that learns to optimize sequential decision making (decisions made across time steps). For example, reinforcement learning can be used to learn a policy for determining daily stock replenishment amounts in an inventory control scenario.
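A compact sketch of tabular Q-learning, one common reinforcement learning algorithm, applied to a toy inventory problem in which the agent learns how much stock to order each day. The environment (demand, costs, rewards) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
max_stock, actions = 5, [0, 1, 2, 3]      # possible replenishment amounts
Q = np.zeros((max_stock + 1, len(actions)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

stock = 2
for _ in range(5000):
    # Epsilon-greedy choice between exploring and exploiting.
    a = rng.integers(len(actions)) if rng.random() < epsilon else int(Q[stock].argmax())
    order = actions[a]
    demand = rng.integers(0, 3)                        # invented daily demand
    new_stock = min(stock + order, max_stock)
    sold = min(new_stock, demand)
    reward = 2.0 * sold - 0.5 * order - 0.1 * (new_stock - sold)  # revenue minus costs
    new_stock -= sold
    # Q-learning update: improve the estimate of acting this way at this stock level.
    Q[stock, a] += alpha * (reward + gamma * Q[new_stock].max() - Q[stock, a])
    stock = new_stock

print("Learned order amount per stock level:", [actions[int(i)] for i in Q.argmax(axis=1)])
```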
reinforcement learning from human feedback
A method of aligning a language model’s responses with the instructions given in a prompt. RLHF requires human annotators to rank multiple outputs from the model. These rankings are used to train a reward model, which is then used with reinforcement learning to fine-tune the large language model’s output.
self-supervised learning
A machine learning training method in which a model learns from unlabeled data by masking tokens in an input sequence and then trying to predict them (e.g. “I like ____ sprouts”).
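As an illustration, a model pretrained with this masked-token objective can be asked to fill in a blank. A sketch using the Hugging Face transformers library (it downloads a model on first use, and the exact predictions depend on that model):

```python
from transformers import pipeline

# BERT-style models are pretrained with exactly this masked-token objective.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("I like [MASK] sprouts."):
    print(prediction["token_str"], round(prediction["score"], 3))
```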
supervised learning
A machine learning training method in which a model is trained on a labeled dataset to make predictions on new data.
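A minimal scikit-learn sketch: a classifier is fitted to labeled examples and then used to predict labels for new data. The tiny dataset is invented:

```python
from sklearn.linear_model import LogisticRegression

# Labeled training data: features (hours studied, hours slept) and pass/fail labels.
X_train = [[1, 4], [2, 8], [6, 7], [8, 6], [3, 5], [7, 8]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)            # learn from labeled data

# Predict labels for new, unseen examples.
print(model.predict([[2, 6], [9, 7]]))
```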
token
A discrete unit of meaning or analysis in a text, such as a word or subword (part of a word).
tokenization
The process used in NLP to split a string of text into smaller units, such as words or subwords.
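A toy sketch of word-level tokenization using a regular expression; production NLP systems typically use learned subword tokenizers (such as byte-pair encoding), which may split rare words into smaller pieces:

```python
import re

def tokenize(text):
    # Keep runs of word characters as tokens and punctuation as separate tokens.
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("I like Brussels sprouts, don't you?"))
# ['I', 'like', 'Brussels', 'sprouts', ',', 'don', "'", 't', 'you', '?']
```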
transformer
A neural network architecture that uses positional encodings and the self-attention mechanism to predict the next token in a sequence of tokens.
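A small NumPy sketch of the scaled dot-product self-attention computation at the heart of the transformer; the query, key, and value matrices here are random stand-ins for learned projections of the token embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8                      # 4 tokens, 8-dimensional keys

# Stand-ins for the learned query, key, and value projections of each token.
Q = rng.standard_normal((seq_len, d_k))
K = rng.standard_normal((seq_len, d_k))
V = rng.standard_normal((seq_len, d_k))

scores = Q @ K.T / np.sqrt(d_k)          # how strongly each token attends to the others
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
attended = weights @ V                   # each token becomes a weighted mix of values

print(attended.shape)                    # (4, 8)
```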
unsupervised learning
A machine learning training method in which a model is not provided with labeled data and must find patterns or structure in the data on its own.
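A minimal scikit-learn sketch: a clustering algorithm groups unlabeled points on its own, with no labels provided. The data is invented:

```python
from sklearn.cluster import KMeans

# Unlabeled 2-D points that happen to form two loose groups.
X = [[1, 2], [1, 1], [2, 2], [8, 9], [9, 8], [8, 8]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)   # e.g. [0 0 0 1 1 1]: structure found without any labels
```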
zero-shot prompting
A prompting technique in which the model completes a task without being given any examples of how to complete it.