**One-shot prompting** is a technique in [[Prompt engineering|prompt engineering]] where a single example is provided to a [[Large language model|large language model]] to demonstrate the desired format, style, or task before requesting a response. It is one of several [[In-context learning (natural language processing)|in-context learning]] methods used to guide [[Artificial intelligence|AI]] model behavior without modifying the model's underlying parameters. The technique occupies a middle position between [[Zero-shot prompting|zero-shot prompting]], which provides no examples, and [[Few-shot prompting|few-shot prompting]], which provides multiple examples.

In one-shot prompting, the user includes a single input-output pair that illustrates the expected pattern, followed by a new input for which the model generates a corresponding output. This approach is particularly useful when the task requires a specific format or reasoning style that may not be apparent from instructions alone.

One-shot prompting emerged as a practical application following research demonstrating that large language models could learn from examples presented in context, notably documented in the 2020 paper introducing [[GPT-3]]. The technique offers a balance between the simplicity of zero-shot approaches and the improved accuracy of few-shot methods, while consuming less of the model's [[Context window|context window]] than multi-example prompts. It is widely used in applications requiring structured outputs, [[Text classification|classification]] tasks, and scenarios where computational resources or context length are constrained.
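The structure described above, an instruction, one worked input-output pair, then the new input left open for completion, can be sketched in plain Python. The helper name `build_one_shot_prompt` and the sentiment-classification example are illustrative assumptions, not a standard API; the actual model call is omitted.

```python
def build_one_shot_prompt(instruction, example_input, example_output, new_input):
    """Assemble a one-shot prompt: an instruction, a single worked
    example demonstrating the pattern, and the new input whose
    completion the model is expected to generate."""
    return (
        f"{instruction}\n\n"
        f"Input: {example_input}\n"
        f"Output: {example_output}\n\n"
        f"Input: {new_input}\n"
        f"Output:"
    )

# Hypothetical classification task: the single example shows the
# model both the label set and the expected one-word output format.
prompt = build_one_shot_prompt(
    instruction="Classify the sentiment of each review as Positive or Negative.",
    example_input="The battery lasts all day and the screen is gorgeous.",
    example_output="Positive",
    new_input="It stopped working after two days.",
)
print(prompt)
```

The prompt ends with a bare `Output:` so that the model's natural continuation is the answer itself; a zero-shot variant would simply omit the example pair, and a few-shot variant would repeat it with additional pairs.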