Agentic AI concepts

Prompt Engineering

The practice of designing, iterating, and optimizing text inputs to language models—including instructions, examples, context, and formatting—to reliably elicit desired outputs without modifying model weights.

Prompt engineering spans a wide range of techniques, from simple instruction formatting to elaborate multi-step scaffolds. Core principles include: be specific about the output format, provide worked examples for complex tasks (few-shot), give the model space to reason before answering (chain-of-thought), and use a system prompt to establish a stable persona and constraints.
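These principles compose naturally in a single prompt. A minimal sketch, assuming an OpenAI-style chat message format (role/content dicts); no API call is made, and the analyst persona and example task are illustrative:

```python
def build_prompt(task: str, examples: list[tuple[str, str]]) -> list[dict]:
    """Assemble a chat prompt combining a system prompt, few-shot
    examples, and a chain-of-thought instruction."""
    messages = [
        # System prompt: stable persona and output constraints.
        {"role": "system",
         "content": "You are a careful data analyst. "
                    "Answer in JSON with keys 'reasoning' and 'answer'."},
    ]
    # Few-shot: worked examples appear as prior user/assistant turns.
    for question, answer in examples:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    # Chain-of-thought: explicitly grant space to reason before answering.
    messages.append({
        "role": "user",
        "content": f"{task}\n\nThink step by step before giving the answer."
    })
    return messages

prompt = build_prompt(
    "Is 1000003 prime?",
    examples=[("Is 15 prime?",
               '{"reasoning": "15 = 3 x 5", "answer": false}')],
)
print(len(prompt))  # 1 system + 2 few-shot + 1 task = 4 messages
```

The resulting list can be passed directly as the `messages` argument of most chat-completion APIs.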

As models have grown more capable, raw prompt engineering has become less critical for simple tasks but remains essential for reliably unlocking complex capabilities—long-horizon reasoning, structured output, multi-hop tool use. Anthropic's prompt engineering guide and OpenAI's cookbook are the canonical references.

The field is shifting from hand-written prompts toward automated prompt optimization (DSPy, TextGrad) and self-improving prompts generated by a meta-LLM. However, human intuition about failure modes still outperforms automated search for safety-critical applications.
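The core idea behind these optimizers can be shown in miniature: score candidate instruction strings against a small labeled eval set and keep the best. This is a toy sketch of that loop, not DSPy's or TextGrad's actual API; the candidates, eval set, and stub model are all illustrative:

```python
def score(instruction: str, evals: list[tuple[str, str]], model) -> float:
    """Fraction of eval cases the model answers correctly under this instruction."""
    hits = sum(model(f"{instruction}\n{q}") == gold for q, gold in evals)
    return hits / len(evals)

def optimize(candidates: list[str], evals, model) -> str:
    """Return the candidate instruction with the highest eval score."""
    return max(candidates, key=lambda c: score(c, evals, model))

# Stub "model" standing in for an LLM call: it only answers in the right
# format when the instruction demands it (for demonstration only).
def stub_model(prompt: str) -> str:
    return "4" if "Answer with a single digit" in prompt else "four"

best = optimize(
    ["Answer the question.", "Answer with a single digit only."],
    evals=[("What is 2 + 2?", "4")],
    model=stub_model,
)
```

Real optimizers replace the exhaustive `max` with candidate generation by an LLM, gradient-like textual feedback, or few-shot example selection, but the scoring loop is the same shape.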

Related terms
system-prompt · few-shot-prompting · chain-of-thought · react-prompting · instruction-tuning · prompt-caching