3. Prompting Basics
A comprehensive guide to the fundamentals of prompt engineering, techniques for effective prompting, and best practices for getting optimal results from large language models
In the rapidly evolving landscape of artificial intelligence, large language models (LLMs) have emerged as powerful tools capable of generating human-like text, answering questions, writing code, and performing a wide range of language-based tasks. However, the quality and usefulness of an LLM’s output heavily depend on how you communicate with it. This is where prompt engineering comes in—the art and science of crafting effective instructions to guide AI models toward generating the desired responses.
What is Prompt Engineering?
Prompt engineering is a relatively new discipline focused on developing and optimizing prompts to efficiently use language models for various applications. It involves designing, refining, and implementing effective prompting techniques that help users get the most out of AI systems.
At its core, prompt engineering is about communication—learning how to “speak” to AI models in ways they can understand and respond to appropriately. Just as human communication benefits from clarity, context, and structure, so too does communication with AI.
Prompt engineering skills help users to:
- Better understand the capabilities and limitations of LLMs
- Improve the quality and relevance of AI-generated outputs
- Reduce instances of hallucination or factual errors
- Guide models toward specific formats, styles, or approaches
- Solve complex problems by breaking them down into manageable steps
The Anatomy of a Prompt
A well-crafted prompt typically contains several key elements that help guide the model’s response:
1. Instruction
The instruction is the specific task or request you want the model to perform. Clear, specific instructions help the model understand exactly what you’re asking for.
Examples:
- “Summarize the following text in three sentences.”
- “Translate this paragraph from English to French.”
- “Write a product description for a pair of wireless headphones.”
2. Context
Context provides background information or sets the scene for the model. This helps the AI understand the broader situation or domain in which it should operate.
Examples:
- “You are a financial advisor helping a client plan for retirement.”
- “The following is an excerpt from a scientific paper about climate change.”
- “This conversation is between a customer service representative and a customer with a technical issue.”
3. Input Data
Input data is the specific information the model needs to work with to complete the task. This could be text to summarize, a question to answer, or content to transform.
Examples:
- “Customer review: ‘The product arrived on time but was damaged during shipping.’”
- “Patient symptoms: fever, cough, fatigue, and loss of taste.”
- “Raw data: [2.3, 4.5, 6.7, 8.9, 10.1]”
4. Output Indicators
Output indicators specify the format, style, length, or other characteristics of the desired response. These help shape how the model presents its output.
Examples:
- “Format your answer as a bulleted list.”
- “Respond in the style of Shakespeare.”
- “Keep your explanation simple enough for a 10-year-old to understand.”
5. Examples (Few-shot Learning)
Examples demonstrate the expected input-output pattern, helping the model understand the task through demonstration rather than just description.
Example:
Input: "The weather is nice today."
Output: "El clima está agradable hoy."
Input: "Where is the nearest restaurant?"
Output: "¿Dónde está el restaurante más cercano?"
Input: "I need to buy groceries."
Output:
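Seen together, these elements compose naturally. Below is a minimal Python sketch that joins all five into a single prompt string; the element values are illustrative placeholders, not output from any real system.

```python
# A minimal sketch of assembling the five prompt elements into one string.
# All values below are illustrative placeholders.

instruction = "Translate the customer review into French."
context = "You are a localization specialist for an e-commerce site."
examples = (
    'Input: "Great value for the price."\n'
    'Output: "Excellent rapport qualité-prix."'
)
input_data = 'Customer review: "The product arrived on time but was damaged."'
output_indicator = "Respond with only the translated text, no commentary."

prompt = "\n\n".join([context, instruction, examples, input_data, output_indicator])
print(prompt)
```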
Basic Prompting Techniques
Several fundamental techniques form the foundation of effective prompt engineering:
Zero-shot Prompting
Zero-shot prompting involves asking the model to perform a task without providing any examples. This approach relies on the model’s pre-trained knowledge to understand and execute the request.
Example:
Explain quantum computing in simple terms.
Zero-shot prompting works well for straightforward tasks or when the model has been extensively trained on similar tasks. However, for more complex or specific requests, other techniques may yield better results.
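Because a zero-shot prompt is just a single instruction, the corresponding API call is minimal. Here is a sketch using the OpenAI Python SDK as one concrete example; the model name is an illustrative choice, and any chat-completion API follows the same shape.

```python
# A minimal zero-shot call; requires the `openai` package and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute whichever model you use
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
)
print(response.choices[0].message.content)
```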
Few-shot Prompting
Few-shot prompting provides the model with a small number of examples demonstrating the expected input-output pattern. This helps the model understand the specific format, style, or approach you want it to take.
Example:
Classify the sentiment of the following reviews as positive, negative, or neutral.
Review: "The food was delicious and the service was excellent."
Sentiment: Positive
Review: "The movie was neither particularly good nor bad."
Sentiment: Neutral
Review: "I waited for an hour and the customer service was unhelpful."
Sentiment: Negative
Review: "The hotel room was spacious but the bathroom was dirty."
Sentiment:
Few-shot prompting is particularly useful when you need the model to follow a specific pattern or when the task might be ambiguous without examples.
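In practice it helps to assemble few-shot prompts programmatically from a list of labeled examples, so demonstrations can be swapped without rewriting the prompt. A minimal sketch, where the helper name and demonstration data are illustrative:

```python
# Build a few-shot sentiment prompt from (review, label) pairs.

def build_few_shot_prompt(demos: list[tuple[str, str]], query: str) -> str:
    """Assemble demonstration pairs followed by the unlabeled query."""
    lines = [
        "Classify the sentiment of the following reviews as positive, negative, or neutral.\n"
    ]
    for review, sentiment in demos:
        lines.append(f'Review: "{review}"\nSentiment: {sentiment}\n')
    lines.append(f'Review: "{query}"\nSentiment:')
    return "\n".join(lines)

demos = [
    ("The food was delicious and the service was excellent.", "Positive"),
    ("The movie was neither particularly good nor bad.", "Neutral"),
    ("I waited for an hour and the customer service was unhelpful.", "Negative"),
]
print(build_few_shot_prompt(demos, "The hotel room was spacious but the bathroom was dirty."))
```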
Chain-of-Thought Prompting
Chain-of-thought prompting encourages the model to break down complex problems into intermediate steps, showing its reasoning process. This technique significantly improves performance on tasks requiring logical reasoning or multi-step problem-solving.
Example:
Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
Let's think through this step by step:
1. Initially, Roger has 5 tennis balls.
2. He buys 2 cans of tennis balls.
3. Each can contains 3 tennis balls.
4. So from the cans, he gets 2 × 3 = 6 tennis balls.
5. In total, he now has 5 + 6 = 11 tennis balls.
Therefore, Roger has 11 tennis balls.
By demonstrating this step-by-step reasoning, you encourage the model to approach problems methodically rather than jumping directly to conclusions.
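A closely related variant, zero-shot chain-of-thought (Kojima et al., 2022, listed in the references), skips the worked example entirely and simply appends a reasoning trigger to the question. A minimal sketch of constructing such a prompt:

```python
# Zero-shot chain-of-thought: append a reasoning trigger instead of a
# worked example.

question = (
    "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?"
)
cot_prompt = f"Question: {question}\n\nLet's think through this step by step:"
print(cot_prompt)
```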
Role Prompting
Role prompting involves assigning a specific role or persona to the AI model. This technique helps frame the context and can significantly influence the style, tone, and content of the response.
Example:
You are a pediatrician with 20 years of experience. Explain how parents should handle common childhood fevers.
Different roles can elicit different perspectives, levels of detail, or specialized knowledge, making this a versatile technique for various applications.
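In chat-style APIs, the role is typically placed in the system message rather than the user message. A sketch using the OpenAI Python SDK as one concrete example, with an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        # The system message carries the persona; the user message carries the task.
        {"role": "system", "content": "You are a pediatrician with 20 years of experience."},
        {"role": "user", "content": "Explain how parents should handle common childhood fevers."},
    ],
)
print(response.choices[0].message.content)
```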
Advanced Prompting Strategies
Beyond the basics, several advanced strategies can help you get even more sophisticated and targeted responses from language models:
Prompt Chaining
Prompt chaining involves breaking down complex tasks into a sequence of simpler prompts, where the output of one prompt becomes the input for the next. This allows for more controlled and refined outputs, especially for multi-stage tasks.
Example:
Step 1: "Generate five potential names for a coffee shop that specializes in organic, fair-trade coffee."
Step 2: "For each of the coffee shop names generated, create a brief tagline that emphasizes the organic and fair-trade aspects."
Step 3: "Select the best name and tagline combination and expand it into a short mission statement for the coffee shop."
Self-Consistency
Self-consistency involves generating multiple independent responses to the same prompt and then selecting the most common or consistent answer. This technique can improve accuracy, especially for reasoning or problem-solving tasks.
Example:
Generate 5 different solutions to this math problem, showing your work each time:
If a rectangle has a perimeter of 30 units and a width of 5 units, what is its area?
Now, identify which answer appears most consistently and explain why it's correct.
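The prompt above approximates the idea within a single response. In practice, self-consistency is usually implemented by sampling several completions independently at a non-zero temperature and voting over the extracted answers. A sketch under that assumption; the `Answer: <number>` convention is something the prompt itself has to impose, and the extraction regex is a simplification:

```python
import re
from collections import Counter

from openai import OpenAI

client = OpenAI()
prompt = (
    "If a rectangle has a perimeter of 30 units and a width of 5 units, "
    "what is its area? Think step by step, then end with 'Answer: <number>'."
)

response = client.chat.completions.create(
    model="gpt-4o",   # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0.8,  # diversity across samples is the point here
    n=5,              # five independent completions
)

# Pull the final answer out of each reasoning path and take a majority vote.
answers = []
for choice in response.choices:
    match = re.search(r"Answer:\s*([\d.]+)", choice.message.content or "")
    if match:
        answers.append(match.group(1))

if answers:
    majority, count = Counter(answers).most_common(1)[0]
    print(f"Majority answer: {majority} ({count}/{len(answers)} samples)")
```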
Retrieval-Augmented Generation (RAG)
RAG combines the generative capabilities of language models with the ability to retrieve and reference specific information from external sources. This helps ground the model’s responses in factual information and reduces hallucinations.
While implementing full RAG systems typically requires additional technical infrastructure, you can simulate this approach in your prompts by including relevant information:
Example:
Based on the following information about climate change, answer the question below:
[Insert factual information about climate change from reliable sources]
Question: What are the three most significant contributors to global warming according to current scientific consensus?
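To make the retrieval step concrete without any extra infrastructure, here is a deliberately naive sketch in which a keyword-overlap scorer (with a tiny stop-word list) stands in for a real vector store; the documents and scoring rule are illustrative placeholders:

```python
import re

# Words too common to signal relevance; a real system would use embeddings.
STOPWORDS = {"the", "a", "of", "to", "and", "are", "is", "what", "by", "from"}

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower())) - STOPWORDS

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared-word count with the query; keep the top k."""
    q = tokens(query)
    ranked = sorted(documents, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:k]

documents = [
    "Carbon dioxide from burning fossil fuels is the largest single contributor to global warming.",
    "Methane emissions from agriculture and landfills also drive significant warming.",
    "Deforestation reduces carbon uptake and adds to global warming.",
    "Ocean tides are driven primarily by the gravitational pull of the moon.",
]

question = "What are the most significant contributors to global warming?"
context = "\n".join(retrieve(question, documents, k=3))
prompt = (
    f"Based on the following information, answer the question below:\n\n"
    f"{context}\n\nQuestion: {question}"
)
print(prompt)
```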
Common Prompting Pitfalls and How to Avoid Them
Even with a solid understanding of prompting techniques, certain common mistakes can limit the effectiveness of your interactions with language models:
Being Too Vague
Vague prompts lead to unpredictable responses. Without clear guidance, the model must make assumptions about what you want, often resulting in outputs that miss the mark.
Instead of:
Tell me about cars.
Try:
Provide a comprehensive overview of electric vehicle technology, including current battery limitations, charging infrastructure challenges, and recent innovations in the field.
Overloading the Prompt
Cramming too many requests or too much information into a single prompt can overwhelm the model, leading to incomplete responses or missed instructions.
Instead of:
Explain quantum computing, compare it to classical computing, list its applications, discuss its limitations, predict its future, and provide a code example in Python, all in a format suitable for beginners.
Try: Breaking this into multiple, focused prompts that build on each other.
Neglecting to Specify Format
Without format guidance, the model will choose how to structure its response, which may not align with your needs.
Instead of:
List the benefits of regular exercise.
Try:
Create a numbered list of 5 evidence-based benefits of regular exercise. For each benefit, provide a brief one-sentence explanation and cite a specific health outcome.
Forgetting to Provide Context
Without context, the model lacks the background information needed to generate relevant and accurate responses.
Instead of:
How should I fix this error?
Try:
I'm developing a React application and encountering the following error when trying to update state in a functional component:
[Error message]
Here's the relevant code:
[Code snippet]
How should I fix this error?
Optimizing Prompts for Different Tasks
Different types of tasks benefit from different prompting approaches. Here’s how to optimize your prompts for common use cases:
Creative Writing
For creative tasks, provide clear parameters while leaving room for the model’s creativity:
Write a short story about a time traveler with the following specifications:
- Setting: Victorian London
- Main character: A botanist from the year 2150
- Theme: The unintended consequences of changing the past
- Length: Approximately 500 words
- Style: Blend of steampunk and hard science fiction
- Must include: A paradox, a rare plant, and a moral dilemma
Technical Explanations
For technical content, specify the audience’s expertise level and the desired depth:
Explain how public key cryptography works to a computer science undergraduate. Include:
- The fundamental mathematical principles
- A simple example using small numbers
- Common implementations (RSA, ECC)
- Security considerations
- Practical applications
Use analogies where helpful, but don't oversimplify the core concepts.
Data Analysis
For analytical tasks, clearly define the analytical approach and desired insights:
Analyze the following sales data for Q1-Q4 2024:
[Data]
Please provide:
1. Key trends across quarters
2. The best and worst performing product categories
3. Recommendations for Q1 2025 based on seasonal patterns
4. Any anomalies that require further investigation
Format your analysis as a structured report with sections and include specific numbers from the data to support your conclusions.
Code Generation
For programming tasks, specify language, style preferences, and performance considerations:
Write a Python function that efficiently finds the longest palindromic substring in a given string. Requirements:
- Include type hints
- Add comprehensive docstrings with examples
- Optimize for time complexity (analyze the complexity in your comments)
- Handle edge cases (empty strings, single characters, etc.)
- Follow PEP 8 style guidelines
- Include unit tests for the function
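For reference, here is one sketch of the kind of response that prompt aims to elicit, using the expand-around-center approach (O(n^2) time, O(1) extra space); the unit tests are abbreviated to assertions:

```python
def longest_palindromic_substring(s: str) -> str:
    """Return the longest palindromic substring of ``s``.

    Expands around each of the 2n - 1 possible centers (single characters
    and gaps between characters), giving O(n^2) time and O(1) extra space.

    Examples:
        >>> longest_palindromic_substring("babad") in {"bab", "aba"}
        True
        >>> longest_palindromic_substring("cbbd")
        'bb'
    """
    if len(s) < 2:
        return s  # an empty string or single character is its own palindrome

    start, end = 0, 0
    for center in range(len(s)):
        # Odd-length centers (i, i) and even-length centers (i, i + 1).
        for left, right in ((center, center), (center, center + 1)):
            while left >= 0 and right < len(s) and s[left] == s[right]:
                left -= 1
                right += 1
            # The loop overshoots by one on each side; shrink back in.
            if (right - 1) - (left + 1) > end - start:
                start, end = left + 1, right - 1
    return s[start : end + 1]


if __name__ == "__main__":
    assert longest_palindromic_substring("") == ""
    assert longest_palindromic_substring("a") == "a"
    assert longest_palindromic_substring("cbbd") == "bb"
    assert longest_palindromic_substring("babad") in {"bab", "aba"}
    assert longest_palindromic_substring("forgeeksskeegfor") == "geeksskeeg"
    print("All tests passed.")
```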
The Role of Model Settings in Prompting
Beyond the prompt itself, various model settings can significantly impact the quality and nature of the responses you receive:
Temperature
Temperature controls the randomness or creativity of the model’s responses. Lower values (closer to 0) make responses more deterministic and focused, while higher values (closer to 1 or above) introduce more variability and creativity.
- Low temperature (0.1-0.3): Best for factual questions, technical explanations, or tasks requiring precision
- Medium temperature (0.4-0.7): Suitable for balanced responses that combine accuracy with some creativity
- High temperature (0.8-1.0+): Ideal for creative writing, brainstorming, or generating diverse alternatives
Top-p (Nucleus) Sampling
Top-p sampling (also called nucleus sampling) controls diversity by restricting sampling to the smallest set of most-likely tokens whose cumulative probability reaches the threshold p.
- Lower values (0.1-0.5): More focused and conservative outputs
- Higher values (0.6-0.9): More diverse and unpredictable outputs
Maximum Length
This setting limits the length of the model’s response, which can be useful for controlling verbosity or ensuring concise answers.
Presence and Frequency Penalties
These settings help control repetition in the model’s outputs:
- Presence penalty: Reduces the likelihood of repeating any token that has appeared in the text so far
- Frequency penalty: Reduces the likelihood of repeating tokens proportionally to how often they’ve already appeared
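These settings are passed alongside the prompt in the API call. A sketch using the OpenAI Python SDK parameter names as one concrete example; other providers expose equivalent knobs, and in practice it is usually recommended to tune temperature or top_p, not both at once:

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",        # illustrative model choice
    messages=[{"role": "user", "content": "Brainstorm ten taglines for a hiking app."}],
    temperature=0.9,       # high: favors varied, creative phrasing
    top_p=0.95,            # sample from the top 95% of probability mass
    max_tokens=300,        # cap the response length
    presence_penalty=0.5,  # nudge the model away from tokens already used
    frequency_penalty=0.5, # penalize tokens in proportion to how often they recur
)
print(response.choices[0].message.content)
```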
Iterative Prompt Refinement
Prompt engineering is rarely a one-and-done process. Instead, it typically involves iterative refinement based on the model’s responses:
1. Start with a basic prompt: Begin with a straightforward formulation of your request.
2. Evaluate the response: Assess whether the output meets your needs and identify specific shortcomings.
3. Refine the prompt: Adjust your prompt to address the identified issues, adding specificity, examples, or constraints as needed.
4. Test again: Generate a new response with the refined prompt.
5. Repeat as necessary: Continue this cycle until you achieve satisfactory results.
Example of iterative refinement:
Initial prompt:
Write a cover letter.
Response: [Generic, untargeted cover letter]
Refined prompt:
Write a cover letter for a senior software engineer position at a cybersecurity startup. I have 7 years of experience in full-stack development with a focus on secure authentication systems and have led two development teams in my previous roles.
Response: [Better but still lacking specific achievements and company research]
Further refined prompt:
Write a cover letter for a senior software engineer position at ThreatGuard, a cybersecurity startup specializing in threat intelligence. Incorporate these elements:
1. My 7 years of experience in full-stack development with a focus on secure authentication systems
2. My achievement of reducing authentication-related security incidents by 87% at my previous company
3. My experience leading a team of 6 developers to deliver a zero-trust security framework ahead of schedule
4. My excitement about ThreatGuard's recent launch of their AI-powered threat detection platform
5. My relevant certifications: CISSP and OSCP
Keep the tone professional but conversational, and limit the letter to one page.
Conclusion
Effective prompt engineering is both an art and a science. It requires understanding the capabilities and limitations of language models, as well as the specific techniques that can elicit the best responses for different types of tasks.
By mastering the fundamentals of prompt construction, applying appropriate techniques, and iteratively refining your approach, you can significantly enhance your ability to leverage AI language models for a wide range of applications. Whether you’re using these models for creative writing, technical problem-solving, data analysis, or any other purpose, thoughtful prompt engineering is the key to unlocking their full potential.
As language models continue to evolve, so too will the field of prompt engineering. Staying curious, experimenting with different approaches, and sharing knowledge with the broader community will help you stay at the forefront of this rapidly developing discipline.
References
- Prompting Guide. (2025). Introduction to Prompting. https://www.promptingguide.ai/introduction/basics
- OpenAI. (2024). GPT Best Practices. https://platform.openai.com/docs/guides/gpt-best-practices
- Anthropic. (2025). Prompt Engineering Guide. https://www.anthropic.com/prompt-engineering
- Google. (2025). Gemini API Prompting Guide. https://ai.google.dev/docs/prompting
- Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv preprint arXiv:2201.11903.
- Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … & Amodei, D. (2020). Language Models are Few-Shot Learners. arXiv preprint arXiv:2005.14165.
- Reynolds, L., & McDonell, K. (2021). Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm. CHI Conference on Human Factors in Computing Systems (CHI '21).
- Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large Language Models are Zero-Shot Reasoners. arXiv preprint arXiv:2205.11916.
Note: this article is AI generated.