
Advanced Prompting in AI Programming

The document covers advanced prompting patterns in computer programming, focusing on specification-based development and handling edge cases. Key concepts include zero-shot prompting, self-consistency, and iterative refinement to improve the quality of prompts for applications like a Travel Planner and research paper summarizer. It emphasizes the importance of defining inputs, outputs, and constraints to enhance the reliability and accuracy of AI-generated outputs.


CISC 101: Introduction to Computer Programming
Topic 5: Advanced Prompting Patterns, Specification-Based Development, AI4SE
Topics 3 & 4: Recap
Topics 1 & 2 Wrap-up: LLMs, FMs, Prompt Engineering, and Computational Thinking
Section: Key Takeaways and Concepts

I. Foundations (Topic 1): We learned the foundational ideas that underpin prompt-based programming.
- SE4AI & AI4SE: Using SE principles for prompts; using AI to assist SE.
- Computational Thinking: Four pillars: decomposition, abstraction, pattern recognition, algorithms.
- Prompt Engineering Basics: Good vs. bad prompts (clarity, specificity, structure); hallucinations, context window constraints, and mitigation strategies.

II. Decomposition and Basic Prompting (Topic 2): We practiced breaking down problems into smaller parts and crafting effective prompts.
- Decomposition: Break complex problems into manageable components: inputs, processes, and outputs.
- Variables: Placeholders for reusable, flexible prompts.
- Pitfalls: Vague prompts, over-decomposition, overtrusting LLMs.

Topics 3 & 4 Wrap-up: Control Flow, Patterns, and Context Management

Section: Key Takeaways and Concepts

I. Control Flow (Topic 3): We learned how to program LLMs using algorithmic logic.
- Conditionals: Directing the flow of a program using if-then-else statements within prompts (e.g., specifying budget-based activity selection in the Travel Planner). This enables decision-making in the LLM's output.
- Loops: Instructing the LLM to repeat fixed actions (counted loops) or actions until a condition is met (unbounded loops). For example, listing one item (museum, restaurant, park) for each day of a trip.
- Iterative Refinement: The process of testing a prompt, analyzing the output (checking for hallucinations), and refining the prompt by adding constraints or clarifying instructions.

II. Prompt Patterns (Topic 4): We mastered structured ways to interact with LLMs.
- Chain-of-Thought (CoT): Asking the LLM to reason step by step to improve transparency and accuracy, and to aid in debugging. In the Travel Planner, this means explaining activity choices.
- Few-Shot Prompting: Providing examples to guide the LLM's response, ensuring a consistent format for outputs, such as structured itinerary layouts.

III. Context & Limitations: We understood crucial LLM constraints.
- Context Window: Recognizing the limited input/output size (maximum tokens) an LLM can process. Strategies include breaking prompts into smaller parts or prioritizing constraints.
- Mitigation: Addressing issues like hallucinations (fabricated information) and biases by validating outputs and using specific, constrained prompts.
Topic 5
Topic 5 Overview
• Goal:
• Deepen prompting skills with advanced patterns
• Design specifications for prompt features and handle edge cases in prompts to ensure robustness.
• Use AI4SE to debug and critique prompts for Travel Planner.
• Key Concepts:
• Spec-Based Development: Define inputs, outputs, and constraints before prompting.
• Edge Cases: Address invalid inputs (e.g., negative budget, missing paper sections).
• Travel Planner: Specify budget, activities; handle invalid budgets or cities.
• Research Paper Summarizer: Specify sections, output format; handle missing abstracts.
• Focus: Zero-shot, self-consistency, edge cases, prompt debugging
• Why It Matters: LLMs as “pair programmers” improve prompt quality and reliability.
• Context: Travel Planner app
• Learning Outcomes:
• Use advanced prompting patterns
• Handle edge cases in prompts
• Debug and refine prompts
Zero-Shot Prompting
• Definition: Prompting without examples

• When to Use: For simple or well-known tasks, or as the initial baseline prompt before applying
advanced patterns like CoT or Few-Shot during iterative refinement

• Example:

Generate a 3-day Paris itinerary with a $500 budget, including one museum, one restaurant, and
one park per day.

• Pros: Fast, simple

• Cons: Less control, risk of errors

• Activity: Write a zero-shot prompt
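A zero-shot prompt can be built directly from a template with variables, as covered in Topic 2. The sketch below assumes illustrative parameter names (destination, budget, days); no examples are embedded in the prompt, which is what makes it zero-shot.

```python
# Minimal sketch: a zero-shot prompt built from a reusable template.
# The template text mirrors the lecture's Paris example; the function
# and variable names are illustrative, not a required interface.
ZERO_SHOT_TEMPLATE = (
    "Generate a {days}-day {destination} itinerary with a ${budget} budget, "
    "including one museum, one restaurant, and one park per day."
)

def build_zero_shot_prompt(destination: str, budget: int, days: int) -> str:
    """Fill the template with concrete values; no examples are included."""
    return ZERO_SHOT_TEMPLATE.format(
        days=days, destination=destination, budget=budget
    )

prompt = build_zero_shot_prompt("Paris", 500, 3)
```

The same template can be reused for any city or budget by changing the arguments, which is the point of keeping variables out of the prompt text itself.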


Zero-Shot Prompting
• Simple Zero-Shot Example (Good Use):

⁃ Prompt: "Translate the following sentence into French: 'Computational thinking is vital for prompt engineering.'"

⁃ Reasoning: This is a simple, well-defined task where the LLM relies entirely on its pre-trained knowledge,
making it fast and simple.

• Complex Zero-Shot Example (Bad Use/Needs CoT):

⁃ Prompt: "Summarize this research paper and explain how its methodology supports its conclusions."

⁃ LLM Output (Zero-Shot): “The paper studies a topic and finds positive results.” (vague and ungrounded)

⁃ Reasoning: This task requires multi-step reasoning: understanding the research question, parsing
methods, and connecting results to conclusions. Zero-shot prompting often misses logical links, so Chain-
of-Thought prompting improves output accuracy and clarity.
Self-Consistency Prompting
• Purpose: Generate multiple outputs independently and select the best (or aggregate answers)

• Same prompt, sampled multiple times independently; then choose among the outputs

• Filter out hallucinations and eliminate errors (e.g., in code)

• Example: We ask the model multiple times, independently, to generate a 3-day Paris itinerary with
a $500 budget. Then we compare the different versions and we choose the best one (or aggregate
answers).

• Output: Three itineraries; we choose the best fit

• LLM Selection Reasoning Example: "We selected Itinerary 2 because Itinerary 1 exceeded the constraint by spending $550, and Itinerary 3 included a nonexistent restaurant (hallucination). Itinerary 2 adhered perfectly to the $500 budget and included verifiable activities."

• Activity: Try self-consistency for summarizer


A Comparative Summary of Advanced Patterns

Pattern | Purpose | Control Level | Trade-offs | Example Phrase
Zero-Shot | For simple or well-known tasks. | Low | Fast, simple, but less control; risk of errors. | "Translate this sentence into French."
Few-Shot | Provide examples to shape output/ensure consistent format. | High | Consistent format, reduces ambiguity. | "Here are two examples of summaries. Now summarize this text in the same style..."
Chain-of-Thought (CoT) | Ask the LLM to reason step-by-step. | Medium/High | Improves output clarity and aids debugging. | "Let's think step by step to solve this math problem."
Self-Consistency | Generate multiple outputs and select the best or aggregate outputs. | Medium | Improves reliability through selection/aggregation. | A prompt that is run multiple times independently.
Role-Based Prompting | Assign a specific role or perspective to guide style, tone, and reasoning. | Medium/High | Enhances relevance and depth but may bias the response. | "You are a financial analyst; explain this market trend..."
Handling Edge Cases

• Task: Anticipating and Managing Edge Cases


• Content:
• Definition: Edge cases are unexpected inputs (e.g., negative budget, missing abstract).
• Travel Planner Example:
⁃ Edge Case: Negative budget.
⁃ Solution: Prompt: “If budget < 0, return ‘Invalid budget.’”
• Summarizer Example:
⁃ Edge Case: Missing abstract.
⁃ Solution: Prompt: “If no abstract, summarize introduction instead.”
• Strategy: Add conditional logic in prompts, test edge cases with Grok.
• Discussion Question: What other edge cases might occur in trip planning or paper summarization?
Handling Edge Cases
• Strategy 1 (Conditional Input Handling):

⁃ Address invalid or extreme inputs (e.g., nonexistent city or negative budget).

• Strategy 2 (Conditional Output Handling):

⁃ Address missing data (e.g., If the paper has no abstract, summarize introduction instead).

• Prompt:

If the destination is invalid, suggest a nearby valid city. If budget is $0, list only free activities.

• Activity: Test a prompt with an edge case
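Strategy 1 (conditional input handling) can also be enforced in code before a prompt ever reaches the model. A minimal sketch, assuming an illustrative KNOWN_CITIES set and the fallback city from the lecture's examples:

```python
# Sketch of conditional input handling: validate inputs first, then build
# the prompt. KNOWN_CITIES and the London fallback are illustrative.
KNOWN_CITIES = {"Paris", "London", "Tokyo"}

def plan_trip_prompt(destination: str, budget: float) -> str:
    if budget < 0:
        # Edge case: negative budget is rejected outright.
        return "Invalid budget."
    if destination not in KNOWN_CITIES:
        # Edge case: invalid destination; suggest a valid city instead.
        destination = "London"
    free_only = " List only free activities." if budget == 0 else ""
    return f"Generate a 3-day {destination} itinerary with a ${budget} budget.{free_only}"
```

Doing the check in code rather than in the prompt guarantees the edge case is handled even if the model ignores an instruction.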


Zero-Shot with Edge Cases
• A zero-shot prompt is one where you provide a task or question without giving the model any examples of how to solve it. The model relies entirely on its pre-trained knowledge to generate a response. ([Link])
• An edge case refers to a specific situation, input, or condition at the extreme or boundary of what is considered typical or expected, potentially causing unexpected behavior or errors in the AI's response. ([Link])
• Prompt:
Generate a 3-day itinerary for [destination = Narnia] with a $500 budget. If [destination = Narnia] is invalid, suggest [another valid destination = London] instead.
• Output Example:
Narnia is invalid. Suggested itinerary for London:
Day | Museum | Restaurant | Park
1 | British Museum | The Ivy | Hyde Park

• Discussion: Did it handle the edge case?


Applying to Research Paper Summarizer
• Zero-Shot Prompt:

Summarize a research paper’s related work in a list format.

• Edge Case Handling in Zero-shot Prompt

If the paper has no related work section, summarize citations instead.

• Activity: Write a zero-shot summarizer prompt


Spec-Based Development
• Objective: Writing Clear Specifications
• Purpose: Support documentation and evaluation of prompts.
• Content:
• Definition: Specifications outline inputs, outputs, and constraints for a prompt.
• Travel Planner Example:
⁃ Input: City = Paris, Budget = $750, Duration = 5 days.
⁃ Output: Bullet-point list with one museum, restaurant, park per day, including costs.
⁃ Constraints: Total cost ≤ $750, duration = 5 days, clear activity descriptions.
• Ethical constraints: For instance: Define constraints not just for cost, but for bias mitigation (e.g., "Constraint: Ensure
itinerary does not exclude activities based on cultural biases").
• Summarizer Example:
⁃ Input: Text of "Attention Is All You Need."
⁃ Output: Bullet-point list with one key point per section.
⁃ Constraints: Accurate summaries, consistent bullet-point format.
• Why It Matters: Guides prompt design, ensures verifiable outputs.
• Discussion Question: How do specs prevent errors in itinerary or summary outputs?
• Activity: Compare inputs, outputs, constraints for both projects.
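A spec can be captured as data so that outputs are checkable against it. This sketch uses the Travel Planner spec above (Paris, $750, 5 days); the field names and the shape of the itinerary (a list of per-day dicts with a "cost" key) are illustrative assumptions.

```python
from dataclasses import dataclass

# Sketch of spec-based development: inputs and constraints are declared
# up front, and any output can be checked against them mechanically.
@dataclass
class TravelPlannerSpec:
    city: str
    budget: float
    duration_days: int

    def check_output(self, itinerary: list[dict]) -> list[str]:
        """Return constraint violations; an empty list means the output passes."""
        violations = []
        if len(itinerary) != self.duration_days:
            violations.append(
                f"expected {self.duration_days} days, got {len(itinerary)}"
            )
        total = sum(day["cost"] for day in itinerary)
        if total > self.budget:
            violations.append(f"total cost ${total} exceeds budget ${self.budget}")
        return violations

spec = TravelPlannerSpec(city="Paris", budget=750, duration_days=5)
```

Because the constraints live in one place, the same spec drives both prompt design and output verification, which is what makes outputs verifiable.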
Testing Specifications

• Task: Validating Specs with Prompts

• Content:

• Process: Write specs, design prompt, test with Grok, refine based on outputs.

• Travel Planner Example: Test prompt for budget compliance, activity types, and edge case
(negative budget).

• Summarizer Example: Test prompt for section coverage, format, and edge case (missing abstract).

• Tips: Use small test cases, verify outputs against specs.

• Discussion Question: How can testing ensure specs are met for both projects?
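The test-then-refine process above can be driven by small, explicit test cases. A minimal sketch, where build_prompt() is a hypothetical helper standing in for the prompt under test; each case pairs inputs with the substring the spec says must appear.

```python
# Sketch of validating a spec with small test cases, including the
# negative-budget edge case. build_prompt() is a hypothetical stand-in
# for the prompt (or prompt builder) being tested.
def build_prompt(city: str, budget: float) -> str:
    if budget < 0:
        return "Invalid budget."
    return f"Generate an itinerary for {city} within ${budget}."

test_cases = [
    ("Paris", 750, "within $750"),      # normal case: budget constraint present
    ("Paris", -10, "Invalid budget."),  # edge case: rejected before prompting
]

results = [
    (build_prompt(city, budget), expected)
    for city, budget, expected in test_cases
]
failures = [got for got, expected in results if expected not in got]
```

An empty failures list means every test case met its expectation; a non-empty list tells you exactly which spec clause to refine.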
Hands-on Designing Specs and Edge Case Prompts

• Task:
• Write specifications for Travel Planner (Paris itinerary) and Summarizer (key points).
• Design a prompt for each, handling one edge case (negative budget, missing abstract).
• Test with your AI Tool.
• Submit specs, prompts, outputs, and a 50-word explanation of edge case handling.

• Travel Planner Example:


• Specs: Input: Paris, $750; Output: List with costs; Constraints: Cost ≤ $750.
• Prompt: “Generate a 5-day Paris itinerary with $750 budget. If budget < 0, return ‘Invalid budget.’”
• Output: Day 1: Louvre ($15), Café de Flore ($20), Tuileries (Free).

• Summarizer Example:
• Specs: Input: Paper text; Output: Key point list; Constraints: One point/section.
• Prompt: “Summarize key points. If no abstract, use introduction.”
• Output: Introduction: RNNs are slow.
AI4SE Principles
• AI for Software Engineering: Use LLMs to:

• Critique prompts for clarity and completeness.

• Debug outputs for errors.

• Suggest improvements (e.g., handle edge cases).

• Example: Critique a vague prompt: “Plan a trip” → Suggest specifying city, budget, format.
Prompt Debugging
• Steps:
1. Test prompt on your AI Tool, analyze output.
2. Ask the Tool to critique prompt (e.g., “Critique this prompt for clarity…”).
3. Example Critique: “Prompt lacks format and edge case handling. Add table output and negative
budget check.”
4. Identify errors (e.g., wrong format, hallucinations)
5. Refine prompt (add constraints, clarify)
• Example:
• Initial: “Plan a Paris trip”
• Refined: “Plan a 3-day Paris trip with a $500 budget in a table”
• Activity: Debug a faulty prompt
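The test-critique-refine steps above form a loop, sketched below. The critique() stub uses simple string checks as a stand-in for asking the AI tool to critique the prompt; the specific fixes mirror the Paris example.

```python
# Sketch of the prompt debugging loop: test -> critique -> refine -> retest.
# critique() is a stub standing in for an LLM critique of the prompt.
def critique(prompt: str) -> list[str]:
    """Return a list of issues found in the prompt (empty means none)."""
    issues = []
    if "budget" not in prompt.lower():
        issues.append("no budget constraint")
    if "table" not in prompt.lower() and "list" not in prompt.lower():
        issues.append("no output format specified")
    return issues

def refine(prompt: str, issues: list[str]) -> str:
    """Apply one fix per reported issue, mirroring the lecture's example."""
    if "no budget constraint" in issues:
        prompt += " with a $500 budget"
    if "no output format specified" in issues:
        prompt += " in a table"
    return prompt

prompt = "Plan a 3-day Paris trip"
while (issues := critique(prompt)):
    prompt = refine(prompt, issues)
```

The loop terminates when the critique comes back clean, turning the vague "Plan a Paris trip" style of prompt into the refined version shown above.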
Hands-on - Prompt Critiquing
• Activity: Use your AI Tool to critique Research Paper Summarizer prompt.

• Steps:

1. Write a prompt to summarize a paper.

2. Ask Grok to critique it.

3. Revise and test revised prompt.

• Goal: Practice AI4SE, prepare for Week 5 modularity.


Connecting to CT

CT Pillar | Key Lecture Concepts | How the Pillar Connects to Prompt Engineering

Decomposition
- Breaking problems into components (destination, budget, activities)
- Breaking problems/prompts using IPO flowcharts
- Variable identification
- Edge-case isolation
- Spec design
- Testing prompts by part
- Modular refinement (iterative design)
- Control flow components (if/then, loops)
Connection: Prompts are built from smaller sub-tasks. Decomposition helps structure prompts into clear input–output blocks and test each part separately.

Pattern Recognition
- Reusable prompt templates
- Prompting patterns (Zero-Shot, Few-Shot, CoT, Self-Consistency)
- Context management strategies
- Output format consistency (bullet lists, tables)
- Common error patterns (hallucinations, format drift)
- Debugging patterns (identify recurring mistakes)
- Edge case categories (negative budget, missing section)
Connection: Recognizing patterns in prompts and outputs allows students to reuse strong structures, avoid prior errors, and generalize solutions across domains.
Connecting to CT (continued)

CT Pillar | Key Lecture Concepts | How the Pillar Connects to Prompt Engineering

Abstraction
- Defining relevant inputs and ignoring noise
- Simplifying tasks (e.g., "Summarize key points")
- Zero-Shot Prompting (baseline tasks)
- Spec constraints to limit scope
- Context window management
- Format simplification (bullet vs. paragraph)
Connection: Abstraction ensures prompts focus only on essential information, reducing cognitive and computational overload for the LLM and ensuring context adherence.

Algorithmic Thinking
- Step-by-step prompt logic and IPO flowcharts
- Control flow (if conditions, loops)
- Edge case handling ("If budget < 0 → Invalid budget")
- Spec-Based Development
- Testing specifications (verification logic)
- Prompt debugging (AI4SE feedback loop)
- Iterative refinement cycle (Test → Fix → Retest)
- Prompting patterns
Connection: Algorithmic thinking turns prompting into a structured, testable process: defining rules, branching logic, and validation steps for the AI.
Topic 5 Learning Outcomes
• Use zero-shot and self-consistency prompting

• Handle edge cases in prompts

• Debug and refine prompts

• At-Home Tasks:

• Write a zero-shot summarizer prompt

• Test a Travel Planner prompt with an edge case

• Debug a faulty prompt

• Deliverable: Zero-shot prompt and edge case analysis

• This week's tutorial: Advance the Travel Planner with zero-shot prompting, self-consistency, and spec design
Tips for Success
• Experiment: Try multiple prompt versions

• Validate: Check outputs for accuracy

• Next Topic: Software engineering practices


Readings & Resources
• Books:

• “Writing Effective Use Cases” by Alistair Cockburn – Guides spec writing.

• Video: “AI-Assisted Programming” by GitHub (YouTube, ~20 min).

• Blog: “Debugging Prompts with LLMs” by Towards Data Science.

• Paper: Vaswani et al. (2017). “Attention Is All You Need.” arXiv:1706.03762 (sample paper).

• Tutorials:

• Udemy: “Software Requirements Engineering” ([Link]) – Beginner-friendly course.

• YouTube: “Writing Software Specifications” by Tech With Tim ([Link]).

• Task: Review resources to prepare for version control.
