INTRODUCTION TO
AI AND APPLICATIONS
1BAIA103/203 – Module 2
DEEPAK D
Assistant Professor, Dept. of AI&ML
Canara Engineering College, Mangaluru.
Outline
Module 2: Introduction to Prompt Engineering
1. Introduction to Prompt Engineering
   Overview of Prompt Engineering
   The Evolution of Prompt Engineering
   Types of Prompts
   How Does Prompt Engineering Work?
   The Role of Prompt Engineering in Communication
   The Advantages of Prompt Engineering
   The Future of Large Language Model (LLM) Communication
2. Prompt Engineering Techniques for ChatGPT
   Introduction to Prompt Engineering Techniques
   Instructions Prompt Technique
   Zero, One, and Few Shot Prompting
   Self-Consistency Prompt
3. Prompts for Creative Thinking
   Introduction to Creative Thinking with Prompts
   Unlocking Imagination and Innovation
4. Prompts for Effective Writing
   Introduction to Writing with Prompts
   Igniting the Writing Process with Prompts
Outline
Module 2: Introduction to Prompt Engineering
Textbook: Ajantha Devi Vairamani and Anand Nayyar, Prompt Engineering: Empowering Communication, 1st Edition, CRC Press, Taylor & Francis Group.
1. Overview of Prompt Engineering
2. The Evolution of Prompt Engineering
3. Types of Prompts
4. How Does Prompt Engineering Work?
5. The Role of Prompt Engineering in Communication
6. The Advantages of Prompt Engineering
7. The Future of Large Language Model (LLM) Communication
LLM (Large Language Model)
LLMs are a type of AI model designed to understand, process, and generate human-like
language.
How It Works: Trained on huge amounts of text (books, articles, websites) so it learns
patterns of language.
Examples:
ChatGPT → Answers questions like a human.
Google Gemini → Summarizes long documents.
Simple Analogy:
Think of an LLM like a very smart English dictionary + storyteller.
If you ask, “Explain photosynthesis,” it writes an answer as if a teacher explained it.
Top 10 Open-Source LLMs
The open-source LLM landscape is rapidly evolving, making powerful AI accessible to
everyone. Key models include:
• Llama (Meta AI): A versatile family of models from 7B to 70B parameters, known for
efficient architecture (Grouped-Query Attention) and strong performance in various
tasks like text generation and coding. Its license allows for commercial use with some
restrictions.
• GPT-J and GPT-NeoX (EleutherAI): Smaller, fully open-source models (6B and 20B
parameters, respectively) ideal for systems with limited resources. They are known for
generating coherent English text and are fully customizable.
• BLOOM (BigScience): A massive 176B parameter, multilingual model designed for
ethical and transparent AI development. It supports over 46 languages and is used for
translation, summarization, and other multilingual tasks.
• Mistral 7B (Mistral AI): A compact but powerful model that outperforms larger
competitors while being resource-efficient. It's available under a permissive Apache 2.0
license.
• Falcon LLM (Technology Innovation Institute): A series of high-performing
models (40B and 180B parameters) released under the Apache 2.0 license, making
them suitable for business applications.
• Command R+ (Cohere AI): A model designed for retrieval-augmented generation
(RAG), focusing on using external data for more accurate responses. It's primarily
API-based and not fully open-source.
• BERT (Google): A foundational model known for its bidirectional approach to
language understanding, which significantly improved tasks like search and
sentiment analysis. It was one of the first major transformer models to be open-
sourced.
• Qwen 1.5 14B (Alibaba): A multilingual model with a strength in both Chinese and
English, making it valuable for global applications.
• Zephyr 7B (Hugging Face H4): A fine-tuned variant of Mistral 7B optimized for fast,
high-quality chat responses, released with open weights on Hugging Face.
How to Choose the Right Open-Source LLM
• Define Your Use Case: The first step is to identify the specific task (e.g., chatbot, code
generation, summarization) and domain (e.g., finance, healthcare) for your project. A
model's performance can vary significantly depending on the task it's been optimized for.
• Evaluate Model Capabilities: Assess performance metrics like accuracy, fluency, and
coherence on relevant benchmarks. Consider task-specific performance and the model's
ability to handle complex instructions.
• Consider Hardware and Cost: Open-source models are "free" to use, but running them
incurs costs. The model's size (parameter count) directly impacts the required
computational resources and memory (VRAM). Smaller models like Phi-3 or Gemma can
run on consumer hardware, while larger models require significant infrastructure.
• Examine Licensing: Not all "open source" models have the same licensing. Look for
permissive licenses like Apache 2.0 that allow for commercial use. Some licenses may
have restrictions that limit their use in business applications.
• Look for Community Support: An active community and comprehensive documentation
on platforms like Hugging Face are invaluable. They provide support, pre-trained variants,
and fine-tuning examples, which can accelerate development.
• Experiment and Benchmark: It's crucial to test multiple models on your own datasets to
validate their performance and compare them against each other and your project
requirements.
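The hardware point above comes down to simple arithmetic: weight memory is roughly parameter count times bytes per parameter. The function below is our own illustrative helper (not from the text), and it estimates only the weights; a real deployment also needs room for the KV cache and activations.

```python
def estimate_vram_gb(n_params_billion, bytes_per_param=2.0):
    """Rough VRAM (in GB) needed just to hold the model weights.

    fp16/bf16 uses 2 bytes per parameter; 4-bit quantization uses about 0.5.
    Actual usage is higher once the KV cache and activations are included.
    """
    return n_params_billion * 1e9 * bytes_per_param / 1e9


# A 7B model such as Mistral 7B in fp16 needs about 14 GB for weights alone,
# which is why it fits on a single high-end consumer GPU, while a 70B or
# 176B model does not.
print(estimate_vram_gb(7))        # 14.0
print(estimate_vram_gb(7, 0.5))   # 3.5 (4-bit quantized)
```

This back-of-envelope check is usually the fastest way to rule models in or out before benchmarking them.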
GenAI (Generative AI)
A special branch of AI that can create new content like text, images, audio, or video.
How It Works: Uses models like LLMs to generate something new based on patterns it
has learned.
Examples:
Text → ChatGPT writes essays or poems.
Images → DALL·E or MidJourney creates paintings from descriptions.
Music → AI composes songs.
Simple Analogy:
If AI is the brain, and LLM is the language expert, GenAI is the artist that creates new
things.
GenAI (Generative AI): Imagine you're in a classroom:
AI = The entire school → Many teachers for different subjects.
LLM = The English teacher → Specializes in understanding and writing language.
GenAI = A creative writing teacher → Not only explains grammar but also writes poems,
stories, and essays for you.
[Diagram] AI → understands your request; LLM → uses language patterns to create a sentence; GenAI → generates the entire story.
1. Overview of Prompt Engineering
A prompt is simply the input instruction you give to an AI model. It can be a question, command, or example that
guides the model to produce the desired response.
Key idea: The better and clearer the prompt, the better the output.
Why is a Prompt Required?
Guides the AI: A prompt helps the AI understand the context and generate a relevant response.
Directs the Output: The quality of the AI's response depends on how well the prompt is crafted. A good prompt leads
to better, more accurate results.
Contextual Clarity: It ensures the model responds to a specific task, reducing ambiguity in what is expected.
Example of a Good Prompt:
"Write a short story about a dragon and a knight who become friends."
This prompt is clear, specific, and gives the model a clear task, so it can generate a relevant story.
Example of a Wrong Prompt:
"Tell me about dragons."
This prompt is too vague, and the AI might give a general response about dragons without focusing on the specific task of creating a story or narrative.
Think of an LLM as a knowledgeable chef.
If you say: "Make food" → The chef is confused.
If you say: "Make a vegetarian pasta with mushrooms, spinach, and cheese" → The chef gives you exactly what you want.
1. Overview of Prompt Engineering
Why Are Prompts Important?
Guides the AI: Prompts tell the AI what you want it to do. Without a clear prompt, the AI might
give you something completely unrelated.
Helps Generate Accurate Results: The more specific and detailed your prompt is, the more accurate
and useful the AI's output will be.
How is Prompt Engineering Useful?
Optimizing Output: By crafting better prompts (known as prompt engineering), we can improve
the quality of the AI's responses.
Control the AI: You can direct the AI to write stories, summarize text, generate images, and
more, based on how well you ask for it.
Simple Example:
Bad Prompt: "Tell me about cats." (Too vague, lacks focus, and can lead to an unfocused answer.)
Good Prompt: "Write a fun short story about a cat who travels to the moon."
Instructions Prompt Technique
The Instructions Prompt Technique involves providing specific guidelines to guide the AI model’s output. This helps ensure that
the generated text aligns with the desired objectives and meets the task’s requirements.
This technique ensures that AI responses are focused and appropriate for the task at hand by giving clear instructions.
Examples:
Customer Service:
Instruction: "Responses should be professional and provide accurate information."
Result: The AI will generate responses that are formal and factually correct.
Example Prompt: User Query: "How can I reset my password?"
Legal Document:
Instruction: "The document should comply with relevant laws and regulations."
Result: Ensures the document follows legal standards.
Example Prompt: User Query: "Please create a non-disclosure agreement."
Product Review:
Instruction: "The review should be unbiased and informative."
Combined with role prompting (tech expert) and seed-word prompting (smartphone features), it generates a detailed and
balanced review.
Example Prompt: User Query: "Write a product review for the new smartphone."
Combining Techniques:
Instructions can be paired with other techniques (like role prompting or seed-word prompting) to increase the precision and
control of the model's output.
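Combining an instruction with a role and seed words is easy to sketch as a prompt template. The `build_prompt` helper below is our own illustration (not from the textbook); in practice the returned string would be sent to an LLM API.

```python
def build_prompt(instruction, role=None, seed_words=None, query=""):
    """Assemble a prompt from an instruction, an optional role,
    and optional seed words, ending with the user's query."""
    parts = []
    if role:
        parts.append(f"You are {role}.")                    # role prompting
    parts.append(f"Instruction: {instruction}")             # instructions technique
    if seed_words:
        parts.append("Focus on: " + ", ".join(seed_words))  # seed-word prompting
    parts.append(f"User Query: {query}")
    return "\n".join(parts)


# The product-review example above, with all three techniques combined:
prompt = build_prompt(
    instruction="The review should be unbiased and informative.",
    role="a tech expert",
    seed_words=["battery life", "camera", "display"],
    query="Write a product review for the new smartphone.",
)
print(prompt)
```

Keeping the techniques as separate template slots makes it easy to add or drop a role or seed words without rewriting the whole prompt.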
Types of Prompts
1. Natural Language Prompts
These prompts are written in natural human language, making them more intuitive for developers to interact with the model. The goal is to provide instructions that mimic human speech, making the communication more natural.
Prompt: "Can you explain how photosynthesis works?"
Output: "Photosynthesis is the process by which plants convert light energy into chemical energy stored in glucose."
2. System Prompts
System prompts are pre-written instructions or templates that guide the AI model on how to respond. These prompts can direct the format, style, or tone of the output, providing a clear framework for the model to follow.
Prompt: "Write a formal email explaining the delay in project delivery."
Output: "Subject: Project Delivery Delay. Dear [Recipient], I regret to inform you that due to..."
3. Conditional Prompts
Conditional prompts involve setting specific conditions or limitations that direct the model's behavior based on certain criteria. These conditions can be framed as logical statements, like "If X, then Y", guiding the model on what to produce in different scenarios.
Prompt: "If the user asks about weather, provide a weather update. If the user asks about sports, provide the latest sports news."
Input: "What's the weather like today?"
Output: "The weather today is sunny with a high of 25°C."
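The "If X, then Y" logic of a conditional prompt can be mirrored in a few lines of code. The toy dispatcher below is our own sketch: in a real system the conditions live inside the prompt and the LLM itself follows them, but the routing idea is the same.

```python
# Each rule pairs a condition keyword with the instruction to apply when it matches.
RULES = [
    ("weather", "Provide a weather update."),
    ("sports", "Provide the latest sports news."),
]


def route(user_input):
    """Pick the instruction whose condition keyword appears in the input;
    fall back to a default instruction when no condition matches."""
    text = user_input.lower()
    for keyword, instruction in RULES:
        if keyword in text:
            return instruction
    return "Answer the question directly."


print(route("What's the weather like today?"))  # Provide a weather update.
print(route("Any sports highlights?"))          # Provide the latest sports news.
```

Writing the conditions as data (the `RULES` list) rather than hard-coded branches makes it easy to add new scenarios later.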
How Does Prompt Engineering Work?
Prompt engineering is a systematic process that involves crafting effective
instructions for Large Language Models (LLMs) to get the best output. The process is
iterative, meaning you might need to refine your prompts multiple times to achieve the
desired results.
Steps for Effective Prompt Engineering
• Understand the Task
• Use Precise Language
• Add Specific Details
• Provide Examples
• Experiment and Refine
Understand the Task: Have a clear idea of what you want the AI to achieve.
Vague Prompt: "Tell me about cars."
Clearer Objective: "I need a list of the top 5 most fuel-efficient hybrid cars from 2024."
Use Precise Language: Use specific and unambiguous words in your prompt. Avoid jargon or open-ended terms that could be interpreted in multiple ways.
Vague Prompt: "Write a short summary of a book."
Precise Prompt: "Summarize the plot of The Hobbit in exactly 150 words."
Add Specific Details: The more context and constraints you provide, the better the AI can tailor its response.
Basic Prompt: "Give me some travel tips for Mangaluru."
Prompt with Details: "You are a local tour guide in Mangaluru. Give me a list of three non-touristy restaurants and describe why they are unique, in a friendly and casual tone."
Provide Examples: This is few-shot prompting. By providing one or more examples of a correct input-output pair, you teach the model the exact pattern and style you expect.
Sentence: "I love this new phone!" Category: Positive
Sentence: "The delivery was late, and the product was broken." Category:
Experiment and Refine: Prompt engineering is an iterative process.
Initial Prompt: "Write a short description of a sci-fi character."
Refined Prompt: "Write a detailed profile of a cybernetic detective in a futuristic city. Describe his appearance, his primary motivation, and a unique piece of his technology in three paragraphs."
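The "Provide Examples" step can be sketched as a small prompt builder. This is our own illustration: the first labelled pair is from the sentiment example above, the second (Negative) label is one we added for a more complete demonstration, and the final "Category:" is left blank for the model to fill in.

```python
# Labelled input-output pairs shown to the model before the real query.
EXAMPLES = [
    ("I love this new phone!", "Positive"),
    ("The delivery was late, and the product was broken.", "Negative"),
]


def few_shot_prompt(sentence):
    """Build a few-shot classification prompt: examples first,
    then the new sentence with its category left for the model."""
    lines = []
    for text, label in EXAMPLES:
        lines.append(f'Sentence: "{text}"\nCategory: {label}')
    lines.append(f'Sentence: "{sentence}"\nCategory:')
    return "\n".join(lines)


print(few_shot_prompt("The screen cracked on the first day."))
```

Because the examples fix both the format and the label set, the model is far more likely to answer with a single word like "Negative" instead of a free-form paragraph.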
The Advantages of Prompt Engineering
Task → Benefit
Question Answering → Increases accuracy in providing factual responses.
Creative Writing → Enhances creativity and produces engaging, imaginative content.
Machine Translation → Improves precision and ensures context-aware translations.
Coding → Helps in generating accurate and effective code.
The Future of Large Language Model (LLM) Communication
As LLM technology progresses, prompt engineering will continue to
evolve, shaping the way we interact with AI models.
Ongoing research into adaptive prompts and AI behavior control
will make communication with LLMs more sophisticated and
contextually aware than ever before.
The future holds exciting possibilities for AI-powered communication, improving everything from language translation to creative collaboration.
7. The Future of Large Language Model (LLM) Communication
AI-Powered Negotiation Tools: AI assists in real-time negotiations, providing balanced solutions for win-win agreements. Example: AI suggesting compromises during a business meeting to ensure a fair deal.
Real-Time Language Translation: LLMs translating languages with understanding of cultural nuances and context. Example: Seamless cross-lingual communication in a global business meeting.
Customized News Updates: Personalized news feeds tailored to individual interests and preferences. Example: A personalized news feed focusing on environmental and technological topics.
Automated Prompt Generation: AI systems that generate optimized prompts for tasks. Example: Automated generation of prompts based on task descriptions.
Adaptive Prompts: Prompts adjust dynamically based on user interactions, improving personalization over time. Example: An AI assistant adapting its suggestions based on past interactions.
Subtle Cues for Complex Tasks: AI that responds to subtle cues, managing multi-step tasks intuitively. Example: AI handling multi-step creative projects with minimal input.
Prompt Engineering Techniques for ChatGPT
1. Introduction to Prompt Engineering Techniques
2. Instructions Prompt Technique
3. Zero, One, and Few Shot Prompting
4. Self-Consistency Prompt
Key Techniques in Prompt Engineering
• Standard Prompts: Simple prompts that directly ask the AI to perform a specific task.
• Role Prompting: The model is given a specific role to play, influencing the tone, style, or
behavior of the response.
• Seed-Word Prompting: The prompt provides a seed word or phrase that the
model should build upon, guiding its response based on that initial cue.
• Conditional Prompting: Conditions or limitations are set for the model, such as "If this,
then that".
3. Zero, One, and Few Shot Prompting
Zero, One, and Few Shot Prompting are techniques in prompt engineering that
help guide AI models, like ChatGPT, to generate responses based on different
amounts of context or examples. These techniques are useful when you want
the model to produce responses with limited information.
1. Zero Shot Prompting
2. One Shot Prompting
3. Few Shot Prompting
1. Zero Shot Prompting
The model generates a response without any examples or context. You simply give
a prompt or instruction, and the model uses its pre-existing knowledge to generate
an answer.
Example:
Prompt: "Write a poem about nature."
Output: A poem generated by the model based on its general knowledge.
When to Use:
When you want the model to think creatively without any specific examples.
Useful for open-ended tasks or when you want to explore the model’s
generalization abilities.
Limitation:
Sometimes results may be less accurate or relevant without context.
2. One Shot Prompting
What It Is:
The model generates a response based on one example you provide. This single
example helps guide the model in understanding the context or task.
Example:
Prompt: "Summarize the following paragraph: 'The sun rises in the east and sets
in the west.'"
Output: "The sun moves across the sky from east to west."
When to Use:
When you have one clear example but still need the model to generate a
response with some level of accuracy.
Limitation:
If the task is complex, one example might not be enough, and you may need
more examples.
3. Few Shot Prompting
The model generates a response based on a few examples you provide. This gives
the model more context and helps it generate more accurate and relevant
responses.
Example:
Prompt: "Write a product review based on these features:
Battery life: 10 hours
Camera: 12 MP
Screen: 6.5 inches"
Output: A detailed product review based on the provided features.
When to Use:
When you need more context or examples to help the model generate precise and
relevant responses.
Limitation:
You need to ensure that the examples are relevant and complete to get the best results.
4. Self-Consistency Prompt
Self-Consistency Prompting is a technique used in large language models (LLMs) to
improve reasoning tasks such as math word problems, logical reasoning, or multi-step
problem solving.
Instead of generating just one answer directly, the model is asked to produce multiple
different reasoning paths (chains of thought). Then, the most common or most
consistent answer across those reasoning paths is selected as the final output.
Example: "You have 10 apples. You give 3 to your friend and eat 2 yourself. How many apples are left with you?"
Several reasoning chains are sampled for this prompt; most of them compute 10 − 3 − 2 = 5, so 5 is selected as the final answer.
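The selection step of self-consistency is just a majority vote over sampled answers. The sketch below uses hand-written sample answers for the apple problem (in a real pipeline each answer would come from an independently sampled chain-of-thought run of the model):

```python
from collections import Counter

# Final answers extracted from several independently sampled reasoning
# chains for "10 apples, give 3 away, eat 2". One chain made an
# arithmetic slip and answered 7; the rest computed 10 - 3 - 2 = 5.
sampled_answers = [5, 5, 7, 5, 5]


def majority_answer(answers):
    """Return the most frequent final answer across reasoning paths."""
    return Counter(answers).most_common(1)[0][0]


print(majority_answer(sampled_answers))  # prints 5
```

The vote makes the output robust to an occasional faulty chain, which is exactly why self-consistency helps on math and multi-step reasoning tasks.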
Key Features of Self-Consistency Prompts:
Incorporates Key Information:
The prompt includes critical facts, statements, or context that guide the model
to maintain consistency.
Guides Logical Flow:
Ensures that the model's responses remain logically aligned with the established context throughout the conversation.
Balances Detail and Flexibility:
While it’s important to provide enough context, it’s crucial not to over-restrict
the prompt, as that could limit the model's creativity and diversity in responses.
Self-Consistency Prompt
When to Use Self-Consistency Prompts:
Educational Content:
Ensures the information presented is accurate and aligned with established knowledge, making it useful for creating educational materials.
Technical Discussions:
When discussing specialized topics,
like engineering or medicine, where
accuracy and adherence to known facts are critical.
Decision-Making:
In scenarios where decisions rely on consistent data or logical reasoning, like business strategies or technical choices.
Thank You
Module 2 - Completed