
[Prompt engineering]

Prompt engineering session – 2


Summary 22-07-2024

• Every large language model (LLM) has a fixed context window. For example, if we write a prompt in ChatGPT about some Python code, it remembers that code; the span of conversation the model can keep in mind at once (measured in tokens) is known as the context window.
• When writing a prompt for a large language model (LLM), it is essential that the AI stays consistent with the context so that it produces relevant responses. If it drifts out of context, we have to remind it by giving it the relevant information again.
• If you go to the OpenAI website you will see that it offers many models. The latest ones, GPT-4o and GPT-4o mini, have a context window (context length) of 128k tokens, while older models such as GPT-3.5-Turbo have a context length of roughly 16k tokens.
• Generative AI is not a search engine or a database that already knows your name, but if we say "Hello, my name is Vimal Daga", then it remembers it for the rest of the conversation.
• It knows our name because it has its own memory, known as long short-term memory (LSTM).
• Once the context window limit is reached, the generative AI model starts forgetting information in sequence, beginning from the start of the conversation.
• When we say something it answers, then we speak again and it answers again; models that work this way are known as chat models. There are many kinds of models, such as chat models, image-generation models, video-generation models, and so on.
• To make the AI model remember or recall all the context, we can simply say "Hey, can you create a summary of all of the above?" It will then create a summary, and all that information is carried forward in its current memory (a minimal sketch of this summarise-and-continue trick follows after this list block).
• If you want the AI to forget the context, just type "Forget everything till now" and it will forget.
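The sketch below shows one way to apply the two ideas above programmatically: keep a rough count of how full the conversation is and, when it nears the context window, ask the model to summarise everything so far and carry only the summary forward. This is a minimal sketch, not the exact session demo; it assumes the openai Python package, an OPENAI_API_KEY environment variable, the gpt-4o-mini model, and the 128k context length quoted in the notes above.

    from openai import OpenAI

    client = OpenAI()                      # reads OPENAI_API_KEY from the environment
    MODEL = "gpt-4o-mini"
    CONTEXT_LIMIT_TOKENS = 128_000         # context length mentioned in the notes

    # Running chat history, as a list of role/content messages.
    history = [
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ]

    def rough_token_count(messages):
        # Crude estimate (~4 characters per token); enough to tell when the
        # conversation is getting close to the context window.
        return sum(len(m["content"]) for m in messages) // 4

    def compress_history(messages):
        # The trick from the session: ask for a summary of everything above.
        ask = messages + [{"role": "user",
                           "content": "Can you create a summary of all of the above?"}]
        reply = client.chat.completions.create(model=MODEL, messages=ask)
        summary = reply.choices[0].message.content
        # Keep only the summary, so the older turns no longer use up the window.
        return [{"role": "system",
                 "content": "Summary of the conversation so far: " + summary}]

    if rough_token_count(history) > 0.8 * CONTEXT_LIMIT_TOKENS:
        history = compress_history(history)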

• You don't have to start a new chat; if you write a prompt saying "forget everything", you can start a new conversation in your current chat.
• Whenever humans get a problem, they solve it step by step and only give the answer after working through it. ChatGPT solved the problem in the same way, but the challenge is that while this ChatGPT model gave the correct answer, if we ask the same problem to another model, or to an older model, there is a possibility it will give a different answer.
• ChatGPT does not simply hand you stored information; whatever you ask, it connects things from here and there and gives you an answer.
• Whenever you ask a question it will always give you an answer, but if it does not know something it will make up an answer. This is known as hallucination, so we can't fully rely on the information given by ChatGPT, because it will never say "I don't know".
• If you add the phrase "let's think step by step" to your prompt, it will give you a proper step-by-step answer. This is very useful for any DSA question, maths question, or Python question (see the prompt sketch after this list).
• If you are going to give information to ChatGPT, use an INPUT: tag. It is not a hard-coded tag, but the model understands it.
• In the desired format we told it that all the places should be separated by commas.
• You can adjust the desired format of your output as you like.
• You can state what you want, the output format you want it in, and supply the input from which you want the answers.
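Below is a minimal sketch of the kind of prompt described above, combining "let's think step by step", an INPUT: tag, and a desired output format. It is an illustration, not the session's exact prompt: the place names are made up, and the openai Python package and the gpt-4o-mini model are assumptions (any chat model would do).

    from openai import OpenAI

    client = OpenAI()                          # assumes OPENAI_API_KEY is set

    prompt = """Let's think step by step.

    Extract every place mentioned in the INPUT and return them in the DESIRED FORMAT.

    INPUT: I flew from Delhi to Mumbai, stopped over in Chennai, and finally reached Bengaluru.

    DESIRED FORMAT: a JSON array of the places, separated by commas."""

    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)    # e.g. ["Delhi", "Mumbai", "Chennai", "Bengaluru"]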

• In the session demo, the output came as a list (an array).


• Here, from a single prompt, it produced output in our desired format, which was JSON.
• Now we can store this data in MongoDB or any other database and use it as we want (a minimal sketch follows below).
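As a rough illustration of that last point, the sketch below parses a JSON reply from the model and inserts it into MongoDB with pymongo. The reply string, database name, and collection name are all hypothetical; it assumes a MongoDB instance running locally and the pymongo package installed.

    import json
    from pymongo import MongoClient

    # A (made-up) JSON reply from the model, in our desired format.
    reply = '[{"place": "Delhi"}, {"place": "Mumbai"}, {"place": "Chennai"}]'

    records = json.loads(reply)                          # parse the JSON text into Python dicts

    client = MongoClient("mongodb://localhost:27017")    # assumes a local MongoDB instance
    collection = client["prompt_demo"]["places"]         # hypothetical database / collection
    collection.insert_many(records)                      # store the structured output for reuse
    print(collection.count_documents({}))                # quick check that the rows are there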
