
AI Unit 3

The document discusses knowledge representation in artificial intelligence. It covers different types of knowledge that need to be represented such as objects, events, facts, and meta-knowledge. It also discusses various techniques for knowledge representation including logical representation, semantic networks, frames, production rules, and propositional logic. Knowledge representation is important for modeling intelligent behavior and allowing computers to understand and utilize knowledge to solve real-world problems.


ARTIFICIAL INTELLIGENCE

UNIT-03
Ms. Neha Yadav
Assistant Professor
Applied Computational Science and Engineering
Knowledge Representation
• Knowledge Representation in AI is the study of how the beliefs, intentions, and
judgments of an intelligent agent can be expressed in a form suitable for
automated reasoning.
• One of the primary purposes of Knowledge Representation is modeling
intelligent behavior for an agent.
• It is responsible for representing information about the real world so that a
computer can understand it and utilize this knowledge to solve complex real-world
problems such as diagnosing a medical condition or communicating with
humans in natural language.
What to Represent?
Following are the kinds of knowledge which need to be represented in AI systems:
Objects: All the facts about objects in our world domain. E.g., guitars contain strings; trumpets are
brass instruments.
Events: Events are the actions which occur in our world.
Performance: It describes behavior which involves knowledge about how to do things.
Meta-knowledge: It is knowledge about what we know.
Facts: Facts are the truths about the real world and what we represent.
Knowledge-Base: The central component of a knowledge-based agent is the knowledge base,
represented as KB. A knowledge base is a group of sentences (here, "sentence" is used as a
technical term and is not identical with an English sentence).
Different Types of Knowledge
1. Declarative Knowledge:
• Declarative knowledge is to know about something.
• It includes concepts, facts, and objects.
• It is also called descriptive knowledge and is expressed in declarative sentences.
• It is simpler than procedural knowledge.

2. Procedural Knowledge
• It is also known as imperative knowledge.
• Procedural knowledge is a type of knowledge which is responsible for knowing how to do something.
• It can be directly applied to any task.
• It includes rules, strategies, procedures, agendas, etc.
• Procedural knowledge depends on the task on which it can be applied.
Different Types of Knowledge
3. Meta-knowledge:
Knowledge about the other types of knowledge is called Meta-knowledge.
4. Heuristic knowledge:
• Heuristic knowledge represents the knowledge of experts in a field or subject.
• Heuristic knowledge consists of rules of thumb based on previous experience and awareness of
approaches that tend to work well but are not guaranteed.
5. Structural knowledge:
• Structural knowledge is basic to problem-solving.
• It describes the relationships that exist between various concepts or objects, such as kind-of,
part-of, and grouping of something.
AI knowledge cycle
Artificial intelligence systems usually consist of various components to display their
intelligent behavior. Some of these components include:

1. Perception
2. Learning
3. Knowledge Representation & Reasoning
4. Planning
5. Execution
AI knowledge cycle
1. Perception Block

• This will help the AI system gain information regarding its surroundings through various sensors, thus
making the AI system familiar with its environment and helping it interact with it.
• These senses can be in the form of typical structured data or other forms such as video, audio, text,
time, temperature, or any other sensor-based input.

2. Learning Block

• The knowledge gained will help the AI system to run the deep learning algorithms.
• These algorithms are written in the learning block, making the AI system transfer the necessary
information from the perception block to the learning block for learning (training).
AI knowledge cycle
3. Knowledge and Reasoning Block

• As mentioned earlier, we use knowledge and, based on it, we reason and then take decisions.
• Thus, these two blocks are responsible for going through all the knowledge data, as a human
would, and finding the relevant parts to provide to the learning model whenever required.

4. Planning and Execution Block

• These two blocks, though independent, can work in tandem.


• These blocks take the information from the knowledge block and the reasoning block and, based on it,
execute certain actions.
• Thus, knowledge representation is extremely useful for AI systems to work intelligently.
Techniques of knowledge representation
Logical Representation
• Logical representation is a language with some definite rules which deal with propositions and has no
ambiguity in representation.
• It represents a conclusion based on various conditions and lays down some important communication
rules.
• Also, it consists of precisely defined syntax and semantics which support sound inference. Each
sentence can be translated into logic using this syntax and semantics.
Advantages:
• Logical representation helps to perform logical reasoning.
• This representation is the basis for programming languages.
Disadvantages:
• Logical representations have some restrictions and are challenging to work with.
• This technique may not be very natural, and inference may not be very efficient.
Semantic Network Representation
• Semantic networks work as an alternative to predicate logic for knowledge representation.
• In semantic networks, you can represent your knowledge in the form of graphical networks.
• This network consists of nodes representing objects and arcs which describe the relationships between
those objects. It also categorizes objects in different forms and links those objects.

• This representation consists of two types of relations:

• IS-A relation (inheritance)

• Kind-of relation
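As a minimal illustration (with invented nodes and relations), a semantic network can be stored as a set of (node, relation, node) triples, and inheritance can be tested by walking the IS-A / kind-of arcs:

```python
# A tiny semantic network as (node, relation, node) triples.
# The nodes and relations are invented for this example.

triples = {
    ("Tom", "is_a", "Cat"),
    ("Cat", "kind_of", "Mammal"),
    ("Mammal", "kind_of", "Animal"),
}

def inherits(node, ancestor, triples):
    """Follow is_a / kind_of arcs upward to test whether node inherits from ancestor."""
    parents = {s: o for s, r, o in triples if r in ("is_a", "kind_of")}
    while node in parents:
        node = parents[node]
        if node == ancestor:
            return True
    return False
```

For example, `inherits("Tom", "Animal", triples)` is true because Tom is-a Cat, which is a kind of Mammal, which is a kind of Animal.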
Semantic Network Representation
Advantages:

• Semantic networks are a natural representation of knowledge.
• They convey meaning in a transparent manner.
• These networks are simple and easy to understand.

Disadvantages:

• Semantic networks take more computational time at runtime.
• They are inadequate, as they have no equivalent of quantifiers.
• These networks are not intelligent and depend on the creator of the system.
Frame Representation
• A frame is a record-like structure that consists of a collection of attributes and values to describe an entity
in the world.
• Frames are AI data structures that divide knowledge into substructures by representing stereotyped
situations.
• Basically, a frame consists of a collection of slots and slot values of any type and size. Slots have names
and values, which are called facets.
Advantages:
• It makes the programming easier by grouping the related data.
• Frame representation is easy to understand and visualize.
• It is very easy to add slots for new attributes and relations.
• Also, it is easy to include default data and search for missing values.

Disadvantages:
• In a frame system, the inference mechanism cannot be easily processed.
• Frame representation has a very generalized approach.
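A frame can be sketched as a nested dictionary of slots and facets. The entity, slot names, and default values below are invented for illustration, including the default-value lookup mentioned in the advantages above:

```python
# A frame modeled as a nested dict of slots and facets (illustrative names).

hotel_room = {                      # stereotyped situation: a hotel room
    "is_a": "room",
    "slots": {
        "bed":   {"value": "hotel_bed"},
        "phone": {"value": "hotel_phone"},
        "rate":  {"value": None, "default": 100},  # default fills a missing value
    },
}

def get_slot(frame, slot):
    """Return a slot's value, falling back to its default facet when unset."""
    facets = frame["slots"][slot]
    return facets.get("value") or facets.get("default")
```

Here `get_slot(hotel_room, "rate")` falls back to the default facet, illustrating how frames make it easy to include default data and search for missing values.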
Production Rules
• In production rules, the agent checks the condition, and if the condition holds, the production rule fires
and the corresponding action is carried out.
• The condition part of the rule determines which rule may be applied to a problem, whereas the action
part carries out the associated problem-solving steps. This complete process is called a recognize-act
cycle.

The production rules system consists of three main parts:

The set of production rules

Working Memory

The recognize-act cycle
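The recognize-act cycle described above can be sketched in a few lines of Python; the rules and working-memory facts here are invented for illustration:

```python
# A minimal sketch of a production system's recognize-act cycle.
# The rules and facts are illustrative, not from any real system.

rules = [
    # (condition over working memory, fact to add when the rule fires)
    (lambda wm: "raining" in wm, "ground_wet"),
    (lambda wm: "ground_wet" in wm, "slippery"),
]

def recognize_act(working_memory, rules):
    """Repeatedly fire any rule whose condition matches, until nothing new is added."""
    wm = set(working_memory)
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(wm) and action not in wm:
                wm.add(action)   # the "act" step: add the rule's conclusion
                changed = True
    return wm
```

Starting from the single fact "raining", the cycle fires both rules and working memory ends up containing "ground_wet" and "slippery" as well.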
Approaches to Knowledge Representation in AI
Simple Relational Knowledge
This is a relational method of storing facts which is among the simplest of the method. This method helps in
storing facts where each fact regarding an object is providing in columns. This approach is prevalent in DBMS
(database management systems).

Inheritable Knowledge
Knowledge here is stored hierarchically. A well-structured hierarchy of classes is formed where data is stored,
which provides the opportunity for inference. Here we can apply inheritance property, allowing us to have
inheritable knowledge. This way, the relations between instance and class (aka instance relation) can be
identified. Unlike Simple Relations, here, the objects are represented as nodes.

Inferential Knowledge
In this method, formal logic is used. Being a very formal approach, facts can be retrieved with a high level of
accuracy.

Procedural Knowledge
This method uses programs and code with simple if-then rules. This is the way many programming
languages such as LISP and Prolog store information. We may not use this method to represent all forms of
knowledge, but domain-specific knowledge can be stored very efficiently in this manner.
Propositional logic in Artificial intelligence
• Propositional logic (PL) is the simplest form of logic where all the statements are made by propositions.
• A proposition is a declarative statement which is either true or false.
• It is a technique of knowledge representation in logical and mathematical form.
• A statement is a proposition if it is either true or false. Examples of propositions include "2+2=4" and
"The sky is blue."
• Logical connectives are used to combine propositions to form more complex statements.
• Truth tables are used to represent the truth values of propositions and the logical connectives that
combine them.
• In propositional logic, inference rules are used to derive new propositions from existing ones.
• Propositional logic is a limited form of logic that only deals with propositions that are either true or false.

Example:
a) It is Sunday.
b) The Sun rises from West (False proposition)
c) 3+3= 7(False proposition)
d) 5 is a prime number.
Syntax of Propositional Logic
• Syntax of propositional logic refers to the formal rules for constructing statements in propositional logic.
• Propositional logic deals with the study of propositions, which are declarative statements that are either
true or false.
• The syntax of propositional logic consists of two main components: atomic propositions and compound
propositions.
Atomic Propositions
• Atomic propositions are simple statements that cannot be broken down into simpler statements. They are
the building blocks of propositional logic. An atomic proposition can be represented by a letter or symbol,
such as p, q, r, or s. For example, the following are atomic propositions:
• p: The sky is blue. q: The grass is green. r: 2+2=4. s: The Earth orbits the Sun.
Compound Propositions
• Compound propositions are formed by combining atomic propositions using logical operators. There are
several logical operators in propositional logic, including negation, conjunction, disjunction, implication, and
equivalence.
• Example:
• "It is raining today, and the street is wet."
• "Ankit is a doctor, and his clinic is in Mumbai."
Logical Connectives
When connecting two simpler assertions or logically expressing a statement, logical connectives are used. Using
logical connectives, we can build compound propositions. The following list of connectives includes the main five:
Negation
The negation of a proposition p is denoted by ¬p and is read as "not p". For example: ¬p: The sky is not blue.

Conjunction
The conjunction of two propositions p and q is denoted by p ∧ q and is read as "p and q". The conjunction
is true only if both p and q are true. For example: p ∧ q: The sky is blue and the grass is green.

Disjunction
The disjunction of two propositions p and q is denoted by p ∨ q and is read as "p or q". The disjunction is
true if at least one of p and q is true. For example: p ∨ q: The sky is blue or the grass is green.
Logical Connectives
Implication
The implication of two propositions p and q is denoted by p → q and is read
as "if p then q". The implication is false only when p is true and q is false.

Biconditional
A sentence such as P ⇔ Q is a biconditional sentence, for example: "I will eat
lunch if and only if my mood improves." With P = "I will eat lunch" and
Q = "my mood improves", it can be represented as P ⇔ Q.
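The five connectives can be written directly as small Python functions over truth values, which makes any truth table easy to generate:

```python
# The five propositional connectives as Python functions (standard definitions).

def neg(p):        return not p
def conj(p, q):    return p and q
def disj(p, q):    return p or q
def implies(p, q): return (not p) or q      # false only when p is true and q false
def iff(p, q):     return p == q            # biconditional: same truth value

# Truth table for p -> q over all four assignments:
table = [(p, q, implies(p, q)) for p in (True, False) for q in (True, False)]
```

The generated `table` reproduces the implication column: the only row with result False is (True, False).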
Summarized table for Propositional Logic Connectives
Truth table with Three Propositions
Precedence of Connectives
Logical equivalence
 Logical equivalence is one of the features of propositional logic. Two
propositions are said to be logically equivalent if and only if their columns in the
truth table are identical to each other.
 Let's take two propositions A and B; for logical equivalence, we can write it
as A⇔B. In the truth table below we can see that the columns for ¬A ∨ B and A→B are
identical, hence A is equivalent to B.
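This equivalence can be checked mechanically: the truth-table column of ¬A ∨ B is identical to that of A → B.

```python
# Compare the truth-table column of ¬A ∨ B with that of A → B.

def implies(a, b):
    # A → B is false only in the single case A true, B false
    return not (a and not b)

rows = [(a, b) for a in (True, False) for b in (True, False)]
col_not_a_or_b  = [(not a) or b for a, b in rows]
col_a_implies_b = [implies(a, b) for a, b in rows]
columns_identical = col_not_a_or_b == col_a_implies_b
```

Both columns come out as True, False, True, True over the four assignments, so the two formulas are logically equivalent.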
Properties of Operators:
Commutativity:
• P ∧ Q = Q ∧ P
• P ∨ Q = Q ∨ P
Associativity:
• (P ∧ Q) ∧ R = P ∧ (Q ∧ R)
• (P ∨ Q) ∨ R = P ∨ (Q ∨ R)
Identity element:
• P ∧ True = P
• P ∨ True = True
Distributive:
• P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R)
• P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R)
De Morgan's Law:
• ¬(P ∧ Q) = (¬P) ∨ (¬Q)
• ¬(P ∨ Q) = (¬P) ∧ (¬Q)
Double-negation elimination:
• ¬(¬P) = P
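Each of these properties can be verified by brute force over every truth assignment; a quick sketch:

```python
from itertools import product

def equivalent(f, g, n=2):
    """Brute-force check that two n-ary Boolean formulas agree on every assignment."""
    return all(f(*vals) == g(*vals) for vals in product((True, False), repeat=n))

# De Morgan's law: ¬(P ∧ Q) = (¬P) ∨ (¬Q)
demorgan_and = equivalent(lambda p, q: not (p and q),
                          lambda p, q: (not p) or (not q))

# Distributivity: P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R)
distributive = equivalent(lambda p, q, r: p and (q or r),
                          lambda p, q, r: (p and q) or (p and r), n=3)
```

Both checks succeed, while a false "law" such as P ∧ Q = P ∨ Q would fail on the mixed assignments.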
Limitations of Propositional logic:
• We cannot represent relations like all, some, or none with propositional logic.
Example:
• All the girls are intelligent.
• Some apples are sweet.
• Propositional logic has limited expressive power.
• In propositional logic, we cannot describe statements in terms of their properties
or logical relationships.
First Order Logic
 To represent complicated phrases or natural language statements, PL is insufficient.
 The expressive power of propositional logic is quite restricted.
 Consider the following sentences, which cannot be represented using PL:

1. "Some humans are intelligent", or

2. "Sachin likes cricket."

 PL is insufficient to represent the above statements, so we require a more powerful
logic, such as first-order logic.
 First-order logic is another method of knowledge representation. It is an extension of
propositional logic.
 FOL has enough expressiveness to convey natural language statements succinctly.
 Predicate logic or first-order predicate logic are other names for first-order logic.
First Order Logic
Like propositional logic, first-order logic (like natural language) implies that the world contains
facts, but it also assumes the following things in the world.
Objects: A, B, people, numbers, colors, squares, pits, wars, theories, wumpus, ...
Relations: These can be unary relations such as red, round, is adjacent; or n-ary relations such as the
sister of, brother of, has color, comes between.
Functions: father of, best friend, third inning of, end of, ...

First-order logic also has two main parts as a natural language:


• Syntax
• Semantics
Syntax of First-Order logic:
In first-order logic, the syntax of FOL determines which collections of symbols form legal logical
expressions. Symbols are the core syntactic constituents of first-order logic. In FOL, we use short-hand
notation to write statements.
The basic elements of FOL syntax are as follows:
Constants: 1, 2, A, John, Mumbai, cat, ...
Variables: x, y, z, a, b, ...
Predicates: Brother, Father, >, ...
Functions: sqrt, LeftLegOf, ...
Connectives: ∧, ∨, ¬, ⇒, ⇔
Equality: ==
Quantifiers: ∀, ∃
Atomic sentences:
• Atomic sentences are the most fundamental first-order logic sentences.
These sentences are made up of a predicate symbol followed by a
parenthesized sequence of terms.
• An atomic sentence is written as Predicate(term1, term2, ......, term n).
• Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => cat(Chinky).
Complex Sentences:
 Connectives are used to join atomic sentences to form complex sentences.
 A first-order logic statement has two parts:
Subject: The major component of the sentence is the subject.
Predicate: A predicate is a relationship that ties terms together in a
sentence.
 Consider the statement "x is an integer." It has two parts: the
first component, x, is the statement's subject, and the second half, "is an
integer," is the predicate.
Quantifiers in First-order logic:
• Quantification describes the quantity of specimens in the universe of
discourse, and a quantifier is a linguistic element that generates
quantification.
• These are the symbols that allow you to determine or identify the
variable's range and scope in a logical expression. There are two
different kinds of quantifiers:
Universal Quantifier, (for all, everyone, everything)
Existential quantifier, (for some, at least one).
Universal Quantifier:
A universal quantifier is a logical symbol that indicates that a statement inside its range is true for
everything or every instance of a specific thing.
Note: the implication of universal quantifier is "→".
If x is a variable, then ∀x is read as:
For all x
For each x
For every x.
Existential Quantifier:
The Existential Quantifier (∃) is a key idea in logic that allows you to say that at least one
member in a group or domain meets a certain condition.
Existential quantifiers express that a statement is true for at least one instance of something
within their scope.
Note: we always use the AND or Conjunction symbol (∧) in Existential quantifiers.
If x is a variable, the existential quantifier is written as ∃x or (∃x), and it is read as follows:
There exists an 'x.'
For some 'x.'
For at least one 'x.'
Some Examples of FOL using quantifier:
1. All birds fly.
The predicate in this question is "fly(bird)."
Because all birds are able to fly, it will be portrayed as follows.
∀x bird(x) →fly(x).
2. Every man respects his parent.
The predicate in this question is "respect(x, y)," where x=man, and y= parent.
Because there is every man so will use ∀, and it will be portrayed as follows:
∀x man(x) → respects (x, parent).
3. Some boys play cricket.
In this question, the predicate is "play(x, y)," where x = boys and y = game. Because there are only some
boys, we will use ∃, and since ∃ pairs with ∧ (as noted above), it will be portrayed as:
∃x boys(x) ∧ play(x, cricket).
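Over a finite domain, the two quantifiers reduce to Python's all() and any(). The bird domain below is invented for illustration:

```python
# ∀ and ∃ over a finite domain, using all() and any().
# The domain and predicate values are invented for illustration.

birds = ["sparrow", "crow", "penguin"]
flies = {"sparrow": True, "crow": True, "penguin": False}

all_birds_fly   = all(flies[b] for b in birds)   # ∀x bird(x) → fly(x)
some_bird_flies = any(flies[b] for b in birds)   # ∃x bird(x) ∧ fly(x)
```

Because the penguin does not fly, the universal claim is false while the existential claim is true, mirroring the difference between the two quantifiers.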
Inference Rules In AI:
Inference:
In artificial intelligence, we need intelligent computers which can create
new logic from old logic or from evidence; generating conclusions
from evidence and facts is termed inference.

Inference rules:
Inference rules are templates for generating valid arguments. Inference
rules are applied to derive proofs in artificial intelligence, and a proof is a
sequence of conclusions that leads to the desired goal.
In inference rules, the implication among all the connectives plays an important role.
Following are some
terminologies related to inference rules:
1. Implication: It is one of the logical connectives which can be represented as P → Q.
It is a Boolean expression.
2. Converse: The converse of an implication swaps the two sides, so the right-hand side
proposition goes to the left-hand side and vice versa. It can be written as Q → P.
3. Contrapositive: The contrapositive swaps and negates both sides of the implication,
and it can be represented as ¬Q → ¬P.
4. Inverse: The inverse negates both sides of the implication. It can be represented as
¬P → ¬Q.
An implication is logically equivalent to its contrapositive, and its converse is equivalent
to its inverse, which we can prove using a truth table:
Types of Inference rules:
1. Modus Ponens:
The Modus Ponens rule is one of the most important rules of inference. It states that
if P and P → Q are true, then we can infer that Q will be true. It can be represented as:

((P → Q) ∧ P) → Q
Example:
Given: If it's raining (P), then I'll take an umbrella (Q).
Statement 1: It's raining (P).
Conclusion: I'll take an umbrella (Q).
2. Modus Tollens:
The Modus Tollens rule states that if P → Q is true and ¬Q is true, then ¬P will
also be true. It can be represented as:

Statement-1: "If I am sleepy then I go to bed" ==> P→ Q


Statement-2: "I do not go to the bed."==> ~Q
Statement-3: Which infers that "I am not sleepy" => ~P
3.Hypothetical Syllogism:
The Hypothetical Syllogism rule states that if P→Q is true and Q→R is true,
then P→R is also true. It can be represented as the following notation:
Example:
Statement-1: If you have my home key then you can unlock my
home. P→Q
Statement-2: If you can unlock my home then you can take my
money. Q→R
Conclusion: If you have my home key then you can take my money. P→R
4. Disjunctive Syllogism:
The Disjunctive Syllogism rule states that if P∨Q is true and ¬P is true, then Q
will be true. It can be represented as:

Example:
Statement-1: Today is Sunday or Monday. ==> P∨Q
Statement-2: Today is not Sunday. ==> ¬P
Conclusion: Today is Monday. ==> Q
5. Addition:
The Addition rule is one of the common inference rules, and
it states that if P is true, then P∨Q will be true for any proposition Q.

Example:
Statement: I have a vanilla ice-cream. ==> P
Conclusion: I have vanilla or chocolate ice-cream. ==> (P ∨Q)
6. Resolution:
The Resolution rule states that if P∨Q and ¬P∨R are true, then Q∨R will also
be true. It can be represented as:
7. Simplification:
The Simplification rule states that if P∧Q is true, then P and Q
will each also be true. It can be represented as:
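Each rule above is valid exactly when its implication form is a tautology (true under every assignment), which can be checked by brute force; a minimal sketch:

```python
from itertools import product

def tautology(f, n):
    """A formula is a valid inference pattern iff it is true on every assignment."""
    return all(f(*vals) for vals in product((True, False), repeat=n))

def implies(p, q):
    return (not p) or q

# ((P → Q) ∧ P) → Q
modus_ponens  = tautology(lambda p, q: implies(implies(p, q) and p, q), 2)
# ((P → Q) ∧ ¬Q) → ¬P
modus_tollens = tautology(lambda p, q: implies(implies(p, q) and not q, not p), 2)
# ((P ∨ Q) ∧ (¬P ∨ R)) → (Q ∨ R)
resolution    = tautology(lambda p, q, r: implies((p or q) and ((not p) or r), q or r), 3)
```

All three checks succeed, confirming the rules' validity, whereas a non-tautology such as the bare formula P fails the same check.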

Rules of Inference in Artificial Intelligence: Proof by Truth-Table


Inference in First-Order Logic
FOL inference rules for quantifier:
As in propositional logic, we also have inference rules in first-order logic. Following are
some basic inference rules in FOL:
 Universal Generalization
 Universal Instantiation
 Existential Instantiation
 Existential Generalization
1. Universal Generalization:
• Universal generalization is a valid inference rule which states that if
premise P(c) is true for any arbitrary element c in the universe of discourse,
then we can have a conclusion as ∀ x P(x).
• It can be represented as:

This rule can be used if we want to show that every element has a similar property.
In this rule, x must not appear as a free variable.
Example: Let's represent, P(c): "A byte contains 8 bits", so for ∀ x P(x) "All
bytes contain 8 bits.", it will also be true.
2. Universal Instantiation:
• Universal instantiation, also called universal elimination or UI, is a valid
inference rule.
• It can be applied multiple times to add new sentences.
• As per UI, we can infer any sentence obtained by substituting a ground term
for the variable.
• The UI rule states that we can infer any sentence P(c) by substituting a ground term
c (a constant within the domain of x) into ∀x P(x), for any object in the universe of
discourse.
• It can be represented as:
Example: If "Every person likes ice-cream" => ∀x P(x), then we can infer that
"John likes ice-cream" => P(c).
3. Existential Instantiation
• Existential instantiation, also called Existential Elimination, is a valid inference rule in first-order logic.
• It can be applied only once to replace the existential sentence.
• This rule states that one can infer P(c) from a formula of the form ∃x P(x) for a new constant symbol c.
• The restriction with this rule is that the c used in the rule must be a new term for which P(c) is true.
• It can be represented as:

Ex: From the given sentence: ∃x Crown(x) ∧ OnHead(x, John),


So we can infer: Crown(K) ∧ OnHead( K, John), as long as K does not appear in the knowledge base.
The above used K is a constant symbol, which is called Skolem constant.
4. Existential introduction
• An existential introduction is also known as an existential generalization, which is a
valid inference rule in first-order logic.
• This rule states that if there is some element c in the universe of discourse which has a
property P, then we can infer that there exists something in the universe which has the
property P.
It can be represented as:
Example: Let's say that,
"Priyanka got good marks in English."
"Therefore, someone got good marks in English."
Unification
• Unification is a process of making two different logical atomic expressions identical by finding a substitution.
• Unification depends on the substitution process.
• It takes two literals as input and makes them identical using substitution.

Conditions for Unification


• The predicate symbol must be the same.
• The number of arguments in both expressions must be identical.
• If two similar variables are present in the same expression, then unification fails.
Example
Consider the following phrases, which must be unified:

"Eats(x, Apple)" is an expression.

"Eats(Riya, y)" is an expression.

The following is an explanation of the unification process:

Comparison
Expression A ("Eats(x, Apple)") is compared to Expression B ("Eats(Riya, y)").

Substitution Variable
We can see that Expression A's first parameter is a variable "x," and the second argument is a constant
"Apple."

The first parameter in Expression B is a constant "Riya," while the second argument is a variable "y."
Unifying Variables
We unify the variable "x" in Expression A with "Riya" in Expression B. This results in the substitution: x =
Riya.

We also unify the variable "y" in Expression B with the constant "Apple." This gives us the substitution: y =
Apple.

Applying Substitutions
After substitutions, Expression A becomes "Eats(Riya, Apple)."

Expression B remains the same: "Eats(Riya, Apple)."

Unified Expression
Both expressions are now identical: "Eats(Riya, Apple)."

Unification is successful, and the unified expression is "Eats(Riya, Apple)."
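The walk-through above can be sketched as code. The encoding is an assumption of this sketch: literals are tuples whose first element is the predicate, and single lowercase letters stand for variables:

```python
# A minimal unification sketch for flat literals like ("Eats", "x", "Apple").
# Assumed convention: single lowercase letters are variables, everything else
# is a constant; no nested function terms are handled.

def is_var(t):
    return isinstance(t, str) and len(t) == 1 and t.islower()

def unify(a, b):
    """Return a substitution dict unifying two literals, or None on failure."""
    pred_a, *args_a = a
    pred_b, *args_b = b
    if pred_a != pred_b or len(args_a) != len(args_b):
        return None                  # predicate symbol or arity mismatch
    subst = {}
    for x, y in zip(args_a, args_b):
        x, y = subst.get(x, x), subst.get(y, y)   # apply bindings found so far
        if x == y:
            continue
        elif is_var(x):
            subst[x] = y
        elif is_var(y):
            subst[y] = x
        else:
            return None              # two different constants clash
    return subst

result = unify(("Eats", "x", "Apple"), ("Eats", "Riya", "y"))
```

Running it on the example literals reproduces the substitution from the walk-through: x = Riya and y = Apple.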


Resolution in FOL
 Resolution is a theorem-proving technique that proceeds by building proofs by contradiction. It was
invented by the mathematician John Alan Robinson in 1965.
 Resolution is used when various statements are given and we need to prove a conclusion from
those statements.
 Unification is a key concept in proofs by resolutions. Resolution is a single inference rule which can
efficiently operate on the conjunctive normal form or clausal form.
 Clause: A disjunction of literals (atomic sentences) is called a clause. A clause containing a single
literal is known as a unit clause.
 Conjunctive Normal Form: A sentence represented as a conjunction of clauses is said to
be conjunctive normal form or CNF
 The resolution rule for first-order logic is simply a lifted version of the propositional rule. Resolution
can resolve two clauses if they contain complementary literals, which are assumed to be standardized
apart so that they share no variables.
Steps for Resolution:
1. Conversion of facts into first-order logic.
2. Convert FOL statements into CNF
3. Negate the statement which needs to prove (proof by contradiction)
4. Draw resolution graph (unification).
Example:
a. John likes all kinds of food.
b. Apples and vegetables are food.
c. Anything anyone eats without being killed is food.
d. Anil eats peanuts and is still alive.
e. Harry eats everything that Anil eats.
Goal: prove that "John likes peanuts."

Step-1: Conversion of Facts into FOL


In the first step we convert all the given statements into first-order logic.

Step-2: Conversion of FOL into CNF


Eliminate all implication (→) and rewrite
d. ∀x ¬ food(x) V likes(John, x)
e. food(Apple) Λ food(vegetables)
f. ∀x ∀y ¬ [eats(x, y) Λ ¬ killed(x)] V food(y)
Move negation (¬) inwards and rewrite
a. ∀x ¬ food(x) V likes(John, x)
b. food(Apple) Λ food(vegetables)
c. ∀x ∀y ¬ eats(x, y) V killed(x) V food(y)
Rename variables or standardize variables
d. ∀x ¬ food(x) V likes(John, x)
e. food(Apple) Λ food(vegetables)
f. ∀y ∀z ¬ eats(y, z) V killed(y) V food(z)
Eliminate existential quantifiers.
In this step, we eliminate the existential quantifier ∃; this process is known as Skolemization. In this
example problem, since there is no existential quantifier, all the statements remain the same in this
step.
Drop universal quantifiers.
In this step we drop all universal quantifiers, since all the remaining variables are implicitly universally
quantified, so the quantifiers are no longer needed.
g. ¬ food(x) V likes(John, x)
h. food(Apple)
i. food(vegetables)
Distribute conjunction (∧) over disjunction (∨).
This step will not make any change in this problem.
Step-3: Negate the statement to be proved

In this step, we apply negation to the conclusion statement "John likes peanuts," which will be written
as ¬likes(John, Peanuts).
Step-4: Draw Resolution graph:

Hence the negation of the conclusion leads to a complete contradiction with the given set of
statements, which proves the original conclusion.
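A single resolution step at the propositional level can be sketched as a set operation on clauses; the "~"-prefix negation encoding is an assumption of this sketch. Deriving the empty clause is the contradiction that closes the proof:

```python
# One propositional resolution step. Clauses are frozensets of literal strings,
# with negation written as a leading "~" (an assumed encoding).

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses on complementary literals."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# Resolving a unit clause against its negation yields the empty clause,
# which signals the contradiction reached in the resolution graph above:
empty = resolve(frozenset({"likes_John_Peanuts"}),
                frozenset({"~likes_John_Peanuts"}))
```

The empty clause (an empty frozenset) is the formal marker that the negated conclusion is inconsistent with the knowledge base.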
First-Order Logic VS Propositional Logic
What is Prolog?
 Prolog is a declarative programming language designed for developing logic-based AI
applications. Developers can set rules and facts around a problem, and then Prolog’s
interpreter will use that information to automatically infer solutions.
 One of the key features of Prolog is its ability to handle uncertain or incomplete information.
 In Prolog, a programmer can specify a set of rules and facts that are known to be true, but
they can also specify rules and facts that might be true or false.
Prolog Program Basics to Know:
 In Prolog, programs are made up of two main components: facts and rules.
 Facts are statements that are assumed to be true, such as “John is a man” or “the capital of
France is Paris.”
 Rules are logical statements that describe the relationships between different facts, such as “If
John is a man and Mary is a woman, then John is not Mary.”
Prolog Program Basics to Know:
 Prolog programs are written using a syntax that is similar to natural
language. For example, a simple Prolog program might look like this:
man(john).
woman(mary).
capital_of(france, paris).

not(X,Y) :- man(X), woman(Y).


 In this example, the first three lines are facts, while the fourth line is a rule.
The rule uses the not/2 predicate to state that if X is a man and Y is a woman,
then X is not Y.
Basic elements of Prolog syntax
There is no single “syntax” for Prolog, as the language allows for a wide range of different programming
styles and approaches. However, here are some basic elements of Prolog syntax that are commonly used:
1. Facts are statements that are assumed to be true. In Prolog, facts are written using a predicate name
followed by a list of arguments enclosed in parentheses. For example: man(john).
2. Rules are logical statements that describe the relationships between different facts. In Prolog, rules are
written using the predicate name followed by a list of arguments enclosed in parentheses, followed by a
colon and a hyphen (:-) and the body of the rule. For example: not(X,Y) :- man(X), woman(Y).
3. Variables are used to represent values that can change or be determined by the interpreter. In Prolog,
variables are written using a name that begins with an uppercase letter. For example: X.
4. Queries are used to ask the interpreter to find solutions to problems based on the rules and facts in the
program. In standard Prolog, queries use the same syntax as facts but are entered at the interpreter's ?- prompt and end with a full stop.
For example: ?- not(john, mary).
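To make the roles of facts, rules, and queries concrete, here is a minimal sketch in Python that models the example program above. This is not a real Prolog engine (there is no unification or backtracking); the dictionary layout and the not_rule function are illustrative choices.

```python
# Facts: predicate name mapped to the set of argument tuples that hold.
facts = {
    "man": {("john",)},
    "woman": {("mary",)},
    "capital_of": {("france", "paris")},
}

def not_rule(x, y):
    """Rule: not(X, Y) :- man(X), woman(Y)."""
    return (x,) in facts["man"] and (y,) in facts["woman"]

# Query: ?- not(john, mary).
print(not_rule("john", "mary"))  # True: john is a man and mary is a woman
```

A real Prolog interpreter would additionally enumerate all bindings of X and Y that satisfy the rule, rather than just testing one pair.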
The main features of Prolog are :
1. Rule-based programming: The rule-based programming
allows the program code to be written in the form which is
more declarative than procedural.
2. Built-in pattern matching : Unification gives Prolog built-in pattern
matching, used to match goals against facts and rule heads.
3. Backtracking execution : Backtracking provides the means
for the flow of control in the program.
Forward and Backward Chaining in AI
• Forward and backward chaining are two important reasoning mechanisms in artificial intelligence. These two processes are used by
expert systems in order to mimic human inference.
• Both of these mechanisms are used to derive a conclusion based on a given set of rules.
• Expert System: Expert systems in AI are interactive computer software. They are designed and developed to work
with the ability of a human expert in a particular domain. This knowledge base is mainly contributed by humans
who are experts in their specific fields.
• There are five components in the expert system which are:
Knowledge Base
Inference Engine
User Interface
Explanation Module
Knowledge Acquisition System.
Inference Engine
• An inference engine is a system which applies logical reasoning to draw
conclusions and solve a problem based on given facts.
• It consists of algorithms that draw useful information from the knowledge base and use
it to conclude new facts for the user's issues.
• The inference engine uses two mechanisms, forward and backward chaining, in
order to extract data from the knowledge base.
Forward Chaining
• Forward chaining is a data-driven reasoning approach used by inference engines.
• It starts with given facts and applies rules to derive new conclusions or facts
from them.
• The engine keeps applying the rules until it reaches a conclusion or cannot apply
any more rules.
• It is based on logical prediction methodology. One of its examples is the
prediction of trends in the stock market.
Properties of Forward-Chaining:
1. It is a bottom-up approach, as it moves from known facts up to a conclusion.
2. It is a process of making a conclusion based on known facts or data, starting
from the initial state and reaching the goal state.
3. Forward-chaining approach is also called as data-driven as we reach to the
goal using available data.
4. The forward-chaining approach is commonly used in expert systems, such as
CLIPS, business rule systems, and production rule systems.
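The data-driven loop described above can be sketched in a few lines of Python. The rule format (a set of premise facts implying one conclusion) and the example rules are illustrative assumptions, not part of any particular expert-system shell.

```python
# Illustrative rules: if all premises are known facts, conclude the consequent.
rules = [
    ({"croaks", "eats flies"}, "frog"),
    ({"frog"}, "green"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises all hold until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # data-driven: new fact derived from known facts
                changed = True
    return facts

print(forward_chain({"croaks", "eats flies"}, rules))
# the engine first derives "frog", which in turn lets it derive "green"
```

Note the direction of inference: the engine never looks at a goal, it simply saturates the fact base, which is exactly what "data-driven" means.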
Backward Chaining
Backward-chaining is also known as a backward deduction or backward reasoning method when using an
inference engine. A backward chaining algorithm is a form of reasoning, which starts with the goal and works
backward, chaining through rules to find known facts that support the goal.
Properties of backward chaining:
• It is known as a top-down approach.
• Backward-chaining is based on modus ponens inference rule.
• In backward chaining, the goal is broken into sub-goal or sub-goals to prove the facts true.
• It is called a goal-driven approach, as a list of goals decides which rules are selected and used.
• Backward -chaining algorithm is used in game theory, automated theorem proving tools, inference engines,
proof assistants, and various AI applications.
• The backward-chaining method mostly uses a depth-first search strategy for proofs.
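The goal-driven, depth-first behavior can be sketched as a small recursive Python function. The rule table maps a conclusion to lists of sub-goals that would prove it; the rules and facts are illustrative assumptions chosen to mirror the forward-chaining example.

```python
# Each conclusion maps to alternative lists of sub-goals that would prove it.
rules = {
    "green": [["frog"]],
    "frog": [["croaks", "eats flies"]],
}

known_facts = {"croaks", "eats flies"}

def backward_chain(goal, depth=10):
    """Prove a goal by recursively proving the sub-goals of some rule for it (depth-first)."""
    if depth == 0:
        return False                 # guard against cyclic rule sets
    if goal in known_facts:
        return True                  # the goal is already a known fact
    for subgoals in rules.get(goal, []):
        if all(backward_chain(g, depth - 1) for g in subgoals):
            return True              # one rule whose sub-goals all hold suffices
    return False

print(backward_chain("green"))  # goal-driven: green <- frog <- croaks, eats flies
```

Here the search starts at the goal "green" and works backward through the rules, touching only facts relevant to that goal, which is the key contrast with forward chaining.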
Forward Chaining vs Backward Chaining
Introduction to Ontologies
• Ontologies are formal definitions of vocabularies that allow us to define difficult or complex structures
and new relationships between vocabulary terms and members of classes that we define.
• Ontologies generally describe specific domains such as scientific research areas.
• Ontology - branch of philosophy studying entities that exist, their classification, and the relations between
them.
• Types of entities: physical objects, abstract objects, time, locations, actions, events, beliefs.
• Decisions made on imperfect representations can be wrong. We must choose the representation with this
in mind.
• Selecting a particular representation means making an ontological commitment.
• With the help of ontological engineering, the representation of the general concepts such as actions, time,
physical objects, performance, meta-data, and beliefs becomes possible on a large-scale.
Example
Categories and Objects
• The organization of objects into categories is a vital part of knowledge representation.
• Although interaction with the world takes place at the level of individual objects, much reasoning takes place at
the level of categories.
For example, a shopper would normally have the goal of buying a basketball, rather than a particular basketball such as BB9.
There are two choices for representing categories in first-order logic: predicates and objects.
That is, we can use the predicate Basketball(b), or we can reify the category as an object, Basketballs.
We write Member(b, Basketballs), abbreviated as b ∈ Basketballs, to say that b is a member of the category of basketballs, and Subset(Basketballs, Balls), abbreviated as Basketballs ⊂ Balls, to say that Basketballs is a subcategory of Balls. Categories serve to organize and simplify the knowledge base through inheritance.
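The reified-category view maps naturally onto sets: membership becomes "in" and subcategory becomes the subset relation. The following sketch uses hypothetical object names (BB9 etc.) purely for illustration.

```python
# Categories as sets; Member and Subset as set operations.
basketballs = {"BB9", "BB10"}
balls = basketballs | {"soccer_ball"}   # every basketball is also a ball

# Member(BB9, Basketballs):  b ∈ Basketballs
print("BB9" in basketballs)            # True

# Subset(Basketballs, Balls):  Basketballs ⊂ Balls
print(basketballs <= balls)            # True

def is_round(obj):
    """A property asserted of the category Balls applies, by inheritance,
    to members of its subcategory Basketballs."""
    return obj in balls

print(is_round("BB9"))                 # True, inherited via Basketballs ⊂ Balls
```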
EVENTS
• Events are described as instances of event categories. The event E1 of Shankar flying from San Francisco to Washington, D.C. is described as E1 ∈ Flyings ∧ Flyer(E1, Shankar) ∧ Origin(E1, SF) ∧ Destination(E1, DC).
• We can define an alternative three-argument version of the category of flying events and say E1 ∈ Flyings(Shankar, SF, DC). We then use Happens(E1, i) to say that the event E1 took place over the time interval i, and we say the same thing in functional form with Extent(E1) = i. We represent time intervals by a (start, end) pair of times; that is, i = (t1, t2) is the time interval that starts at t1 and ends at t2.
MENTAL EVENTS AND MENTAL OBJECTS
• What we need is a model of the mental objects that are in someone’s head (or something’s
knowledge base) and of the mental processes that manipulate those mental objects. The model
does not have to be detailed.
• We do not have to be able to predict how many milliseconds it will take for a particular agent to
make a deduction. We will be happy just to be able to conclude that mother knows whether or
not she is sitting. We begin with the propositional attitudes that an agent can have toward
mental objects: attitudes such as Believes, Knows, Wants, Intends, and Informs.
• The difficulty is that these attitudes do not behave like “normal” predicates. For example,
suppose we try to assert that Lois knows that Superman can fly: Knows(Lois,
CanFly(Superman))
REASONING SYSTEMS FOR CATEGORIES
There are two closely related families of systems: semantic networks provide graphical aids for visualizing a knowledge base and efficient algorithms for inferring properties of an object on the basis of its category membership; and description logics provide a formal language for constructing and combining category definitions and efficient algorithms for deciding subset and superset relationships between categories.
SEMANTIC NETWORKS
• There are many variants of semantic networks, but all are capable of representing
individual objects, categories of objects, and relations among objects.
• A typical graphical notation displays object or category names in ovals or boxes,
and connects them with labeled links.
• The semantic network notation makes it convenient to perform inheritance
reasoning .
• Inheritance becomes complicated when an object can belong to more than one
category or when a category can be a subset of more than one other category; this is
called multiple inheritance.
• The drawback of semantic network notation, compared to first-order logic: the fact
that links between bubbles represent only binary relations.
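Inheritance reasoning over a semantic network amounts to walking up the category links and collecting properties. The tiny network below (categories, "subset of" links, and attached properties) is an illustrative assumption; note how a category with several parents gives multiple inheritance for free.

```python
# Subset-of links between category nodes (a node may have several parents).
subset_of = {
    "Cats": ["Mammals"],
    "Mammals": ["Animals"],
    "Persons": ["Animals"],
}
# Properties attached directly to category nodes.
has_property = {"Mammals": {"has_fur"}, "Animals": {"alive"}}

def inherited_properties(category):
    """Collect properties of a category and of every ancestor category."""
    props = set(has_property.get(category, set()))
    for parent in subset_of.get(category, []):   # several parents = multiple inheritance
        props |= inherited_properties(parent)
    return props

print(inherited_properties("Cats"))  # has_fur from Mammals, alive from Animals
```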
Reasoning:
Reasoning is the mental process of deriving logical conclusions and making predictions from available knowledge, facts, and beliefs. Or we can say, "Reasoning is a way to infer facts from existing data." It is the general process of thinking rationally to find valid conclusions.

Deductive reasoning:
Deductive reasoning is deducing new information from logically related known information. It is a form of valid reasoning: the argument's conclusion must be true when the premises are true.

Deductive reasoning is a type of propositional logic in AI, and it requires various rules and facts. It is sometimes referred to as top-down
reasoning, and contradictory to inductive reasoning.

In deductive reasoning, the truth of the premises guarantees the truth of the conclusion.

Example: Premise-1: All humans eat veggies.

Premise-2: Suresh is human.

Conclusion: Suresh eats veggies.


Inductive Reasoning:
Inductive reasoning is a form of reasoning that arrives at a conclusion from a limited set of facts by the process of generalization. It starts with a series of specific facts or data and reaches a general statement or conclusion.

Inductive reasoning is a type of propositional logic, which is also known as cause-effect reasoning or bottom-up reasoning.

In inductive reasoning, we use historical data or various premises to generate a generic rule, for which premises support the conclusion.

Example:

Premise: All of the pigeons we have seen in the zoo are white.

Conclusion: Therefore, we can expect all the pigeons to be white.


Abductive reasoning:
Abductive reasoning is a form of logical reasoning which starts with one or more observations and then seeks to find the most likely
explanation or conclusion for the observations.

Abductive reasoning is an extension of deductive reasoning, but in abductive reasoning, the premises do not guarantee the conclusion.

Common Sense Reasoning


Common sense reasoning is an informal form of reasoning, which can be gained through experiences.

Common Sense reasoning simulates the human ability to make presumptions about events that occur every day.

It relies on good judgment rather than exact logic and operates on heuristic knowledge and heuristic rules.
Monotonic Reasoning:
In monotonic reasoning, once a conclusion is drawn, it remains the same even if we add other information to the existing information in our knowledge base. In monotonic reasoning, adding knowledge does not decrease the set of propositions that can be derived.

To solve monotonic problems, we can derive the valid conclusion from the available facts only, and it will not be affected by new facts.

Monotonic reasoning is not useful for the real-time systems, as in real time, facts get changed, so we cannot use monotonic reasoning.

Monotonic reasoning is used in conventional reasoning systems, and a logic-based system is monotonic.

Any theorem proving is an example of monotonic reasoning.

Example:

○ Earth revolves around the Sun.


Non-monotonic Reasoning
In Non-monotonic reasoning, some conclusions may be invalidated if we add some more information to our knowledge base.

Logic will be said as non-monotonic if some conclusions can be invalidated by adding more knowledge into our knowledge base.

Non-monotonic reasoning deals with incomplete and uncertain models.

"Human perceptions for various things in daily life, "is a general example of non-monotonic reasoning.

Example: Let suppose the knowledge base contains the following knowledge:

○ Birds can fly


○ Penguins cannot fly
○ Pitty is a bird

So from the first and third sentences, we can conclude that Pitty can fly. However, if we later add the fact "Pitty is a penguin" to the knowledge base, the conclusion "Pitty can fly" is invalidated; this is exactly what makes the reasoning non-monotonic.
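The bird/penguin example can be sketched as a default rule with an exception. The representation below (facts as name/category pairs, a hand-written exception check) is an illustrative simplification of non-monotonic reasoning, not a general default-logic engine.

```python
def can_fly(facts, name):
    """Default rule with an exception: birds fly unless known to be penguins."""
    if (name, "penguin") in facts:      # the exception defeats the default
        return False
    return (name, "bird") in facts      # default: birds can fly

kb = {("pitty", "bird")}
print(can_fly(kb, "pitty"))             # True under the current knowledge base

kb.add(("pitty", "penguin"))            # adding knowledge invalidates the conclusion
print(can_fly(kb, "pitty"))             # False: the earlier conclusion is withdrawn
```

Contrast this with monotonic reasoning, where adding the penguin fact could never retract a conclusion already derived.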
Bayes' theorem:
Bayes' theorem is also known as Bayes' rule, Bayes' law, or Bayesian reasoning, which determines the probability of an event with
uncertain knowledge.

In probability theory, it relates the conditional probability and marginal probabilities of two random events.

Bayes' theorem allows updating the probability prediction of an event by observing new information of the real world.

Example: If cancer corresponds to one's age then by using Bayes' theorem, we can determine the probability of cancer more accurately with
the help of age.

Bayes' theorem can be derived using the product rule and the conditional probability of event A with known event B:

P(A|B) = P(B|A) P(A) / P(B)    ...(i)

It shows the simple relationship between joint and conditional probabilities. Here,

P(A|B) is known as the posterior, which we need to calculate; it is read as the probability of hypothesis A given that evidence B has occurred.

P(B|A) is called the likelihood: assuming the hypothesis is true, we calculate the probability of the evidence.

P(A) is called the prior probability: the probability of the hypothesis before considering the evidence.

P(B) is called the marginal probability: the probability of the evidence alone.

Question: From a standard deck of playing cards, a single card is drawn. The probability that the card is king is 4/52, then calculate
posterior probability P(King|Face), which means the drawn face card is a king card.

Solution:

P(King): probability that the card is a king = 4/52 = 1/13

P(Face): probability that a card is a face card = 12/52 = 3/13

P(Face|King): probability of a face card given that it is a king = 1

Putting all values in equation (i), we get: P(King|Face) = P(Face|King) × P(King) / P(Face) = (1 × 1/13) / (3/13) = 1/3

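The card calculation can be checked exactly with Python's Fraction type, applying Bayes' rule term by term:

```python
from fractions import Fraction

p_king = Fraction(4, 52)          # prior P(King) = 1/13
p_face = Fraction(12, 52)         # marginal P(Face) = 3/13 (12 face cards in 52)
p_face_given_king = Fraction(1)   # likelihood: every king is a face card

# Bayes' theorem: P(King|Face) = P(Face|King) * P(King) / P(Face)
p_king_given_face = p_face_given_king * p_king / p_face
print(p_king_given_face)          # 1/3
```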

Bayesian belief network

Bayesian belief network is a key technology for dealing with probabilistic events and for solving problems that involve uncertainty. We can define a Bayesian network as:

"A Bayesian network is a probabilistic graphical model which represents a set of variables and their conditional dependencies using a
directed acyclic graph."

Bayesian networks are probabilistic, because these networks are built from a probability distribution, and also use probability theory for
prediction and anomaly detection.

Bayesian Network can be used for building models from data and experts' opinions, and it consists of two parts:

○ Directed Acyclic Graph


○ Table of conditional probabilities.

The generalized form of a Bayesian network that represents and solves decision problems under uncertain knowledge is known as an influence diagram.

A Bayesian network graph is made up of nodes and Arcs (directed links), where:
○ Each node corresponds to a random variable, which can be continuous or
discrete.
○ Arc or directed arrows represent the causal relationship or conditional probabilities
between random variables. These directed links or arrows connect the pair of nodes in
the graph.
○ In the above diagram, A, B, C, and D are random variables represented by the
nodes of the network graph.
○ If we are considering node B, which is connected with node A by a directed arrow,
then node A is called the parent of Node B.
○ Node C is independent of node A.

The Bayesian network graph does not contain any cyclic graph. Hence, it is known
as a directed acyclic graph or DAG.

The Bayesian network has mainly two components:

○ Causal Component
○ Actual numbers
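The two components fit together as follows: the DAG (causal component) fixes how the joint distribution factorizes, and the conditional probability tables supply the actual numbers. Below is a minimal two-node sketch, Rain → WetGrass, with hypothetical CPT values chosen only for illustration.

```python
# Causal component: the single arc Rain -> WetGrass.
# Actual numbers: P(Rain) and the CPT P(WetGrass | Rain).
p_rain = 0.2                                   # P(Rain = true)
p_wet_given = {True: 0.9, False: 0.1}          # P(WetGrass = true | Rain)

def joint(rain, wet):
    """Joint probability factorized along the DAG:
    P(Rain, WetGrass) = P(Rain) * P(WetGrass | Rain)."""
    pr = p_rain if rain else 1 - p_rain
    pw = p_wet_given[rain] if wet else 1 - p_wet_given[rain]
    return pr * pw

# Marginal P(WetGrass = true), summing out Rain
p_wet = joint(True, True) + joint(False, True)

# Posterior P(Rain = true | WetGrass = true) via Bayes' rule
p_rain_given_wet = joint(True, True) / p_wet
print(round(p_rain_given_wet, 3))
```

Observing wet grass raises the probability of rain from the prior 0.2 to roughly 0.69, which is the kind of evidence-driven update Bayesian networks are built for.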
