Unit 1
The horizon of AI includes:
Knowledge Transmission
Knowledge Representation
Automated Reasoning
– to reason,
– to plan,
– to solve problems,
– to think abstractly,
– to comprehend ideas,
– to use language, and
– to learn.
Intelligence can be defined as the ability to solve problems.
• For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance
• Computational limitations make perfect rationality unachievable, so we design the best program for the given machine resources
In the 1940s, Konrad Zuse developed chess-playing routines in Plankalkül, an early high-level programming language he designed.
Alan Turing created the Turing test, which is still used today as a benchmark of a machine's ability to "think" like a human.
Though his ideas were ridiculed at the time, they set the wheels in motion, and the term "artificial intelligence" entered popular awareness in the mid-1950s, after Turing died.
AI – History and Foundations
Isaac Asimov was an American writer and professor of biochemistry at Boston University.
He formulated the Three Laws of Robotics, introduced in his 1942 short story "Runaround" and collected in "I, Robot" (1950).
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to
harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would
conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the
First or Second Law.
1951 – The first AI-based programs are written:
a checkers-playing program by Christopher Strachey and a chess-playing program by Dietrich Prinz.
1955 – First self-learning game-playing program,
competing against human players in the game of checkers
1959 – MIT – AI based lab setup
1961 – First Robot is introduced into GM’s assembly line
1964 – First demo of an AI program that understands natural language
1965 – First chatbot, ELIZA, is invented
1974 – First autonomous vehicle is created
1989 – Carnegie Mellon creates the first autonomous vehicle driven by neural networks:
ALVINN, which stands for Autonomous Land Vehicle In a Neural Network
1996 – IBM's Deep Blue chess-playing system
Deep Blue won its first game against world champion Garry Kasparov in game one of a six-game match on 10 February 1996.
1999 – Sony introduces AIBO, a self-learning entertainment robot
1999 – MIT AI Lab – first emotional AI is demonstrated
Drivers of the modern AI boom: big data and computing power
Birth of AI
Initially, AI dealt with simple reasoning and reaction problems, requiring only a small knowledge base.
Examples
Washing machines
A conventional machine simply stops the water after it reaches a particular level; fuzzy logic takes in only the necessary amount of water
Traffic control
Signal timing is adjusted automatically and dynamically, with information passed to nearby signals, etc.
So much for the basics of AI.
The Disadvantages
• increased costs
• difficulty with software development - slow and expensive
• few experienced programmers
• few practical products have reached the market as yet.
AI Techniques
Types of problem solved:
Search-based method – state space
• The state space contains a start state S and a destination (goal) state D
• A move takes the search from one state to a neighbouring state
• Movegen(S) – find all possible neighbours of S
PROBLEM SOLVING
Problem solving – the area of finding answers for unknown situations
Understanding
Representation
Formulation
Solving
Types:
Simple – can be solved using a deterministic approach
Complex – lack of full information
Humans?
Able to perceive, learn, and use statistical methods and mathematical modelling to solve problems
AI does the same for the machine
PROBLEM SOLVING PROCESS
Problem? – the desired objective is not obvious
Problem solving?
The process of generating a solution for a given situation:
a sequence of well-defined methods that can handle doubt, inconsistency, uncertainty, and ambiguity
Representation
• Two rooms, each possibly containing dirt, and a vacuum cleaner (VC) in one of them
• State representation: 2 rooms × dirt/no dirt in each × VC location = 8 possible states
1 – Dirt in both rooms – VC in left room
2 – Dirt in both rooms – VC in right room
3 – Dirt in right room – VC in left room
4 – Dirt in right room – VC in right room
5 – Dirt in left room – VC in left room
6 – Dirt in left room – VC in right room
7 – No dirt in either room – VC in left room
8 – No dirt in either room – VC in right room
Formulation
• Possible action
• Move Left
• Move Right
• Clean Dirt
Solving
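The two-room vacuum world above can be solved mechanically. Below is a minimal sketch (the state encoding and names are my own, not from the slides) that covers the 8 states and finds a shortest action sequence by breadth-first search:

```python
from collections import deque

# Two-room vacuum world: state = (vc_location, dirt_left, dirt_right).
# vc_location is 'L' or 'R'; the dirt flags are booleans. 2*2*2 = 8 states.
ACTIONS = ("Left", "Right", "Suck")

def successor(state, action):
    """Apply one action to a state; returns the resulting state."""
    loc, dl, dr = state
    if action == "Left":
        return ("L", dl, dr)
    if action == "Right":
        return ("R", dl, dr)
    # Suck: clean the room the cleaner is currently in
    if loc == "L":
        return (loc, False, dr)
    return (loc, dl, False)

def solve(start):
    """Breadth-first search for a shortest action sequence to 'no dirt'."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if not state[1] and not state[2]:   # goal: both rooms clean
            return plan
        for a in ACTIONS:
            nxt = successor(state, a)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [a]))
    return None

# State 1 from the slides: dirt in both rooms, cleaner in the left room.
print(solve(("L", True, True)))   # ['Suck', 'Right', 'Suck']
```

Because the state space is tiny, blind search already finds the optimal plan.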
Problem
• In AI, we formally define a problem as
• a space of all possible configurations, where each configuration is called a state
• The state space is the set of possible states and how they connect to each other, e.g. the legal moves between states
• an initial state
• one or more goal states
• a set of rules/operators which move the problem from one state to the next
• In some cases, we may enumerate all possible states
• but usually such an enumeration is overwhelmingly large, so we generate only the portion of the state space we are currently examining
• we need to search the state space to find an optimal path from a start state to a goal state
Parag Kulkarni, Prachi Joshi, Artificial Intelligence – Building Intelligent Systems
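The formal definition above can be sketched as a small interface; the class and method names here are illustrative, not from any particular library:

```python
# Minimal sketch of the formal problem definition: initial state,
# goal states, and operators that move from one state to the next.
class Problem:
    def __init__(self, initial, goals, operators):
        self.initial = initial          # the initial state
        self.goals = set(goals)         # one or more goal states
        self.operators = operators      # rules: state -> list of next states

    def is_goal(self, state):
        return state in self.goals

    def successors(self, state):
        # Generate only the portion of the state space we are examining.
        return self.operators(state)

# Toy instance: walk from 0 to 3 by +1 steps.
p = Problem(0, [3], lambda s: [s + 1])
print(p.is_goal(3), p.successors(1))   # True [2]
```

Any of the toy problems that follow (8-puzzle, water jug, tic-tac-toe, …) fits this shape.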
State space: Tic-Tac-Toe
Problem Types
1. Deterministic or observable (single-state problems)
• Each state is fully observable, and the world goes to one definite state after any action.
• Here, the goal state is reachable in one single action or sequence of actions.
• Deterministic environments ignore uncertainty.
• Ex: a vacuum cleaner with a sensor.
Problem Types
2. Non-observable (multiple-state problems) / conformant problems
• The problem-solving agent does not have any information about the state.
• A solution may or may not be reached.
• Ex: for a vacuum cleaner without sensors, the goal is a clean floor. The action is to suck if there is dirt, so in the non-observable case the cleaner must suck irrespective of whether the dirt is to its right or left. Here the solution space is the set of states specifying its movement across the floor.
Problem Types
3. Non-deterministic (partially observable) / contingency problems
• The effect of an action is not clear.
• Percepts provide new information about the current state.
• Ex: if the vacuum cleaner has a sensor attached, it will suck only where there is dirt. The movement of the cleaner is based on its current percept.
Problem Types
4. Unknown state space problems
• Typically exploration problems
• States and the impact of actions are not known
• Ex: online search that involves acting without complete knowledge of the next state, or scheduling without a map.
Problem Solving with AI
"Formulate, Search, Execute" design for an agent
• Ability to learn
Production rules – Formulation
7. Pour water from the 3-gal jug to fill the 4-gal jug: (x, y) → (4, y − (4 − x)), if x + y ≥ 4 and y > 0
8. Pour water from the 4-gal jug to fill the 3-gal jug: (x, y) → (x − (3 − y), 3), if x + y ≥ 3 and x > 0
9. Pour all of the water from the 3-gal jug into the 4-gal jug: (x, y) → (x + y, 0), if 0 < x + y ≤ 4 and y ≥ 0
10. Pour all of the water from the 4-gal jug into the 3-gal jug: (x, y) → (0, x + y), if 0 < x + y ≤ 3 and x ≥ 0
One solution
Gals in 4-gal jug | Gals in 3-gal jug | Rule applied
0 | 0 | –
4 | 0 | 1. Fill 4
1 | 3 | 8. Pour 4 into 3 to fill
1 | 0 | 4. Empty 3
0 | 1 | 10. Pour all of 4 into 3
4 | 1 | 1. Fill 4
2 | 3 | 8. Pour 4 into 3
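The production rules and the solution trace above can be reproduced with a short breadth-first search. This is an illustrative sketch, not the slides' exact rule set: a single bounded "pour" move covers both the pour-to-fill and pour-all rules, and the move names below are descriptive rather than the slides' numbering.

```python
from collections import deque

# 4-gallon and 3-gallon jug problem: state = (x, y), the gallons in the
# 4-gal and 3-gal jugs. Fill, empty, and pour production rules.
def moves(x, y):
    yield "Fill 4",  (4, y)
    yield "Fill 3",  (x, 3)
    yield "Empty 4", (0, y)
    yield "Empty 3", (x, 0)
    pour = min(y, 4 - x)           # pour 3-gal jug into 4-gal jug
    yield "Pour 3 into 4", (x + pour, y - pour)
    pour = min(x, 3 - y)           # pour 4-gal jug into 3-gal jug
    yield "Pour 4 into 3", (x - pour, y + pour)

def solve(goal=2):
    """BFS for a shortest rule sequence leaving `goal` gallons in the 4-gal jug."""
    frontier = deque([((0, 0), [])])
    seen = {(0, 0)}
    while frontier:
        (x, y), plan = frontier.popleft()
        if x == goal:
            return plan
        for name, nxt in moves(x, y):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

print(solve(2))   # a shortest 6-step plan, matching the table's length
```

BFS guarantees the plan is as short as possible; the 6-step solution in the table above is one of two optimal plans.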
Problem types
• single-state problem – the agent knows exactly what each of its actions does, and it can calculate exactly which state it will be in after any sequence of actions.
• multiple-state problem – when the world is not fully accessible, the agent must reason about sets of states that it might get to, rather than single states.
• contingency problem – the agent may need to calculate a whole tree of actions rather than a single action sequence, in which each branch of the tree deals with a possible contingency that might arise.
• exploration problem – the agent learns a "map" of the environment, which it can then use to solve subsequent problems.
Problem Characteristics
• To choose the most appropriate method,
• it is necessary to analyse the problem
Problem Characteristics
1. Is the problem decomposable?
2. Can solution steps be ignored or undone?
3. Is the problem's universe predictable?
4. Is a good solution absolute or relative?
5. Is the solution a state or a path?
6. What is the role of knowledge?
7. Does the task require interaction with a person?
Is the problem decomposable?
• Decomposable problem: can be split into independent sub-problems whose solutions combine with AND logic
• Non-decomposable problem: the sub-problems interact, so the problem must be solved as a whole
Can solution steps be ignored or undone?
• Consider the following 3 problems:
1. Proving a theorem or lemma – ignorable (solution steps can be ignored)
2. The 8-puzzle problem – recoverable (solution steps can be undone)
3. A chess game – irrecoverable (moves cannot be taken back)
Recoverability of the problem plays an important role in determining the complexity of the control structure.
Is the problem's universe predictable?
• 8-puzzle problem – the next state is always predictable – normal planning with a certain outcome
Is the solution a state or a path?
• Consideration 1: the solution is a state
• Consideration 2: the solution is a path
Informed Search
• Does not guarantee a solution
• But it ensures a high probability of arriving at a solution
• A heuristic is problem-specific knowledge or guidance used to constrain the search and lead toward the goal
• A heuristic is based on common sense, rules of thumb, educated guesses, or intuitive judgement
• It helps us choose the right path when multiple paths exist for a problem
Uninformed Search
• Uninformed search is also referred to as blind search
• It generates all possible states in the state space and checks each for the goal state
• It will always find a solution if one exists
• But the method is time-consuming, since the search space is huge
• It is used to benchmark the results of other algorithms
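A blind search of this kind can be sketched generically: the caller supplies a Movegen-style successor function and a goal test, and the search systematically generates states until the goal is found. The function names are my own:

```python
from collections import deque

# Generic blind (uninformed) search skeleton. It systematically generates
# states via a user-supplied movegen(state) until goal_test succeeds.
# It is guaranteed to find a solution if one exists in a finite state
# space, but may enumerate a huge portion of that space along the way.
def blind_search(start, goal_test, movegen):
    frontier = deque([(start, [start])])   # FIFO queue -> breadth-first
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for nxt in movegen(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# Tiny demo: reach 5 from 1, where a move either adds one or doubles.
print(blind_search(1, lambda s: s == 5, lambda s: [s + 1, s * 2]))
# [1, 2, 4, 5]
```

An informed search would differ only in ordering the frontier by a heuristic instead of first-in-first-out.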
Problems in design of search programs
• State representation and identifying relationships between states
• Rule selection
Toy problems
1. 8 puzzle problem
• 3x3 board with eight numbered tiles and a blank space.
• A tile adjacent to the blank space can slide into the space.
• objective-to reach the configuration shown on the right of the figure.
1. 8 puzzle problem – search space
Problem formulation:
• States: a state description specifies the location of each of the eight tiles in one of the nine
squares. For efficiency, it is useful to include the location of the blank.
• Initial state: any scrambled arrangement of the tiles (numbers not in the goal order)
• Operators: blank moves left, right, up, or down.
• Goal state: state matches the goal configuration shown in previous Figure
• Path cost: each step costs 1, so the path cost is just the length of the path.
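The formulation above, combined with the heuristic idea from the Informed Search section, can be sketched as an A* search. This is an illustrative implementation, not the slides' own: it uses the Manhattan distance (each tile's distance from its goal square) as the heuristic, and assumes one common goal layout.

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the blank; assumed goal layout

def manhattan(state):
    """Sum of tile distances from their goal squares (admissible heuristic)."""
    dist = 0
    for i, tile in enumerate(state):
        if tile:
            gi = GOAL.index(tile)
            dist += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return dist

def neighbours(state):
    """Operator: the blank moves left, right, up, or down."""
    b = state.index(0)
    r, c = divmod(b, 3)
    for dr, dc in ((0, -1), (0, 1), (-1, 0), (1, 0)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            nb = nr * 3 + nc
            s = list(state)
            s[b], s[nb] = s[nb], s[b]
            yield tuple(s)

def astar(start):
    """A*: path cost 1 per move, frontier ordered by g + h."""
    frontier = [(manhattan(start), 0, start, [])]
    best = {start: 0}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path + [state]
        for nxt in neighbours(state):
            if nxt not in best or g + 1 < best[nxt]:
                best[nxt] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + manhattan(nxt), g + 1, nxt, path + [state]))
    return None

start = (1, 2, 3, 4, 0, 6, 7, 5, 8)   # two moves from the goal
print(len(astar(start)) - 1)          # 2
```

Because the Manhattan heuristic never overestimates, A* returns a shortest move sequence while expanding far fewer states than blind search.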
2. Tic-tac-toe problem
• Each player marks a 3×3 grid with 'x' or 'o' in turn.
• The player who puts their mark in a horizontal, vertical, or diagonal line wins the game.
• If neither player achieves this and all boxes in the grid are filled, the game is a draw.
Formulating Tic-tac-toe problem
• Initial state – state in previous figure
• States – Next figure with 'x' and 'o' positions constitutes the states in space
• Operators – Adding 'x' or 'o' in cells one by one
• Goal – To reach final/winning position
• Path cost – each step costs 1, so the path cost is the length of the path
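The rules above reduce to a goal test (win or draw) plus the "add a mark" operator. The sketch below uses a board encoding of my own choosing, a 9-tuple of 'x', 'o', or None:

```python
# Tic-tac-toe board as a 9-tuple of 'x', 'o', or None (illustrative encoding).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),       # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),       # columns
         (0, 4, 8), (2, 4, 6)]                  # diagonals

def winner(board):
    """Goal test: return 'x' or 'o' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board, mark):
    """Operator: add `mark` to each empty cell, yielding successor states."""
    for i, cell in enumerate(board):
        if cell is None:
            yield board[:i] + (mark,) + board[i + 1:]

def is_draw(board):
    """Draw: the grid is full and no line is complete."""
    return winner(board) is None and all(c is not None for c in board)

b = ('x', 'x', None, 'o', 'o', None, None, None, None)
print(winner(('x', 'x', 'x', 'o', 'o', None, None, None, None)))  # x
print(len(list(moves(b, 'x'))))   # 5 empty cells -> 5 successor states
```

These three functions are exactly the states, operators, and goal test of the formulation above.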
3. Missionaries and Cannibals
• Three missionaries and three cannibals
• Need to move all six people from one bank of the river to the other
• The boat has a capacity of one or two people
Formulating the problem in state space search
• States – triples (m, c, b) giving the number of missionaries, cannibals, and boats on the starting bank
• Initial state – (3, 3, 1): everyone, and the boat, on the starting bank
• Goal state – (0, 0, 0): all missionaries and cannibals have reached the other side of the river
• Operator – put one or two people in the boat and cross, such that cannibals never outnumber missionaries on either bank
• Path cost – number of crossings
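The formulation above can be sketched as a breadth-first search. As in the formulation, the state counts the people and boat on the starting bank; the safety rule and function names below are my own encoding:

```python
from collections import deque

# State = (m, c, b): missionaries, cannibals, and boat on the starting bank.
def safe(m, c):
    """A bank is safe if it has no missionaries or at least as many as cannibals."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, b = state
    # The boat carries one or two people across.
    for dm, dc in ((1, 0), (2, 0), (0, 1), (0, 2), (1, 1)):
        if b == 1:                      # boat on starting bank: people leave
            nm, nc = m - dm, c - dc
        else:                           # boat on far bank: people return
            nm, nc = m + dm, c + dc
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
            yield (nm, nc, 1 - b)

def solve():
    """BFS from (3,3,1) to (0,0,0); path cost = number of crossings."""
    start, goal = (3, 3, 1), (0, 0, 0)
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

print(len(solve()) - 1)   # 11 crossings in the optimal solution
```

The search confirms the classic result that no plan with fewer than 11 crossings exists.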
4. The 8-queens problem
• Place eight queens on a chessboard such that no queen attacks any other.
• There are two main kinds of formulation
• The incremental formulation involves placing queens one by one
• The complete-state formulation starts with all 8 queens on the board and moves them around.
•Goal test: 8 queens on board, none attacked
Formulating the problem in state space search
Consider the following for incremental formulation:
• States: any arrangement of 0 to 8 queens on the board
• Initial state: no queens on the board
• Goal state: 8 queens on the board, one per column, none attacking another
• Operators: add a queen to any empty square
• Path cost: number of moves
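The incremental formulation above can be sketched as a depth-first backtracking search that adds one queen per column and abandons any partial arrangement in which the new queen is attacked. The state encoding and names are my own:

```python
# Incremental 8-queens formulation: a partial state is a tuple of row
# indices, one per already-filled column, left to right.
def attacks(rows, new_row):
    """Would a queen in the next column at new_row attack any placed queen?"""
    col = len(rows)
    return any(r == new_row or abs(r - new_row) == abs(c - col)
               for c, r in enumerate(rows))

def place(rows=(), n=8):
    """Depth-first search over the incremental state space; yields solutions."""
    if len(rows) == n:
        yield rows
        return
    for row in range(n):
        if not attacks(rows, row):       # prune attacked placements early
            yield from place(rows + (row,), n)

first = next(place())
print(first)   # (0, 4, 7, 5, 2, 6, 1, 3)
```

Pruning inside the incremental formulation is what makes this tractable; exhausting the generator enumerates all 92 solutions.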
5. Vacuum cleaner problem
Assume that the agent knows its location and the locations of all the pieces of dirt,
and the suction is still in good working order.
• States: It is based on Vacuum cleaner location and dirt location
• Initial state: Any state can be assumed as initial state
• Operators: move left, move right, suck.
• Goal state: no dirt left in any square.
• Path cost: each action costs 1.
Real-world problems
• Route finding:
• Defined in terms of locations and transitions along links between
them
• Applications: routing in computer networks, automated travel
advisory systems, airline travel planning systems
Real-world problems
• Touring and traveling salesperson problems:
• “Visit every city on the map at least once and end in Bucharest”
• Needs information about the visited cities
• Goal: Find the shortest tour that visits all cities
• NP-hard, but a lot of effort has been spent on improving the capabilities of
TSP algorithms
Real-world problems
• VLSI layout:
• Place cells on a chip so they don’t overlap and there is room for connecting
wires to be placed between the cells
• Robot navigation:
• Generalization of the route finding problem
• No discrete set of routes
• Robot can move in a continuous space
• Infinite set of possible actions and states
Real-world problems
• Assembly sequencing:
• Automatic assembly of complex objects
• The problem is to find an order in which to assemble the parts of some object