Lecture 3: Searching

The document discusses problem solving through search algorithms. It describes problem formulation, including defining the initial state, goal states, actions, and transition models. It provides examples of search problems like navigating a vacuum world and route planning. It then covers uninformed search algorithms like breadth-first search, uniform-cost search, depth-first search, and iterative deepening search which solve problems without heuristics about state distances to goals. These algorithms are evaluated based on completeness, optimality, time complexity, and space complexity.

Solving Problems by Searching
Dr. Sandareka Wickramanayake
Outline
• Problem-solving agents
• Problem formulation
• Example problems
• Uninformed Search Algorithms
• Informed Search Algorithms
• Heuristic Functions

Problem-Solving Agents
• Problem-Solving Agent - An agent that plans ahead: considers a
sequence of actions that form a path to a goal state.
• Search – The computational process undertaken by a problem-
solving agent.
• Uses atomic representations of states.
• Handles only the simplest environments: episodic, single-agent, fully
observable, deterministic, static, and discrete.

Search Algorithms

Search algorithms fall into two families: uninformed algorithms and informed algorithms.


A Vacation in Romania
• The agent has access to information about the world, such as a map.
[Figure: A simplified road map of part of Romania, with road distances in miles.]
The Problem-Solving Process
• GOAL FORMULATION: Goals organize behavior by limiting the
objectives and hence the actions to be considered.
• PROBLEM FORMULATION: The agent devises a description of
the states and actions necessary to reach the goal.
• SEARCH: Before taking any action in the real world, the agent
simulates sequences of actions in its model, searching until it
finds a sequence of actions that reaches the goal (solution).
• EXECUTION: The agent can now execute the actions in the
solution, one at a time.

A Vacation in Romania
• Agent on holiday in Romania; currently in Arad.
• Needs to catch a flight departing from Bucharest.
• Formulate goal:
• be in Bucharest
• Formulate problem:
• states: various cities
• actions: drive between cities
• Find solution:
• sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.

Search Problems and Solutions
• A search problem has the following components:

• State space – The set of possible states that the environment can be in.
• Initial state – The state that the agent starts in.
• E.g., Arad
• Goal states
• One goal state (e.g., Bucharest)
• A small set of alternative goal states (e.g., The goal of a vacuum
cleaner is to have no dirt in any location.)
• The goal is defined by a property that applies to many states
Search Problems and Solutions
• A search problem has the following components:

• Actions – The actions available to the agent. Given a state s,
ACTIONS(s) returns a finite set of actions that can be executed in
s. We say that each of these actions is applicable in s.
• E.g., ACTIONS(Arad) = {ToSibiu, ToTimisoara, ToZerind}
• Transition model – Describes what each action does.
RESULT(s, a) returns the state that results from doing action a
in state s.
• E.g., RESULT(Arad, ToZerind) = Zerind

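The components above can be sketched as plain Python functions over a small fragment of the Romania map. The dict-based representation and the action names are my own illustration; the distances follow the map figure.

```python
# A fragment of the Romania map: ROADS[s] maps each action applicable
# in state s to the resulting state and the action's cost in miles.
ROADS = {
    "Arad":   {"ToSibiu": ("Sibiu", 140), "ToTimisoara": ("Timisoara", 118),
               "ToZerind": ("Zerind", 75)},
    "Zerind": {"ToArad": ("Arad", 75), "ToOradea": ("Oradea", 71)},
    "Sibiu":  {"ToArad": ("Arad", 140), "ToFagaras": ("Fagaras", 99)},
}

def actions(s):
    """ACTIONS(s): the finite set of actions applicable in state s."""
    return set(ROADS.get(s, {}))

def result(s, a):
    """RESULT(s, a): the state that results from doing action a in state s."""
    return ROADS[s][a][0]

def action_cost(s, a, s2):
    """ACTION_COST(s, a, s'): the cost of applying a in s to reach s'."""
    return ROADS[s][a][1]

print(sorted(actions("Arad")))     # ['ToSibiu', 'ToTimisoara', 'ToZerind']
print(result("Arad", "ToZerind"))  # Zerind
```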
Search Problems and Solutions
• A search problem has the following components:

• Action cost function – ACTION_COST(s, a, s′) gives the numeric cost of
applying action a in state s to reach state s′.
• E.g., lengths in miles/ time it takes to complete the action
• Path – A sequence of states connected by a sequence of
actions.
• Solution – A path from the initial state to a goal state.
• Optimal solution – The lowest path cost among all solutions.
• Graph – A representation of state space in which vertices are
states and the directed edges between them are actions.

Assumptions: action costs are additive, and all action costs are positive.
Search Problems and Solutions
• A search problem has the following components:

• Model – An abstract mathematical description of the problem.
• E.g., our formulation of the problem of getting to Bucharest.
• Abstraction – Removing details from a representation.
• Real world is quite complex.
• A good problem formulation has the right level of detail.

Example Problems
• Vacuum world

Example Problems
• Vacuum world
• STATES – 8 states (Agent in cell 1,
cell 1 has dirt, cell 2 has dirt, etc.)
• INITIAL STATE - Any state can be
designated as the initial state.
• ACTIONS - Suck, move Left, and move Right.
• TRANSITION MODEL - Suck removes any dirt from the agent’s cell; Left and
Right move the agent one cell in the corresponding direction, unless the
move would hit a wall, in which case the action has no effect.
• GOAL STATES: The states in which every cell is clean.
• ACTION COST: Each action costs 1.
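The eight-state formulation above can be enumerated directly. In this sketch a state is a tuple (agent_cell, left_dirty, right_dirty); the tuple encoding is my own, not from the slide.

```python
from itertools import product

# State: (agent_cell, left_dirty, right_dirty); agent_cell is 0 or 1.
STATES = list(product([0, 1], [True, False], [True, False]))  # 8 states

def result(state, action):
    """Transition model: Suck cleans the agent's cell; Left/Right move the
    agent one cell, with no effect when the move would hit a wall."""
    cell, d0, d1 = state
    if action == "Suck":
        return (cell, False, d1) if cell == 0 else (cell, d0, False)
    if action == "Left":
        return (max(cell - 1, 0), d0, d1)
    if action == "Right":
        return (min(cell + 1, 1), d0, d1)
    raise ValueError(action)

def is_goal(state):
    """Goal states: every cell is clean."""
    return not state[1] and not state[2]

print(len(STATES))                      # 8
print(result((0, True, True), "Suck"))  # (0, False, True)
```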
Example Problems
• Route-finding problem – Travel-planning website
• STATES - Each state includes a location (e.g., an airport) and the
current time.
• INITIAL STATE - The user’s home airport.
• ACTIONS - Take any flight from the current location, in any seating
class, leaving after the current time, leaving enough time for within-
airport transfer if needed.
• TRANSITION MODEL: The state resulting from taking a flight will have
the flight’s destination as the new location and the flight’s arrival time
as the new time.
• GOAL STATE: A destination city. Sometimes the goal can be more
complex, such as “arrive at the destination on a nonstop flight.”
• ACTION COST: Monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane,
frequent-flyer reward points, etc.
Search Algorithms
• Search tree – Superimposed over the state space; describes paths between
the states toward the goal state.
• Root – Corresponds to the initial state.
• Node – Corresponds to a state in the state space (e.g., Sibiu is a child
node of Arad, and Arad is the parent node of Sibiu).
• Edge – Corresponds to an action.
• State space – All possible transitions among all the states.
• Expand – Generate a node’s children by considering the available actions.
• The frontier of the search tree separates the interior region (reached
states) from the exterior region (states not yet reached).
Redundant Paths
• How do we decide which node from the frontier to expand next?
• Redundant paths
• Repeated state – Reaching a previously generated state again.
• Redundant path
• We can get to Sibiu via the path Arad–Sibiu (140 miles long) or the path Arad–
Zerind–Oradea–Sibiu (297 miles long).
• Eliminating redundant paths leads to faster solutions.

Measuring Problem-Solving Performance
• Algorithms are evaluated along the following dimensions:
• Completeness: Is the algorithm guaranteed to find a solution when
there is one, and to correctly report failure when there is not?
• Cost optimality: Does it find a solution with the lowest path cost of all
solutions?
• Also referred to as admissibility or optimality.
• Time complexity: How long does it take to find a solution?
• Can be measured in seconds, or more abstractly by the number of states and
actions considered.
• Space complexity: How much memory is needed to perform the
search?

Measuring Problem-Solving Performance
• Time and space complexity are measured in terms of
• b: maximum branching factor of the search tree (number of successors
of a node that need to be considered)
• d: depth of the least-cost solution
• m: maximum number of actions in any path (maybe ∞)

Uninformed Search Algorithms
• Have access only to the problem definition.
• No clue about how close a state is to the goal(s).
• Build a search tree to find a solution.

Uninformed Search Algorithms
• Algorithms differ based on which node they expand first.
• Algorithms
• Breadth-first search – Expands the shallowest nodes first.
• Complete
• Optimal for unit action costs.
• Exponential space complexity.
• Uniform-cost search – Expands the node with the lowest path cost.
• Optimal for general action costs.

Uninformed Search Algorithms
• Algorithms
• Depth-first Search - Expands the deepest unexpanded node first.
• Neither complete nor optimal
• Linear time complexity
• Iterative Deepening Search - Calls DFS with increasing depth limits
until a goal is found.
• Complete when full cycle checking is done
• Optimal for unit action costs
• Time complexity is comparable to BFS
• Space complexity is linear.
• Bidirectional Search - Expands two frontiers, one around the initial
state and one around the goal, stopping when the two frontiers meet.

Breadth-first Search
• Appropriate when all the actions have the same cost.

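A minimal breadth-first search over an explicit graph, with an early goal test and a reached set to prune redundant paths. The graph-as-dict interface and the example graph (an unweighted fragment of the Romania map) are assumptions for illustration.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Expand the shallowest nodes first; return a path or None."""
    if start == goal:
        return [start]
    frontier = deque([start])
    parent = {start: None}          # doubles as the set of reached states
    while frontier:
        node = frontier.popleft()
        for child in graph.get(node, []):
            if child not in parent:
                parent[child] = node
                if child == goal:   # early goal test: check on generation
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None

GRAPH = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Fagaras", "Rimnicu Vilcea"],
         "Fagaras": ["Bucharest"], "Rimnicu Vilcea": ["Pitesti"],
         "Pitesti": ["Bucharest"]}
print(breadth_first_search(GRAPH, "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Note this finds the shallowest path (via Fagaras), which is optimal only because every edge here counts the same.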
Breadth-first Search - Evaluation
• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d), where d is the depth of the
solution.
• Space? O(b^d) (keeps every node in memory)
• Cost Optimal? Yes, but only if all action costs are identical.
• Space is the bigger problem (more than time).
• Exponential-complexity search problems cannot be solved by uninformed
search for any but the smallest instances.
Uniform-cost Search or Dijkstra’s Algorithm
• Appropriate when the actions have different costs.

Source: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
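A uniform-cost (Dijkstra-style) sketch that orders the frontier by path cost g(n) with a priority queue. The weighted-graph dict is an assumed interface; the distances come from the Romania map.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the lowest path cost; return (cost, path)."""
    frontier = [(0, start, [start])]        # entries are (g, state, path)
    best = {}                               # cheapest g found per state
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                    # late goal test: optimal on pop
            return g, path
        if node in best and best[node] <= g:
            continue                        # a cheaper path was already expanded
        best[node] = g
        for child, cost in graph.get(node, {}).items():
            heapq.heappush(frontier, (g + cost, child, path + [child]))
    return None

GRAPH = {"Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
         "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
         "Fagaras": {"Bucharest": 211},
         "Rimnicu Vilcea": {"Pitesti": 97},
         "Pitesti": {"Bucharest": 101}}
print(uniform_cost_search(GRAPH, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Unlike BFS, this prefers the 418-mile route through Pitesti over the shallower 450-mile route through Fagaras.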
Uniform-cost Search - Evaluation
• Complete? Yes (if b is finite)
• Time? O(b^(1 + C*/ε))
• Where C* is the cost of the optimal solution and ε is a lower bound on the
cost of each action, with ε > 0.
• Space? O(b^(1 + C*/ε))
• Cost Optimal? Yes

Depth-first Search

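A recursive depth-first sketch that always follows the deepest unexpanded node. It checks for cycles only along the current path, which is what keeps its memory linear; the graph interface is an assumption.

```python
def depth_first_search(graph, start, goal, path=None):
    """Recursive DFS; avoids cycles along the current path only, so memory
    stays O(bm). Returns the first path found, not the cheapest."""
    path = (path or []) + [start]
    if start == goal:
        return path
    for child in graph.get(start, []):
        if child not in path:                  # path-based cycle check
            found = depth_first_search(graph, child, goal, path)
            if found:
                return found
    return None

# Note the Arad <-> Sibiu cycle: the path check keeps DFS from looping.
GRAPH = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
         "Fagaras": ["Bucharest"], "Rimnicu Vilcea": ["Pitesti"],
         "Pitesti": ["Bucharest"]}
print(depth_first_search(GRAPH, "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```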
Depth-first Search - Evaluation
• Complete? Yes, for finite state spaces. No for infinite state
spaces and spaces with loops.
• Time? O(b^m), where m is the maximum depth.
• Space? O(bm)
• Cost Optimal? No: It returns the first solution it finds, even if it
is not the cheapest.

Depth-limited Search
• Keep DFS from wandering down an infinite path.
• A version of DFS which has a depth limit, ℓ, and treats all nodes
at depth ℓ as if they had no successors.
• Evaluation
• Complete? No; a poor choice for ℓ can make the algorithm fail to reach
the solution.
• Time? O(b^ℓ)
• Space? O(bℓ)
• Cost Optimal? No: It returns the first solution it finds, even if it is not
the cheapest.

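Depth-limited search and the iterative-deepening loop on top of it can be sketched as follows. A cutoff sentinel distinguishes "the limit was hit" from "provably no solution"; the sentinel and function names are my own.

```python
CUTOFF = "cutoff"

def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS that treats all nodes at depth `limit` as if they had no successors."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return CUTOFF
    outcome = None
    for child in graph.get(node, []):
        if child not in path:
            found = depth_limited_search(graph, child, goal, limit - 1, path)
            if found == CUTOFF:
                outcome = CUTOFF          # remember that the limit was hit
            elif found:
                return found
    return outcome

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Call depth-limited DFS with increasing limits until a goal is found."""
    for limit in range(max_depth + 1):
        found = depth_limited_search(graph, start, goal, limit)
        if found != CUTOFF:
            return found                  # a path, or None if no solution exists
    return None

GRAPH = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Fagaras", "Rimnicu Vilcea"],
         "Fagaras": ["Bucharest"], "Rimnicu Vilcea": ["Pitesti"],
         "Pitesti": ["Bucharest"]}
print(iterative_deepening_search(GRAPH, "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Like BFS it finds the shallowest goal, but its memory use is that of DFS.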
Uninformed Search Algorithms Comparison
[Table: comparison of the uninformed search algorithms on completeness, cost
optimality, time complexity, and space complexity.]
Informed Search Algorithms
• Uses domain-specific hints about the location of goals.
• Finds solutions more efficiently than an uninformed strategy.
• The hints come in the form of a heuristic function, h(n).
• h(n) = estimated cost of the cheapest path from the state at
node n to a goal state.
• In route-finding problems: the straight-line distance on the map
between the current state and a goal.

Informed Search Algorithms
• Algorithms
• Greedy Best-First Search – Expands nodes with minimal h(n)
• Not optimal
• Efficient
• A* Search – Expands nodes with minimal f(n) = g(n) + h(n)
• Complete and optimal, provided that h(n) is admissible
• Bad space complexity
• Bidirectional A* Search
• More efficient than A*
• Iterative Deepening A* Search – An iterative version of A*
• Addresses the space complexity issue
• Beam Search – Puts a limit on the size of the frontier
• Incomplete and suboptimal
• Efficient, with reasonably good solutions

Greedy Best-First Search
• Best-First Search
• Idea: use an evaluation function f for each node n
• f(n) estimates the "desirability" of node n
• Expand the most desirable unexpanded node

• Greedy Best-First Search


• Evaluation function f(n) = h(n)
• where h(n) is some heuristic estimate of cost from n to goal
• e.g., hSLD(n) = straight-line distance from n to Bucharest
• Expands the node that appears to be closest to the goal
• E.g., node n, such that hSLD(n) is minimum

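A greedy best-first sketch with f(n) = h(n), using the straight-line distances to Bucharest (only the h_SLD values needed for this fragment are included; the graph interface is assumed):

```python
import heapq

# Straight-line distances to Bucharest (h_SLD).
H_SLD = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def greedy_best_first_search(graph, start, goal, h):
    """Always expand the frontier node with minimal h(n)."""
    frontier = [(h[start], start, [start])]
    reached = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in reached:
                reached.add(child)
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

GRAPH = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Fagaras", "Rimnicu Vilcea"],
         "Fagaras": ["Bucharest"], "Rimnicu Vilcea": ["Pitesti"],
         "Pitesti": ["Bucharest"]}
print(greedy_best_first_search(GRAPH, "Arad", "Bucharest", H_SLD))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']  (cost 450: not optimal)
```

Fagaras (h = 176) looks closer than Rimnicu Vilcea (h = 193), so greedy search commits to the 450-mile route.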
Greedy Best-First Search
[Figure: straight-line distances to Bucharest.]
Greedy Best-First Search - Evaluation
• Complete? No. Can lead to dead ends and the tree search version
(not the graph search version) can go into infinite loops.

Greedy Best-First Search - Evaluation
• Worst case time? O(b^m) – can generate all nodes at depth m before
finding the solution.
• Worst case space? O(b^m) – can generate all nodes at depth m
before finding the solution.
• But a good heuristic can dramatically improve the time and space needed.
• In our example, a solution was found without expanding any node not on the path to
the goal, which is very efficient in this case.
• Optimal? No
• Path found: Arad → Sibiu → Fagaras → Bucharest. Actual cost =
140 + 99 + 211 = 450.
• But the actual cost of Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest =
140 + 80 + 97 + 101 = 418.

A* Search
• Idea: avoid expanding paths that are already expensive.
• Evaluation function f(n) = g(n) + h(n), where g(n) is the cost to
reach the node n.
• f(n) – Estimated cost of the cheapest solution through n.
• A* is identical to uniform-cost search, except A* uses g(n) + h(n)
instead of g(n).

A* Search
[Figure: straight-line distances to Bucharest.]
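An A* sketch over the same Romania fragment, ordering the frontier by f(n) = g(n) + h(n) with the admissible straight-line-distance heuristic. Unlike greedy best-first search, it recovers the cheaper route through Pitesti; the interfaces are assumptions.

```python
import heapq

H_SLD = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def a_star_search(graph, start, goal, h):
    """Expand the frontier node with minimal f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {}                                  # cheapest g expanded per state
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        for child, cost in graph.get(node, {}).items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[child], g2, child, path + [child]))
    return None

GRAPH = {"Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
         "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
         "Fagaras": {"Bucharest": 211},
         "Rimnicu Vilcea": {"Pitesti": 97},
         "Pitesti": {"Bucharest": 101}}
print(a_star_search(GRAPH, "Arad", "Bucharest", H_SLD))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

The Fagaras route reaches the frontier with f = 450, but the Pitesti route's f = 418 is popped first, so A* returns the optimal solution.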
A* Search - Evaluation
• Complete? Yes
• Optimal?
• Depends on certain properties of the heuristics
• Admissibility: an admissible heuristic never overestimates the cost of
reaching a goal. (An admissible heuristic is therefore optimistic.)
• If the heuristic is admissible, A* is optimal.
• Consistency: A heuristic h(n) is consistent if for every node n and every
successor n′ of n generated by an action a, we have:
h(n) ≤ c(n, a, n′) + h(n′)
• Every consistent heuristic is admissible.
• If the heuristic is consistent, A* is optimal.
• With an inadmissible heuristic, A* may or may not be cost-optimal.

A* Search - Evaluation
• Time? Exponential in the worst case.
• Space? Exponential in the worst case.
• A good heuristic can reduce time and space complexity considerably.

Heuristic Functions
• The performance of heuristic search algorithms depends on the quality of
the heuristic function.
• One can sometimes construct good heuristics by
• Relaxing the problem definition
• Storing precomputed solution costs for subproblems in a pattern database
• Defining landmarks
• Learning from experience with the problem class
• E.g., for the 8-puzzle:
• h1 = the number of misplaced tiles (blank not included). (An admissible
heuristic)
• h2 = the sum of the distances of the tiles from their goal positions. (An
admissible heuristic)
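Both 8-puzzle heuristics can be computed directly from a board layout. Here a state is a 9-tuple in row-major order with 0 for the blank; this representation and the example start state (the one used in AIMA's 8-puzzle figure) are my own choices for illustration.

```python
def h1(state, goal):
    """Number of misplaced tiles, blank (0) not included. Admissible."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan distances of tiles from their goal positions. Admissible."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                      # the blank does not count
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

GOAL  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
START = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(h1(START, GOAL), h2(START, GOAL))   # 8 18
```

For every state, h2(n) ≥ h1(n): each misplaced tile contributes 1 to h1 but at least 1 to h2, so h2 dominates h1 and A* with h2 never expands more nodes.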
Generating Heuristics from Relaxed
Problems
• A problem with fewer restrictions on the actions is called a
relaxed problem.
• The cost of an optimal solution to a relaxed problem is an
admissible heuristic for the original problem.
• If the rules of the 8-puzzle are relaxed so that a tile can move
anywhere, then the length of the shortest solution equals h1(n).

Generating Heuristics from Subproblems:
Pattern Databases
• Admissible heuristics can also be derived from the solution cost
of a subproblem of a given problem.
• E.g., Subproblem of 8-puzzle example
• The cost of the optimal solution to this subproblem is a lower bound on
the cost of the complete problem.
• Pattern databases - Stores these exact solution costs for every
possible subproblem instance.

