
Practice Question and Answers (AI)

3.1 Explain why problem formulation must follow goal formulation.

Problem formulation must follow goal formulation for a simple reason. In goal formulation we decide which aspects of the world we are interested in and which details can be ignored or abstracted away. In problem formulation we then decide how to represent and manipulate exactly those aspects that matter, choosing the states, actions, and costs. This is why goal formulation must come first: without knowing the goal, we cannot tell which details the problem formulation should keep and which it should leave out.

3.2 Your goal is to navigate a robot out of a maze. The robot starts in the center of the
maze facing north. You can turn the robot to face north, east, south, or west. You can
direct the robot to move forward a certain distance, although it will stop before hitting a
wall.

a. Formulate this problem. How large is the state space?

To formulate the problem, we can set up a coordinate system centered on the robot, so the center of the maze is (0, 0) and the maze occupies the square from (-1, -1) to (1, 1). A state is the robot's current position (x, y) together with the direction it is facing; the initial state is (0, 0) facing north. The goal test is that the robot is outside the maze, i.e., |x| > 1 or |y| > 1. The successor function offers two kinds of action: turning to face north, east, south, or west, and moving forward some distance (stopping before a wall). The cost function is the total distance moved. Because positions are continuous, the state space is infinite.
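For concreteness, here is a minimal Python sketch of formulation (a). It assumes the square boundary described above and leaves out the interior walls entirely, so it only illustrates the structure of the formulation, not a working maze.

from dataclasses import dataclass

HEADINGS = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

@dataclass(frozen=True)
class State:
    x: float
    y: float
    heading: str  # one of "N", "E", "S", "W"

INITIAL = State(0.0, 0.0, "N")  # center of the maze, facing north

def is_goal(s):
    # Outside the square from (-1, -1) to (1, 1) means out of the maze.
    return abs(s.x) > 1 or abs(s.y) > 1

def turn(s, heading):
    return State(s.x, s.y, heading)

def forward(s, dist):
    # A real implementation would clip dist so the robot stops before a
    # wall; that collision check is omitted in this sketch.
    dx, dy = HEADINGS[s.heading]
    return State(s.x + dx * dist, s.y + dy * dist, s.heading)

def step_cost(s, s2):
    # Turning is free; moving costs the distance travelled.
    return abs(s2.x - s.x) + abs(s2.y - s.y)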

b. In navigating a maze, the only place we need to turn is at the intersection of two or
more corridors. Reformulate this problem using this observation. How large is the state
space now?

If we only need to turn at intersections of two or more corridors, we can treat the intersections themselves as the states, with an exit node at the end of each corridor leading out of the maze. A state is the intersection the robot is at plus the direction it is facing; the initial state is still the center of the maze facing north, and the goal test is reaching an exit node. The successor function moves the robot along a corridor to the next intersection (or turns it in place), and the cost function is still the total distance moved. With N intersections and four possible headings, the state space now has 4N states.

c. From each point in the maze, we can move in any of the four directions until we reach
a turning point, and this is the only action we need to do. Reformulate the problem using
these actions. Do we need to keep track of the robot’s orientation now?

Here the only action is to move in one of the four directions until we reach the next turning point (or a wall). The initial state is the center of the maze, the goal test is reaching an exit node, the successor function moves the robot to the next turning point in the chosen direction, and the cost function is the total distance moved as before. We no longer need to keep track of the robot's orientation, because the action itself specifies the direction of travel; the state is just the robot's location.
d. In our initial description of the problem we already abstracted from the real world,
restricting actions and removing details. List three such simplifications we made.

Three simplifications we made are the following:
1) We assumed that the robot can only face and move in four directions; a real robot could turn and move at any angle.
2) We ignored environmental factors such as temperature, wind that might push the robot and change its orientation, and other natural effects.
3) We ignored the possibility of other robots or objects occupying the robot's space and blocking its movement.

3.3 Suppose two friends live in different cities on a map, such as the Romania map shown
in Figure 3.2. On every turn, we can simultaneously move each friend to a neighboring
city on the map. The amount of time needed to move from city i to neighbor j is equal to
the road distance d (i, j) between the cities, but on each turn the friend that arrives first
must wait until the other one arrives (and calls the first on his/her cell phone) before the
next turn can begin. We want the two friends to meet as quickly as possible.

a. Write a detailed formulation for this search problem. (You will find it helpful to define
some formal notation here.)

We can start the search problem by defining the state space. States are all ordered pairs of cities (i,j), giving the locations of the two friends. The successor function maps (i,j) to every pair (x,y) such that x is adjacent to i and y is adjacent to j. The goal is to reach some state (i,i), where both friends are in the same city. The step cost of going from (i,j) to (x,y) is max(d(i,x), d(j,y)), because the friend who arrives first must wait for the other before the next turn begins.
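As a small illustration, this formulation can be written down directly; the tiny road map below is a made-up placeholder, not the Romania data from Figure 3.2.

ROADS = {("A", "B"): 3, ("B", "C"): 4, ("A", "C"): 6}  # d(i, j), symmetric

def d(i, j):
    return ROADS.get((i, j)) or ROADS.get((j, i))

def neighbors(city):
    return [b for (a, b) in ROADS if a == city] + \
           [a for (a, b) in ROADS if b == city]

def successors(state):
    # Yield ((x, y), step cost) pairs reachable from state = (i, j).
    i, j = state
    for x in neighbors(i):
        for y in neighbors(j):
            # Both friends move at once; the turn takes as long as the
            # slower of the two trips.
            yield (x, y), max(d(i, x), d(j, y))

def is_goal(state):
    return state[0] == state[1]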

b. Let D(i, j) be the straight-line distance between cities i and j. Which of the following
heuristic functions are admissible? (i) D(i, j); (ii) 2 · D(i, j); (iii) D(i, j)/2.
The admissible heuristic is (iii), D(i, j)/2. On each turn the straight-line separation between the friends shrinks by at most d(i, x) + d(j, y), which is at most twice the cost max(d(i, x), d(j, y)) of that turn, so the total cost is at least D(i, j)/2 and the heuristic never overestimates. Heuristics (i) and (ii) can overestimate (for example when the friends walk straight toward each other and meet in the middle), so they are not admissible.

c. Are there completely connected maps for which no solution exists?

Yes, it is possible to have connected maps for which no solution exists. The simplest example is a map with just two cities joined by a single road: the friends must both move on every turn, so they simply swap places forever and never meet. More generally, on a bipartite map such as a chain, if the two friends start an odd number of steps apart, every move preserves that parity and they can never be in the same city.

d. Are there maps in which all solutions require one friend to visit the same city twice?

Yes. Take an unsolvable map from part (c), for example two cities A and B joined by one road, and add a loop road that leaves A and returns to A. The parity problem from part (c) can only be broken by driving around that loop, so in every solution the friend who is at A must take the loop and thereby visit A twice, while the other friend crosses from B to A.

3.5 Consider the n-queens problem using the “efficient” incremental formulation given on page 72. Explain why the state space has at least ∛(n!) states and estimate the largest n for which exhaustive exploration is feasible. (Hint: Derive a lower bound on the branching factor by considering the maximum number of squares that a queen can attack in any column.)

The state space has at least ∛(n!) states for the following reason. In this formulation we place one queen per column, each on a square that is not attacked by any queen already placed. A queen in one column attacks at most 3 squares of any other single column (one on its row and one on each diagonal), so after k queens have been placed there are at least n - 3k unattacked squares left in the next column. The number of complete placements is therefore at least n(n - 3)(n - 6)..., and since each factor satisfies (n - 3k)^3 >= (n - 3k)(n - 3k - 1)(n - 3k - 2), the cube of this product is at least n!; hence the state space contains at least ∛(n!) states. As a rough estimate, ∛(n!) stays below about 10^9 only up to roughly n = 25, so that is about the largest n for which exhaustive exploration is feasible.
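A quick way to check the feasibility estimate is to compute the cube root of n! for a few values of n; the budget of about 10^9 expanded states used below is only an assumed ballpark for what an exhaustive search can handle.

import math

BUDGET = 1e9  # assumed ballpark, not a hard figure

for n in range(20, 31):
    lower_bound = math.factorial(n) ** (1 / 3)
    verdict = "feasible" if lower_bound < BUDGET else "infeasible"
    print(f"n = {n:2d}  cbrt(n!) = {lower_bound:.2e}  {verdict}")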
3.6 Give a complete problem formulation for each of the following. Choose a formulation
that is precise enough to be implemented.

a. Using only four colors, you have to color a planar map in such a way that no two
adjacent regions have the same color.

The problem formulation would be the following. The initial state is a map with no regions colored. The goal test is that every region is colored and no two adjacent regions share a color. The successor function assigns one of the four colors to an uncolored region (for example, one adjacent to a region already colored). The cost function counts one per coloring action, so the total cost is the number of regions, which depends on the size of the planar map.
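As an illustration, this formulation can be turned into a simple backtracking search; the four-region adjacency map below is a made-up example, not a real planar map.

COLORS = ["red", "green", "blue", "yellow"]
ADJACENT = {"A": ["B", "C"], "B": ["A", "C", "D"],
            "C": ["A", "B", "D"], "D": ["B", "C"]}

def consistent(region, color, assignment):
    # Part of the goal test: no neighbor may already have this color.
    return all(assignment.get(n) != color for n in ADJACENT[region])

def color_map(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(ADJACENT):   # goal test: all regions colored
        return assignment
    region = next(r for r in ADJACENT if r not in assignment)
    for color in COLORS:                   # successor function
        if consistent(region, color, assignment):
            result = color_map({**assignment, region: color})
            if result:
                return result
    return None                            # dead end, backtrack

print(color_map())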

b. A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot
ceiling. He would like to get the bananas. The room contains two stackable, movable,
climbable 3-foot-high crates.

The initial state is the monkey standing on the floor of the room, with the bananas hanging from the 8-foot ceiling and the two 3-foot crates in their starting positions. The goal is for the monkey to have the bananas. The successor function offers these actions: move a crate, stack one crate on the other, climb onto a crate or the stack, and grab the bananas; since the monkey is only 3 feet tall, it must stack both crates under the bananas and climb on top to reach them. The cost function is simply the number of actions performed.

c. You have a program that outputs the message “illegal input record” when fed a certain
file of input records. You know that processing of each record is independent of the
other records. You want to discover what record is illegal.

The initial state for this is the full set of input records. The goal is to identify a single record that, when processed on its own, produces the “illegal input record” message. The successor function runs the program on a chosen subset of the remaining candidate records (for example, one half of them), and keeps whichever part must contain the illegal record based on whether the error appears. The cost function is the total number of runs made.
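Because each run rules out half of the remaining candidates, about log2 N runs suffice. A rough sketch of that bisection is below; produces_error stands in for actually running the program on a subset of records, and the record name "r17" is just a pretend culprit for the demonstration.

def produces_error(records):
    # Placeholder for running the real program on a subset of records;
    # here we simply pretend that record "r17" is the illegal one.
    return "r17" in records

def find_illegal(records):
    # Narrow the candidate set down to a single record.
    while len(records) > 1:
        mid = len(records) // 2
        first_half = records[:mid]
        # Records are processed independently, so the error appears in
        # whichever half contains the illegal record.
        records = first_half if produces_error(first_half) else records[mid:]
    return records[0]

print(find_illegal([f"r{i}" for i in range(100)]))  # prints r17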

d. You have three jugs, measuring 12 gallons, 8 gallons, and 3 gallons, and a water faucet.
You can fill the jugs up or empty them out from one to another or onto the ground. You
need to measure out exactly one gallon.

The initial state is all three jugs empty, which we can write as a triple (x, y, z) of gallons in the 12-, 8-, and 3-gallon jugs, starting at (0, 0, 0). The goal is any state in which some jug contains exactly one gallon. The successor function offers three kinds of action: fill a jug from the faucet, empty a jug onto the ground, or pour from one jug into another until the source is empty or the destination is full. The cost function is the number of actions performed.
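A short breadth-first sketch of this formulation is below; it represents a state as the triple (x, y, z) of gallons in the 12-, 8-, and 3-gallon jugs and stops as soon as any jug holds exactly one gallon.

from collections import deque

CAPS = (12, 8, 3)

def successors(state):
    for i in range(3):
        # Fill jug i from the faucet, or empty it onto the ground.
        yield state[:i] + (CAPS[i],) + state[i + 1:]
        yield state[:i] + (0,) + state[i + 1:]
        for j in range(3):
            if i != j:
                # Pour jug i into jug j until i is empty or j is full.
                amount = min(state[i], CAPS[j] - state[j])
                new = list(state)
                new[i] -= amount
                new[j] += amount
                yield tuple(new)

def bfs(start=(0, 0, 0)):
    frontier, parents = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if 1 in state:                      # goal: some jug holds 1 gallon
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parents:
                parents[nxt] = state
                frontier.append(nxt)

print(bfs())  # shortest sequence of (12, 8, 3)-jug states reaching 1 gallon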
3.7 Consider the problem of finding the shortest path between two points on a plane that
has convex polygonal obstacles as shown in Figure 3.31. This is an idealization of the
problem that a robot has to solve to navigate in a crowded environment.

a. Suppose the state space consists of all positions (x, y) in the plane. How many states
are there? How many paths are there to the goal?

There will be an infinite number of states and an infinite number of paths to the goal, because positions (x, y) in the plane are continuous.

b. Explain briefly why the shortest path from one polygon vertex to any other in the scene
must consist of straight-line segments joining some of the vertices of the polygons.
Define a good state space now. How large is this state space?

We start from the fact that the shortest path between two points with nothing in between is a straight line. If an obstacle blocks the direct line, the shortest path must hug the obstacle: it runs straight until it reaches the obstacle, bends around it, and then runs straight again. Because the obstacles are convex polygons, those bends can only occur at polygon vertices (the path leaves each obstacle along a line tangent to it, and the tangent points of a polygon are its vertices), so the shortest path consists of straight-line segments joining some of the vertices. A good state space is therefore the set of polygon vertices together with the start and goal points; for the scene in Figure 3.31 that is about 35 states.

3.8 On page 68, we said that we would not consider problems with negative path costs. In
this exercise, we explore this decision in more depth.

a. Suppose that actions can have arbitrarily large negative costs; explain why this possibility would force any optimal algorithm to explore the entire state space.

Because any unexplored part of the state space might contain an action with an arbitrarily large negative cost, no path can be declared optimal until every alternative has been examined. An optimal algorithm would therefore have to explore the entire state space before committing to a solution.

b. Does it help if we insist that step costs must be greater than or equal to some negative
constant c? Consider both trees and graphs.
It helps, but only partially. If every step cost is at least some negative constant c, a path of d steps can still have a cost as low as d × c, which becomes arbitrarily negative as d grows; so in a tree with unbounded depth, longer and longer paths can keep looking better, and the whole tree must still be considered. In a finite graph searched without repeating states, however, no path has more than N steps (where N is the number of states), so no path can cost less than N × c; that bound gives the algorithm something to prune against and lets it terminate.

c. Suppose that a set of actions forms a loop in the state space such that executing the set in some order results in no net change to the state. If all of these actions have negative cost, what does this imply about the optimal behavior for an agent in such an environment?

If executing the loop causes no net change to the state and all of its actions have negative cost, then every trip around the loop lowers the total cost. The "optimal" behavior is therefore to go around the loop forever, accumulating unboundedly negative cost, so no finite optimal solution exists.

d. One can easily imagine actions with high negative cost, even in domains such as route
finding. For example, some stretches of road might have such beautiful scenery as to
far outweigh the normal costs in terms of time and fuel. Explain, in precise terms,
within the context of state-space search, why humans do not drive around scenic loops
indefinitely, and explain how to define the state space and actions for route finding so
that artificial agents can also avoid looping.

It would be impractical to drive around scenic loops forever, because the scenery is really only rewarding the first time or two we see it. In state-space terms, the location alone is not the whole state: the state should also record which scenic stretches have already been visited, for example a counter per stretch that is incremented on each visit. The negative cost then applies only on the first visit (or diminishes with repetition), so repeating a loop no longer improves the path, and an artificial agent whose state space is defined this way will not loop indefinitely.

e. Can you think of a real domain in which step costs are such as to cause looping?

A real domain in which step costs cause looping is a daily routine such as commuting to work every day: the same cycle of states repeats because each traversal of the loop still yields value (another day's pay).

3.9 The missionaries and cannibals problem is usually stated as follows. Three
missionaries and three cannibals are on one side of a river, along with a boat that can
hold one or two people. Find a way to get everyone to the other side without ever leaving
a group of missionaries in one place outnumbered by the cannibals in that place. This
problem is famous in AI because it was the subject of the first paper that approached
problem formulation from an analytical viewpoint (Amarel, 1968).

a. Formulate the problem precisely, making only those distinctions necessary to ensure a
valid solution. Draw a diagram of the complete state space.

A state can be represented as a triple (m, c, b): the number of missionaries, the number of cannibals, and the number of boats on the starting bank. The initial state is (3, 3, 1) and the goal is (0, 0, 0), with everyone on the other side. The successor function moves one or two people (and the boat) from the bank the boat is on to the other bank, excluding any move that leaves missionaries outnumbered by cannibals on either bank. The cost function is the number of crossings required to get everyone across safely. (The complete state-space diagram is small: it contains only the legal (m, c, b) triples, connected by these boat crossings.)

b. Implement and solve the problem optimally using an appropriate search algorithm. Is it a good idea to check for repeated states?

A breadth-first (or iterative deepening) search over this state space finds the optimal solution, which gets all six people across in 11 crossings. It is definitely a good idea to check for repeated states: the state space is tiny, and many action sequences lead straight back to states already visited (for example, rowing the same people back and forth), so without repeated-state checking the search would waste most of its effort undoing its own moves.
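A breadth-first implementation along these lines is sketched below, with a state written as (m, c, b): the number of missionaries, cannibals, and boats on the starting bank.

from collections import deque

MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # people in the boat

def legal(m, c):
    # Missionaries may not be outnumbered on either bank.
    return (0 <= m <= 3 and 0 <= c <= 3 and
            (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c))

def successors(state):
    m, c, b = state
    sign = -1 if b == 1 else 1              # which way the boat crosses
    for dm, dc in MOVES:
        new = (m + sign * dm, c + sign * dc, 1 - b)
        if legal(new[0], new[1]):
            yield new

def bfs(start=(3, 3, 1), goal=(0, 0, 0)):
    frontier, parents = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parents:          # the repeated-state check
                parents[nxt] = state
                frontier.append(nxt)

solution = bfs()
print(len(solution) - 1, "crossings:", solution)  # 11 crossings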

c. Why do you think people have a hard time solving this puzzle, given that the state space
is so simple?

People have a hard time because they rarely draw the problem out as a state space. The puzzle also requires moves that appear to undo progress, ferrying people back to the starting bank, and people are reluctant to consider such moves. If you do draw out the state space, you can see that only a few moves are legal at each point and that the apparently backward moves are unavoidable.

3.10 Define in your own words the following terms: state, state space, search tree, search
node, goal, action, transition model, and branching factor.

State - A configuration of the world that the agent may be in. We can distinguish real-world states (actual configurations of the environment) from representational states (the abstract descriptions the agent actually reasons with).

State Space - The set of all states reachable from the initial state by any sequence of actions, together with the actions connecting them; it contains all the information needed to predict the effect of an action and to recognize a goal state.

Search Tree - A tree whose root node is the initial state and whose children are the states reached by performing each available action; branches correspond to actions and paths correspond to action sequences.

Search Node - A node in the search tree; it records a state plus bookkeeping information such as its parent, the action that produced it, and the path cost so far.

Goal - The state (or set of states) that the agent is trying to reach.

Action - One of the things the agent can do to move from one state to another.

Transition Model (successor function) - A description of what each action does: given the state the agent is in and an action, it returns the state that results from performing that action.

Branching Factor - The number of actions (successors) available from a node in the search tree.

3.11 What’s the difference between a world state, a state description, and a search node?
Why is this distinction useful?

A world state is an actual configuration of the real environment, with every detail included. A state description is the agent's abstract representation of a world state, for example a description of where the agent is located; many different world states can map to the same description because irrelevant details are abstracted away. A search node, as defined earlier, is a bookkeeping data structure in the search tree: it contains a state description plus information such as its parent node, the action that generated it, and the cost of the path used to reach it. The distinction is useful because abstraction is what makes search tractable (we reason over state descriptions rather than raw world states), and because several different nodes can contain the same state description when a state is reached by different paths; the extra information stored in the node is what lets us tell those paths apart and reconstruct the solution.
3.13 Prove that GRAPH-SEARCH satisfies the graph separation property illustrated in
Figure 3.9. (Hint: Begin by showing that the property holds at the start, then show that if
it holds before an iteration of the algorithm, it holds afterwards.) Describe a search
algorithm that violates the property.

GRAPH-SEARCH satisfies the graph separation property. The property says that every path from the initial state to a state that has not yet been explored must pass through a state in the frontier. It holds at the start: the frontier contains only the initial state, so any path from the initial state trivially begins in the frontier. Now assume it holds before an iteration. The iteration removes one node from the frontier, moves it to the explored set, and adds all of its successors to the frontier. Any path that previously crossed the frontier at that node must continue through one of its successors, which are now in the frontier, so the property still holds after the iteration; by induction it holds throughout the search. An algorithm that violates the property is one that, when expanding a node, does not generate all of its successors: a path through an omitted successor can then reach unexplored states without ever passing through a frontier node.

3.14 Which of the following are true and which are false? Explain your answers.

a. Depth-first search always expands at least as many nodes as A∗ search with an admissible heuristic.

This statement is false. A lucky depth-first search can head straight to the goal and expand far fewer nodes than A∗, which must expand every node whose f-value is below the cost of the optimal solution.

b. h(n) = 0 is an admissible heuristic for the 8-puzzle.

This statement is true. h(n) = 0 never overestimates, because the true cost of reaching the goal is always non-negative, so it is admissible (though completely uninformative).

c. A∗ is of no use in robotics because percepts, states, and actions are continuous.

This statement is false. A∗ is still useful in robotics: the continuous percepts, states, and actions can be discretized, for example into a grid of cells or a graph of waypoints, and A∗ applied to the resulting discrete problem.

d. Breadth-first search is complete even if zero step costs are allowed.

This statement is true. Breadth-first search expands nodes in order of depth and is complete whenever the branching factor is finite; it pays no attention to step costs, so allowing zero step costs does not affect its completeness.

e. Assume that a rook can move on a chessboard any number of squares in a straight line,
vertically or horizontally, but cannot jump over other pieces. Manhattan distance is an
admissible heuristic for the problem of moving the rook from square A to square B in
the smallest number of moves.

This statement is false. The rook can move across the whole board in a single move if nothing is in the way, so a move of, say, seven squares along one file costs 1 move while the Manhattan distance is 7. Since the heuristic can overestimate the true number of moves, it is not admissible.
3.15 Consider a state space where the start state is number 1 and each state k has two
successors: numbers 2k and 2k + 1.

a. Draw the portion of the state space for states 1 to 15.

b. Suppose the goal state is 11. List the order in which nodes will be visited for breadth-first search, depth-limited search with limit 3, and iterative deepening search.

With the goal state being 11:

Breadth-first search visits: 1 2 3 4 5 6 7 8 9 10 11

Depth-limited search with limit 3 visits: 1 2 4 8 9 5 10 11

Iterative deepening search visits: 1; 1 2 3; 1 2 4 5 3 6 7; 1 2 4 8 9 5 10 11
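These orderings can be reproduced with a short simulation of the three searches on this state space, assuming successors are generated left to right (2k before 2k + 1):

from collections import deque

GOAL = 11

def children(k):
    return [2 * k, 2 * k + 1]

def bfs():
    order, frontier = [], deque([1])
    while frontier:
        k = frontier.popleft()
        order.append(k)
        if k == GOAL:
            return order
        frontier.extend(children(k))

def dls(k=1, limit=3, order=None):
    # Depth-limited depth-first search; returns (visit order, found flag).
    order = [] if order is None else order
    order.append(k)
    if k == GOAL or limit == 0:
        return order, k == GOAL
    for c in children(k):
        order, found = dls(c, limit - 1, order)
        if found:
            return order, True
    return order, False

print("BFS:", bfs())
print("DLS, limit 3:", dls()[0])
for limit in range(4):                      # iterative deepening passes
    order, found = dls(limit=limit, order=[])
    print("IDS, limit", limit, ":", order)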

c. How well would bidirectional search work on this problem? What is the branching
factor in each direction of the bidirectional search?

Bidirectional search works very well on this problem. In the forward direction each state has two successors (2k and 2k + 1), but in the backward direction each state has exactly one predecessor (k/2 rounded down), so the backward frontier is a single chain. The branching factor is therefore 2 in the forward direction and 1 in the backward direction.

d. Does the answer to (c) suggest a reformulation of the problem that would allow you to
solve the problem of getting from state 1 to a given goal state with almost no search?

Yes. Since the backward branching factor is 1, we can reformulate the problem to start at the goal state and repeatedly apply the unique reverse action (going from k to k/2 rounded down) until we reach state 1; reversing that sequence gives the solution with almost no search.

e. Call the action going from k to 2k Left, and the action going to 2k + 1 Right. Can you
find an algorithm that outputs the solution to this problem without any search at all?
Yes, we can output the solution without any search at all. Since Left goes from k to 2k and Right goes from k to 2k + 1, the path to a goal state is encoded in its binary representation: write the goal number in binary, drop the leading 1, and read the remaining bits from left to right, taking Left for each 0 and Right for each 1.
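A few lines are enough to implement this: write the goal in binary, drop the leading 1, and map each remaining bit to a move.

def moves_to(goal):
    # bin(11) == '0b1011'; strip '0b' and the leading 1, then read each
    # remaining bit as a move: 0 means Left, 1 means Right.
    bits = bin(goal)[3:]
    return ["Left" if b == "0" else "Right" for b in bits]

print(moves_to(11))  # ['Left', 'Right', 'Right']: 1 -> 2 -> 5 -> 11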

3.16 A basic wooden railway set contains the pieces shown in Figure 3.32. The task is to connect these pieces into a railway that has no overlapping tracks and no loose ends where a train could run off onto the floor.

a. Suppose that the pieces fit together exactly with no slack. Give a precise formulation of
the task as a search problem.

The initial state is any single piece of track laid down. The successor function attaches one of the remaining pieces to an open connector of the track built so far, matched to the right kind of connector. The goal test is that all of the pieces have been used and the result is a single connected track with no overlaps and no loose ends. Each step costs one, so the total cost equals the number of pieces.

b. Identify a suitable uninformed search algorithm for this task and explain your choice.

In order to solve this, I would choose depth-first search. Every solution uses all of the pieces, so all goals sit at the same maximum depth; depth-first search reaches that depth quickly and uses far less memory than breadth-first or iterative deepening search, which would have to work through the very large state space level by level.

c. Explain why removing any one of the “fork” pieces makes the problem unsolvable.

If you remove one of the fork pieces, the problem becomes unsolvable. A fork has three connection points, while the other pieces have an even number, so removing one fork leaves an odd total number of track ends; ends must pair up for the track to close, so at least one loose end is unavoidable. Intuitively, a fork splits the track into two branches, and the only way to merge those branches back into one is with another fork.

d. Give an upper bound on the total size of the state space defined by your formulation.
(Hint: think about the maximum branching factor for the construction process and the
maximum depth, ignoring the problem of overlapping pieces and loose ends. Begin by
pretending that every piece is unique.)

One way to get an upper bound is to note that at any point during construction there are at most 3 open pegs: we start with one, and each fork added can contribute two more. Counting the pieces and the orientations in which each can be attached gives 12 + (2*16) + (2*2) + (2*2*2) = 56 ways to fill a given open peg, so the branching factor is at most 56*3 = 168. With 12 + 16 + 2 + 2 = 32 pieces, the construction is at most 32 steps deep, giving an upper bound of 168^32 placements; dividing by 12! * 16! * 2! * 2! corrects for the permutations of identical pieces, so the bound on the state space is 168^32 / (12! * 16! * 2! * 2!).
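Using the piece counts assumed above, the size of this bound can be evaluated directly; the figure is only as good as those assumptions about the set in Figure 3.32.

from math import factorial, log10

bound = 168**32 // (factorial(12) * factorial(16) * factorial(2) * factorial(2))
print(f"upper bound is roughly 10^{log10(bound):.1f}")  # order of magnitude of the bound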

3.17 On page 90, we mentioned iterative lengthening search, an iterative analog of uniform-cost search. The idea is to use increasing limits on path cost. If a node is generated whose path cost exceeds the current limit, it is immediately discarded. For each new iteration, the limit is set to the lowest path cost of any node discarded in the previous iteration.
a. Show that this algorithm is optimal for general path costs.

This is optimal for general path costs because, like uniform-cost search, it considers paths in order of increasing path cost: each iteration explores every path whose cost is within the current limit before the limit is raised. The first goal found therefore lies on a cheapest path, so the solution returned has the cheapest cost.

b. Consider a uniform tree with branching factor b, solution depth d, and unit step costs.
How many iterations will iterative lengthening require?

It will take d iterations (the limits will be 1, 2, ..., d), and the last iteration generates O(b^d) nodes, so the behaviour is essentially the same as iterative deepening search.

c. Now consider step costs drawn from the continuous range [e, 1], where 0 < e < 1. How
many iterations are required in the worst case?

In the worst case roughly d/e iterations are required: the optimal solution costs at most d (d steps of cost at most 1), and each iteration may raise the cost limit by only about e, so on the order of d/e limit increases can be needed before the solution cost is reached.

3.18 Describe a state space in which iterative deepening search performs much worse than depth-first search (for example, O(n^2) vs. O(n)).

A state space in which iterative deepening search performs much worse than depth-first search is a simple chain: each state has exactly one successor and the single goal is at depth n. Depth-first search walks straight down the chain and finds the goal in O(n) steps, whereas iterative deepening re-explores the prefix of the chain on every iteration, expanding 1 + 2 + ... + n = O(n^2) nodes.

3.21 Prove each of the following statements, or give a counterexample:

a. Breadth-first search is a special case of uniform-cost search.

Breadth-first search is a special case of uniform-cost search in which all step costs are equal: expanding nodes in order of path cost is then the same as expanding them in order of depth.

b. Depth-first search is a special case of best-first tree search.

Depth-first search is a special case of best-first tree search in which the evaluation function is f(n) = -depth(n), so the deepest node always looks best.

c. Uniform-cost search is a special case of A∗ search.

Uniform-cost search is a special case of A∗ search in which h(n) = 0.
