Artificial Intelligence

CS-3151
Instructor: Fasiha Ashraf
Assistant Professor, Department of Computer Science
Agent types
• Four basic types, in order of increasing generality (a code sketch of the first two follows the slides below):
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
Learning agents
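The agent-type slides above carry the standard AIMA diagrams, which are not reproduced in this extraction. As a rough illustration only (a hypothetical two-square vacuum world, not course code), a simple reflex agent acts on the current percept alone, while a model-based reflex agent also maintains internal state about what it has already seen:

# Illustrative sketch only: simple reflex vs. model-based reflex agent in a
# hypothetical two-square vacuum world. Percepts are (location, dirty) pairs.

def simple_reflex_agent(percept):
    """Simple reflex: condition-action rules on the current percept only."""
    location, dirty = percept
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

class ModelBasedReflexAgent:
    """Model-based reflex: keeps internal state (which squares it believes are clean)."""

    def __init__(self):
        self.believed_clean = set()

    def __call__(self, percept):
        location, dirty = percept
        if dirty:
            self.believed_clean.discard(location)
            return "Suck"
        self.believed_clean.add(location)
        if {"A", "B"} <= self.believed_clean:
            return "NoOp"          # its internal model says both squares are clean
        return "Right" if location == "A" else "Left"

Goal-based and utility-based agents go further: they choose actions by considering future states against a goal test, or by comparing states with a utility function, rather than by fixed condition-action rules.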
Summary
• Agents interact with environments through actuators and sensors
• The agent function describes what the agent does in all circumstances
• The performance measure evaluates the environment sequence
• A perfectly rational agent maximizes expected performance
• Agent programs implement (some) agent functions
• PEAS descriptions define task environments
• Environments are categorized along several dimensions:
• observable? deterministic? episodic? static? discrete? single-agent?
• Several basic agent architectures exist:
• reflex, reflex with state, goal-based, utility-based

Problem Solving Agents
• Problem-solving agents
• Problem types
• Problem formulation
• Example problems
• Basic search algorithms

Problem-solving agents

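In AIMA, a problem-solving agent works by formulating a goal and a problem, searching offline for a solution, and then executing that solution one action at a time. A minimal sketch of that loop, assuming hypothetical formulate_goal, formulate_problem, and search callables supplied by the caller:

# Minimal sketch of a problem-solving agent loop (after AIMA's simple
# problem-solving agent). formulate_goal, formulate_problem, and search are
# hypothetical callables supplied by the caller; they are not defined here.

class ProblemSolvingAgent:
    def __init__(self, formulate_goal, formulate_problem, search):
        self.formulate_goal = formulate_goal        # state -> goal
        self.formulate_problem = formulate_problem  # (state, goal) -> problem
        self.search = search                        # problem -> list of actions, or None
        self.plan = []                              # actions still to be executed

    def __call__(self, percept):
        state = self.update_state(percept)
        if not self.plan:                           # plan exhausted: plan again
            goal = self.formulate_goal(state)
            problem = self.formulate_problem(state, goal)
            self.plan = self.search(problem) or []
        return self.plan.pop(0) if self.plan else None   # next action, or no-op

    def update_state(self, percept):
        # Fully observable, deterministic setting: the percept reveals the state.
        return percept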
Example: Romania
• On holiday in Romania; currently in Arad
• Flight leaves tomorrow from Bucharest

• Formulate goal:
• Be in Bucharest
• Formulate problem:
• States: various cities
• Actions: drive between cities
• Find solution:
• Sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest (see the search sketch below)

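One way to code this formulation (a sketch, not the course's implementation): states are cities, actions are drives along roads, and path cost is driving distance. The map fragment below uses the standard AIMA distances; uniform-cost search returns the cheapest route (Arad, Sibiu, Rimnicu Vilcea, Pitesti, Bucharest, 418 km), while the route via Fagaras named on the slide is also a valid, slightly longer solution (450 km).

import heapq

# Fragment of the Romania road map (km), with the usual AIMA distances.
ROAD_MAP = {
    "Arad":           {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
    "Timisoara":      {"Arad": 118},
    "Zerind":         {"Arad": 75},
    "Oradea":         {"Sibiu": 151},
}

def shortest_route(start, goal):
    """Uniform-cost search: states are cities, actions are drives, path cost is km."""
    frontier = [(0, [start])]                 # (cost so far, path)
    best = {}                                 # cheapest known cost per city
    while frontier:
        cost, path = heapq.heappop(frontier)
        city = path[-1]
        if city == goal:
            return path, cost
        if best.get(city, float("inf")) <= cost:
            continue                          # already expanded more cheaply
        best[city] = cost
        for nxt, dist in ROAD_MAP[city].items():
            heapq.heappush(frontier, (cost + dist, path + [nxt]))
    return None, float("inf")

print(shortest_route("Arad", "Bucharest"))    # (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)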
Example: Romania (Map)

Problem Types
• Deterministic, fully observable → single-state problem
• Agent knows exactly which state it will be in; solution is a sequence
• Non-observable → conformant problem
• Agent may have no idea where it is; solution (if any) is a sequence
• Nondeterministic and/or partially observable → contingency problem
• Percepts provide new information about the current state
• Solution is a contingent plan or a policy
• Often interleave search and execution
• Unknown state space → exploration problem (“online”)
Example: vacuum world
• Single-state, start in #5. Solution?
[Right, Suck]
• Conformant, start in {1,2,3,4,5,6,7,8}
e.g., Right goes to {2,4,6,8}. Solution?
[Right, Suck, Left, Suck]
• Contingency, start in #5
Murphy’s Law: Suck can dirty a clean carpet
Local sensing: dirt, location only. Solution?
[Right, if dirt then Suck]
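A minimal sketch of the conformant (belief-state) case above, assuming the usual deterministic two-square vacuum world (no Murphy’s Law) and a home-made state encoding of (agent location, set of dirty squares) rather than the slide's numbering: after Right the agent is certainly on the right square (the slide's {2,4,6,8}), and the fixed plan [Right, Suck, Left, Suck] reaches the goal from every possible start state.

# Sketch of belief-state (conformant) reasoning for the deterministic vacuum
# world. States are (agent_location, frozenset_of_dirty_squares); this encoding
# is an illustration, not the numbering used on the slide.

def result(state, action):
    loc, dirt = state
    if action == "Left":
        return ("L", dirt)
    if action == "Right":
        return ("R", dirt)
    if action == "Suck":
        return (loc, dirt - {loc})
    raise ValueError(action)

def is_goal(state):
    return not state[1]                        # goal: no dirty squares left

# All eight physical states: the conformant agent's initial belief state.
ALL_STATES = {(loc, frozenset(dirt))
              for loc in ("L", "R")
              for dirt in (set(), {"L"}, {"R"}, {"L", "R"})}

def update_belief(belief, action):
    """Apply an action to every state the agent might be in."""
    return {result(state, action) for state in belief}

belief = ALL_STATES
for action in ["Right", "Suck", "Left", "Suck"]:
    belief = update_belief(belief, action)     # after "Right": agent is surely on the right square

print(all(is_goal(state) for state in belief))  # True: the plan works from any start state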
Single-state problem formulation
A problem is defined by four items (see the sketch below):

• Initial state
• Successor function
• Goal test
• Path cost

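A minimal way to carry these four items in code (a sketch for a generic single-state problem; the names Problem, successors, and step_cost are mine, not from the course material):

from dataclasses import dataclass
from typing import Any, Callable, Iterable, Tuple

@dataclass
class Problem:
    """The four components of a single-state problem."""
    initial_state: Any
    successors: Callable[[Any], Iterable[Tuple[Any, Any]]]  # state -> (action, next_state) pairs
    goal_test: Callable[[Any], bool]                         # is this state a goal?
    step_cost: Callable[[Any, Any, Any], float] = lambda s, a, s2: 1  # default: unit step cost

# Example: Romania route finding on a small map fragment (distances in km).
roads = {"Arad": {"Sibiu": 140}, "Sibiu": {"Fagaras": 99},
         "Fagaras": {"Bucharest": 211}, "Bucharest": {}}

romania = Problem(
    initial_state="Arad",
    successors=lambda city: [("drive to " + n, n) for n in roads[city]],
    goal_test=lambda city: city == "Bucharest",
    step_cost=lambda city, action, nxt: roads[city][nxt],
)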
Acknowledgement
• These slides are adapted from:
• Book Slides (AIMA, Berkeley)

