Utility-Based System
Types of AI Agents
Agents can be grouped into four classes based on their degree of perceived intelligence and
capability. All of these agents can improve their performance and generate better actions over
time. These classes are given below:
o Simple reflex agents
o Model-based reflex agents
o Goal-based agents
o Utility-based agents
1. Simple reflex agent:
o Simple reflex agents are the simplest agents. They make decisions on the basis of the
current percept and ignore the rest of the percept history.
o These agents only succeed in a fully observable environment; they do not work well in
partially observable environments.
o A simple reflex agent does not consider any part of the percept history during its
decision and action process.
o A simple reflex agent works on condition-action rules, which map the current state
directly to an action. For example, a room-cleaner agent cleans only if there is dirt in
the room (a minimal sketch follows this list).
o Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They have no knowledge of the non-perceptual parts of the current state.
o The condition-action rule tables are often too large to generate and store.
o They are not adaptive to changes in the environment.
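Below is a minimal sketch of the room-cleaner example, assuming a hypothetical two-location
vacuum world; the agent maps only the current percept to an action through condition-action
rules and keeps no history.

```python
# Simple reflex agent for a hypothetical two-location vacuum world.
# It looks only at the current percept (location, dirt status) and maps it
# to an action via condition-action rules; no percept history is stored.

def simple_reflex_vacuum_agent(percept):
    location, is_dirty = percept
    if is_dirty:
        return "Suck"        # condition: dirt present -> action: clean
    if location == "A":
        return "MoveRight"   # condition: at square A  -> action: move to B
    return "MoveLeft"        # condition: at square B  -> action: move to A

print(simple_reflex_vacuum_agent(("A", True)))   # Suck
print(simple_reflex_vacuum_agent(("B", False)))  # MoveLeft
```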
2. Model-based reflex agent
o The model-based agent can work in a partially observable environment and keep track
of the situation.
o A model-based agent has two important components:
o Model: knowledge about "how things happen in the world"; this is why it is called
a model-based agent.
o Internal state: a representation of the current state based on the percept history.
o These agents hold a model, i.e., knowledge of the world, and perform actions based on
that model (a minimal sketch follows below).
o Updating the agent's internal state requires information about:
a. how the world evolves, and
b. how the agent's actions affect the world.
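A minimal sketch of this idea, assuming a hypothetical update_state transition model supplied
by the application; the agent maintains an internal state that it updates from each percept
before applying its condition-action rules.

```python
# Model-based reflex agent: keeps an internal state summarizing the percept
# history, updated with a (hypothetical) model of how the world evolves and
# how the agent's own actions change it.

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.update_state = update_state  # model: (state, last_action, percept) -> new state
        self.rules = rules                # condition-action rules: state -> action
        self.state = {}
        self.last_action = None

    def act(self, percept):
        # Track the situation even when the environment is only partially observable
        self.state = self.update_state(self.state, self.last_action, percept)
        self.last_action = self.rules(self.state)
        return self.last_action
```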
3. Goal-based agents
o Knowledge of the current state of the environment is not always sufficient for an agent
to decide what to do.
o The agent needs to know its goal, which describes desirable situations.
o Goal-based agents extend the capabilities of the model-based agent by adding the
"goal" information.
o They choose their actions so as to achieve the goal.
o These agents may have to consider a long sequence of possible actions before finding
one that achieves the goal. Such consideration of different scenarios is called searching
and planning, and it makes an agent proactive (a search sketch follows below).
Example: an Amazon delivery system, which plans its actions to reach a delivery goal.
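A minimal sketch of the searching idea, assuming hypothetical successors and goal_test
functions supplied by the application; the agent explores sequences of actions (breadth-first
here) until it finds one that reaches the goal.

```python
from collections import deque

# Goal-based agent core: search for a sequence of actions that reaches a goal.
# `successors(state)` yields (action, next_state) pairs and `goal_test(state)`
# returns True for desirable situations; both are assumed to be supplied.

def search_plan(start, successors, goal_test):
    frontier = deque([(start, [])])   # (state, actions taken so far)
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan               # sequence of actions achieving the goal
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None                       # no plan found
```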
4. Utility-based agents
o These agents are similar to goal-based agents but add an extra component: a utility
measure that quantifies the degree of success in a given state.
o Utility-based agents act based not only on goals but also on the best way to achieve
those goals.
o A utility-based agent is useful when there are multiple possible alternatives and the
agent has to choose the best action among them.
o The utility function maps each state to a real number that indicates how well each
action achieves the goals (a minimal sketch follows below).
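A minimal sketch of a utility-based agent, assuming hypothetical utility, result, and actions
functions; it scores the state each action would lead to and picks the highest-utility one.

```python
# Utility-based agent core: choose the action whose resulting state has the
# highest utility. `actions(state)`, `result(state, action)` and
# `utility(state)` are assumed to be supplied by the application.

def choose_action(state, actions, result, utility):
    return max(actions(state), key=lambda a: utility(result(state, a)))

# Toy example: utility is closeness (negative Manhattan distance) to a station
station = (3, 4)

def utility(pos):
    return -(abs(pos[0] - station[0]) + abs(pos[1] - station[1]))

def result(pos, move):
    dx, dy = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}[move]
    return (pos[0] + dx, pos[1] + dy)

print(choose_action((0, 0), lambda s: ["up", "down", "left", "right"], result, utility))
```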
Overview
Artificial intelligence (AI) is now widely used in our daily lives, from smartphone voice
assistants to complex decision-making systems in industries like finance and healthcare. One
fundamental concept that plays a crucial role in decision-making is the utility theory in
artificial intelligence. Utility theory in artificial intelligence provides
a mathematical framework for understanding how AI systems make choices among different
options based on their perceived value or utility. In this article, we will delve into the concept
of utility theory in artificial intelligence, understanding what it is, how it works, and its
significance in decision-making.
Introduction
Decision-making is a critical aspect of human intelligence and a key component of AI
systems. However, AI decision-making is often based on mathematical algorithms and
models that are designed to optimize outcomes based on specific objectives. Utility theory in
artificial intelligence provides a formal way of incorporating preferences and subjective
values into the decision-making process of AI systems. Utility theory in artificial intelligence
allows AI systems to choose among different options based on their utility or perceived value,
considering factors such as risk, uncertainty, and subjective preferences.
What is Utility Theory?
Utility theory in artificial intelligence is a mathematical framework used to model decision-
making under uncertainty. It allows one to assign subjective values or preferences to different
outcomes and helps make optimal choices based on these values. Utility theory is widely
used in various AI applications such as game theory, economics, robotics,
and recommendation systems, among others.
At its core, utility theory helps AI systems make decisions that maximize a specific goal,
referred to as utility. The concept of utility is subjective and varies from person to person or
from system to system. It represents the degree of satisfaction associated with different
outcomes. For example, in a recommendation system, the utility could describe the level of
user satisfaction with a particular recommendation. In a robotics application, a utility could
represent the cost or risk of different actions.
Utility theory also provides a way to model decisions in uncertain or probabilistic
environments, where the outcomes are associated with different probabilities. For example,
in a game of poker, the utility of a particular action may depend on the probabilities of
different cards being dealt to the player. We can use the utility function to calculate the
expected utility of each action, which is the average utility weighted by the corresponding
probabilities. The AI system can then choose the action with the highest expected utility.
What is a Utility Function?
A utility function is a mathematical function used in Artificial Intelligence (AI) to represent
a system's preferences or objectives. It assigns a numerical value, referred to as utility, to
different outcomes based on their satisfaction level. The utility function is a quantitative
measure of the system's subjective preferences. It is used to guide decision-making in AI
systems. An agent or system typically defines the utility function based on
its goals, objectives, and preferences. It maps different outcomes to their corresponding
utility values, where higher utility values represent more desirable outcomes. The utility
function is subjective and can vary from one agent or system to another, depending on the
specific context or domain of the AI application.
The utility function plays a crucial role in decision-making in AI systems. It allows the AI
system to compare and rank different outcomes or actions based on their utility values and
choose the one with the highest utility. The choice of action with the highest utility depends
on the system's objectives, as reflected in the utility function.
Utility Function Representation (denoted by U)
The utility function is typically denoted as U. It is a mathematical function that takes as input
the different features of an outcome and maps them to a real-valued utility value. We can
represent the utility function mathematically as U(x), where x represents the attributes or
features of an outcome. How we define the utility function can vary depending on the
application and the type of decision problem we are trying to solve.
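A minimal sketch of such a representation, with purely illustrative attributes and weights;
U(x) combines the features of an outcome into a single real-valued utility.

```python
# A utility function U(x): maps the attributes of an outcome to a real number.
# The attributes and weights below are purely illustrative.

def U(x):
    # x is a dict of outcome attributes, e.g. travel time (minutes) and comfort (0-1)
    return -0.5 * x["time_minutes"] + 10.0 * x["comfort"]

route_a = {"time_minutes": 30, "comfort": 0.9}
route_b = {"time_minutes": 20, "comfort": 0.5}

print(U(route_a))  # -15 + 9 = -6.0
print(U(route_b))  # -10 + 5 = -5.0  (higher utility, so more desirable)
```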
Decision Making
One common approach in AI decision-making is to maximize the expected utility, which
considers the probability of different outcomes occurring. The expected utility is calculated
by multiplying the utility of each outcome by its corresponding probability and summing up
the results. The AI system chooses the action with the highest expected utility as
the optimal choice.
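A minimal sketch of this maximize-expected-utility rule, with illustrative outcome
probabilities and utilities for two hypothetical actions.

```python
# Expected utility: sum of probability * utility over the possible outcomes of
# an action, then pick the action with the highest value. Numbers are illustrative.

def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action
    return sum(p * u for p, u in outcomes)

actions = {
    "take_highway":   [(0.7, 80), (0.3, 20)],   # fast if clear, slow if jammed
    "take_back_road": [(1.0, 60)],              # predictable but slower
}

print({a: expected_utility(o) for a, o in actions.items()})
# {'take_highway': 62.0, 'take_back_road': 60.0}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # take_highway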
Examples:
1. Self-Driving Cars: In the self-driving cars application, the utility function may
consider factors such as time taken, fuel consumption, safety, and comfort, and
assign utility values to different routes based on these factors. The self-driving car can
then use the utility values to calculate the expected utility of each route, taking into
account the probabilities of different traffic conditions or road obstacles, and choose
the route with the highest expected utility to reach the destination.
2. Recommendation Systems: Consider a recommendation system that suggests
movies to users based on their preferences. The utility function of the
recommendation system may assign higher utility values to movies that match
the user's preferred genre, actors, or directors and lower utility values to films that
do not match these preferences. The recommendation system can then use the utility
values to rank and recommend movies to the user based on their utility values, with
higher utility movies being recommended more prominently.
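A minimal sketch of the recommendation example above, assuming a hypothetical
preference-matching utility; movies are ranked by their utility values.

```python
# Recommendation ranking by utility: movies that match the user's preferred
# genres or directors receive higher utility values. All data is illustrative.

user_prefs = {"genres": {"sci-fi", "thriller"}, "directors": {"Director X"}}

movies = [
    {"title": "Movie A", "genre": "sci-fi",   "director": "Director X"},
    {"title": "Movie B", "genre": "romance",  "director": "Director Y"},
    {"title": "Movie C", "genre": "thriller", "director": "Director Z"},
]

def utility(movie):
    score = 0.0
    if movie["genre"] in user_prefs["genres"]:
        score += 1.0           # preferred genre
    if movie["director"] in user_prefs["directors"]:
        score += 0.5           # preferred director
    return score

ranked = sorted(movies, key=utility, reverse=True)
print([m["title"] for m in ranked])  # ['Movie A', 'Movie C', 'Movie B']
```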
Utility theory
Utility theory in artificial intelligence provides a formal framework for reasoning about
decision-making under uncertainty. It is often used in AI systems to model decision-making
in situations where outcomes are uncertain or probabilistic, and the AI system needs to make
choices based on its preferences or subjective values.
Lottery
To understand the concept of utility theory in artificial intelligence, let's consider a simple
example of a lottery. Suppose you are given the option to play a lottery with two choices:
1. A guaranteed prize of $100
2. A 50% chance of winning $200 and a 50% chance of winning nothing
Which option would you choose? Your decision depends on your risk tolerance, financial
situation, and personal preferences. Utility theory provides a way to model and quantify
these preferences mathematically using a utility function.
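As a worked example, assume for simplicity that utility is linear in money, i.e., U($x) = x
(a simplifying assumption; real preferences are often risk-averse):

E[U(option 1)] = 1.0 · U(100) = 100
E[U(option 2)] = 0.5 · U(200) + 0.5 · U(0) = 0.5 · 200 + 0.5 · 0 = 100

Under this linear utility the two options have the same expected utility, so a risk-neutral
agent is indifferent between them. A risk-averse agent, whose utility function is concave (for
example U(x) = √x, giving 10 versus about 7.07), would prefer the guaranteed $100.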
Notation
Let us define some basic notation commonly used in utility theory:
Let x represent an outcome or option.
Let U(x) denote the utility function, which maps x to its utility value.
Let p(x) denote the probability of outcome x occurring.
Let E[U(x)] denote the expected utility of outcome x, which is the sum of the utility
values of all possible outcomes weighted by their respective probabilities.
The expected utility E[U(x)] is calculated using the following expression:

$$E[U(x)] = \sum_{i} P(x_i) \cdot U(x_i)$$

In this formula, E[U(x)] represents the expected utility of a decision or action: the sum, over
all possible outcomes i, of the product of the probability of each outcome x_i (denoted P(x_i))
and its corresponding utility value (denoted U(x_i)).
Diving Into Utility Theory and MEU
One of the fundamental concepts in utility theory in artificial intelligence is the idea
of Maximum Expected Utility (MEU). MEU is a decision-making principle that suggests
choosing the option that maximizes the expected utility. In other words, an AI system should
select the option that is expected to yield the highest utility value, taking into account the
probabilities of different outcomes.
Utility Theory Axioms
Utility theory in artificial intelligence is based on a set of axioms or principles that define the
properties of a rational utility function. These axioms serve as the foundation for
understanding how we can use utility functions to model decision-making under uncertainty.
Let's explore some of the key utility theory axioms:
Orderability
A rational utility function should allow for comparing different outcomes based on their
utility values. In other words, if U(x)>U(y), then outcome x is preferred to outcome y.
Transitivity
If outcome x is preferred to outcome y, and outcome y is preferred to outcome z, then
outcome x should be preferred to outcome z. This axiom ensures that the preferences modeled
by the utility function are consistent and do not lead to contradictions.
Continuity
Small changes in the probabilities of outcomes should result in small changes in the expected
utility. This axiom ensures that the utility function is smooth and well-behaved and that
small changes in probabilities do not result in abrupt changes in decision-making.
Substitutability
If two outcomes, x and y, are equally preferred, then we should equally prefer any
combination of x and y. This axiom allows for substituting equally preferred outcomes
without affecting the decision-making process.
Monotonicity
If the probability of an outcome increases, its expected utility should also increase. This
axiom ensures that an increase in the likelihood of an outcome increases its perceived value
or utility.
Decomposability
The utility function should be able to represent preferences over multiple attributes or
features of an outcome in a decomposable manner. This allows for modeling complex
decision problems with multiple dimensions or criteria.
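For reference, the first few of these axioms are often stated compactly using a preference
relation; in the standard notation (not specific to this article), A ≻ B means A is preferred
to B, A ∼ B means indifference, and [p, A; 1−p, B] denotes a lottery that yields A with
probability p and B otherwise:

$$
\begin{aligned}
\text{Orderability: } & \text{exactly one of } A \succ B,\ B \succ A,\ A \sim B \text{ holds} \\
\text{Transitivity: } & (A \succ B) \wedge (B \succ C) \Rightarrow (A \succ C) \\
\text{Monotonicity: } & (A \succ B) \wedge (p > q) \Rightarrow [p, A;\ 1-p, B] \succ [q, A;\ 1-q, B]
\end{aligned}
$$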
Conclusion
o Utility theory in artificial intelligence plays a crucial role in decision-making by
providing a mathematical framework for incorporating subjective values into the
decision-making process.
o Using utility functions and the principle of maximum expected utility, AI systems can
choose among different options based on their perceived value or utility, considering
factors such as risk, uncertainty, and subjective preferences.
o The axioms of utility theory define the properties of a rational utility function,
ensuring that the preferences modeled by the utility function are consistent, well-behaved,
and enable rational decision-making.
o Overall, utility theory provides a powerful tool for AI systems to make informed and
rational decisions in complex and uncertain environments, making it an essential concept
in the field of artificial intelligence.
There are many examples of agents in artificial intelligence. Here are a few:
o Intelligent personal assistants: These are agents designed to help users with various
tasks, such as scheduling appointments, sending messages, and setting reminders.
Examples include Siri, Alexa, and Google Assistant.
o Autonomous robots: These are agents designed to operate autonomously in the physical
world. They can perform tasks such as cleaning, sorting, and delivering goods. Examples
include the Roomba vacuum cleaner and the Amazon delivery robot.
o Gaming agents: These are agents designed to play games, either against human opponents
or other agents. Examples include chess-playing and poker-playing agents.
o Fraud detection agents: These are agents designed to detect fraudulent behavior in
financial transactions. They can analyze patterns of behavior to identify suspicious
activity and alert authorities. Examples include those used by banks and credit card
companies.
o Traffic management agents: These are agents designed to manage traffic flow in cities.
They can monitor traffic patterns, adjust traffic lights, and reroute vehicles to
minimize congestion. Examples include those used in smart cities around the world.
o A software agent has keystrokes, file contents, and received network packets as sensors,
and screen output, files, and sent network packets as actuators.
o A human agent has eyes, ears, and other organs as sensors, and hands, legs, mouth, and
other body parts as actuators.
o A robotic agent has cameras and infrared range finders as sensors, and various motors
as actuators.
Decision Network
A decision network in artificial intelligence is a graphical model that represents a set of
decisions and the uncertainties associated with those decisions. It is a type of probabilistic
graphical model that extends the concept of decision trees to handle uncertainty and
probabilistic outcomes. Decision networks are commonly used in decision analysis and
decision support systems.
Here are the key components of a decision network:
1. Decision Nodes: Represent decision points where an agent must choose between
different actions or options.
2. Chance Nodes: Represent uncertain events or conditions. These nodes are associated
with probabilities that describe the likelihood of different outcomes.
3. Utility Nodes: Represent the agent's preferences or values associated with different
outcomes. They quantify the desirability or cost of various states of the world.
4. Arcs/Edges: Connect nodes and indicate the dependencies between them. An arc
from a chance node to a decision node indicates that the decision depends on the
outcome of the uncertain event.
5. Conditional Probability Tables (CPTs): Provide the probabilities associated with
each outcome of a chance node given the different possible parent configurations.
Let's consider an example of a decision network in the context of medical diagnosis:
Scenario: A patient is exhibiting certain symptoms, and a doctor needs to decide
whether the patient has the flu or a common cold. The doctor can order a test to gather more
information, but the test is not perfect and can yield false positives or false negatives.
Decision Node (D): "Order Test" or "Do Not Order Test." The doctor must decide
whether to order a diagnostic test.
Chance Node (C): "Actual Diagnosis." This node represents the true health condition
of the patient, either "Flu" or "Cold."
Utility Node (U): "Patient Health Utility." This node represents the overall well-being
of the patient based on the correct or incorrect diagnosis and treatment decisions.
Arcs/Edges:
An arc from C to D, indicating that the decision of whether to order a test depends
on information about the actual health condition.
An arc from C to U, indicating that the patient's health utility depends on the
correct diagnosis.
Conditional Probability Tables (CPTs):
The prior probabilities of "Flu" and "Cold" for the actual diagnosis.
The probability of a positive or negative test result given the true health condition,
capturing the test's false positives and false negatives.
Utilities:
Values indicating the desirability of outcomes based on correct or incorrect
decisions and diagnoses.
This decision network allows the doctor to make informed decisions considering the
uncertainties associated with the patient's health condition and the diagnostic test.
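A minimal sketch of how such a network could be evaluated, with purely illustrative
probabilities and utilities; each decision is scored by its expected utility over the possible
diagnoses.

```python
# Evaluate a tiny decision network: for each decision, sum probability * utility
# over the possible actual diagnoses. All numbers below are illustrative.

p_diagnosis = {"Flu": 0.3, "Cold": 0.7}   # prior over the actual diagnosis

# Utility of each (decision, diagnosis) pair, e.g. testing costs a little but
# improves treatment when the patient actually has the flu.
utility = {
    ("Order Test", "Flu"): 80, ("Order Test", "Cold"): 60,
    ("Do Not Order Test", "Flu"): 40, ("Do Not Order Test", "Cold"): 70,
}

def expected_utility(decision):
    return sum(p * utility[(decision, d)] for d, p in p_diagnosis.items())

for decision in ("Order Test", "Do Not Order Test"):
    print(decision, expected_utility(decision))
# Order Test: 0.3*80 + 0.7*60 = 66.0 ; Do Not Order Test: 0.3*40 + 0.7*70 = 61.0

best = max(("Order Test", "Do Not Order Test"), key=expected_utility)
print("Best decision:", best)  # Order Test
```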
In practice, decision networks are used to model complex decision-making problems
in various fields, including finance, healthcare, and operations research. They provide a
systematic way to analyze decision scenarios, incorporate uncertainty, and optimize decision
strategies based on utility considerations.