
AI & ML [BDS602]

B. E. VI SEM(2024-2025)

DEPARTMENT OF CSE-DATA SCIENCE

RNS INSTITUTE OF TECHNOLOGY


Affiliated to VTU, Recognized by GOK, Approved by AICTE, New Delhi
NAAC 'A+ Grade' Accredited; NBA Accredited (UG: CSE, ECE, ISE, EIE and EEE)
Channasandra, Dr. Vishnuvardhan Road, Bengaluru - 560 098
Ph: (080) 28611880, 28611881  URL: [Link]
Sub Name: AI & ML    Sub Code: BDS602
Prepared by: Dr. S J Savita, Assistant Professor, CSE-DS, RNSIT

Table of Contents
Module-1
1. Introduction
   I. What is AI?
   II. Foundations of AI
   III. History of AI
2. Intelligent Agents
   IV. Agents and environment
   V. Concept of Rationality
   VI. The nature of environment
   VII. The structure of agents



Introduction:
The foundations of AI were laid with Boolean logic, developed by the mathematician George Boole, and by other researchers. Since the invention of the computer in 1943, AI has been of interest to researchers, who have aimed to make machines as intelligent as, or more intelligent than, humans.

Intelligence is a property of mind that encompasses many related mental abilities, such as the
capabilities to
✓ reason
✓ plan
✓ solve problems
✓ think abstractly
✓ comprehend ideas and language and
✓ learn

I. What is AI?
John McCarthy coined the term "Artificial Intelligence" in the mid-1950s and defined it as "the science and engineering of making intelligent machines." AI is about teaching machines to learn, to act, and to think as humans do.

We can organize definitions of AI into four categories, laid out in the grid below:
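(The original figure is not reproduced here; the standard four-quadrant grid, after Russell and Norvig, is:)

                        Human performance        Ideal rationality
Thought processes:      Thinking Humanly         Thinking Rationally
Behaviour:              Acting Humanly           Acting Rationally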

➢ The definitions on top are concerned with thought processes and reasoning, whereas
the ones on the bottom address behaviour.
➢ The definitions on the left measure success in terms of conformity to human
performance whereas the ones on the right measure against an ideal performance
measure called rationality.
➢ A system is rational if it does the "right thing," given what it knows.
➢ Historically, all four approaches to AI have been followed, each by different people
with different methods.
➢ A human-centred approach must be in part an empirical science, involving observations
and hypotheses about human behaviour.
➢ A rationalist’s approach involves a combination of mathematics and engineering. The
various groups have both disparaged and helped each other.



Let us look at the four approaches in more detail.

Thinking humanly: The cognitive modelling approach
If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds.
There are three ways to do this:
1. through introspection: trying to catch our own thoughts as they go by;
2. through psychological experiments: observing a person in action; and
3. through brain imaging: observing the brain in action.
Acting humanly: The Turing Test approach
• The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence.
• A computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a computer.
• This test is used to evaluate whether a computer can act humanly.
• To pass the test under current scenarios, the computer would need to possess the following capabilities:
➢ natural language processing to enable it to communicate successfully in English;
➢ knowledge representation to store what it knows or hears;
➢ automated reasoning to use the stored information to answer questions and to draw new conclusions;
➢ machine learning to adapt to new circumstances and to detect and extrapolate patterns.


• Total Turing Test includes a video signal so that the interrogator can test the subject’s
perceptual abilities, as well as the opportunity for the interrogator to pass physical objects
through the hatch.
• To pass the total Turing Test, the computer will need computer vision to perceive objects,
and robotics to manipulate objects and move about.
These six disciplines compose most of AI.
✓ The Turing test, proposed by Alan Turing (1950)

✓ Human interrogator is connected to either another human or a machine in another room.


✓ The interrogator may ask questions in text via a computer terminal and gets answers from the other room.
✓ If the human interrogator cannot distinguish the human from the machine, the machine
is said to be intelligent.
Thinking rationally: The "laws of thought" approach
Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises.
Example:
Socrates is a man;
all men are mortal;
therefore, Socrates is mortal. -- logic
There are two main obstacles to this approach.
1. It is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain.
2. There is a big difference between solving a problem "in principle" and solving it "in practice."


Acting rationally:
The rational agent approach:
• An agent is just something that acts.
• All computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
• A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
• In the laws of thought approach to AI, the emphasis was on correct inferences.
• On the other hand, correct inference is not all of rationality; in some situations, there is no
provably correct thing to do, but something must still be done.
• For example, recoiling from a hot stove is a reflex action that is usually more successful than
a slower action taken after careful deliberation.
II. The Foundations of AI
1. Philosophy (the study of the fundamental nature of knowledge):
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
➢ Aristotle (384–322 B.C.) was the first to formulate a precise set of laws governing the
rational part of the mind. He developed an informal system of syllogisms for proper
reasoning, which in principle allowed one to generate conclusions mechanically, given
initial premises. E.g. all dogs are animals; all animals have four legs; therefore, all dogs
have four legs.
➢ Thomas Hobbes (1588–1679) proposed that reasoning was like numerical computation: that "we add and subtract in our silent thoughts."
➢ Rene Descartes (1596–1650) gave the first clear discussion of the distinction between
mind and matter and of the problems that arise.
➢ The empiricism movement started with Francis Bacon (1561–1626).
➢ The confirmation theory of Carnap and Carl Hempel (1905-1997) attempted to
analyse the acquisition of knowledge from experience.


➢ Carnap's book The Logical Structure of the World (1928) defined an explicit
computational procedure for extracting knowledge from elementary experiences. It was
probably the first theory of mind as a computational process.
➢ The final element in the philosophical picture of the mind is the connection between knowledge and action. This question is vital to AI because intelligence requires action as well as reasoning.
2. Mathematics:
• What are the formal rules to draw valid conclusions?
• What can be computed?
➢ Formal science required a level of mathematical formalization in three fundamental
areas: logic, computation, and probability.
➢ Logic: George Boole (1815–1864) worked out the details of propositional, or Boolean, logic. In 1879, Gottlob Frege (1848–1925) extended Boole's logic to include objects and relations, creating the first-order logic that is used today.
➢ First-order logic contains predicates, quantifiers, and variables.
➢ Examples:
Philosopher(a) ⇒ Scholar(a)
∀x, effect_corona(x) ⇒ quarantine(x)
∀x, King(x) ∧ Greedy(x) ⇒ Evil(x)
➢ Alfred Tarski (1902–1983) introduced a theory of reference that shows how to relate
the objects in a logic to objects in the real world.
➢ Logic and computation: The first nontrivial algorithm is thought to be Euclid's algorithm for computing greatest common divisors (GCD); a short sketch follows below.
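As a concrete illustration (added here, not part of the original notes), Euclid's algorithm takes only a few lines of Python:

def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b);
    # when the remainder is zero, a holds the greatest common divisor.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # -> 12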
• Besides logic and computation, the third great contribution of mathematics to AI is probability. The Italian Gerolamo Cardano (1501–1576) first framed the idea of probability, describing it in terms of the possible outcomes of gambling events.
• Thomas Bayes (1702–1761) proposed a rule for updating probabilities in the light of new evidence. Bayes' rule underlies most modern approaches to uncertain reasoning in AI systems.
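Bayes' rule itself is compact: P(H | E) = P(E | H) P(H) / P(E). A minimal numeric sketch (the scenario and values are illustrative, not from the notes):

# Hypothetical example: update the belief that a patient has a disease (H)
# after a positive test result (E), using Bayes' rule.
p_h = 0.01              # prior P(H)
p_e_given_h = 0.95      # test sensitivity P(E | H)
p_e_given_not_h = 0.05  # false-positive rate P(E | not H)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability of E
p_h_given_e = p_e_given_h * p_h / p_e                  # posterior P(H | E)
print(round(p_h_given_e, 3))  # -> 0.161

Even a very accurate test leaves the posterior modest when the prior is small, which is exactly the kind of reasoning Bayes' rule formalizes.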
3. Economics:
• How should we make decisions so as to maximize payoff?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?


• The science of economics got its start in 1776, when the Scottish philosopher Adam Smith treated it as a science, using the idea that economies can be thought of as consisting of individual agents maximizing their own economic well-being.
➢ Decision theory, which combines probability theory with utility theory, provides a formal and complete framework for decisions (economic or otherwise) made under uncertainty, that is, in cases where probabilistic descriptions appropriately capture the decision maker's environment.
➢ Von Neumann and Morgenstern’s development of game theory included the
surprising result that, for some games, a rational agent should adopt policies that are
randomized. Unlike decision theory, game theory does not offer an unambiguous
prescription for selecting actions.
➢ Operations research: For the most part, economists did not address the third question listed above, namely, how to make rational decisions when payoffs from actions are not immediate but instead result from several actions taken in sequence. This topic was pursued in the field of operations research, which emerged in World War II from efforts in Britain to optimize radar installations, and later found civilian applications in complex management decisions. The work of Richard Bellman (1957) formalized a class of sequential decision problems called Markov decision processes.
➢ Satisficing: Work in economics and operations research has contributed much to our notion of rational agents, yet for many years AI research developed along entirely separate paths. One reason was the apparent complexity of making rational decisions. The pioneering AI researcher Herbert Simon (1916–2001) won the Nobel Prize in economics in 1978 for his early work showing that models based on satisficing (making decisions that are "good enough," rather than laboriously calculating an optimal decision) gave a better description of actual human behavior (Simon, 1947). Since the 1990s, there has been a resurgence of interest in decision-theoretic techniques for agent systems (Wellman, 1995).

4. Neuroscience:
• How do brains process information?
➢ Neuroscience is the study of the nervous system, particularly the brain.
➢ 335 B.C. Aristotle wrote, "Of all the animals, man has the largest brain in proportion to
his size."


➢ Nicolas Rashevsky (1936, 1938) was the first to apply mathematical models to the study
of the nervous system.

➢ The measurement of intact brain activity began in 1929 with the invention by Hans
Berger of the electroencephalograph (EEG).
➢ The recent development of functional magnetic resonance imaging (fMRI) (Ogawa et
al., 1990; Cabeza and Nyberg, 2001) is giving neuroscientists unprecedentedly detailed
images of brain activity, enabling measurements that correspond in interesting ways to
ongoing cognitive processes.
5. Psychology:
• How do humans and animals think and act?
➢ The behaviourism movement, led by John Watson (1878–1958), insisted on studying only objective measures of the percepts (stimulus) given to an animal and its resulting actions (response). Behaviourism discovered a lot about rats and pigeons but had less success at understanding humans.
➢ Cognitive psychology views the brain as an information-processing device. A common view among psychologists is that a cognitive theory should be like a computer program (Anderson, 1980), i.e., it should describe a detailed information-processing mechanism whereby some cognitive function might be implemented.
➢ Craik specified the three key steps of a knowledge-based agent:
o the stimulus must be translated into an internal representation,
o the representation is manipulated by cognitive processes to derive new internal
representations, and
o these are in turn retranslated back into action.
Craik clearly explained why this was a good design for an agent.

6. Computer engineering:
• How can we build an efficient computer?
➢ For artificial intelligence to succeed, we need two things: intelligence and an artefact.
The computer has been the artefact (object) of choice.
➢ The first operational computer was the electromechanical Heath Robinson, built in
1940 by Alan Turing's team for a single purpose: deciphering German messages.
➢ The first operational programmable computer was the Z-3, invented by Konrad Zuse in Germany in 1941.
➢ The first electronic computer, the ABC, was assembled by John Atanasoff and his
student Clifford Berry between 1940 and 1942 at Iowa State University.
➢ The first programmable machine was a loom, devised in 1805 by Joseph Marie
Jacquard (1752-1834) that used punched cards to store instructions for the pattern to be
woven.
7. Control theory and cybernetics:
• How can artifacts operate under their own control?
➢ Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water
clock with a regulator that maintained a constant flow rate. This invention changed the
definition of what an artifact could do.
➢ Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of systems that maximize an objective function over time. This roughly matches our view of AI: designing systems that behave optimally.
➢ Calculus and matrix algebra are the tools of control theory. The tools of logical inference and computation, by contrast, allowed AI researchers to consider problems such as language, vision, and planning that fell completely outside the control theorist's purview.


8. Linguistics:
• How does language relate to thought?
➢ In 1957, B. F. Skinner published Verbal Behaviour. This was a comprehensive, detailed
account of the behaviourist approach to language learning, written by the foremost
expert in the field.
➢ Noam Chomsky, who had just published a book on his own theory, Syntactic Structures, pointed out that the behaviourist theory did not address the notion of creativity in language.
➢ Modern linguistics and AI were born at about the same time, and grew up together,
intersecting in a hybrid field called computational linguistics or natural language
processing.
➢ The problem of understanding language soon turned out to be considerably more
complex than it seemed in 1957. Understanding language requires an understanding of
the subject matter and context, not just an understanding of the structure of sentences.
➢ Knowledge representation (the study of how to put knowledge into a form that a
computer can reason with)- tied to language and informed by research in linguistics.

III. The History of AI


Important research that laid the groundwork for AI:

➢ The gestation of artificial intelligence (1943–1955)


1943: Warren McCulloch and Walter Pitts proposed a model of artificial neurons as a Boolean circuit performing computations.
• First steps towards connectionist computation and learning (Hebbian learning).
• Marvin Minsky and Dean Edmonds (1951) constructed the first neural network computer, the SNARC.
1949: Donald Hebb: Hebbian learning rule.
1950: Alan Turing's "Computing Machinery and Intelligence":
• First complete vision of AI.

➢ The birth of artificial intelligence (1956)


Dartmouth Workshop bringing together top minds on automata theory, neural nets
and the study of intelligence.


• Allen Newell and Herbert Simon: The logic theorist (first nonnumeric thinking
program used for theorem proving)
• For the next 20 years the field was dominated by these participants.

➢ Early enthusiasm, great expectations (1952–1969)


1952: Newell and Simon: Logic Theorist (LT)
• Introduced the General Problem Solver (GPS).
• Imitation of human problem solving.
1952: Arthur Samuel investigated game playing (checkers) with great success.
1958: John McCarthy
• Inventor of Lisp (the second-oldest high-level language).
• Logic-oriented; Advice Taker (separation between knowledge and reasoning).
1958: Marvin Minsky
• Introduction of microworlds that appear to require intelligence to solve, e.g. the blocks world.
• Anti-logic orientation; Society of Mind.
1959: David Gelernter: Geometry Theorem Prover
1965: J A Robinson: Resolution, a complete algorithm for logical reasoning.
1970: Patrick Winston: Blocks World Learning Theory.
1972: Terry Winograd: SHRDLU
1975: David Waltz, Vision & Constraint propagation.
The most famous microworld was the blocks world, which consists of a set of solid blocks
placed on a tabletop (or more often, a simulation of a tabletop), as shown in Figure 1.4.


➢ A dose of reality (1966–1973)


1966–73: Minsky and Papert: Perceptrons
• AI systems do not scale up well
• Neural network research almost disappears.
• Automatic translation fails.
• Progress was slower than expected (unrealistic predictions).
• Some systems lacked scalability (Combinatorial explosion in search)
• Fundamental limitations on techniques and representations.
➢ Knowledge-based systems: The key to power? (1969–1979)
1969: B. Buchanan: general-purpose vs. domain-specific systems
• DENDRAL (expert system to infer molecular structure from mass-spectrometry data): the first successful knowledge-intensive system.
• Ed Feigenbaum et al.: Heuristic Programming Project (Stanford University).
1975: Shortliffe, Feigenbaum, Buchanan: Expert systems
• MYCIN (expert system to diagnose blood infections): introduction of uncertainty in reasoning.
1976: Newell & Simon: Physical Symbol Systems Hypothesis.
Increase in knowledge representation research
• Logic, frames, semantic nets, …
➢ AI becomes an industry (1980–present)


1981: Fifth generation project in Japan.


1982: McDermott
• Digital Equipment, R1, expert system to configure VAX computers
• Teknowledge (expert system company)
• Intellicorp (expert system company)
➢ The return of neural networks (1986–present)
1982: John Hopfield, Hopfield networks.
1985: Rumelhart, Hinton, Williams: Backpropagation.
1986: Rumelhart, McClelland: PDP books.
Puts an end to the AI winter.
Connectionist revival (1986–present), the return of neural networks:
• Parallel distributed processing (Rumelhart and McClelland, 1986): backpropagation.

➢ AI adopts the scientific method (1987–present)


Scientific method: hypothesis, rigorous empirical experiment, results.
Neats vs. Scruffies
Probabilistic frameworks were adopted:
• in speech recognition: hidden Markov models (HMMs);
• in uncertain reasoning and expert systems: the Bayesian network formalism;
• in neural networks.
➢ The emergence of intelligent agents (1995–present)
SOAR, complete agent architecture (Newell, Laird, Rosenbloom)
The whole agent problem:
• “How does an agent act/behave embedded in real environments with continuous
sensory inputs”.
➢ The availability of very large data sets (2001–present)
2002: For the first time, AI entered the home in the form of Roomba, a vacuum
cleaner.
2006: AI entered the business world; companies like Facebook, Twitter, and Netflix started using AI.
2011: IBM's Watson won Jeopardy!, a quiz show in which it had to answer complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.


2012: Google launched the Android feature "Google Now," which provided information to the user as predictions.
2014: The chatbot "Eugene Goostman" won a competition based on the famous Turing test.
2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.
2019: Google demonstrated "Duplex," an AI virtual assistant that booked a hairdresser appointment over the phone; the lady on the other side did not notice that she was talking to a machine.
2020–present: The AI boom started with the development of key architectures and algorithms such as the transformer architecture in 2017, leading to the scaling and development of large language models exhibiting human-like traits of reasoning, cognition, attention, and creativity. The AI era is often said to have begun around 2022–2023, with the public release of scaled large language models such as ChatGPT.
AI has now developed to a remarkable level. Deep learning, big data, and data science are booming fields, and companies like Google, Facebook, IBM, and Amazon are working with AI to create remarkable systems. The future of Artificial Intelligence is inspiring, with ever higher levels of intelligence expected.

Intelligent Agents:

An intelligent agent perceives its environment via sensors and acts rationally upon that environment with its effectors (actuators). Hence, an agent receives percepts one at a time and maps this percept sequence to actions.
IV. Agents and environment

Agent: An Agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.

➢ A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and
other body parts for actuators.
➢ A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.

Dr. S J Savita, Assistant Professor, CSE-DS, RNSIT Page- 15


V. Sub Name: AI & ML Sub Code: BDS602
VI.
VII.

➢ A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.

This simple idea is illustrated in Figure 2.1.

Percept: We use the term percept to refer to the agent's perceptual inputs at any given instant.
Percept Sequence: An agent's percept sequence is the complete history of everything the agent
has ever perceived.
Agent function: Mathematically speaking, we say that an agent's behaviour is described by
the agent function that maps any given percept sequence to an action.
Agent program: Internally, the agent function for an artificial agent will be implemented by
an agent program. It is important to keep these two ideas distinct. The agent function is an
abstract mathematical description; the agent program is a concrete implementation, running on
the agent architecture.
To illustrate these ideas, we will use a very simple example-the vacuum-cleaner world shown
in Fig 2.2. This particular world has just two locations: squares A and B. The vacuum agent
perceives which square it is in and whether there is dirt in the square. It can choose to move
left, move right, suck up the dirt, or do nothing. One very simple agent function is the
following: if the current square is dirty, then suck, otherwise move to the other square. A partial
tabulation of this agent function is shown in Fig 2.3.
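Figure 2.3 is not reproduced here; the partial tabulation, reconstructed from the standard textbook example, pairs percept sequences with actions:

Percept sequence                        Action
[(A, Clean)]                            Right
[(A, Dirty)]                            Suck
[(B, Clean)]                            Left
[(B, Dirty)]                            Suck
[(A, Clean), (A, Clean)]                Right
[(A, Clean), (A, Dirty)]                Suck
...                                     ...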


V. Concept of Rationality


A rational agent is one that does the right thing; the right action is the one that will cause the agent to be most successful. That leaves us with the problem of deciding how and when to evaluate the agent's success.
We use the term performance measure for the "how": the criteria that determine how successful an agent is.
✓ Example: an agent cleaning a dirty floor
✓ Performance measure: amount of dirt collected
✓ When to measure: weekly, for better results
What is rational at any given time depends on four things:
• The performance measure defining the criterion of success
• The agent’s prior knowledge of the environment
• The actions that the agent can perform
• The agent’s percept sequence up to now.


Omniscience, Learning and Autonomy:


➢ We need to distinguish between rationality and omniscience. An Omniscient agent knows
the actual outcome of its actions and can act accordingly but omniscience is impossible in
reality.
➢ A rational agent not only gathers information but also learns as much as possible from what it perceives.
➢ If an agent just relies on the prior knowledge of its designer rather than on its own percepts, then the agent lacks autonomy.
➢ A system is autonomous to the extent that its behaviour is determined by its own experience.
➢ A rational agent should be autonomous.
E.g., a clock (lacks autonomy):
➢ no input (percepts)
➢ runs only its own algorithm (prior knowledge)
➢ no learning, no experience, etc.

VI. The nature of environment


Specifying the task environment:
Environments: The performance measure, the environment, and the agent's actuators and sensors come under the heading of the task environment. We also call this the PEAS (Performance, Environment, Actuators, Sensors) description.
Figure 2.4 summarizes the PEAS description for the taxi’s task environment
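The figure itself is not reproduced here; the standard PEAS description for an automated taxi driver (after Russell and Norvig, so treat the exact wording as a reconstruction) is:

Agent type: Taxi driver
Performance measure: safe, fast, legal, comfortable trip; maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering, accelerator, brake, signal, horn, display
Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard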


Properties of task environments:


The range of task environments that might arise in AI is obviously vast. We can, however,
identify a fairly small number of dimensions along which task environments can be
categorized. These dimensions determine, to a large extent, the appropriate agent design and
the applicability of each of the principal families of techniques for agent implementation. First,
we list the dimensions, then we analyse several task environments to illustrate the ideas.

Environment-Types:
1. Accessible vs. inaccessible, or fully observable vs. partially observable: If an agent's sensors can access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise it is partially observable.
2. Deterministic vs. stochastic: If the next state of the environment is completely determined by the current state and the actions selected by the agents, then we say the environment is deterministic; otherwise, it is stochastic.
3. Episodic vs. nonepisodic:


➢ The agent's experience is divided into "episodes." Each episode consists of the agent
perceiving and then acting. The quality of its action depends just on the episode itself, because
subsequent episodes do not depend on what actions occur in previous episodes.
➢ Episodic environments are much simpler because the agent does not need to think ahead.
4. Static vs. dynamic: If the environment can change while an agent is deliberating, then we
say the environment is dynamic for that agent; otherwise it is static.
5. Discrete vs. continuous: If there are a limited number of distinct, clearly defined percepts and actions, we say that the environment is discrete. Otherwise, it is continuous.
Figure 2.6 lists the properties of a number of familiar environments.
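The figure is not reproduced here; a few representative rows, reconstructed from the standard textbook table (treat the classifications as illustrative), are:

Task environment      Observable   Deterministic   Episodic     Static    Discrete
Crossword puzzle      Fully        Deterministic   Sequential   Static    Discrete
Chess with a clock    Fully        Deterministic   Sequential   Semi      Discrete
Taxi driving          Partially    Stochastic      Sequential   Dynamic   Continuous
Image analysis        Fully        Deterministic   Episodic     Semi      Continuous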

VII. The structure of agents:

➢ The job of AI is to design the agent program: a function that implements the agent mapping from percepts to actions. We assume this program will run on some sort of computing device, which we will call the architecture.
➢ The architecture might be a plain computer, or it might include special-purpose hardware
for certain tasks, such as processing camera images or filtering audio input. It might also
include software that provides a degree of insulation between the raw computer and the agent
program, so that we can program at a higher level. In general, the architecture makes the
percept’s from the sensors available to the program, runs the program, and feeds the program's
action choices to the effectors as they are generated.
➢ The relationship among agents, architectures, and programs can be summed up as
follows: agent = architecture + program


Agent programs:
➢ Intelligent agents accept percepts from an environment and generate actions. The early versions of agent programs will have a very simple form (Figure 2.7).
➢ Each will use some internal data structures that will be updated as new percepts
arrive.
➢ These data structures are operated on by the agent's decision-making procedures to
generate an action choice, which is then passed to the architecture to be executed

Types of agents: Agents can be grouped into four classes based on their degree of perceived
intelligence and capability:
➢ Simple Reflex Agents
➢ Model-Based Reflex Agents
➢ Goal-Based Agents
➢ Utility-Based Agents
Simple reflex agents:
➢ Simple reflex agents ignore the rest of the percept history and act only on the basis
of the current percept.
➢ The agent function is based on the condition-action rule.
➢ If the condition is true, then the action is taken, else not. This agent function only
succeeds when the environment is fully observable.
The program in Figure 2.8 is specific to one particular vacuum environment. A more general
and flexible approach is first to build a general-purpose interpreter for condition– action rules
and then to create rule sets for specific task environments. Figure 2.9 gives the structure of this
general program in schematic form, showing how the condition–action rules allow the agent to
make the connection from percept to action. The agent program, which is also very simple, is

shown in Figure 2.10. The agent in Figure 2.10 will work only if the correct decision can be made on the basis of the current percept alone, that is, only if the environment is fully observable.
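To make the rule-interpreter idea concrete, here is a minimal Python sketch (an assumed rendering for the vacuum world, not the textbook's own code): condition-action rules are stored as a table, and the agent matches the current percept against them, ignoring all percept history.

# Condition-action rules: (location, status) -> action.
# The rule format is illustrative.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    # Act on the current percept alone; no internal state is kept.
    location, status = percept
    return RULES[(location, status)]

print(simple_reflex_agent(("A", "Dirty")))  # -> Suck

Because the table is keyed only on the current percept, the agent fails the moment the relevant state is not directly observable, which is exactly the limitation noted above.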

Model-based reflex agents:


➢ The Model-based agent can work in a partially observable environment, and track
the situation.
➢ A model-based agent has two important factors:


• Model: It is knowledge about "how things happen in the world," so it is called


a Model-based agent.
• Internal State: It is a representation of the current state based on percept history.
Figure 2.11 gives the structure of the model-based reflex agent with internal state, showing
how the current percept is combined with the old internal state to generate the updated
description of the current state, based on the agent’s model of how the world works. The agent
program is shown in Figure 2.12. The interesting part is the function UPDATE-STATE, which
is responsible for creating the new internal state description.
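A minimal Python sketch of the same idea (illustrative, assuming the vacuum world; UPDATE-STATE here is a toy model that merely remembers the last known status of each square):

state = {}          # internal state: last known status of each square
last_action = None  # the most recent action, fed back into the model

def update_state(state, action, percept):
    # Combine the old internal state, the last action, and the new percept.
    location, status = percept
    new_state = dict(state)
    new_state[location] = status
    return new_state

def model_based_reflex_agent(percept):
    global state, last_action
    state = update_state(state, last_action, percept)
    location, _ = percept
    if state.get(location) == "Dirty":
        action = "Suck"
    elif state.get("A") == "Dirty":
        action = "Left"   # we must be at B and the model says A is dirty
    elif state.get("B") == "Dirty":
        action = "Right"  # we must be at A and the model says B is dirty
    else:
        action = "Right" if location == "A" else "Left"  # explore
    last_action = action
    return action

Unlike the simple reflex agent, this one can act sensibly on squares it is not currently observing, because the internal state preserves what earlier percepts revealed.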

Goal-based agents:
➢ A goal-based agent has an agenda.
➢ It operates based on a goal in front of it and makes decisions based on how best to
reach that goal.

Dr. S J Savita, Assistant Professor, CSE-DS, RNSIT Page- 23


Sub Name: AI & ML Sub Code: BDS602

➢ A goal-based agent operates as a search and planning function, meaning it targets


the goal ahead and finds the right action in order to reach it.
➢ Expansion of model-based agent.
Figure 2.13 shows the goal-based agent’s structure.
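A small Python sketch of the goal-based idea (the world model and goal test are assumptions for the vacuum world, not the textbook's code): the agent simulates each candidate action with its model and picks one whose predicted outcome satisfies the goal.

GOAL = {"A": "Clean", "B": "Clean"}  # desired world state

def predict(state, location, action):
    # Toy world model: only "Suck" changes the world state.
    new_state = dict(state)
    if action == "Suck":
        new_state[location] = "Clean"
    return new_state

def goal_based_agent(state, location):
    for action in ("Suck", "Right", "Left"):
        if predict(state, location, action) == GOAL:
            return action  # this action is predicted to achieve the goal
    # No single action achieves the goal: move towards the other square.
    return "Right" if location == "A" else "Left"

print(goal_based_agent({"A": "Dirty", "B": "Clean"}, "A"))  # -> Suck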

Utility-based agents:
➢ A utility-based agent is an agent that acts based not only on what the goal is, but the
best way to reach that goal.
➢ The Utility-based agent is useful when there are multiple possible alternatives, and
an agent has to choose in order to perform the best action.
➢ The term utility can be used to describe how "happy" the agent is.
The utility-based agent structure appears in Figure 2.14.
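A matching Python sketch (the utility function is an illustrative assumption): instead of a binary goal test, the agent ranks the predicted outcome of each action by a utility function and picks the maximum.

def predict(state, location, action):
    # Same toy world model as above: only "Suck" changes the state.
    new_state = dict(state)
    if action == "Suck":
        new_state[location] = "Clean"
    return new_state

def utility(state):
    # How "happy" the agent is: one point per clean square.
    return sum(1 for status in state.values() if status == "Clean")

def utility_based_agent(state, location):
    actions = ("Suck", "Right", "Left")
    # Choose the action whose predicted successor state maximizes utility.
    return max(actions, key=lambda a: utility(predict(state, location, a)))

print(utility_based_agent({"A": "Dirty", "B": "Clean"}, "A"))  # -> Suck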


Learning agents:
We have described agent programs with various methods for selecting actions. We have not,
so far, explained how the agent programs come into being. In his famous early paper, Turing
(1950) considers the idea of actually programming his intelligent machines by hand. He
estimates how much work this might take and concludes “Some more expeditious method
seems desirable.” The method he proposes is to build learning machines and then to teach them.
In many areas of AI, this is now the preferred method for creating state-of-the-art systems.
Learning has another advantage, as we noted earlier:
It allows the agent to operate in initially unknown environments and to become more competent
than its initial knowledge alone might allow.
A learning agent can be divided into four conceptual components, as shown in Figure 2.15.
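The figure is not reproduced here; in the standard account the four components are the learning element (which makes improvements), the performance element (which selects external actions), the critic (which tells the learning element how well the agent is doing with respect to the performance standard), and the problem generator (which suggests exploratory actions).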
