
FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

(SCSB1311)

UNIT – V

APPLICATIONS
AI Applications

Fig 1

Healthcare

One of the deepest impacts AI has created is in healthcare. A device as common as a Fitbit or an Apple Watch collects data such as the wearer's sleep patterns, calories burnt and heart rate, which can help with early detection, personalization and even disease diagnosis. When powered with AI, such a device can monitor trends, flag abnormalities and even schedule a visit to the nearest doctor on its own. AI is also of great help to doctors, supporting decision-making and research: it has been used to predict ICU transfers, improve clinical workflows and even pinpoint a patient's risk of hospital-acquired infections.

Automobile

At a stage where automobiles are changing from an engine with a chassis around it into a software-controlled intelligent machine, the role of AI cannot be underestimated. In the push towards self-driving cars, in which Tesla's Autopilot has been the frontrunner, data is collected from all the Teslas running on the road and fed into machine learning algorithms. The assessments of the car's two onboard chips are matched by the system and acted upon only if the input from both is the same. AI can be witnessed working its magic in robots producing the initial nuts and bolts of a vehicle, or in an autonomous car using machine learning and computer vision to safely make its way through traffic.

Banking and Finance

The Banking and Finance industry was one of the early adopters of Artificial Intelligence. From chatbots offered by banks, for instance SIA by the State Bank of India, to intelligent robo-traders by Aidya and Nomura Securities for autonomous, high-frequency trading, the uses are innumerable. Features like AI bots, digital payment advisers and biometric fraud detection mechanisms bring higher-quality services to a wider customer base. The adoption of AI in banking continues to transform companies within the industry, providing greater value and more personalized experiences to their customers, reducing risk, and increasing opportunities involving the financial engines of our modern economy.

Surveillance

AI has made it possible to develop face-recognition tools that can be used for surveillance and security purposes. This empowers systems to monitor footage in real time, which can be a path-breaking development for public safety. Manual monitoring of CCTV cameras requires constant human attention, so it is prone to errors and fatigue; AI-based surveillance is automated, works 24/7 and provides real-time insights. According to a report by the Carnegie Endowment for International Peace, at least 75 of the 176 countries surveyed are using AI tools for surveillance purposes. Some 400 million CCTV cameras are already in place, powered by AI technologies, primarily face recognition.

Social Media

All of us love social media, don't we? Social media is not just a platform for networking and expressing oneself; it subconsciously shapes our choices, ideologies and temperament. All this is due to the Artificial Intelligence tools that work silently in the background, showing us posts that we "might" like and advertising products that "might" be useful based on our search and browsing history. For example, Instagram has revealed how it uses AI to customize content for the Explore tab. AI also helps social media advertising through its unprecedented ability to run paid ads to platform users based on highly granular demographic and behavioral targeting. There are even AI tools that will actually write Facebook and Instagram ads for us. Another huge benefit of AI in social media is that it allows marketers to analyze and track every step that they take.

Entertainment

The entertainment industry, with the advent of online streaming services like Netflix and Amazon Prime, relies heavily on the data collected from users. This helps with recommendations based on previously viewed content, not only to deliver accurate suggestions but also to create content that a majority of viewers would like. With new content being created every minute, it is very difficult to classify it and make it easy to search. AI tools analyze the contents of videos frame by frame and identify objects to add appropriate tags. AI is additionally helping media companies make strategic decisions.

Education

In the education sector too, there are a number of problems that can be solved by the implementation of AI, a few of them being automated marking software, content-retention techniques and suggesting required improvements. This can help teachers monitor not just the academic progress of students but also their psychological, mental and physical well-being and all-round development. It would also help extend the reach of education to areas where quality educators cannot be physically present. The case-based simulations offered by Harvard's graduate schools are one such use.

Space Exploration

AI systems are being developed to reduce the risk to human life in venturing into the vast realms of the undiscovered and unraveled universe, a very risky task that astronauts must take up. Unmanned space-exploration missions like the Mars rovers are possible thanks to AI. It has helped us discover numerous exoplanets, stars, galaxies and, more recently, two new planets in our very own solar system. NASA is also working with AI applications for space exploration to automate image analysis, to develop autonomous spacecraft that could avoid space debris without human intervention, and to make communication networks more efficient and distortion-free using AI-based devices.

Gaming

In the gaming industry too, video game systems powered by AI are ushering us into a new era of immersive gaming experiences. AI is employed to generate responsive, adaptive or intelligent behaviors, primarily in non-player characters (NPCs), similar to human-like intelligence, in video games. It serves to enhance the game-player experience rather than to advance machine learning or decision-making as such. AI has also been playing a huge role in creating video games and tailoring them to players' preferences. Matthew Guzdial from the University of Alberta and his team are working towards leveraging AI's power to help video gamers create the precise game that they want to play.

Robotics

With increasing developments in the field of AI, robots are becoming more efficient at performing tasks that were earlier too complex. The idea of complete automation can be realized only with the help of AI, where the system does not just perform the specified task but also monitors, inspects and improves it without any human intervention. AI helps robots learn processes and perform tasks with complete autonomy, since robots are designed to perform repetitive tasks with utmost precision and increased speed. AI has been introducing flexibility and learning capabilities into previously rigid robot applications. These benefits are expected to reinforce market growth.

Agriculture

Artificial Intelligence is changing the way we practice one of our most primitive and basic professions: farming. The use of AI in agriculture can be seen in agricultural robots, predictive analytics, and crop and soil monitoring. In addition, drones are used for spraying insecticides and detecting weed formation in large farms. This is going to help firms like Blue River Technologies better manage farms. AI has also enhanced crop production and improved real-time monitoring, harvesting, processing and marketing.
E-Commerce

This is one of the most widely used applications of Artificial Intelligence. Different departments of e-commerce, including logistics, demand prediction, intelligent marketing, better personalization and chatbots, are being disrupted by AI. The e-commerce industry, with Amazon as a prominent player, was one of the first to embrace AI, and its use will only grow with time. E-commerce retailers are increasingly turning to chatbots or digital assistants to provide 24x7 support to their online buyers. Built using AI technologies, chatbots are becoming more intuitive and are enabling a far better customer experience. A number of other industries are on the verge of transformation by AI; this is in no way an exhaustive list, but it covers probably the most plausible ones in the near future.

Language Models

Language modeling (LM) is the use of various statistical and probabilistic techniques to determine the probability of a given sequence of words occurring in a sentence. Language models are used in natural language processing (NLP) applications, particularly ones that generate text as an output.

There are primarily two types of language models:

 Statistical Language Models
 Neural Language Models

Common applications of language models include:

 Speech Recognition
 Machine Translation
 Sentiment Analysis
 Text Suggestions
 Parsing Tools

A language model is an AI model that has been trained to predict the next word or words in a text based on the preceding words. It is part of the technology that predicts the next word you want to type on your mobile phone, allowing you to complete the message faster. The task of predicting the next word(s) is referred to as self-supervised learning: it does not need labels, it just needs lots of text, and the process applies its own labels to the text.

Language models can be monolingual or multilingual. Wikipedia suggests that there should be separate language models for each document collection; however, Jeremy and Sebastian found that Wikipedia-trained models have sufficient overlap that this is not necessary.

Language models fall broadly into two main groups:

Statistical Language Models: These models use traditional statistical techniques like N-grams, Hidden Markov Models (HMM) and certain linguistic rules to learn the probability distribution of words.

Neural Language Models: These are new players in the NLP town and have surpassed the
statistical language models in their effectiveness. They use different kinds of Neural Networks to
model language.
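As a small, hedged illustration of the statistical approach, an N-gram model can be sketched as a bigram counter; the corpus below is a toy example invented for demonstration, and real models train on far larger corpora with smoothing for unseen pairs:

```python
from collections import defaultdict

# Toy corpus; a real statistical LM is trained on millions of words.
corpus = "the bird pecks the grains the bird eats the grains".split()

# Count bigram occurrences: counts[w1][w2] = number of times w2 follows w1.
counts = defaultdict(lambda: defaultdict(int))
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def next_word_probability(w1, w2):
    """P(w2 | w1), estimated by relative frequency."""
    total = sum(counts[w1].values())
    return counts[w1][w2] / total if total else 0.0

print(next_word_probability("the", "bird"))  # → 0.5 ("bird" follows "the" 2 of 4 times)
```

The probability of a whole sentence is then the product of these conditional probabilities, which is exactly the quantity the definition above describes.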

Language modeling is thus the use of statistical and probabilistic techniques to determine the probability of a given sequence of words occurring in a sentence, and language models are used in NLP applications in general, particularly ones that generate text as an output.

NLP is an exciting field at the cutting edge of ML, where practitioners strive to reduce errors and improve the abilities of NLP systems. Language models are the base on which this technology rests: the better the language model, the better the model trains and the more accurate the final result.

Artificial Intelligence for Information Retrieval

This section describes the most prominent approaches to applying Artificial Intelligence technologies to information retrieval (IR). Information retrieval is a key technology for knowledge management. It deals with the search for information and with the representation, storage and organization of knowledge. Information retrieval is concerned with search processes in which a user needs to identify a subset of information that is relevant to their information need within a large amount of knowledge. The information seeker formulates a query trying to describe that information need. The query is compared to document representations which were extracted during an indexing phase. The representations of documents and queries are typically matched by a similarity function such as the cosine. The most similar documents are presented to the users, who can evaluate their relevance with respect to the problem at hand (Belkin, 2000). The problem of properly representing documents and of matching imprecise representations soon led to the application of techniques developed within Artificial Intelligence to information retrieval.

In the early days of computer science, information retrieval (IR) and artificial intelligence (AI) developed in parallel. In the 1980s, they started to cooperate, and the term intelligent information retrieval was coined for AI applications in IR. In the 1990s, information retrieval saw a shift from set-based Boolean retrieval models to ranking systems like the vector space model and probabilistic approaches. These approximate reasoning systems opened the door for more intelligent value-added components. The large number of text documents available in professional databases and on the internet has led to a demand for intelligent methods in text retrieval and to considerable research in this area. Better preprocessing to extract more knowledge from data has become an important way to improve systems: off-the-shelf approaches promise worse results than systems adapted to the users, domain and information needs. Today, most techniques developed in AI have been applied to retrieval systems with more or less success. When data from users is available, systems often use machine learning to optimize their results.

Information Retrieval

An information retrieval (IR) model chooses and ranks relevant documents based on a user's query. Because documents and queries are represented in the same way, document selection and ranking can be formalized using matching functions that return a retrieval status value (RSV) for each document in a collection. The majority of IR systems represent document contents using a collection of descriptors, known as terms, drawn from a vocabulary V.
Fig 2

The query-document matching function in an IR model can be defined in the following ways:

 As an estimate of the probability of user relevance for each document and query, in relation to a set of training documents.
 As a similarity function between queries and documents, computed in a vector space.
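The second option can be sketched as follows; this is a minimal illustration assuming simple term-count vectors and the cosine as the matching function, with invented documents and query:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words term-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy collection; each document becomes a term-count vector over vocabulary V.
docs = {
    "d1": "ai improves medical diagnosis",
    "d2": "self driving cars use ai",
    "d3": "stock trading and finance",
}
query = Counter("ai in medical diagnosis".split())

# Rank documents by retrieval status value (here, RSV = cosine similarity).
ranked = sorted(docs, key=lambda d: cosine(query, Counter(docs[d].split())), reverse=True)
print(ranked)  # → ['d1', 'd2', 'd3']
```

Production systems typically weight the term counts (e.g. with TF-IDF) before computing the cosine, but the matching principle is the same.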

Types of Information Retrieval Models

Classic IR Model

It is the most basic and straightforward IR model, founded on mathematical knowledge that is easily recognized and understood. The three classic IR models are the Boolean, vector and probabilistic models.
Non-Classic IR Model

It is diametrically opposed to the classic IR model. Rather than being based on probability, similarity and Boolean operations, such IR models are based on other ideas. Non-classical IR models include situation theory models, information logic models, and interaction models.

Alternative IR Model

It is an improvement to the traditional IR model that makes use of some unique approaches from
other domains. Alternative IR models include fuzzy models, cluster models, and latent semantic
indexing (LSI) models.

Classical Problem in Information Retrieval (IR) System

Ad-hoc retrieval is the classical problem in an information retrieval system. In an ad-hoc retrieval problem, a query stated in natural language is presented to the system, which must return the relevant information.

The difficulty is that results which do not satisfy the search criteria may be returned alongside relevant ones. For example, when we search for something on the Internet, the engine returns some sites that are relevant to our search, but possibly also some non-relevant results. This is the ad-hoc retrieval issue.

Components of Information Retrieval/ IR Model

Acquisition

Documents and other objects are selected from various websites:

1. Documents that are mostly text-based: full texts, titles, abstracts.

2. Other research-based objects such as data, statistics, photos, maps, copyrights, soundscapes, and so on.

3. Web crawlers collect this data and store it in a database.

Representation

The representation of information retrieval system mainly involves indexing the following:

 Summarizing and abstracting
 Bibliographic information: author, title, sources, date, etc.
 Information about metadata
 Classification and clustering
 Field and limit organization
 Basic index, supplemental index limits

File Organisation

There are mainly two categories of file organization, sequential and inverted; a mixture of the two is a combination.

Sequential

It organizes records document by document, based on document data.

Inverted

It provides a list of records under each term, term by term.

Combination

A synthesis of inverted indexes and sequential document files.

When just citations are retrieved, there is no requirement for document files. The inverted organization leads to approaches suited to large files and to efficient computer retrieval.
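A minimal sketch of the inverted organization, assuming a toy in-memory collection (the documents and terms are invented for illustration):

```python
from collections import defaultdict

docs = {
    1: "ai improves medical diagnosis",
    2: "ai in self driving cars",
    3: "medical robots use ai",
}

# Inverted (term-by-term) organization: each term maps to the set of
# document ids that contain it, so lookup per query term is immediate.
inverted = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        inverted[term].add(doc_id)

print(sorted(inverted["ai"]))       # → [1, 2, 3]
print(sorted(inverted["medical"]))  # → [1, 3]
```

This is why the inverted organization scales to large files: answering a query touches only the postings of the query terms rather than scanning every document sequentially.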

Query

When a user inputs a query into the system, an IR process begins. Queries, such as search strings in web search engines, are explicit representations of information needs. A query in an information retrieval system does not uniquely identify a particular object in the collection; instead, numerous objects may match the query, possibly with varying degrees of relevance.

Information Extraction

Information Extraction's (IE) main goal is to find meaningful information in a document set. IE is one type of IR: it automatically obtains structured information from a set of unstructured documents, or corpus. IE focuses on texts that can be read and written by humans and processes them with NLP (natural language processing). An information retrieval system, by contrast, finds information that is relevant to the user's information need and that is stored in a computer; it returns documents of unstructured text from a large corpus.
An information extraction system used for online text extraction should come at a low cost, be flexible in development and be easy to adapt to new domains. Take a machine's natural language processing as an example: using information extraction, we want to make the machine capable of extracting structured information from documents. The importance of information extraction is driven by the growing amount of information available in unstructured form (data without metadata), for example on the Internet. This knowledge can be made more accessible by transforming it into relational form, or by marking it up with XML tags.
Automated learning systems are generally preferred in information extraction: they decrease extraction faults and reduce dependence on a domain by diminishing the requirement for supervision. IE of structured information relies on the basic content-management principle that "content must be in context to have value". Information Extraction is more difficult than Information Retrieval.

Difference between Information Retrieval and Information Extraction


Information Extraction is not Information Retrieval. Conventional text retrieval methods return a subset of documents that are probably relevant to the query, based on search keywords.
The main goal of IE is to extract meaningful information from a corpus of documents that might be in different languages. Here, meaningful information means things like events, facts, entities and relations. These facts are then usually stored automatically in a database, which may be used to analyze the data for trends, to give a natural-language summary, or simply to serve online access. More formally, Information Extraction gets facts out of documents, while Information Retrieval gets sets of relevant documents.

Aspect                | Information Retrieval                                              | Information Extraction
Focus                 | Document retrieval                                                 | Feature (fact) retrieval
Output                | Returns a set of relevant documents                                | Returns facts out of documents
Goal                  | Find documents relevant to the user's information need             | Extract pre-specified features from documents or display information
Nature of information | Real information is buried inside documents                        | Information is extracted from within the documents
Result format         | A long listing of documents                                        | An aggregate over the entire set
Application           | Used in many search engines; Google is the best IR system for the web | Used in database systems to enter extracted features automatically
Methodology           | Typically uses a bag-of-words model of the source text             | Typically based on some form of semantic analysis of the source text
Theoretical basis     | Mostly uses the theory of information, probability and statistics  | Emerged from research into rule-based systems

Natural Language Processing

Natural Language Processing (NLP) refers to an AI method of communicating with intelligent systems using a natural language such as English. Processing of natural language is required when you want an intelligent system like a robot to perform as per your instructions, when you want to hear a decision from a dialogue-based clinical expert system, and so on. The field of NLP involves making computers perform useful tasks with the natural languages humans use.

The input and output of an NLP system can be:

 Speech
 Written Text

Components of NLP:

There are two components of NLP, as given below.

1. Natural Language Understanding (NLU) − Understanding involves the following tasks:

 Mapping the given input in natural language into useful representations.
 Analysing different aspects of the language.

2. Natural Language Generation (NLG) − It is the process of producing meaningful phrases and sentences in the form of natural language from some internal representation. It involves:

 Text planning − Retrieving the relevant content from the knowledge base.
 Sentence planning − Choosing the required words, forming meaningful phrases and setting the tone of the sentence.
 Text Realization − Mapping the sentence plan into sentence structure.

NLU is harder than NLG.

Difficulties in NLU:

NL has an extremely rich form and structure. It is very ambiguous. There can be different levels of ambiguity:

 Lexical ambiguity − At a very primitive level, such as the word level. For example, is the word "board" a noun or a verb?
 Syntax-level ambiguity − A sentence can be parsed in different ways. For example, "He lifted the beetle with red cap." − Did he use a cap to lift the beetle, or did he lift a beetle that had a red cap?
 Referential ambiguity − Referring to something using pronouns. For example: Rima went to Gauri. She said, "I am tired." − Exactly who is tired?

In addition, one input can have different meanings, and many inputs can mean the same thing.

NLP Terminology:

 Phonology − It is the study of organizing sound systematically.
 Morphology − It is the study of the construction of words from primitive meaningful units.
 Morpheme − It is a primitive unit of meaning in a language.
 Syntax − It refers to arranging words to make a sentence. It also involves determining the structural role of words in the sentence and in phrases.
 Semantics − It is concerned with the meaning of words and how to combine words into meaningful phrases and sentences.
 Pragmatics − It deals with using and understanding sentences in different situations and how the interpretation of the sentence is affected.
 Discourse − It deals with how the immediately preceding sentence can affect the interpretation of the next sentence.
 World Knowledge − It includes general knowledge about the world.

Steps in NLP: There are five general steps –

Lexical Analysis − It involves identifying and analyzing the structure of words. The lexicon of a language is the collection of words and phrases in that language. Lexical analysis divides the whole chunk of text into paragraphs, sentences and words.
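This dividing step can be sketched with simple regular expressions; a rough approximation only, since real tokenizers handle abbreviations, numbers and punctuation far more carefully:

```python
import re

text = "The bird pecks the grains. The school goes to boy."

# Lexical analysis: divide the chunk of text into sentences, then words.
sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
words = [re.findall(r"[A-Za-z]+", s) for s in sentences]

print(sentences)  # two sentences
print(words[0])   # → ['The', 'bird', 'pecks', 'the', 'grains']
```

The word lists produced here are exactly the input the next step, syntactic analysis, operates on.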

Syntactic Analysis (Parsing) − It involves analysis of the words in the sentence for grammar, and arranging the words in a manner that shows the relationships among them. A sentence such as "The school goes to boy" is rejected by an English syntactic analyzer.

Fig 3
Semantic Analysis − It draws the exact, or dictionary, meaning from the text. The text is checked for meaningfulness by mapping syntactic structures to objects in the task domain. The semantic analyzer disregards sentences such as "hot ice-cream".

Discourse Integration − The meaning of any sentence depends upon the meaning of the sentence just before it, and it in turn shapes the meaning of the immediately succeeding sentence.

Pragmatic Analysis − During this step, what was said is re-interpreted in terms of what it actually meant. It involves deriving those aspects of language which require real-world knowledge.

Implementation Aspects of Syntactic Analysis

There are a number of algorithms researchers have developed for syntactic analysis, but we consider only the following simple methods:

 Context-Free Grammar
 Top-Down Parser

Let us see them in detail.

Context-Free Grammar

It is a grammar that consists of rules with a single symbol on the left-hand side of each rewrite rule. Let us create a grammar to parse the sentence –

“The bird pecks the grains”

Articles (DET) − a | an | the

Nouns − bird | birds | grain | grains

Noun Phrase (NP) − Article + Noun | Article + Adjective + Noun

= DET N | DET ADJ N

Verbs − pecks | pecking | pecked

Verb Phrase (VP) − NP V | V NP


Adjectives (ADJ) − beautiful | small | chirping

The parse tree breaks down the sentence into structured parts so that the computer can easily
understand and process it. In order for the parsing algorithm to construct this parse tree, a set of
rewrite rules, which describe what tree structures are legal, need to be constructed.

These rules say that a certain symbol may be expanded in the tree into a sequence of other symbols. According to the first rule, if there are a Noun Phrase (NP) and a Verb Phrase (VP), then the string formed by NP followed by VP is a sentence. The rewrite rules for the sentence are as follows –

S → NP VP

NP → DET N | DET ADJ N

VP → V NP

Lexicon –

DET → a | the

ADJ → beautiful | perching

N → bird | birds | grain | grains

V → peck | pecks | pecking

The parse tree can be created as shown –


Fig 4

Now consider the above rewrite rules. Since V can be rewritten as both "peck" and "pecks", sentences such as "The bird peck the grains" are wrongly permitted, i.e. the subject-verb agreement error is approved as correct.

Merit − It is the simplest style of grammar, and therefore the most widely used one.

Demerits –

They are not highly precise. For example, "The grains peck the bird" is syntactically correct according to the parser, and even though it makes no sense, the parser takes it as a correct sentence.

To achieve high precision, multiple sets of grammar rules need to be prepared. Parsing singular and plural variations, passive sentences, etc. may require completely different sets of rules, which can lead to a huge, unmanageable rule set.

Top-Down Parser

Here, the parser starts with the S symbol and attempts to rewrite it into a sequence of terminal symbols that matches the classes of the words in the input sentence, until it consists entirely of terminal symbols.
These are then checked against the input sentence to see if they match. If not, the process starts over again with a different set of rules. This is repeated until a specific rule is found which describes the structure of the sentence.
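The strategy can be sketched as a small recursive-descent parser over the grammar given earlier; this is one illustrative implementation, not the only possible one:

```python
# A minimal top-down (recursive-descent) parser for the grammar in the text:
# S -> NP VP, NP -> DET N | DET ADJ N, VP -> V NP.
LEXICON = {
    "DET": {"a", "the"},
    "ADJ": {"beautiful", "perching"},
    "N": {"bird", "birds", "grain", "grains"},
    "V": {"peck", "pecks", "pecking"},
}
RULES = {
    "S": [["NP", "VP"]],
    "NP": [["DET", "N"], ["DET", "ADJ", "N"]],
    "VP": [["V", "NP"]],
}

def parse(symbol, words, pos):
    """Try to expand `symbol` starting at words[pos]; yield every end position."""
    if symbol in LEXICON:  # terminal word class: consume one matching word
        if pos < len(words) and words[pos] in LEXICON[symbol]:
            yield pos + 1
        return
    for rule in RULES[symbol]:  # non-terminal: try each rewrite rule in turn
        positions = [pos]
        for part in rule:
            positions = [q for p in positions for q in parse(part, words, p)]
        yield from positions

def is_sentence(text):
    """A string is a sentence if some expansion of S consumes every word."""
    words = text.lower().split()
    return len(words) in set(parse("S", words, 0))

print(is_sentence("the bird pecks the grains"))  # → True
print(is_sentence("the grains peck the bird"))   # → True: syntax only, no semantics
print(is_sentence("bird the pecks"))             # → False
```

Note that the parser accepts "the grains peck the bird" and "the bird peck the grains", which illustrates the precision and agreement demerits of plain context-free grammars discussed earlier.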

Merit − It is simple to implement.

Demerits –

 It is inefficient, as the search process has to be repeated if an error occurs.
 It is slow in operation.

Chatbot

At the most basic level, a chatbot is a computer program that simulates and processes human
conversation (either written or spoken), allowing humans to interact with digital devices as if
they were communicating with a real person. Chatbots can be as simple as rudimentary programs
that answer a simple query with a single-line response, or as sophisticated as digital assistants
that learn and evolve to deliver increasing levels of personalization as they gather and process
information.

There are two main types of chatbots.

Task-oriented (declarative) chatbots are single-purpose programs that focus on performing one
function. Using rules, NLP, and very little ML, they generate automated but conversational
responses to user inquiries. Interactions with these chatbots are highly specific and structured and
are most applicable to support and service functions—think robust, interactive FAQs. Task-
oriented chatbots can handle common questions, such as queries about hours of business or
simple transactions that don’t involve a variety of variables. Though they do use NLP so end
users can experience them in a conversational way, their capabilities are fairly basic. These are
currently the most commonly used chatbots.

Data-driven and predictive (conversational) chatbots are often referred to as virtual assistants or digital assistants, and they are much more sophisticated, interactive, and personalized than task-oriented chatbots. These chatbots are contextually aware and leverage natural-language understanding (NLU), NLP, and ML to learn as they go. They apply predictive intelligence and analytics to enable personalization based on user profiles and past user behavior. Digital assistants can learn a user's preferences over time, provide recommendations, and even anticipate needs. In addition to monitoring data and intent, they can initiate conversations. Apple's Siri and Amazon's Alexa are examples of consumer-oriented, data-driven, predictive chatbots.

Chatbots versus AI chatbots versus virtual agents

Chatbot is the most inclusive, catch-all term. Any software simulating human conversation,
whether powered by traditional, rigid decision tree-style menu navigation or cutting-edge
conversational AI, is a chatbot. Chatbots can be found across nearly any communication channel,
from phone trees to social media to specific apps and websites.

AI chatbots are chatbots that employ a variety of AI technologies, from machine learning—
comprised of algorithms, features, and data sets—that optimize responses over time, to natural
language processing (NLP) and natural language understanding (NLU) that accurately interpret
user questions and match them to specific intents. Deep learning capabilities enable AI chatbots
to become more accurate over time, which in turn enables humans to interact with AI chatbots in
a more natural, free-flowing way without being misunderstood.

Virtual agents are a further evolution of AI chatbot software that not only use conversational AI
to conduct dialogue and deep learning to self-improve over time, but often pair those AI
technologies with robotic process automation (RPA) in a single interface to act directly upon the
user’s intent without further human intervention.

To help illustrate the distinctions, imagine that a user is curious about tomorrow’s weather. With
a traditional chatbot, the user can use the specific phrase “tell me the weather forecast.” The
chatbot says it will rain. With an AI chatbot, the user can ask, “What’s tomorrow’s weather
lookin’ like?” The chatbot, correctly interpreting the question, says it will rain. With a virtual
agent, the user can ask, “What’s tomorrow’s weather lookin’ like?”—and the virtual agent not
only predicts tomorrow’s rain, but also offers to set an earlier alarm to account for rain delays in
the morning commute.

Fig 5

Rule-based chatbots

These are akin to the foundational building blocks of a corporate strategy: consistent and reliable. For instance, many businesses deploy them for preliminary lead generation, offering predefined responses and ensuring swift customer interactions.
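A rule-based bot of this kind can be sketched in a few lines: each incoming message is matched against a fixed table of trigger words, and anything unmatched falls through to a default reply. The triggers and canned responses below are purely illustrative:

```python
# Minimal rule-based chatbot: fixed trigger words map to canned replies.
# All rules and reply texts here are invented for illustration.
RULES = {
    "hours": "We are open 9 am to 5 pm, Monday to Friday.",
    "price": "Our basic plan starts at $10 per month.",
    "refund": "Refunds are processed within 5 business days.",
}

def rule_based_reply(message: str) -> str:
    text = message.lower()
    for trigger, response in RULES.items():
        if trigger in text:          # simple substring match, no NLP
            return response
    # No rule matched: fall back to a default answer.
    return "Sorry, I can only answer questions about hours, price, or refund."

print(rule_based_reply("What are your hours?"))
```

The rigidity is visible immediately: a rephrased question that avoids every trigger word ("When do you close?") hits the fallback, which is exactly the limitation AI chatbots address.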

Keyword recognition-based chatbots


Imagine a meticulous analyst who identifies patterns and trends. These chatbots pick out crucial keywords from a conversation and offer more nuanced responses, processing user inputs to generate contextually relevant replies.

Menu-based chatbots

Just like an ATM guiding you through options, these chatbots simplify user journeys with preset menus. They are especially valuable in e-commerce settings, guiding users from product queries to checkout.

Contextual chatbots (Intelligent chatbots)

These are the strategic consultants of the chatbot world. With an understanding of past interactions, these chatbots remember your preferences, harnessing AI to deliver personalized experiences and make user interactions genuine and timely. Equipped with NLP and machine learning, they are best for businesses eyeing in-depth customer engagement.

Hybrid chatbots

Consider them your integrated business suites, combining the strengths of various models. Hybrid platforms showcase this versatility, accommodating both structured and AI-driven interactions.

Voice-enabled chatbots

These are the trendsetters. They echo the rise of voice-activated tools in boardrooms and
executive suites. Their voice recognition technology caters to high-level multitaskers, offering
hands-free interactions.
Fig 6

How AI Chatbots Work

AI chatbots, also known as conversational agents, function using a combination of natural language processing (NLP), machine learning (ML), and sometimes deep learning techniques.
Here’s a detailed breakdown of how they work:

1. Natural Language Processing (NLP)

NLP enables chatbots to understand and generate human language. Key processes include:

Tokenization: Breaking down text into smaller units like words or phrases.

Parsing: Analyzing the grammatical structure of the text.

Named Entity Recognition (NER): Identifying key elements (entities) in the text, such as
names, dates, and locations.
Sentiment Analysis: Determining the emotional tone behind a series of words to understand the
sentiment expressed.
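Two of these steps, tokenization and sentiment analysis, can be illustrated with a toy sketch that uses only simple word matching. Real systems use trained models; the word lists here are invented for illustration:

```python
import re

def tokenize(text: str) -> list[str]:
    # Tokenization: break text into lowercase word tokens.
    return re.findall(r"[a-z']+", text.lower())

# Illustrative sentiment lexicons (real systems learn these from data).
POSITIVE = {"great", "good", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "hate", "poor"}

def sentiment(text: str) -> str:
    # Toy sentiment analysis: count positive vs. negative tokens.
    tokens = tokenize(text)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(tokenize("I love this product!"))   # ['i', 'love', 'this', 'product']
print(sentiment("I love this product!"))  # positive
```

Parsing and named entity recognition follow the same pattern at a higher level of sophistication: they assign structure (grammar trees, entity labels) to the token stream produced here.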

2. Machine Learning (ML)

Machine learning algorithms allow chatbots to learn from interactions and improve their
responses over time. Important aspects include:

Training Data: Using large datasets of text conversations to train the model.

Supervised Learning: Training the model on labeled data where the correct output is provided.

Reinforcement Learning: Enhancing the model’s performance by rewarding correct responses and penalizing incorrect ones.

3. Deep Learning

Deep learning involves using neural networks with many layers to process data. Crucial models
include:

Recurrent Neural Networks (RNNs): Useful for sequential data as they maintain context by
looping over previous outputs.

Transformers: Models like GPT (Generative Pre-trained Transformer) that process the entire
sequence of words at once, enabling more parallelization and handling longer dependencies more
effectively.

4. Dialogue Management

Dialogue management determines the flow of conversation, managing context and state to
maintain coherent and relevant responses. It ensures the chatbot can handle multi-turn
conversations and keep track of the context to provide meaningful interactions.
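A minimal sketch of dialogue management: the manager stores conversation state so that a bare follow-up turn like "yes" can be resolved against the previous turn. The intents, slots, and replies below are invented for illustration:

```python
# Toy dialogue manager: keeps per-conversation context across turns so
# a short follow-up can be interpreted correctly. Illustrative only.
class DialogueManager:
    def __init__(self):
        self.context = {}  # conversation state carried between turns

    def handle(self, message: str) -> str:
        text = message.lower()
        if "weather" in text:
            # Remember what the user was asking about.
            self.context["last_intent"] = "weather"
            return "It will rain tomorrow. Set an earlier alarm?"
        if text in ("yes", "sure") and self.context.get("last_intent") == "weather":
            # "yes" only makes sense because of the stored context.
            return "Alarm moved 30 minutes earlier."
        return "How can I help?"

dm = DialogueManager()
print(dm.handle("What's tomorrow's weather?"))
print(dm.handle("yes"))  # resolved using the stored context
```

Without the stored `last_intent`, the second turn would be ambiguous, which is why multi-turn coherence depends on dialogue management rather than on NLP alone.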
5. Ethical Considerations

Ensuring the ethical use of chatbots involves addressing issues like:

Bias and Fairness: Ensuring the chatbot does not perpetuate or amplify biases present in the training data.

Privacy: Safeguarding user data and ensuring compliance with privacy regulations.

Transparency: Informing users that they are interacting with a bot and not a human.

These components are essential for creating an effective and reliable conversational AI chatbot
that can handle a wide range of tasks and interactions.

Retrieval-Based Chatbots

Fig 7
Retrieval-based chatbots are used in closed-domain scenarios and rely on a collection of
predefined responses to a user message. A retrieval-based bot completes three main tasks: intent
classification, entity recognition, and response selection.
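These three tasks can be sketched as intent classification by word overlap followed by response selection from the predefined set. The intents, example utterances, and responses below are illustrative, and entity recognition is omitted for brevity:

```python
# Retrieval-based bot sketch: classify intent by word overlap with
# example utterances, then select that intent's canned response.
# Intents, examples, and responses are invented for illustration.
INTENTS = {
    "greeting": {"examples": ["hello", "hi there", "good morning"],
                 "response": "Hello! How can I help you?"},
    "hours":    {"examples": ["opening hours", "when are you open"],
                 "response": "We are open 9 am to 5 pm."},
}

def classify_intent(message: str) -> str:
    words = set(message.lower().split())
    best, best_score = "fallback", 0
    for intent, data in INTENTS.items():
        for example in data["examples"]:
            # Score = number of shared words with the example utterance.
            score = len(words & set(example.split()))
            if score > best_score:
                best, best_score = intent, score
    return best

def respond(message: str) -> str:
    intent = classify_intent(message)
    if intent == "fallback":
        return "Sorry, I didn't understand that."
    return INTENTS[intent]["response"]

print(respond("hi there"))
```

Production retrieval bots replace the word-overlap score with trained classifiers and embedding similarity, but the pipeline shape (classify, then select) is the same.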

Rule-based chatbots vs. AI chatbots

Two primary contenders stand out when considering the chatbot spectrum: the steadfast rule-based chatbots and the dynamic AI chatbots. It’s akin to choosing between a reliable classic car and a cutting-edge electric vehicle. Each has its merits, but the key lies in understanding their capabilities to suit your business terrain. Let’s compare their features.

Learning ability
Rule-based chatbots: Static; cannot learn from user interactions.
AI chatbots: Dynamic; continuously learn and improve from user interactions.

Response flexibility
Rule-based chatbots: Limited; can only respond to predefined queries.
AI chatbots: Versatile; can understand and respond to a wide range of user inputs, even if they haven’t been pre-programmed.

Conversational flow
Rule-based chatbots: Rigid; follow a linear conversation flow.
AI chatbots: Natural; mimic human conversation, allowing for a more fluid and organic interaction.

Complexity of queries
Rule-based chatbots: Basic; can handle simple, straightforward queries.
AI chatbots: Advanced; can handle complex queries, context switches, and multi-turn conversations.

Integration capabilities
Rule-based chatbots: Basic; limited to certain predefined integrations.
AI chatbots: Extensive; can be integrated with a wide range of tools, databases, and other advanced systems.

Scalability
Rule-based chatbots: Limited; require manual intervention to update or scale.
AI chatbots: Automated; can easily scale and evolve as the business grows and needs change.

User experience
Rule-based chatbots: Predictable; offer the same interaction repeatedly.
AI chatbots: Personalized; offer tailored interactions based on user behavior and preferences.

Maintenance
Rule-based chatbots: Frequent; require regular manual updates to cater to new queries.
AI chatbots: Minimal; self-improve over time, reducing the need for frequent manual updates.

Cost
Rule-based chatbots: Higher; regular manual updates can increase costs over time.
AI chatbots: Cost-effective; the initial setup may cost more than rule-based alternatives, but self-improvement and scalability reduce long-term costs, and advanced personalized up-selling and cross-selling can also boost revenue.
Speech Recognition in Artificial Intelligence
The way people interact with digital gadgets and systems has changed dramatically in recent
years due to noteworthy developments in speech recognition technology. Speech recognition is a
crucial component of artificial intelligence (AI) that helps close the communication gap between
people and machines. Automation, accessibility features, virtual assistants, transcription services,
and other uses for machine understanding and interpretation of spoken language are made
possible by this technology. The field of speech recognition in artificial intelligence, along with its uses, challenges, and prospects, is covered in this section.

Understanding Speech Recognition

Speech recognition technology, also known as Automatic Speech Recognition (ASR), makes it possible for computers and artificial intelligence (AI) systems to convert spoken words into text. The process involves several steps, in order:

1. Acoustic Analysis: The audio signal is captured by the system, which then dissects it into its constituent elements, such as prosody and phonemes.
2. Feature Extraction: The audio input is processed to extract characteristics such as Mel-frequency cepstral coefficients (MFCCs), which give the system the information it needs to recognize the sound.
3. Acoustic Modeling: The system applies statistical models to link the extracted characteristics with known phonetic patterns.
4. Language Modeling: Language models are used to capture the semantics and grammatical structure of spoken language, increasing recognition accuracy.
5. Decoding: Based on the data obtained in the above steps, the final step involves choosing the most probable transcription of the spoken words.
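As a schematic illustration of the acoustic-analysis and feature-extraction stages, the sketch below frames a synthetic signal and computes a per-frame energy feature. Real recognizers extract much richer features, such as MFCCs; this is only a stand-in showing how a raw waveform is turned into per-frame numbers:

```python
import math

def frame_signal(signal, frame_len, hop):
    # Split the waveform into overlapping frames (acoustic analysis
    # operates on short windows, not the whole signal at once).
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def frame_energy(frame):
    # A single, very crude "feature" per frame: mean squared amplitude.
    return sum(s * s for s in frame) / len(frame)

# Synthetic "speech": a quiet segment followed by a loud segment.
signal = ([0.01 * math.sin(0.3 * n) for n in range(200)] +
          [0.8 * math.sin(0.3 * n) for n in range(200)])
frames = frame_signal(signal, frame_len=50, hop=25)
energies = [frame_energy(f) for f in frames]

# The energy contour separates silence from voiced sound, the kind of
# cue that acoustic analysis and feature extraction build on.
print(f"quiet frame energy: {energies[0]:.5f}")
print(f"loud frame energy:  {energies[-1]:.5f}")
```

The later stages (acoustic modeling, language modeling, decoding) would take feature vectors like these and search for the word sequence that best explains them.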

Robotics

Robotics is a domain in artificial intelligence that deals with the study of creating intelligent and
efficient robots.

Robots are artificial agents acting in a real-world environment.


Robotics is a branch of AI that draws on electrical engineering, mechanical engineering, and computer science for the design, construction, and application of robots.

Aspects of Robotics:

 The robots have mechanical construction, form, or shape designed to accomplish a particular task.
 They have electrical components which power and control the machinery.
 They contain some level of computer program that determines what, when and how a
robot does something.

Difference in Robot System and Other AI Program:

Here is the difference between the two –

 AI programs usually operate in computer-simulated worlds, whereas robots operate in the real physical world.
 The input to an AI program is symbols and rules, whereas inputs to robots are analog signals such as speech waveforms or images.
 AI programs need general-purpose computers to operate on, whereas robots need special hardware with sensors and effectors.

Robot Locomotion:

Locomotion is the mechanism that makes a robot capable of moving in its environment. There are various types of locomotion –

 Legged
 Wheeled
 Combination of Legged and Wheeled Locomotion
 Tracked slip/skid
Legged Locomotion:

This type of locomotion consumes more power while demonstrating walking, jumping, trotting, hopping, climbing up or down, etc.

It requires a greater number of motors to accomplish a movement. It is suited to rough as well as smooth terrain, where an irregular or an overly smooth surface would make wheeled locomotion consume more power. It is somewhat difficult to implement because of stability issues.

Legged robots come in varieties with one, two, four, and six legs. If a robot has multiple legs, leg coordination is necessary for locomotion.

The total number of possible events (a gait is a periodic sequence of lift and release events for each of the legs) depends upon the number of legs.

If a robot has k legs, then the number of possible events is N = (2k − 1)!.

In the case of a two-legged robot (k = 2), the number of possible events is N = (2k − 1)! = (2×2 − 1)! = 3! = 6.

Hence there are six possible different events –

 Lifting the Left leg


 Releasing the Left leg
 Lifting the Right leg
 Releasing the Right leg
 Lifting both the legs together
 Releasing both the legs together

In the case of k = 6 legs, there are N = 11! = 39,916,800 possible events. Hence the complexity of gait planning grows rapidly with the number of legs.
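The gait-event formula N = (2k − 1)! can be checked directly:

```python
from math import factorial

def gait_events(k: int) -> int:
    # Number of possible gait events for a k-legged robot: N = (2k - 1)!
    return factorial(2 * k - 1)

print(gait_events(2))  # 6        (two-legged robot, matching the six events listed above)
print(gait_events(6))  # 39916800 (six-legged robot)
```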
Fig 8

Wheeled Locomotion

It requires fewer motors to accomplish a movement. It is easier to implement, as there are fewer stability issues when the robot has more wheels. It is power-efficient compared to legged locomotion.

The common wheel types are:

Standard wheel − Rotates around the wheel axle and around the contact point.

Castor wheel − Rotates around the wheel axle and the offset steering joint.

Swedish wheel − Omni-wheel; rotates around the contact point, around the wheel axle, and around the rollers.

Ball or spherical wheel − Omnidirectional wheel; technically difficult to implement.


Fig 9

Slip/Skid Locomotion

In this type, the vehicle uses tracks, as in a tank. The robot is steered by moving the tracks at different speeds in the same or opposite directions. It offers stability because of the large contact area between the tracks and the ground.

Fig 10
Components of a Robot:

Robots are constructed with the following –

Power Supply − The robots are powered by batteries, solar power, hydraulic, or pneumatic
power sources.

Actuators − They convert energy into movement.

Electric motors (AC/DC) − They are required for rotational movement.

Pneumatic Air Muscles − They contract by almost 40% when air is drawn into them.

Muscle Wires − They contract by 5% when electric current is passed through them.

Piezo Motors and Ultrasonic Motors − Best for industrial robots.

Sensors − They provide real-time information about the task environment. Robots are equipped with vision sensors to be able to compute depth in the environment. A tactile sensor imitates the mechanical properties of the touch receptors of human fingertips.

Applications of Robotics:

Robotics has been instrumental in various domains, such as –

Industries − Robots are used for handling material, cutting, welding, color coating, drilling,
polishing, etc.

Military − Autonomous robots can reach inaccessible and hazardous zones during war. A robot named Daksh, developed by the Defence Research and Development Organisation (DRDO), is used to destroy life-threatening objects safely.

Medicine − Robots are capable of carrying out hundreds of clinical tests simultaneously, rehabilitating permanently disabled people, and performing complex surgeries such as brain tumor removal.
Exploration − Robot rock climbers used for space exploration and underwater drones used for ocean exploration are a few examples.

Entertainment − Disney’s engineers have created hundreds of robots for movie making.
