Unit - 5-1
(SCSB1311)
UNIT – V
APPLICATIONS
AI Applications
Fig 1
Healthcare
One of the most profound impacts that AI has created is in the healthcare space. A device as
common as a Fitbit or an Apple Watch collects a great deal of data, such as an individual's sleep
patterns, the calories they burn, their heart rate and much more, which can help with early
detection, personalization and even disease diagnosis. When powered with AI, such a device can
easily monitor and flag abnormal trends, and can even schedule a visit to the nearest doctor by
itself. It is therefore also of great help to doctors, who can use AI to support decision-making and
research. It has been used to predict ICU transfers, improve clinical workflows and even
pinpoint a patient's risk of hospital-acquired infections.
Automobile
At a stage where automobiles are changing from an engine with a chassis around it to a
software-controlled intelligent machine, the role of AI cannot be underestimated. In the pursuit of
self-driving cars, in which Tesla's Autopilot has been the frontrunner, data is collected from all
the Teslas running on the road and fed into machine learning algorithms. The assessment of both
on-board chips is then compared by the system and acted upon only if the input from both is the
same. AI can be witnessed working its magic in robots producing the initial nuts and bolts of a
vehicle, or in an autonomous car using machine learning and computer vision to make its way
securely through traffic.
Banking and Finance
One of the early adopters of Artificial Intelligence is the banking and finance industry. From
chatbots offered by banks, for instance SIA by the State Bank of India, to intelligent robo-traders
by Aidyia and Nomura Securities for autonomous, high-frequency trading, the uses are
innumerable. Features like AI bots, digital payment advisers and biometric fraud detection
mechanisms lead to a higher quality of services for a wider customer base. The adoption of AI in
banking continues to transform companies in the industry, providing greater levels of value and
more personalized experiences to their customers, reducing risks and increasing the opportunities
involving the financial engines of our modern economy.
Surveillance
AI has made it possible to develop face recognition tools which can be used for surveillance
and security purposes. This empowers systems to monitor footage in real time and can be a
path-breaking development with regard to public safety. Manual monitoring of a CCTV camera
requires constant human intervention, so it is prone to errors and fatigue. AI-based surveillance is
automated and works 24/7, providing real-time insights. According to a report by the Carnegie
Endowment for International Peace, at least 75 out of 176 countries are using AI tools for
surveillance purposes. Across the country, 400 million CCTV cameras are already in place,
powered by AI technologies, primarily face recognition.
Social Media
All of us love social media, don't we? Social media is not just a platform for networking and
expressing oneself. It subconsciously shapes our choices, ideologies and temperament. All of this
is due to the Artificial Intelligence tools which work silently in the background, showing us
posts that we “might” like and advertising products that “might” be useful, based on our search
and browsing history. For example, Instagram recently revealed how it has been using AI to
customize content for the Explore tab. This helps with social media advertising because of AI's
unprecedented ability to run paid ads to platform users based on highly granular demographic
and behavioral targeting. Did you know that we also have AI tools that will actually write
Facebook and Instagram ads for us? Another huge benefit of AI in social media is that it allows
marketers to analyze and track every step that they take.
Entertainment
The entertainment industry, with the arrival of online streaming services like Netflix and Amazon
Prime, relies heavily on the data collected from its users. This helps with recommendations based
on previously viewed content. This is done not only to deliver accurate suggestions but also to
create content that would be liked by a majority of the viewers. With new content being created
every minute, it is very difficult to classify it and make it easier to search. AI tools analyze the
contents of videos frame by frame and identify objects to add appropriate tags. AI is also helping
media companies make strategic decisions.
Education
In the education sector too, there are a number of problems which can be solved by the
implementation of AI, a few of them being automated marking software, content retention
techniques and suggesting the improvements that are required. This can help teachers monitor
not just the academic performance but also the psychological, mental and physical well-being of
students, as well as their all-round development. It would also help in extending the reach of
education to areas where quality educators cannot be present physically. The case-based
simulations offered by Harvard are one such use.
Space Exploration
AI systems are being developed to reduce the risk to human life involved in venturing into the
vast realms of the undiscovered and unravelled universe, which is a very risky task for astronauts
to take up. As a result, unmanned space exploration missions like the Mars rovers are possible
thanks to the use of AI. It has helped us discover numerous exoplanets, stars, galaxies and, more
recently, two new planets in our very own solar system. NASA is also working with AI
applications for space exploration to automate image analysis, to develop autonomous spacecraft
that can avoid space debris without human intervention, and to make communication networks
more efficient and distortion-free by using AI-based devices.
Gaming
In the gaming industry too, game systems powered by AI are ushering us into a new era of
immersive gaming experiences. AI is employed to generate responsive, adaptive or intelligent
behaviour, primarily in non-player characters (NPCs), similar to human-like intelligence, in
video games. It serves to enhance the game-player experience rather than to showcase machine
learning or decision-making. AI has also been playing a huge role in creating video games and
making them more tailored to players' preferences. Matthew Guzdial from the University of
Alberta and his team are working towards leveraging AI's power to help video gamers create the
precise game that they want to play.
Robotics
With increasing developments in the field of AI, robots are becoming more efficient in
performing tasks that were earlier too complex. The idea of complete automation can be realized
only with the assistance of AI, where the system cannot just perform the specified task but can
also monitor, inspect and improve it without any human intervention. AI in robotics helps robots
learn processes and perform tasks with complete autonomy, without any human intervention.
This is because robots are designed to perform repetitive tasks with utmost precision and
increased speed. AI has been introducing flexibility and learning capabilities into previously rigid
robot applications. These benefits are expected to reinforce market growth.
Agriculture
Artificial Intelligence is changing the way we practise one of our most primitive and basic
professions: farming. The use of AI in agriculture can be attributed to agricultural robots,
predictive analysis, and crop and soil monitoring. In addition, drones are also used for spraying
insecticides and detecting weed formation in large farms. This is going to help firms like Blue
River Technology better manage their farms. AI has also enhanced crop production and
improved real-time monitoring, harvesting, processing and marketing.
E-Commerce
This is one of the applications of Artificial Intelligence that is most widely used. Different parts
of e-commerce, including logistics, demand prediction, intelligent marketing, better
personalization and the use of chatbots, are being disrupted by AI. The e-commerce industry,
with Amazon as a prominent player, was one of the first industries to embrace AI, and it will see
even greater use of AI with time. E-commerce retailers are increasingly turning towards chatbots
or digital assistants to provide 24×7 support to their online buyers. Built using AI technologies,
chatbots are becoming more intuitive and are enabling a far better customer experience. There
are a number of other industries which are on the verge of transformation by AI. Though this is
by no means an exhaustive list, these are probably the most plausible applications in the near
future.
Language Models
Language modeling (LM) is the use of various statistical and probabilistic techniques to
determine the probability of a given sequence of words occurring in a sentence. They are used in
natural language processing (NLP) applications, particularly ones that generate text as an output.
There are primarily two types of language models: statistical language models and neural
language models, both of which are described below.
A Language Model is an AI model that has been trained to predict the next word or words in a
text based on the preceding words. It is part of the technology that predicts the next word you
want to type on your mobile phone, allowing you to complete the message faster. The task of
predicting the next word(s) is referred to as self-supervised learning: it does not need labels, it
just needs lots of text, because the process applies its own labels to the text.
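To make this concrete, the following minimal sketch in Python shows how plain text supplies its
own (context, next word) training pairs without any manual labelling; the one-line corpus and the
window size are illustrative assumptions.

    # The corpus and context-window size below are illustrative assumptions.
    corpus = "language models predict the next word in a sentence"
    tokens = corpus.split()

    window = 3  # number of preceding words used as context
    pairs = []
    for i in range(window, len(tokens)):
        context = tokens[i - window:i]   # the preceding words (the input)
        target = tokens[i]               # the next word (the label supplied by the text itself)
        pairs.append((context, target))

    for context, target in pairs:
        print(context, "->", target)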
Language models can be monolingual or multilingual. Wikipedia suggests that there should be
separate language models for each document collection; however, Jeremy and Sebastian found
that Wikipedia-based models have sufficient overlap with other collections that this is not
necessary.
There is a broad classification of language models into two main groups:
Statistical Language Models: These models use traditional statistical techniques like N-grams,
Hidden Markov Models (HMMs) and certain linguistic rules to learn the probability distribution
of words (a minimal N-gram sketch follows below).
Neural Language Models: These are new players in the NLP town and have surpassed the
statistical language models in their effectiveness. They use different kinds of Neural Networks to
model language.
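As an illustration of the statistical approach, the sketch below builds a toy bigram model that
estimates P(next word | previous word) by counting; the two-sentence corpus is an assumption
made purely for demonstration.

    from collections import Counter, defaultdict

    # Toy two-sentence corpus (an illustrative assumption).
    sentences = [
        "the bird pecks the grains",
        "the bird flies over the field",
    ]

    # Count how often each word follows each previous word.
    bigram_counts = defaultdict(Counter)
    for sentence in sentences:
        words = ["<s>"] + sentence.split()
        for prev, nxt in zip(words, words[1:]):
            bigram_counts[prev][nxt] += 1

    def prob(prev, nxt):
        """Maximum-likelihood estimate of P(nxt | prev)."""
        total = sum(bigram_counts[prev].values())
        return bigram_counts[prev][nxt] / total if total else 0.0

    print(prob("the", "bird"))    # 0.5: "the" is followed by "bird" in 2 of 4 cases
    print(prob("bird", "pecks"))  # 0.5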
NLP is an exciting field at the cutting edge of ML, where practitioners strive to reduce errors
and improve the abilities of NLP systems. Language models are the base on which this
technology rests: the better the language model, the better the model trains and the more accurate
the final result.
This section describes the most prominent approaches to applying Artificial Intelligence
technologies to information retrieval (IR). Information retrieval is a key technology for
knowledge management. It deals with the search for information and the representation, storage
and organization of knowledge. Information retrieval is concerned with search processes in
which a user needs to identify a subset of information which is relevant for their information
need within a large amount of knowledge. The information seeker formulates a query trying to
describe this information need. The query is compared to document representations which were
extracted during an indexing phase. The representations of documents and queries are typically
matched by a similarity function such as the cosine. The most similar documents are presented to
the users, who can evaluate the relevance with respect to their problem (Belkin, 2000). The
problem of properly representing documents and matching imprecise representations soon led to
the application of techniques developed within Artificial Intelligence to information retrieval.
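The matching step described above can be sketched as follows; the sketch assumes the
scikit-learn library, and the toy documents and query are invented for illustration. Documents and
the query are represented as TF-IDF vectors and ranked by cosine similarity.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy document collection and query (illustrative assumptions).
    documents = [
        "information retrieval deals with the search for information",
        "robots operate in the real physical world",
        "language models predict the next word in a sentence",
    ]
    query = "search for relevant information"

    # Index the documents, project the query into the same vector space,
    # and rank the documents by cosine similarity to the query.
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([query])

    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    for idx in scores.argsort()[::-1]:
        print(round(float(scores[idx]), 3), documents[idx])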
In the early days of computer science, information retrieval (IR) and artificial intelligence (AI)
developed in parallel. In the 1980s, they started to cooperate and the term intelligent information
retrieval was coined for AI applications in IR. In the 1990s, information retrieval saw a shift from
set-based Boolean retrieval models to ranking systems like the vector space model and
probabilistic approaches. These approximate reasoning systems opened the door for more
intelligent value-added components. The large number of text documents available in
professional databases and on the internet has led to a demand for intelligent methods in text
retrieval and to considerable research in this area. Better preprocessing to extract more
knowledge from data has become an important way to improve systems. Off-the-shelf
approaches promise worse results than systems adapted to the users, the domain and the
information needs. Today, most techniques developed in AI have been applied to retrieval
systems with more or less success. When data from users is available, systems often use machine
learning to optimize their results.
Information Retrieval
An information retrieval (IR) model chooses and ranks relevant documents based on a user's
query. Document selection and ranking can be formalized using matching functions that return
retrieval status values (RSVs) for each document in a collection, since documents and queries are
represented in the same way. The majority of IR systems portray document contents using a
collection of descriptors, known as terms, from a vocabulary V.
Fig 2: Estimation of the likelihood of user relevance for each document and query with respect
to a collection of training documents.
In a vector space, the similarity function between queries and documents is computed.
Classic IR Model
It is the most basic and straightforward IR model. This paradigm is based on mathematical
knowledge that is easily recognized and understood. The three classical IR models are the
Boolean, vector and probabilistic models.
Non-Classic IR Model
It is diametrically opposed to the classical IR model. Rather than probability, similarity and
Boolean operations, such IR models are based on other ideas. Non-classical IR models include
situation theory models, information logic models, and interaction models.
Alternative IR Model
It is an improvement to the traditional IR model that makes use of some unique approaches from
other domains. Alternative IR models include fuzzy models, cluster models, and latent semantic
indexing (LSI) models.
Ad-hoc retrieval is the classical problem in an information retrieval system: a query in natural
language is presented in order to obtain the relevant information.
After the query is processed, any returned information that does not satisfy the search criteria
constitutes the ad-hoc retrieval difficulty. For example, suppose we search for something on the
Internet and it returns some sites that are relevant to our search, but there may also be some
non-relevant results; this is the ad-hoc retrieval issue.
Acquisition
Documents and other objects are selected and collected from various websites.
Representation
The representation step of an information retrieval system mainly involves indexing the
documents in the collection.
File Organisation
There are mainly two categories of file organisation, sequential and inverted; a mixture of the
two gives a combined organisation. A minimal sketch of an inverted index follows the list below.
Sequential
Inverted
Combination
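A minimal sketch of an inverted file organisation, using a toy three-document collection chosen
for illustration, is:

    from collections import defaultdict

    # Toy document collection (an illustrative assumption).
    documents = {
        1: "the bird pecks the grains",
        2: "the grains are stored in the barn",
        3: "the bird flies over the barn",
    }

    # Each term maps to the set of document IDs in which it occurs.
    inverted_index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            inverted_index[term].add(doc_id)

    # Query processing: documents containing both "bird" and "barn".
    print(inverted_index["bird"] & inverted_index["barn"])  # {3}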
Query
When a user inputs a query into the system, an IR process begins. Queries, such as search strings
in web search engines, are explicit representations of information needs. A query in an
information retrieval system does not uniquely identify a particular object in the collection.
Instead, several objects may match the query, perhaps with varying degrees of relevance.
Information Extraction
Information Extraction's main goal is to find meaningful information in a document set. IE is
one type of IR. IE automatically obtains structured information from a set of unstructured
documents or a corpus. IE focuses more on texts that can be read and written by humans and
utilizes them with NLP (natural language processing). An information retrieval system, in
contrast, finds information that is relevant to the user's information need and that is stored in a
computer; it returns text documents (in unstructured form) from a large set of corpora.
The information extraction system used for online text extraction should come at a low cost. It
needs to be flexible in development and must be easy to adapt to new domains. Take natural
language processing as an example: here information extraction enables a machine to recognise
the information a person needs and return it in a structured form. Using information extraction,
we want to make a machine capable of extracting structured information from documents. The
importance of an information extraction system is determined by the growing amount of
information available in unstructured form (data without metadata), such as on the Internet. This
knowledge can be made more accessible by transforming it into relational form, or by marking it
up with XML tags.
Automated learning systems are increasingly used in information extraction. This type of IE
system decreases the faults in information extraction and also reduces dependence on a domain
by diminishing the requirement for supervision. IE of structured information relies on the basic
content management principle: "Content must be in context to have value". Information
Extraction is more difficult than Information Retrieval.
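As a highly simplified illustration of information extraction, the sketch below pulls structured
fields out of unstructured text with hand-written regular expressions; the example text and
patterns are assumptions, and practical IE systems typically use learned extractors instead.

    import re

    # Example text and hand-written patterns (illustrative assumptions).
    text = ("The workshop takes place on 12/05/2024. "
            "Contact the organisers at workshop@example.org for details.")

    record = {
        "dates": re.findall(r"\b\d{2}/\d{2}/\d{4}\b", text),
        "emails": re.findall(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", text),
    }
    print(record)  # {'dates': ['12/05/2024'], 'emails': ['workshop@example.org']}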
Aspect                    Information Retrieval                             Information Extraction
Nature of information     Relevant information is buried inside documents   Extracts information from within the documents
Result format             A long listing of documents                       An aggregate over the entire document set
Natural Language Processing (NLP)
The input and output of an NLP system can be speech or written text.
Components of NLP:
There are two main components: Natural Language Understanding (NLU), which maps the given
natural language input into useful representations and analyses different aspects of the language,
and Natural Language Generation (NLG), which produces meaningful phrases and sentences
from an internal representation.
Difficulties in NLU:
Natural language has an extremely rich form and structure, and it is very ambiguous. There can
be different levels of ambiguity, such as lexical ambiguity (a single word carries multiple
meanings), syntax-level ambiguity (a sentence can be parsed in more than one way) and
referential ambiguity (it is unclear which thing a pronoun or phrase refers to).
NLP Terminology:
Lexical Analysis − It involves identifying and analyzing the structure of words. The lexicon of a
language means the collection of words and phrases in that language. Lexical analysis divides the
whole chunk of text into paragraphs, sentences, and words.
Syntactic Analysis (Parsing) − It involves analysis of the words in the sentence for grammar and
arranging the words in a manner that shows the relationships among them. A sentence such as
“The school goes to boy” is rejected by an English syntactic analyzer.
Fig 3
Semantic Analysis − It draws the exact meaning or the dictionary meaning from the text. The
text is checked for meaningfulness. It is done by mapping syntactic structures and objects in the
task domain. The semantic analyzer disregards sentences such as “hot ice-cream”.
Discourse Integration − The meaning of any sentence depends upon the meaning of the
sentence just before it. In addition, it also brings about the meaning of the immediately succeeding
sentence.
Pragmatic Analysis − During this step, what was said is re-interpreted in terms of what it actually
meant. It involves deriving those aspects of language which require real-world knowledge.
Context-Free Grammar
It is a grammar that consists of rules with a single symbol on the left-hand side of the rewrite
rules. Let us create a grammar to parse a sentence –
The parse tree breaks down the sentence into structured parts so that the computer can easily
understand and process it. In order for the parsing algorithm to construct this parse tree, a set of
rewrite rules, which describe what tree structures are legal, needs to be constructed.
These rules say that a certain symbol may be expanded in the tree by a sequence of other
symbols. According to the first-order logic rule, if there are two strings, a Noun Phrase (NP) and
a Verb Phrase (VP), then the string formed by NP followed by VP is a sentence. The rewrite rules
for the sentence are as follows –
S → NP VP
VP → V NP
Lexicon –
DET → a | the
Now consider the above rewrite rules. Since V can be replaced by both "peck" and "pecks",
sentences such as "The bird peck the grains" can be wrongly permitted, i.e. the subject-verb
agreement error is approved as correct.
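The rewrite rules above can be made runnable, for example with the NLTK library; the NP rule
and the noun/verb lexicon entries below are assumptions added so that the toy grammar can parse
the example sentence.

    import nltk

    # The NP rule and the noun/verb entries are assumptions added for illustration.
    grammar = nltk.CFG.fromstring("""
        S -> NP VP
        VP -> V NP
        NP -> DET N
        DET -> 'a' | 'the'
        N -> 'bird' | 'grains'
        V -> 'peck' | 'pecks'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("the bird pecks the grains".split()):
        print(tree)
    # Note: "the bird peck the grains" also parses, which illustrates the
    # subject-verb agreement problem described above.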
Demerits –
They are not highly precise. For example, "The grains peck the bird" is syntactically correct
according to the parser, but even though it makes no sense, the parser takes it as a correct
sentence.
To bring out high precision, multiple sets of grammar need to be prepared. It may require
completely different sets of rules for parsing singular and plural variations, passive sentences,
etc., which can lead to the creation of a huge set of rules that is unmanageable.
Top-Down Parser
Here, the parser starts with the S symbol and attempts to rewrite it into a sequence of terminal
symbols that matches the classes of the words in the input sentence until it consists entirely of
terminal symbols.
These are then checked against the input sentence to see if they match. If not, the process is
started over again with a different set of rules. This is repeated until a specific rule is found which
describes the structure of the sentence.
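A minimal sketch of this top-down strategy, using NLTK's RecursiveDescentParser with the same
toy grammar assumed above, is:

    import nltk

    # Same toy grammar as in the previous sketch (an illustrative assumption).
    grammar = nltk.CFG.fromstring("""
        S -> NP VP
        VP -> V NP
        NP -> DET N
        DET -> 'a' | 'the'
        N -> 'bird' | 'grains'
        V -> 'peck' | 'pecks'
    """)

    # RecursiveDescentParser starts from S and expands rules top-down,
    # backtracking whenever an expansion fails to match the input words.
    for tree in nltk.RecursiveDescentParser(grammar).parse("the bird pecks the grains".split()):
        print(tree)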
Demerits –
If the grammar contains left-recursive rules, a naive top-down parser can loop indefinitely, and
the extensive backtracking over alternative rules makes it inefficient for large grammars.
Chatbot
At the most basic level, a chatbot is a computer program that simulates and processes human
conversation (either written or spoken), allowing humans to interact with digital devices as if
they were communicating with a real person. Chatbots can be as simple as rudimentary programs
that answer a simple query with a single-line response, or as sophisticated as digital assistants
that learn and evolve to deliver increasing levels of personalization as they gather and process
information.
Task-oriented (declarative) chatbots are single-purpose programs that focus on performing one
function. Using rules, NLP, and very little ML, they generate automated but conversational
responses to user inquiries. Interactions with these chatbots are highly specific and structured and
are most applicable to support and service functions—think robust, interactive FAQs. Task-
oriented chatbots can handle common questions, such as queries about hours of business or
simple transactions that don’t involve a variety of variables. Though they do use NLP so end
users can experience them in a conversational way, their capabilities are fairly basic. These are
currently the most commonly used chatbots.
Chatbot is the most inclusive, catch-all term. Any software simulating human conversation,
whether powered by traditional, rigid decision tree-style menu navigation or cutting-edge
conversational AI, is a chatbot. Chatbots can be found across nearly any communication channel,
from phone trees to social media to specific apps and websites.
AI chatbots are chatbots that employ a variety of AI technologies, from machine learning—
comprised of algorithms, features, and data sets—that optimize responses over time, to natural
language processing (NLP) and natural language understanding (NLU) that accurately interpret
user questions and match them to specific intents. Deep learning capabilities enable AI chatbots
to become more accurate over time, which in turn enables humans to interact with AI chatbots in
a more natural, free-flowing way without being misunderstood.
Virtual agents are a further evolution of AI chatbot software that not only use conversational AI
to conduct dialogue and deep learning to self-improve over time, but often pair those AI
technologies with robotic process automation (RPA) in a single interface to act directly upon the
user’s intent without further human intervention.
To help illustrate the distinctions, imagine that a user is curious about tomorrow’s weather. With
a traditional chatbot, the user can use the specific phrase “tell me the weather forecast.” The
chatbot says it will rain. With an AI chatbot, the user can ask, “What’s tomorrow’s weather
lookin’ like?” The chatbot, correctly interpreting the question, says it will rain. With a virtual
agent, the user can ask, “What’s tomorrow’s weather lookin’ like?”—and the virtual agent not
only predicts tomorrow’s rain, but also offers to set an earlier alarm to account for rain delays in
the morning commute.
Fig 5
Rule-based chatbots
These are akin to the foundational building blocks of a corporate strategy: consistent and
reliable. For instance, many businesses deploy them for preliminary lead generation, offering
predefined responses. An AI platform can integrate this model efficiently, ensuring swift
customer interactions.
Menu-based chatbots
Just like an ATM guiding you through options, these chatbots simplify user journeys with preset
menus. They're especially valuable in e-commerce settings, guiding users from product queries to
checkout.
Contextual (AI-powered) chatbots
These are the strategic consultants of the chatbot world. With an understanding of past
interactions, these chatbots remember your preferences, much like an AI platform that harnesses
machine learning to deliver personalized experiences, making user interactions genuine and
timely. Equipped with NLP and machine learning, these are best for businesses eyeing in-depth
customer engagement.
Hybrid chatbots
Consider them your integrated business suites, combining the strengths of various models. An AI
platform can showcase this versatility, accommodating both structured and AI-driven
interactions.
Voice-enabled chatbots
These are the trendsetters. They echo the rise of voice-activated tools in boardrooms and
executive suites. Their voice recognition technology caters to high-level multitaskers, offering
hands-free interactions.
Fig 6
NLP enables chatbots to understand and generate human language. Key processes include:
Tokenization: Breaking down text into smaller units like words or phrases.
Named Entity Recognition (NER): Identifying key elements (entities) in the text, such as
names, dates, and locations.
Sentiment Analysis: Determining the emotional tone behind a series of words to understand the
sentiment expressed.
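A minimal sketch of these three processes using the NLTK library is shown below; the resource
downloads (whose exact names vary between NLTK versions) and the example sentence are
assumptions, and spaCy or a cloud NLP API could equally be used.

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    # Resource names are assumptions and may differ between NLTK versions.
    for resource in ["punkt", "averaged_perceptron_tagger",
                     "maxent_ne_chunker", "words", "vader_lexicon"]:
        nltk.download(resource, quiet=True)

    text = "Amazon opened a new office in Berlin on Monday, and customers love it."

    tokens = nltk.word_tokenize(text)                             # tokenization
    entities = nltk.ne_chunk(nltk.pos_tag(tokens))                # named entity recognition
    scores = SentimentIntensityAnalyzer().polarity_scores(text)   # sentiment analysis

    print(tokens)
    print([subtree for subtree in entities if hasattr(subtree, "label")])  # entity chunks
    print(scores)  # e.g. {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}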
Machine learning algorithms allow chatbots to learn from interactions and improve their
responses over time. Important aspects include:
Training Data: Using large datasets of text conversations to train the model.
Supervised Learning: Training the model on labeled data where the correct output is provided.
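A minimal sketch of supervised learning for a chatbot is shown below: a classifier trained on a
tiny, invented set of labelled utterances (an assumption made for illustration) predicts the intent
of a new message. It assumes scikit-learn is available.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny labelled training set (an illustrative assumption).
    training_sentences = [
        "what are your opening hours", "when do you open",
        "i want to cancel my order", "please cancel the order",
        "how much does shipping cost", "what is the delivery fee",
    ]
    training_intents = [
        "hours", "hours",
        "cancel_order", "cancel_order",
        "shipping_cost", "shipping_cost",
    ]

    # Vectorize the text and fit a simple classifier on the labelled data.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(training_sentences, training_intents)

    print(model.predict(["when are you open on weekends"]))  # likely ['hours']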
Deep Learning
Deep learning involves using neural networks with many layers to process data. Crucial models
include:
Recurrent Neural Networks (RNNs): Useful for sequential data as they maintain context by
looping over previous outputs.
Transformers: Models like GPT (Generative Pre-trained Transformer) that process the entire
sequence of words at once, enabling more parallelization and handling longer dependencies more
effectively.
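As a small illustration, a pre-trained Transformer can be used for text generation with the
Hugging Face transformers package; the sketch below assumes that package is installed and
downloads the GPT-2 weights on first use.

    from transformers import pipeline

    # Assumes the Hugging Face transformers package is installed; the GPT-2
    # weights are downloaded automatically on first use.
    generator = pipeline("text-generation", model="gpt2")
    result = generator("The chatbot answered the question by", max_new_tokens=20)
    print(result[0]["generated_text"])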
Dialogue Management
Dialogue management determines the flow of conversation, managing context and state to
maintain coherent and relevant responses. It ensures the chatbot can handle multi-turn
conversations and keep track of the context to provide meaningful interactions.
Ethical Considerations
Bias and Fairness: Ensuring the chatbot does not perpetuate or amplify biases present in the
training data.
Privacy: Safeguarding user data and ensuring compliance with privacy regulations.
Transparency: Informing users that they are interacting with a bot and not a human.
These components are essential for creating an effective and reliable conversational AI chatbot
that can handle a wide range of tasks and interactions.
Retrieval-Based Chatbots
Fig 7
Retrieval-based chatbots are used in closed-domain scenarios and rely on a collection of
predefined responses to a user message. A retrieval-based bot completes three main tasks: intent
classification, entity recognition, and response selection.
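A minimal sketch of the response-selection step is shown below: the user message is compared to
a set of predefined patterns with a bag-of-words cosine similarity, and the response of the best
match is returned. The pattern/response pairs are illustrative assumptions.

    import math
    from collections import Counter

    # Predefined pattern -> response pairs (illustrative assumptions).
    responses = {
        "what are your opening hours": "We are open 9am to 5pm, Monday to Friday.",
        "how can i reset my password": "Use the 'Forgot password' link on the login page.",
        "where is my order": "You can track your order from the 'My Orders' page.",
    }

    def cosine(a, b):
        """Bag-of-words cosine similarity between two strings."""
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
        norm = (math.sqrt(sum(v * v for v in va.values()))
                * math.sqrt(sum(v * v for v in vb.values())))
        return dot / norm if norm else 0.0

    def reply(message):
        """Select the response whose pattern is most similar to the message."""
        best_pattern = max(responses, key=lambda pattern: cosine(message, pattern))
        return responses[best_pattern]

    print(reply("what are your hours on friday"))  # -> the opening-hours response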
Two primary contenders stand out when considering the chatbot spectrum: the steadfast Rule-
based chatbots and the dynamic AI chatbots. It’s akin to choosing between a reliable classic car
or a cutting-edge electric vehicle. Each has its merits, but the key lies in understanding their
capabilities to suit your business terrain. Let’s comparatively dissect their features.
Aspect-by-aspect comparison of rule-based chatbots and AI chatbots:
Learning ability: Rule-based chatbots are static and cannot learn from user interactions; AI
chatbots are dynamic and continuously learn and improve from user interactions.
Response flexibility: Rule-based chatbots are limited and can only respond to predefined queries;
AI chatbots are versatile and can understand and respond to a wide range of user inputs, even if
they haven't been pre-programmed.
Conversational flow: Rule-based chatbots are rigid and follow a linear conversation flow; AI
chatbots are natural and mimic human conversation, allowing for a more fluid and organic
interaction.
Complexity of queries: Rule-based chatbots are basic and can handle simple, straightforward
queries; AI chatbots are advanced and can handle complex queries, context switches, and
multi-turn conversations.
Integration capabilities: Rule-based chatbots are basic and limited to certain pre-defined
integrations; AI chatbots are extensive and can be integrated with a plethora of tools, databases,
and other advanced systems.
Scalability: Rule-based chatbots are limited and require manual intervention to update or scale;
AI chatbots are automated and can easily scale and evolve as the business grows and needs
change.
User experience: Rule-based chatbots are predictable and offer the same interaction repeatedly;
AI chatbots are personalized and offer tailored interactions based on user behavior and
preferences.
Maintenance: Rule-based chatbots require frequent, regular manual updates to cater to new
queries; AI chatbots need minimal maintenance, self-improving over time and reducing the need
for frequent manual updates.
Speech Recognition
Speech recognition technology, also known as Automatic Speech Recognition (ASR), makes it
possible for computers and artificial intelligence (AI) systems to translate spoken words into text.
There are several steps in this process:
1. Acoustic Analysis: The audio signal is captured by the system, which then dissects it
   into its constituent elements, such as prosody and phonemes.
2. Feature Extraction: The audio input is processed to extract characteristics such as
   Mel-frequency cepstral coefficients (MFCCs), which give the system the information it
   needs to recognize the sound.
3. Acoustic Modeling: The system applies statistical models to link the extracted
   characteristics with known phonetic patterns.
4. Language Modeling: To increase recognition accuracy, language models are used to
   comprehend the semantics and grammatical structure of the spoken words.
5. Decoding: Based on the data obtained in the previous steps, the final step consists of
   choosing the most probable transcription of the spoken words.
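In practice these steps are usually hidden behind a library call. The sketch below uses the
SpeechRecognition package for Python; the audio file name is a placeholder, and recognize_google
sends the audio to a web API, so an internet connection is assumed.

    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.AudioFile("recording.wav") as source:    # placeholder file name
        audio = recognizer.record(source)            # capture the audio signal

    try:
        # Feature extraction, acoustic/language modelling and decoding happen
        # inside the recognizer / web service.
        print(recognizer.recognize_google(audio))
    except sr.UnknownValueError:
        print("Speech could not be understood.")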
Robotics
Robotics is a domain in artificial intelligence that deals with the study of creating intelligent and
efficient robots.
Aspects of Robotics:
Difference between AI programs and robots:
AI programs usually operate in computer-simulated worlds, whereas robots operate in the real
physical world.
The input to an AI program is in symbols and rules, whereas the inputs to robots are analog
signals in the form of speech waveforms or images.
AI programs need general-purpose computers to operate on, whereas robots need special
hardware with sensors and effectors.
Robot Locomotion:
Locomotion is the mechanism that makes a robot capable of moving in its environment. There
are various types of locomotion –
Legged
Wheeled
Combination of Legged and Wheeled Locomotion
Tracked slip/skid
Legged Locomotion:
This type of locomotion consumes more power while demonstrating walking, jumping, trotting,
hopping, climbing up or down, etc.
It requires a larger number of motors to accomplish a movement. It is suited for rough as well as
smooth terrain, where an irregular or too smooth surface would make wheeled locomotion
consume more power. It is a little difficult to implement because of stability issues.
Legged robots come with one, two, four, or six legs. If a robot has multiple legs, then leg
coordination is necessary for locomotion.
The total number of possible gaits (a periodic sequence of lift and release events for each of the
legs) a robot can use depends upon the number of its legs: for k legs, the number of possible
events is N = (2k − 1)!.
In the case of a two-legged robot (k = 2), the number of possible events is N = (2×2 − 1)! = 3! = 6.
In the case of k = 6 legs, there are N = (2×6 − 1)! = 11! = 39,916,800 possible events. Hence the
complexity of a robot is directly proportional to the number of its legs.
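The gait formula can be checked with a few lines of Python:

    from math import factorial

    # N = (2k - 1)! possible lift/release events for a robot with k legs.
    def possible_gait_events(k):
        return factorial(2 * k - 1)

    print(possible_gait_events(2))  # 6
    print(possible_gait_events(6))  # 39916800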
Fig 8
Wheeled Locomotion
Standard wheel − Rotates around the wheel axle and around the contact point.
Castor wheel − Rotates around the wheel axle and around an offset steering joint.
Swedish 45° and Swedish 90° wheels − Omni-wheels that rotate around the contact point, around
the wheel axle, and around the rollers.
Slip/Skid Locomotion
In this type, the vehicle uses tracks, as in a tank. The robot is steered by moving the tracks at
different speeds in the same or opposite directions. It offers stability because of the large contact
area between the tracks and the ground.
Fig 10
Components of a Robot:
Power Supply − The robots are powered by batteries, solar power, hydraulic, or pneumatic
power sources.
Pneumatic Air Muscles − They contract by almost 40% when air is sucked into them.
Muscle Wires − They contract by 5% when an electric current is passed through them.
Sensors − They provide real-time information about the task environment. Robots are equipped
with vision sensors to be able to compute the depth of the environment. A tactile sensor imitates
the mechanical properties of the touch receptors of human fingertips.
Applications of Robotics:
Industries − Robots are used for handling material, cutting, welding, color coating, drilling,
polishing, etc.
Military − Autonomous robots can reach inaccessible and hazardous zones during war. A robot
named Daksh, developed by the Defence Research and Development Organisation (DRDO), is in
service to safely destroy life-threatening objects.
Medicine − Robots are capable of carrying out hundreds of clinical tests simultaneously,
rehabilitating permanently disabled people, and performing complex surgeries such as the
removal of brain tumours.
Exploration − The robotic rock climbers used for space exploration and the underwater robots
used for ocean exploration are a few examples.
Entertainment − Disney’s engineers have created hundreds of robots for movie making.