Builtin Com Artificial Intelligence

The document discusses artificial intelligence (AI), including its definition, history, and types. AI aims to build machines that can think and act intelligently like humans. There are four main types of AI approaches: thinking humanly, thinking rationally, acting humanly, and acting rationally. The future of AI is promising as computational power continues to increase according to Moore's Law, allowing for more advanced machine learning and deep learning techniques.

Uploaded by mahima dilsara


Artificial Intelligence.
What Is Artificial Intelligence (AI)? How Does AI Work?

Written by Alyssa Schroer. Updated by Ellen Glover | Sep. 19, 2022. Reviewed by Jye Sawtell-Rickson | Sep. 19, 2022.

Introduction to AI
Artificial intelligence allows machines to model, and even improve upon, the capabilities
of the human mind. From the development of self-driving cars to the proliferation of
smart assistants like Siri and Alexa, AI is a growing part of everyday life. As a result, many
tech companies across various industries are investing in artificially intelligent
technologies.

WHAT IS ARTIFICIAL INTELLIGENCE?


Artificial intelligence is a wide-ranging branch of computer science
concerned with building smart machines capable of performing tasks that
typically require human intelligence.


ARTIFICIAL INTELLIGENCE DEFINITION: BASICS OF AI



GETTING MACHINES TO SIMULATE HUMAN INTELLIGENCE IS THE FUNDAMENTAL GOAL OF AI. | IMAGE: SHUTTERSTOCK

How Does Artificial Intelligence Work?

What Is AI?
Less than a decade after helping the Allied forces win World
War II by breaking the Nazi encryption machine Enigma,
mathematician Alan Turing changed history a second time
with a simple question: “Can machines think?” 

Turing’s 1950 paper “Computing Machinery and Intelligence” and its subsequent Turing Test established the fundamental goal and vision of AI.

At its core, AI is the branch of computer science that aims to answer Turing’s question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines. The expansive goal of AI has given rise to many questions and debates, so much so that no singular definition of the field is universally accepted.

Can machines think? – Alan Turing, 1950

Defining AI
The major limitation in defining AI as simply “building
machines that are intelligent” is that it doesn't actually explain
what AI is and what makes a machine intelligent. AI is an
interdisciplinary science with multiple approaches, but
advancements in machine learning and deep learning are
creating a paradigm shift in virtually every sector of the tech
industry.

However, various new tests have been proposed recently that have been largely well received, including a 2019 research paper entitled “On the Measure of Intelligence.” In the paper, veteran deep learning researcher and Google engineer François Chollet argues that intelligence is the “rate at which a learner turns its experience and priors into new skills at valuable tasks that involve uncertainty and adaptation.” In other words: The most intelligent systems are able to take just a small amount of experience and go on to guess what would be the outcome in many varied situations.
Meanwhile, in their book Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the concept of AI by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is “the study of agents that receive percepts from the environment and perform actions.”


Norvig and Russell go on to explore four different approaches that have historically defined the field of AI:

ARTIFICIAL INTELLIGENCE DEFINED: FOUR TYPES OF APPROACHES
• Thinking humanly: mimicking thought based on the
human mind.
• Thinking rationally: mimicking thought based on
logical reasoning.
• Acting humanly: acting in a manner that mimics
human behavior.
• Acting rationally: acting in a manner that is meant to
achieve a particular goal.

The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting “all the skills needed for the Turing Test also allow an agent to act rationally.”

Former MIT professor of AI and computer science Patrick Winston defined AI as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”

While these definitions may seem abstract to the average person, they help focus the field as an area of computer science and provide a blueprint for infusing machines and programs with ML and other subsets of AI.

The Future of AI
When one considers the computational costs and the technical
data infrastructure running behind artificial intelligence,
actually executing on AI is a complex and costly business.
Fortunately, there have been massive advancements in
computing technology, as indicated by Moore’s Law, which
states that the number of transistors on a microchip doubles
about every two years while the cost of computers is halved.
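The compounding behind these doubling rates is easy to check with quick arithmetic. The sketch below (illustrative only; the function name is my own) compares a two-year doubling period against the six-month AI-compute doubling cited below:

```python
def growth_factor(years, doubling_period_years):
    """How many times a quantity multiplies if it doubles
    once every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Moore's Law: transistor counts doubling every two years.
moore = growth_factor(10, 2)        # 32x over a decade

# An AI-compute trend doubling every six months compounds far faster.
ai_trend = growth_factor(10, 0.5)   # over 1,000,000x over a decade
```

The gap between the two curves is what makes a six-month doubling period so dramatic: the exponent is four times larger, not the result.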

Although many experts believe that Moore’s Law will likely come to an end sometime in the 2020s, this has had a major impact on modern AI techniques — without it, deep learning would be out of the question, financially speaking. Recent research found that AI innovation has actually outperformed Moore’s Law, doubling every six months or so as opposed to two years.

By that logic, the advancements artificial intelligence has made across a variety of industries have been major over the last several years. And the potential for an even greater impact over the next several decades seems all but inevitable.

TYPES OF ARTIFICIAL INTELLIGENCE | ARTIFICIAL INTELLIGENCE EXPLAINED | WHAT IS AI? | EDUREKA | VIDEO: EDUREKA!

The Four Types of Artificial Intelligence

AI can be divided into four categories, based on the type and complexity of the tasks a system is able to perform. For example, automated spam filtering falls into the most basic class of AI, while the far-off potential for machines that can perceive people’s thoughts and emotions is part of an entirely different AI subset.

WHAT ARE THE FOUR TYPES OF ARTIFICIAL INTELLIGENCE?
• Reactive machines: able to perceive and react to the world in front of it as it performs limited tasks.
• Limited memory: able to store past data and use it to inform predictions of what may come next.
• Theory of mind: able to make decisions based on its perceptions of how others feel and make decisions.
• Self-awareness: able to operate with human-level consciousness and understand its own existence.

Reactive Machines
A reactive machine follows the most basic of AI principles and,
as its name implies, is capable of only using its intelligence to
perceive and react to the world in front of it. A reactive
machine cannot store a memory and, as a result, cannot rely
on past experiences to inform decision making in real time.

Perceiving the world directly means that reactive machines are designed to complete only a limited number of specialized duties. Intentionally narrowing a reactive machine’s worldview is not any sort of cost-cutting measure, however; instead, it means that this type of AI will be more trustworthy and reliable — it will react the same way to the same stimuli every time.
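This memoryless, same-stimulus-same-response behavior can be sketched as a stateless lookup. All the names below are hypothetical illustrations, not any real system’s API:

```python
# A reactive machine as a pure function of the current percept:
# no stored state, so identical inputs always yield identical outputs.
RULES = {
    "contains_spam_keywords": "move_to_spam",
    "clean_message": "deliver_to_inbox",
}

def reactive_policy(percept: str) -> str:
    """Decide using only what is perceived right now; nothing is remembered."""
    return RULES.get(percept, "deliver_to_inbox")

# Deterministic and repeatable, as described above:
assert reactive_policy("clean_message") == reactive_policy("clean_message")
```

Because nothing is carried over between calls, the policy cannot draw on past experience, which is exactly the limitation the article describes.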
A famous example of a reactive machine is Deep Blue, which was designed by IBM in the 1990s as a chess-playing supercomputer and defeated international grandmaster Garry Kasparov in a game. Deep Blue was only capable of identifying the pieces on a chess board and knowing how each moves based on the rules of chess, acknowledging each piece’s present position and determining what the most logical move would be at that moment. The computer was not pursuing future potential moves by its opponent or trying to put its own pieces in better position. Every turn was viewed as its own reality, separate from any other movement that was made beforehand.

Another example of a game-playing reactive machine is Google’s AlphaGo. AlphaGo is also incapable of evaluating future moves but relies on its own neural network to evaluate developments of the present game, giving it an edge over Deep Blue in a more complex game. AlphaGo also bested world-class competitors of the game, defeating champion Go player Lee Sedol in 2016.

Though limited in scope and not easily altered, reactive machine AI can attain a level of complexity, and it offers reliability when created to fulfill repeatable tasks.

Limited Memory
Limited memory AI has the ability to store previous data and
predictions when gathering information and weighing
potential decisions — essentially looking into the past for clues
on what may come next. Limited memory AI is more complex
and presents greater possibilities than reactive machines.

Limited memory AI is created when a team continuously trains a model in how to analyze and utilize new data, or when an AI environment is built so models can be automatically trained and renewed.

When utilizing limited memory AI in ML, six steps must be followed: Training data must be created, the ML model must be created, the model must be able to make predictions, the model must be able to receive human or environmental feedback, that feedback must be stored as data, and these steps must be reiterated as a cycle.
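Those six steps can be sketched as a loop. This is a toy model with made-up numbers, not a production pipeline; the variable names are my own:

```python
# Step 1: training data is created as (input, target) pairs.
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# Step 2: a (trivial) model is created: predict y = weight * x.
weight = 0.0

def predict(x):
    # Step 3: the model makes predictions.
    return weight * x

feedback_log = []
# Step 6: the steps are reiterated as a cycle.
for _ in range(100):
    for x, target in training_data:
        error = target - predict(x)      # Step 4: feedback is received
        feedback_log.append((x, error))  # Step 5: feedback stored as data
        weight += 0.01 * error * x       # model updated from the feedback
```

After enough cycles the weight approaches 2.0, the pattern hidden in the stored data — past observations informing future predictions, which is the defining trait of limited memory AI.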

There are several ML models that utilize limited memory AI:

• Reinforcement learning, which learns to make better predictions through repeated trial and error.

• Recurrent neural networks (RNN), which use sequential data to take information from prior inputs to influence the current input and output. These are commonly used for ordinal or temporal problems, such as language translation, natural language processing, speech recognition and image captioning. One subset of recurrent neural networks is known as long short-term memory (LSTM), which utilizes past data to help predict the next item in a sequence. LSTMs view more recent information as most important when making predictions, and discount data from further in the past while still utilizing it to form conclusions.

• Evolutionary generative adversarial networks (E-GAN), which evolve over time, growing to explore slightly modified paths based on previous experiences with every new decision. This model is constantly in pursuit of a better path and utilizes simulations and statistics, or chance, to predict outcomes throughout its evolutionary mutation cycle.

• Transformers, which are networks of nodes that learn how to do a certain task by training on existing data. Instead of having to group elements together, transformers are able to run processes so that every element in the input data pays attention to every other element. Researchers refer to this as “self-attention,” meaning that as soon as it starts training, a transformer can see traces of the entire data set.
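The self-attention idea from the transformer bullet can be sketched in plain Python: every input element is scored against every other element, and each output position is a similarity-weighted mix of all positions. This is a single simplified attention step (queries, keys and values are all just the raw input), not a full transformer:

```python
import math

def softmax(row):
    """Turn raw scores into weights that sum to 1."""
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(x):
    """x: a list of vectors. Every element attends to every other element."""
    d = len(x[0])
    # Similarity score between each pair of positions (scaled dot product).
    scores = [[sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in x]
              for q in x]
    weights = [softmax(row) for row in scores]
    # Each output vector is a weighted average of *all* input vectors.
    return [[sum(w * v[j] for w, v in zip(wrow, x)) for j in range(d)]
            for wrow in weights]

out = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

The key point the article makes is visible in the double loop: no grouping or ordering is imposed, and every element contributes to every output.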

Theory of Mind
Theory of mind is just that — theoretical. We have not yet
achieved the technological and scientific capabilities
necessary to reach this next level of AI.

The concept is based on the psychological premise of understanding that other living things have thoughts and emotions that affect one’s own behavior. In terms of AI machines, this would mean that AI could comprehend how humans, animals and other machines feel and make decisions through self-reflection and determination, and then utilize that information to make decisions of its own. Essentially, machines would have to be able to grasp and process the concept of “mind,” the fluctuations of emotions in decision making and a litany of other psychological concepts in real time, creating a two-way relationship between people and AI.
WHAT IF AI BECAME SELF-AWARE? | VIDEO: ALLTIME10S

Self-Awareness
Once theory of mind can be established, sometime well into
the future of AI, the final step will be for AI to become self-
aware. This kind of AI possesses human-level consciousness
and understands its own existence in the world, as well as the
presence and emotional state of others. It would be able to
understand what others may need based on not just what they
communicate to them but how they communicate it. 

Self-awareness in AI relies both on human researchers understanding the premise of consciousness and then learning how to replicate that so it can be built into machines.


How Is AI Used? Artificial Intelligence Examples

While addressing a crowd at the Japan AI Experience in 2017, DataRobot CEO Jeremy Achin began his speech by offering the following definition of how AI is used today:

“AI is a computer system able to perform tasks that ordinarily require human intelligence ... Many of these artificial intelligence systems are powered by machine learning, some of them are powered by deep learning and some of them are powered by very boring things like rules.”


Other AI Classifications
There are three ways to classify artificial intelligence, based on its capabilities. Rather than types of artificial intelligence, these are stages through which AI can evolve — and only one of them is actually possible right now.

• Narrow AI: Sometimes referred to as “weak AI,” this kind of AI operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well and, while these machines may seem intelligent, they are operating under far more constraints and limitations than even the most basic human intelligence.

• Artificial general intelligence (AGI): AGI, sometimes referred to as “strong AI,” is the kind of AI we see in movies — like the robots from Westworld or the character Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem.

• Superintelligence: This will likely be the pinnacle of AI’s evolution. Superintelligent AI will not only be able to replicate the complex emotion and intelligence of human beings, but surpass it in every way. This could mean making judgments and decisions on its own, or even forming its own ideology.

Narrow AI Examples
Narrow AI, or weak AI as it’s often called, is all around us and is
easily the most successful realization of AI to date. It has
limited functions that are able to help automate specific tasks.

Because of this focus, narrow AI has experienced numerous breakthroughs in the last decade that have had “significant societal benefits and have contributed to the economic vitality of the nation,” according to a 2016 report released by the Obama administration.

EXAMPLES OF ARTIFICIAL INTELLIGENCE: NARROW AI


• Siri, Alexa and other smart assistants
• Self-driving cars
• Google search
• Conversational bots
• Email spam filters
• Netflix's recommendations

Machine Learning and Deep Learning

Much of narrow AI is powered by breakthroughs in ML and deep learning. Understanding the difference between AI, ML and deep learning can be confusing. Venture capitalist Frank Chen provides a good overview of how to distinguish between them, noting:

“Artificial intelligence is a set of algorithms and intelligence to try to mimic human intelligence. Machine learning is one of them, and deep learning is one of those machine learning techniques.”

Simply put, an ML algorithm is fed data by a computer, and uses statistical techniques to help it “learn” how to get progressively better at a task, without necessarily having been specifically programmed for that task. Instead, ML algorithms use historical data as input to predict new output values. To that end, ML consists of both supervised learning (where the expected output for the input is known thanks to labeled data sets) and unsupervised learning (where the expected outputs are unknown due to the use of unlabeled data sets).
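The supervised case can be sketched in a few lines: labeled historical data goes in, and predictions for new inputs come out. This uses a 1-nearest-neighbor rule with made-up feature vectors and labels, purely for illustration:

```python
# Labeled historical data: feature vectors with known outputs.
labeled = [
    ([1.0, 1.0], "spam"),
    ([1.2, 0.8], "spam"),
    ([5.0, 5.0], "not_spam"),
]

def predict(x):
    """Predict the label of the closest historical example."""
    def sq_dist(a):
        return sum((ai - xi) ** 2 for ai, xi in zip(a, x))
    return min(labeled, key=lambda pair: sq_dist(pair[0]))[1]

print(predict([1.1, 0.9]))  # prints "spam": nearest labeled examples agree
```

Nothing here was “specifically programmed” to recognize spam; the prediction falls out of the stored labeled examples, which is the core of the supervised idea.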

Machine learning is present throughout everyday life. Google Maps uses location data from smartphones, as well as user-reported data on things like construction and car accidents, to monitor the ebb and flow of traffic and assess what the fastest route will be. Personal assistants like Siri, Alexa and Cortana are able to set reminders, search for online information and control the lights in people’s homes, all with the help of ML algorithms that collect information, learn a user’s preferences and improve their experience based on prior interactions with users. Even Snapchat filters use ML algorithms in order to track users’ facial activity.

Meanwhile, deep learning is a type of ML that runs inputs through a biologically inspired neural network architecture. The neural networks contain a number of hidden layers through which the data is processed, allowing the machine to go “deep” in its learning, making connections and weighting input for the best results.
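The hidden-layer structure can be sketched directly. This is a tiny forward pass with arbitrary fixed weights (illustrative only; real networks learn their weights during training):

```python
def dense(inputs, weights):
    """One layer: weight each input, sum, then apply a ReLU non-linearity."""
    return [max(0.0, sum(w * i for w, i in zip(row, inputs)))
            for row in weights]

# Two hidden layers between input and output: the "deep" in deep learning.
w_hidden1 = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.6]]   # 2 inputs -> 3 units
w_hidden2 = [[0.2, 0.4, -0.1], [0.7, -0.5, 0.3]]     # 3 units -> 2 units
w_output  = [[0.6, 0.9]]                              # 2 units -> 1 output

x = [1.0, 2.0]
h1 = dense(x, w_hidden1)   # the data is processed layer by layer,
h2 = dense(h1, w_hidden2)  # each layer re-weighting the previous
y  = dense(h2, w_output)   # layer's output into a final prediction
```

Stacking more `dense` calls is, mechanically, all that “going deep” means; the hidden layers give the network room to build intermediate representations.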

Self-driving cars are a recognizable example of deep learning, since they use deep neural networks to detect objects around them, determine their distance from other cars, identify traffic signals and much more. The wearable sensors and devices used in the healthcare industry also apply deep learning to assess the health condition of the patient, including their blood sugar levels, blood pressure and heart rate. They can also derive patterns from a patient’s prior medical data and use that to anticipate any future health conditions.

Artificial General Intelligence

The creation of a machine with human-level intelligence that can be applied to any task is the Holy Grail for many AI researchers, but the quest for artificial general intelligence has been fraught with difficulty.

The search for a “universal algorithm for learning and acting in any environment,” as Russell and Norvig put it, isn’t new. In contrast to weak AI, strong AI represents a machine with a full set of cognitive abilities, but time hasn’t eased the difficulty of achieving such a feat.

AGI has long been the muse of dystopian science fiction, in which super-intelligent robots overrun humanity, but experts agree it’s not something we need to worry about anytime soon.

Although, for now, AGI is still a fantasy, there are some remarkably sophisticated systems out there now that are approaching the AGI benchmark. One of them is GPT-3, an autoregressive language model designed by OpenAI that uses deep learning to produce human-like text. GPT-3 is not intelligent, but it has been used to create some extraordinary things, including a chatbot that lets you talk to historical figures and a question-based search engine. MuZero, a computer program created by DeepMind, is another promising frontrunner in the quest to achieve true AGI. It has managed to master games it has not even been taught to play, including chess and an entire suite of Atari games, through brute force, playing games millions of times.

Superintelligence 
Besides narrow AI and AGI, some consider there to be a third
category known as superintelligence. For now, this is a
completely hypothetical situation in which machines are
completely self-aware, even surpassing the likes of human
intelligence in practically every field, from science to social
skills. In theory, this could be achieved through a single
computer, a network of computers or something completely
different, as long as it is conscious and has subjective
experiences.

Nick Bostrom, founding director of Oxford’s Future of Humanity Institute, appears to have coined the term back in 1998, and predicted that we will have achieved superhuman artificial intelligence within the first third of the 21st century. He went on to say that the likelihood of this happening will likely depend on how quickly neuroscience can better understand and replicate the human brain. Creating superintelligence by imitating the human brain, he added, will require not only sufficiently powerful hardware, but also an “adequate initial architecture” and a “rich flux of sensory input.”

EXAMPLES OF ARTIFICIAL INTELLIGENCE



AI HAS MANY USES. EXAMPLES INCLUDE EVERYTHING FROM AMAZON ALEXA TO SELF-DRIVING CARS. | IMAGE: SHUTTERSTOCK

Why Is Artificial Intelligence Important?

AI has many uses — from boosting vaccine development to automating detection of potential fraud.

AI private market activity saw a record-setting year in 2021, according to CB Insights, with global funding up 108 percent compared to 2020. Because of its fast-paced adoption, AI is making waves in a variety of industries.

Business Insider Intelligence’s 2022 report on AI in banking found more than half of financial services companies already use AI solutions for risk management and revenue generation. The application of AI in banking could lead to upwards of $400 billion in savings.

As for medicine, a 2021 World Health Organization report noted that while integrating AI into the healthcare field comes with challenges, the technology “holds great promise,” as it could lead to benefits like more informed health policy and improvements in the accuracy of diagnosing patients.

AI has also made its mark on entertainment. The global market for AI in media and entertainment is estimated to reach $99.48 billion by 2030, growing from a value of $10.87 billion in 2021, according to Grand View Research. That expansion includes AI uses like recognizing plagiarism and developing high-definition graphics.

Artificial Intelligence Pros and Cons

While AI is certainly viewed as an important and quickly evolving asset, this emerging field comes with its share of downsides.

The Pew Research Center surveyed 10,260 Americans in 2021 on their attitudes toward AI. The results found 45 percent of respondents are equally excited and concerned, and 37 percent are more concerned than excited. Additionally, more than 40 percent of respondents said they considered driverless cars to be bad for society. Yet the idea of using AI to identify the spread of false information on social media was more well received, with close to 40 percent of those surveyed labeling it a good idea.

AI is a boon for improving productivity and efficiency while at the same time reducing the potential for human error. But there are also some disadvantages, like development costs and the possibility for automated machines to replace human jobs. It’s worth noting, however, that the artificial intelligence industry stands to create jobs, too — some of which have not even been invented yet.

AI TIMELINE: HISTORY OF ARTIFICIAL INTELLIGENCE

THE HISTORY OF ARTIFICIAL INTELLIGENCE IS LONG AND ROBUST, GOING BACK TO THE 1940S. | IMAGE:
SHUTTERSTOCK

A Brief History of Artificial Intelligence

Intelligent robots and artificial beings first appeared in ancient Greek myths. And Aristotle’s development of syllogism and its use of deductive reasoning was a key moment in humanity’s quest to understand its own intelligence. While the roots are long and deep, the history of AI as we think of it today spans less than a century. The following is a quick look at some of the most important events in AI.

1940s
• (1943) Warren McCullough and Walter Pitts publish
the paper “A Logical Calculus of Ideas Immanent in
Nervous Activity,” which proposes the first
mathematical model for building a neural network. 
• (1949) In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they’re used. Hebbian learning continues to be an important model in AI.

• (1942) Isaac Asimov publishes the Three Laws of Robotics, an idea commonly found in science fiction media about how artificial intelligence should not bring harm to humans.

1950s
• (1950) Alan Turing publishes the paper “Computing
Machinery and Intelligence,” proposing what is now
known as the Turing Test, a method for determining if
a machine is intelligent. 
• (1950) Harvard undergraduates Marvin Minsky and
Dean Edmonds build SNARC, the first neural network
computer.
• (1950) Claude Shannon publishes the paper
“Programming a Computer for Playing Chess.”
• (1952) Arthur Samuel develops a self-learning
program to play checkers. 
• (1954) The Georgetown-IBM machine translation
experiment automatically translates 60 carefully
selected Russian sentences into English. 
• (1956) The phrase “artificial intelligence” is coined at
the Dartmouth Summer Research Project on Artificial
Intelligence. Led by John McCarthy, the conference is
widely considered to be the birthplace of AI.
• (1956) Allen Newell and Herbert Simon demonstrate
Logic Theorist (LT), the first reasoning program. 
• (1958) John McCarthy develops the AI programming
language Lisp and publishes “Programs with Common
Sense,” a paper proposing the hypothetical Advice
Taker, a complete AI system with the ability to learn
from experience as effectively as humans.  
• (1959) Allen Newell, Herbert Simon and J.C. Shaw
develop the General Problem Solver (GPS), a program
designed to imitate human problem-solving. 
• (1959) Herbert Gelernter develops the Geometry
Theorem Prover program.
• (1959) Arthur Samuel coins the term “machine
learning” while at IBM.
• (1959) John McCarthy and Marvin Minsky found the
MIT Artificial Intelligence Project.

1960s
• (1963) John McCarthy starts the AI Lab at Stanford.
• (1966) The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translations research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.
• (1969) The first successful expert systems, DENDRAL, a chemical analysis program, and MYCIN, designed to diagnose blood infections, are created at Stanford.

1970s
• (1972) The logic programming language PROLOG is
created.
• (1973) The Lighthill Report, detailing the
disappointments in AI research, is released by the
British government and leads to severe cuts in funding
for AI projects. 
• (1974-1980) Frustration with the progress of AI
development leads to major DARPA cutbacks in
academic grants. Combined with the earlier ALPAC
report and the previous year’s Lighthill Report, AI
funding dries up and research stalls. This period is
known as the “First AI Winter.”

1980s
• (1980) Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first AI Winter.
• (1982) Japan’s Ministry of International Trade and
Industry launches the ambitious Fifth Generation
Computer Systems project. The goal of FGCS is to
develop supercomputer-like performance and a
platform for AI development.
• (1983) In response to Japan’s FGCS, the U.S.
government launches the Strategic Computing
Initiative to provide DARPA funded research in
advanced computing and AI. 
• (1985) Companies are spending more than a billion
dollars a year on expert systems and an entire
industry known as the Lisp machine market springs
up to support them. Companies like Symbolics and
Lisp Machines Inc. build specialized computers to run
on the AI programming language Lisp. 
• (1987-1993) As computing technology improves, cheaper alternatives emerge and the Lisp machine market collapses in 1987, ushering in the “Second AI Winter.” During this period, expert systems prove too expensive to maintain and update, eventually falling out of favor.
1990s
• (1991) U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.
• (1992) Japan terminates the FGCS project in 1992,
citing failure in meeting the ambitious goals outlined a
decade earlier.
• (1993) DARPA ends the Strategic Computing
Initiative in 1993 after spending nearly $1 billion and
falling far short of expectations. 
• (1997) IBM’s Deep Blue beats world chess champion Garry Kasparov.

2000s
• (2005) STANLEY, a self-driving car, wins the DARPA
Grand Challenge.
• (2005) The U.S. military begins investing in
autonomous robots like Boston Dynamics’ “Big Dog”
and iRobot’s “PackBot.”
• (2008) Google makes breakthroughs in speech
recognition and introduces the feature in its iPhone
app.

2010s
• (2011) IBM’s Watson handily defeats the competition
on Jeopardy!. 
• (2011) Apple releases Siri, an AI-powered virtual
assistant through its iOS operating system. 
• (2012) Andrew Ng, founder of the Google Brain Deep Learning project, feeds a neural network 10 million YouTube videos as a training set, using deep learning algorithms. The neural network learns to recognize a cat without being told what a cat is, ushering in the breakthrough era for neural networks and deep learning funding.
• (2014) Google makes the first self-driving car to pass
a state driving test. 
• (2014) Amazon’s Alexa, a virtual home smart device, is
released.
• (2016) Google DeepMind’s AlphaGo defeats world
champion Go player Lee Sedol. The complexity of the
ancient Chinese game was seen as a major hurdle to
clear in AI.
• (2016) The first “robot citizen,” a humanoid robot named Sophia, is created by Hanson Robotics; it is capable of facial recognition, verbal communication and facial expressions.
• (2018) Google releases BERT, a natural language processing engine that reduces barriers in translation and understanding for machine learning applications.
• (2018) Waymo launches its Waymo One service, allowing users throughout the Phoenix metropolitan area to request a pick-up from one of the company’s self-driving vehicles.

2020s
• (2020) Baidu releases its LinearFold AI algorithm to scientific and medical teams working to develop a vaccine during the early stages of the SARS-CoV-2 pandemic. The algorithm is able to predict the secondary structure of the virus’s RNA sequence in just 27 seconds, 120 times faster than other methods.
• (2020) OpenAI releases natural language processing
model GPT-3, which is able to produce text modeled
after the way people speak and write. 
• (2021) OpenAI builds on GPT-3 to develop DALL-E,
which is able to create images from text prompts.
• (2022) The National Institute of Standards and
Technology releases the first draft of its AI Risk
Management Framework, voluntary U.S. guidance “to
better manage risks to individuals, organizations, and
society associated with artificial intelligence.”
• (2022) DeepMind unveils Gato, an AI system trained
to perform hundreds of tasks, including playing Atari,
captioning images and using a robotic arm to stack
blocks.
