LESSON 1:
Introduction
Table of contents
01 History of AI
02 Objectives
03 Course outline
04 Grading
History of AI
Contributors, companies, algorithms and perspectives
The first “computer”
● In 1939, Nazi Germany was using the Enigma (an
encryption machine) to coordinate its attacks.
● The Allied forces turned to Alan Turing for help
with decrypting the messages from this
machine.
● Traditional code-breaking methods were
ineffective, so Turing had to create a new
machine (the Bombe).
● This machine laid the foundations for the
modern computer. Watch the movie “The
Imitation Game”.
First chatbot and the
microprocessor
● In 1951, Marvin Minsky built SNARC, the world’s
first working neural-network machine, out of
wires and vacuum tubes.
● In 1966, Joseph Weizenbaum, a computer
scientist at MIT, built ELIZA, the world’s
first chatbot, on IBM’s 7094 computer.
● In 1971, Intel released the first commercial
microprocessor (the 4004). This significantly
shrank the size of computers, making them
accessible for domestic use (in homes).
Backpropagation, AI blueprint
● By the late 1970s, Microsoft and Apple started
creating computers for personal use, leading to
the PC boom of the ’80s and ’90s.
● In 1986, Geoffrey Hinton, with David Rumelhart
and Ronald Williams, popularized the
backpropagation algorithm. This algorithm is
the backbone of how neural networks are
trained today.
● In 1997, IBM built Deep Blue, a chess-playing AI
that beat the reigning world champion,
Garry Kasparov.
● This gave the blueprint for the future of AI:
Data + Model (algorithms, formulae) +
Compute (hardware)
Information age, CNN
● In the early 2000s, the rise of companies
like Google, Facebook and Amazon brought
about the information age, generating huge
amounts of data.
● This created a need for computers that could
analyze these huge amounts of data.
● In 2006, Geoffrey Hinton’s breakthroughs in
deep learning revived neural-network research,
earning him the name “The Godfather of AI”.
One key architecture, the Convolutional Neural
Network (CNN, pioneered earlier by Yann
LeCun), allows computers to recognize
patterns in images.
● CNNs are used to tag and identify objects and
faces in photos.
The Big Bang of AI
● In 2010, the ImageNet competition (ILSVRC) was
launched in order to find the best image-
recognition algorithm. In 2012, Hinton entered
the competition with two of his students; their
CNN model AlexNet won and created “The Big
Bang of AI”.
● Google started buying many different
companies working in AI. By 2015, Google was
dominating the AI model space.
● Elon Musk and some other billionaires didn’t
want Google to be the only ones with access
to super-intelligent AI, so in 2015 they created
OpenAI.
OpenAI creation, Transformers
● OpenAI was meant to be an open-source, non-
profit company geared towards building AGI
(Artificial General Intelligence).
● OpenAI hired away one of Google’s top AI
researchers, Ilya Sutskever.
● In 2016, DeepMind’s AlphaGo beat the world’s
best Go players; its successor AlphaZero (2017)
mastered chess as well.
● In 2017, Google researchers introduced the
transformer, a neural-network architecture that
can process huge amounts of text very
efficiently (“Attention Is All You Need”).
ChatGPT, other gen AIs
● OpenAI had the idea of using transformers to
create a modern chatbot, but didn’t have
the funds to do it.
● So in 2019 it partnered with Microsoft, which
invested $1 billion (and a further $10 billion
in 2023).
● With this new capital, OpenAI launched
ChatGPT (Chat Generative Pre-trained
Transformer) in 2022.
● Other companies also launched their own
generative AIs around that time: ElevenLabs,
Midjourney and Stability AI.
The Present
● There are now more generative AI models than
one can count, each with its own
specialties.
● DeepSeek came out, rivaling ChatGPT’s
performance while using only a fraction of its
training cost.
● Some researchers argue that generative AI
cannot get much better than it is now due to
limits on the amount of available training data.
Where do we go from here
● The AI that we are all afraid will take over the
Earth and exterminate all humans is still to
come.
● This is sometimes termed AGI (Artificial
General Intelligence) or ASI (Artificial Super
Intelligence). These AI systems will be able to
learn new things without being explicitly
trained to do so.
● They will also be able to transfer knowledge
acquired from one domain to another.
● The estimated time of arrival of these systems
is anywhere from a few decades to a century
or more. Whatever the case, I hope I’ll be dead
by then.
Artificial
Intelligence
Definition and classification of AI
Definition
● AI (Artificial Intelligence) refers to the simulation of human intelligence in
machines.
● This allows them to perform tasks that require reasoning, learning and
decision-making.
● AI systems can process data, recognize patterns, and adapt over time based
on their interactions.
● For this course, AI will be considered as using computers to solve complex
problems without explicitly telling them how.
● We will look at some algorithms/techniques that may not be considered
“modern AI” but these algorithms have widespread uses.
Classification of AI
● AI can be classified based on
capabilities, functionalities (purpose)
and learning approach.
● Classification based on capabilities
○ Narrow AI (weak AI): specialized AI
designed for specific tasks (e.g.
ChatGPT, Google Assistant, Siri).
This is where we currently are; even
the most advanced AI systems are
still narrow AI.
● Classification based on capabilities
○ AGI (Artificial General
Intelligence): hypothetical AI with
human-like cognitive abilities,
capable of reasoning and learning
across various domains.
○ Super AI (ASI – Artificial Super
Intelligence): theoretical AI that
surpasses human intelligence in all
aspects
● Classification based on functionality (or purpose)
○ Generative AI: AI models that are trained on lots of content and are able
to create new content (e.g. images, text, videos, etc.).
○ LLMs (Large Language Models): a specific type of Gen AI trained to
understand and generate human language.
○ Predictive AI: forecasts outcomes based on data
○ Autonomous AI: acts independently in decision-making (e.g. self-driving
car or robotics)
○ Conversational AI: AI focused on dialogue interactions
● Classification based on learning approach. Refers to the type of data used to
train the model.
○ Supervised learning: trained on labeled data
○ Unsupervised learning: finds patterns in unlabeled data (e.g. clustering,
anomaly detection)
○ Reinforcement learning: learns by interacting with an environment and
receiving rewards or punishments (e.g. AlphaGo, robotics)
○ Self-supervised learning: learns from raw data without explicit labels
(e.g. modern LLMs)
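The difference between the first two learning approaches can be sketched in a few lines of plain Python. This is a toy illustration, not a real training pipeline; the tiny height dataset and the two helper functions are invented for the example:

```python
# Toy illustration of supervised vs. unsupervised learning.
# The tiny datasets below are made up for this example.

def nearest_neighbor_classify(labeled_data, x):
    """Supervised learning: every training point comes with a label,
    and a new point is classified by the closest labeled example."""
    closest = min(labeled_data, key=lambda pair: abs(pair[0] - x))
    return closest[1]

def two_means_cluster(points, iterations=10):
    """Unsupervised learning: no labels at all; the algorithm groups
    the points into two clusters by repeatedly updating two centroids."""
    a, b = min(points), max(points)  # initial centroids at the extremes
    for _ in range(iterations):
        group_a = [p for p in points if abs(p - a) <= abs(p - b)]
        group_b = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(group_a) / len(group_a)  # move each centroid to the
        b = sum(group_b) / len(group_b)  # mean of its current group
    return group_a, group_b

# Supervised: heights (cm) labeled "child" / "adult"
labeled = [(95, "child"), (110, "child"), (170, "adult"), (182, "adult")]
print(nearest_neighbor_classify(labeled, 120))   # -> child

# Unsupervised: the same kind of numbers, but with no labels
print(two_means_cluster([95, 110, 170, 182, 120]))
# -> ([95, 110, 120], [170, 182])
```

Note what changed between the two calls: the supervised function needed labels to say anything about a new point, while the unsupervised one discovered the two groups from the raw numbers alone.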
When to create AI systems
● This course is meant to prepare you for the integration of AI systems into your
applications.
● But not all systems need AI. Here we list some cases where AI is
appropriate and some where it isn’t.
● When to create AI
○ When a system requires pattern recognition (e.g. fraud detection, facial
recognition, medical diagnostics)
○ When handling large-scale data processing (e.g. search engines,
recommendation systems)
○ When personalizing user experiences dynamically (e.g. recommendation
systems)
● When not to create AI
○ When a simple rule-based system suffices (e.g. basic form validation,
arithmetic calculations).
○ When data is insufficient or biased, leading to unreliable AI decisions.
○ When explainability and transparency are crucial (e.g. legal and ethical
decisions where black-box AI could be problematic).
○ When real-time processing with guaranteed accuracy is needed,
but AI predictions are probabilistic (e.g. critical medical or aviation
control systems).
Conclusion
● AI should be used wisely.
● While AI can greatly enhance automation and decision-making, it is not always
the best choice, especially when simpler, rule-based approaches can achieve
the same results efficiently and reliably.