AI Curriculum for Junior Secondary Students

Technology Education Section

Curriculum Support Division


Education Bureau
(Teacher Version) June 2023
Preface

The Education Bureau actively promotes innovation and technology (I&T) education
for all students. Through the continuous incorporation of I&T learning elements into
both the primary and secondary curricula, it strengthens the cultivation of students’
interest in and capability for learning information technology and I&T from an early age,
equips students with 21st century skills, and unleashes their creativity and potential.

To enhance I&T education, the Education Bureau has launched the “Module on
Artificial Intelligence for Junior Secondary Level” for schools to adopt. This curriculum
module aims to help teachers integrate I&T elements into classroom learning more
systematically. Schools should conduct appropriate curriculum planning to incorporate
10 to 14 hours of artificial intelligence learning into the junior secondary curriculum in
the “Information and Communication Technology” category under Technology
Education Key Learning Area so as to further cultivate students’ computational thinking
and strengthen their innovative technology learning.

The “Module on Artificial Intelligence for Junior Secondary Level” is adapted from
the “CUHK Jockey Club AI for the Future Project”, funded by The Hong Kong Jockey
Club Charities Trust and jointly organised by the Faculty of Engineering and the Faculty
of Education of The Chinese University of Hong Kong. The Education Bureau is grateful
for the collaboration with The Hong Kong Jockey Club Charities Trust in consolidating
and drawing on the experience accumulated by the schools in the project to develop the
“Module on Artificial Intelligence for Junior Secondary Level” for adoption by all
publicly-funded schools in Hong Kong. The Technology Education Section,
Curriculum Support Division of the Education Bureau collaborated with the Faculty of
Engineering and the Faculty of Education of The Chinese University of Hong Kong in
developing the curriculum module based on the deliverables produced and experience
gained in the project. Views on the content of the curriculum module were collected
from the Curriculum Development Council Committee on Technology Education, and
its support was sought.

The “Module on Artificial Intelligence for Junior Secondary Level” aims to provide an
Artificial Intelligence (AI) curriculum for junior secondary students. AI permeates our
daily lives, and the development of technology is also advancing rapidly. It is, therefore,
of utmost importance that our young students get exposure to AI as early as possible,
understand its capabilities and limitations, and become able to further envision its
possible future development. By learning about AI and its applications, students will
gain a better understanding of how their studies and lives relate to AI. Under the
guidance of teachers, they may also think more deeply about ethical considerations
regarding AI technology and its applications, thereby learning to become ethical
technology users.
This curriculum module provides relatively foundational AI learning content suitable
for meeting the learning needs of junior secondary students and building up their
understanding of AI and related topics, thus stimulating their interest in learning
innovative technology, enhancing their capability to apply innovation and technology,
as well as enabling them to benefit society and the world by utilising AI.

The “Module on Artificial Intelligence for Junior Secondary Level” covers topics such
as AI basics, AI ethical principles, computer vision, computer speech and language, AI
computer simulation, AI in robotics reasoning, societal impact of AI, and AI and future
of work. The curriculum is organised into three different levels (please refer to Table 1
for details), and teachers can arrange the teaching sequence according to learning and
teaching needs. The key components of the curriculum module include AI ethical
principles, societal impact of AI, and AI and future of work. In addition to the core
teaching materials, this curriculum module also includes worksheets, assessment
exercises, supplementary materials, and hardware support, in order to meet the needs of
learning and teaching.
Table 1 Course modules and suggested schedule

Booklet 1
7 lessons; 35 minutes each

• Introduction to AI
• AI Basics (I)
• AI Ethical Principles
• Computer Vision (I)
• Computer Speech & Language (I)
• AI and Computer Simulation (I)
• AI in Robotic Reasoning (I)

Booklet 2
8 lessons; 35 minutes each

• AI Basics (II)
• AI Ethical Issues
• Computer Vision (II)
• Computer Speech & Language (II)
• AI in Robotic Reasoning (II)
• AI and Future of Work (I)
• Societal Impact of AI (I)
• Group Project Design, Development and Presentations (I)

Booklet 3
6 lessons; 35 minutes each

• Computer Vision (III)
• AI and Computer Simulation (II)
• AI in Robotic Reasoning (III)
• AI and Future of Work (II)
• Societal Impact of AI (II)
• Group Project Design, Development and Presentations (II)
Views and suggestions on the “Module on Artificial Intelligence for Junior Secondary
Level” are always welcome. These may be sent to:

Chief Curriculum Development Officer (Technology Education)


Curriculum Support Division
Education Bureau
Room W101, 1/F, West Block
Kowloon Tong Education Services Centre
19 Suffolk Road, Kowloon Tong
Kowloon, Hong Kong

Fax: 2768 8664


E-mail: teched@[Link]
Module on Artificial Intelligence for Junior Secondary Level
The Chinese University of Hong Kong Editorial Team

Professor YAM Yeung


Research Professor, Department of Mechanical and Automation Engineering
Director, The Chinese University of Hong Kong Shenzhen Research Institute

Professor MENG Mei Ling, Helen


Patrick Huen Wing Ming Professor of Systems Engineering & Engineering
Management

Professor CHIU Kin Fung, Thomas


Assistant Professor, Department of Curriculum and Instruction

Team members from the Faculty of Engineering and the Faculty of Education of The
Chinese University of Hong Kong (CUHK):

Professor CS CHAI,
Professor Philip FU,
Professor Irwin KING,
Professor Wai LAM,
Professor Darwin LAU,
Professor Dahua LIN,
Professor Xunying LIU,
Professor Anthony SO,
Professor Tien Tsin WONG,
Professor Savio WONG,
Dr. Chandni SAXENA,
Dr. Simon WONG,
Dr. Symphony XING,
Dr. Sammi YAU,
Mr. Zhe XU,
Mr. Derek CHEUNG,
Miss Mandy TAM and
Mr. Michael CHUI.

In addition, thanks are due to the Pioneering Schools of the CUHK Jockey Club AI for
the Future Project.
Table of Contents

Unit 1 – AI Basics (II) ............................................................................................................... 9


1.1 Model Training and Testing ........................................................................................... 10
1.2 Problems in Training Data ............................................................................................. 12
1.3 Artificial Neural Networks (ANN) ................................................................................ 15
1.4 Generative AI ................................................................................................................. 16
Unit 2 – AI Ethical Issues ....................................................................................................... 22
2.1 UNESCO Recommendation on the Ethics of AI ........................................................... 23
2.2 AI Principles and Ethical Issues..................................................................................... 23
Unit 3 – Computer Vision (II) ................................................................................................. 32
3.1 How do Computers See Images? ................................................................................... 33
3.2 What is Image Classification? ........................................................................................ 36
3.3 How do Humans Classify Images? ................................................................................ 37
3.4 How does AI Classify Images? ...................................................................................... 38
Unit 4 – Computer Speech & Language (II) ........................................................................... 43
Part A - Automatic Speech Recognition (ASR) .................................................................. 44
4.1 Basics of ASR ................................................................................................................ 44
4.2 ASR Pipeline .................................................................................................................. 47
4.3 ASR Evaluation .............................................................................................................. 48
Part B - Natural Language Processing ................................................................................. 50
4.4 How do Computers Understand Words? ....................................................................... 50
4.5 Applications of NLP ...................................................................................................... 52
Unit 5 – AI in Robotic Reasoning (II)..................................................................................... 59
5.1 AI Robots and Levels of Reasoning .............................................................................. 60
5.2 A Self-Driving Car Approaching a Traffic Light .......................................................... 60
Unit 6 – Future of Work (I) ..................................................................................................... 74
6.1 Past, Present and Future Jobs ......................................................................................... 75
6.2 Impact of Industrial Revolutions on Jobs ...................................................................... 76
6.3 Future of Work with AI.................................................................................................. 78
Unit 7 – Societal Impact (I) ..................................................................................................... 82
7.1 What is Social Good? ..................................................................................................... 83
7.2 Examples of AI for Social Good .................................................................................... 85

7.3 Examples of AI becoming a Detriment to Society ........................................................ 87


7.4 How to Design AI for Social Good ................................................................................ 88
Unit 8 - Group Project Design, Development and Presentations (I)....................................... 93
8.1 Suggested Scenarios ....................................................................................................... 93
8.2 Project Design and Development ................................................................................... 93


Unit 1 – AI Basics (II)

Lesson Plan
Expected lesson time: 35 mins

After the lesson, students should be able to:


A. Knowledge:
1. Understand the training and testing of AI models.
2. Learn about Artificial Neural Networks (ANN) and Deep Neural Networks (DNN).
B. Skills:
1. Apply the trained model to new testing data and understand that the quality of training data affects
the performance of AI models.
2. Understand more about generative AI and how it can be used to generate new content.

Topic | *Task | Time (minutes in class / pre-lesson / after lesson) | Learning Resources
1.1 Model Training and Testing | 1 | 5 mins | Web-based tool and dataset
1.2 Problems in Training Data | 2 | 5 mins | Web-based tool and dataset
1.2 Problems in Training Data | 3 & 4 | 10 mins | Web-based tool and dataset
1.3 Artificial Neural Networks | – | 5 mins | Video
1.4 Generative AI | 5, 6 & 7 | 10 mins | Web-based tool

*Remarks
• Teachers can assign tasks flexibly (pre-lesson, in-class, or assessment), and not all tasks need to be
covered during lesson time.
• Before playing any reference video in a unit, teachers should carefully review its content to ensure its
suitability for teaching purposes.


Unit 1 - AI Basics (II)

1.1 Model Training and Testing

Model Training
Machine learning is a subfield of AI, with the aim to automatically “learn” models from data such that the
models can mimic human intelligence in performing certain tasks. Figure 1.1 shows an example of a model
training process. In this task, we want to train a model to classify whether the images show dogs or cats. The
training inputs are images of dogs or cats, and are labelled accordingly. These labels are the correct answers
that we want the model to output, so we call them reference outputs. The output generated from an input
image is called generated output. We denote the model before training as the initial model (M0). During
training, the machine learning algorithm will update the model parameters, aiming to obtain an optimal model
with minimum error between the generated output and the reference output. We will refer to the model
after training as the trained model (M*).

Figure 1.1: Model training in an AI system


Model Testing
Figure 1.2 shows the process of model testing. After training, we have the trained model (M*). The input
to test this model is an image. The trained model (M*) should be able to classify the image to be showing a
dog or a cat. The testing images should be images that the model has never seen before. If the model is trained
well, the generated output will likely be the correct label.

Figure 1.2: Model testing in an AI system
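The training-and-testing cycle described above can be sketched in a few lines of code. The following is a minimal illustration, not the classroom web tool: it trains a simple nearest-centroid classifier on invented numeric features (hypothetical [body weight in kg, ear length in cm] values standing in for images), producing a trained model M* that is then tested on unseen samples.

```python
# Minimal sketch of model training and testing.
# The feature values are invented for illustration; real image
# classifiers learn from pixel data, not two hand-picked numbers.

def train(samples):
    """Learn one centroid (average feature vector) per label -> trained model M*."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(model, features):
    """Classify an unseen sample by its nearest class centroid."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, model[label]))
    return min(model, key=dist)

# Training inputs with their reference outputs (the labels).
training_data = [
    ([4.0, 6.0], "cat"), ([5.0, 7.0], "cat"),
    ([20.0, 10.0], "dog"), ([30.0, 12.0], "dog"),
]
m_star = train(training_data)                # trained model M*

# Testing: samples the model has never seen before.
print(predict(m_star, [4.5, 6.5]))           # -> cat
print(predict(m_star, [25.0, 11.0]))         # -> dog
```

If the training data captures the general characteristics of each class, the generated output for unseen inputs matches the reference label, which is what Task 1 below asks students to observe.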

Task 1: Model Training and Testing (Image Classifier)

Figure 1.3: Training and testing the “cat versus dog” image classifier

1. Please enter the following link or scan the QR code to visit the website.

[Link]

2. Collect images of dogs and cats, and use them as training or testing data.
3. Train the model with the training data, and test the trained model with testing data samples.
4. What is the output?
The trained model can usually recognize the images as showing cats or dogs correctly.


Notes for teachers:

The link for downloading training data and testing data:


[Link]
[Link]/:f:/g/personal/aiforfuture_cuhk_edu_hk/EmlihJz7u
6NNvT9nXuMgDWwB5Y2TwrjLlNDjl-Ngw137hg?e=S7gQzu

1.2 Problems in Training Data

In machine learning, the AI model learns from data. Therefore, the training data is a very important factor
that determines the quality of the trained AI model. Several properties of the training data can affect the
resulting AI model, as the following tasks show.

Task 2: Training with a Small Training Dataset

Figure 1.4: Training an AI model with a small dataset

1. Please use only ONE dog image and ONE cat image as the training data
2. Re-train the model and test with a testing image.
3. What is the output?
The model trained on the small training dataset generates incorrect output for the testing data sample(s).

4. What is the problem of training an AI model with a small training dataset?


A model trained on a small training dataset may not perform as well as another model trained on a
large training dataset. The reason is that the model may not be able to learn, from a small training set,
the general characteristics that are different between cats and dogs.
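The effect of a tiny training set can be sketched with a 1-nearest-neighbour classifier on invented [weight kg, ear length cm] features (all values are hypothetical, chosen only for illustration): with a single, unrepresentative example per class, a typical query is easily misclassified.

```python
# Sketch: why one example per class is not enough.
# All feature values are invented for illustration.

def nearest_label(training, features):
    """1-nearest-neighbour: return the label of the closest training sample."""
    def dist(sample):
        return sum((a - b) ** 2 for a, b in zip(features, sample[0]))
    return min(training, key=dist)[1]

# Tiny training set: the single "cat" example happens to be an
# unusually large cat, so it does not represent cats in general.
tiny_train = [([9.0, 7.0], "cat"), ([30.0, 12.0], "dog")]

# A small dog lies closer to the atypical cat example than to the big dog.
print(nearest_label(tiny_train, [12.0, 8.0]))    # -> cat (wrong!)

# With a few more representative examples of each class,
# the same query is classified correctly.
bigger_train = tiny_train + [
    ([4.0, 6.0], "cat"), ([5.0, 6.5], "cat"),
    ([12.5, 8.5], "dog"), ([20.0, 10.0], "dog"),
]
print(nearest_label(bigger_train, [12.0, 8.0]))  # -> dog
```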


Task 3: Training with a Noisy Dataset

Figure 1.5: Training an AI model with a noisy dataset

1. Please add other animal images (e.g. rabbit, elephant) to the training dataset
2. Re-train the model and test with a new image
3. What is the output?
The model trained on the noisy training dataset may still generate the correct output for some of the
testing data samples, but the overall performance of the model has dropped.

4. What is the problem of training an AI model with a noisy training dataset?


The model trained using a noisy training dataset (i.e. with images other than dogs and cats) is learning
from some information that is not helpful for deciding between dogs and cats.
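The damage done by noisy data can also be shown numerically. In this hypothetical sketch (invented feature values), keeping one rabbit photo in the "cat" folder drags the learned average for cats far away from real cats:

```python
# Sketch: one off-topic training example (a rabbit kept in the "cat"
# folder) pulls the learned "cat" average away from real cats.
# Features are invented [weight kg, ear length cm] values.

def centroid(samples):
    """Average feature vector of a list of samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

cats = [[4.0, 6.0], [5.0, 7.0]]
rabbit = [2.0, 25.0]                            # very long ears: not a cat

clean_cat_centroid = centroid(cats)             # [4.5, 6.5]
noisy_cat_centroid = centroid(cats + [rabbit])

print(clean_cat_centroid)
print(noisy_cat_centroid)   # ear-length coordinate jumps from 6.5 to ~12.7
```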

Notes for teachers:


Noisy training data and testing data can be downloaded via:
[Link]
edu_hk/Evbzs_zXs15Cj5oa3N8UGc8BcND7f1WQcClIqpwKGe0mZQ?e=Fm
75EQ


Task 4: Training with an Unbalanced Dataset

Figure 1.6: Training an AI model with an unbalanced dataset

1. Please use ONE cat image and TEN dog images as the training data
2. Re-train the model and test with a testing image
3. What is the output?
The model trained on the unbalanced training dataset does not perform as well.

4. What is the problem of training an AI model with an unbalanced training dataset?


The model trained on an unbalanced training dataset may not perform well because the major class tends
to dominate in model training and there is insufficient data for learning the characteristics of the minor
class.
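Class imbalance can be sketched with a majority-vote (k-nearest-neighbour) classifier on invented features: with ten "dog" examples and one "cat", the vote for even a genuine cat query tends to be dominated by the majority class.

```python
# Sketch: with 10 dogs and only 1 cat in the training data, a
# 3-nearest-neighbour vote is dominated by the majority class.
# All feature values are invented for illustration.
from collections import Counter

def knn_label(training, features, k=3):
    """Majority vote among the k closest training samples."""
    nearest = sorted(
        training,
        key=lambda s: sum((a - b) ** 2 for a, b in zip(features, s[0])),
    )[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

training = [([4.0, 6.0], "cat")]                                    # ONE cat
training += [([5.0 + i, 6.0 + 0.3 * i], "dog") for i in range(10)]  # TEN dogs

# A typical cat query: its single closest neighbour IS the cat, but
# two of its three nearest neighbours are dogs, so the vote says dog.
print(knn_label(training, [4.2, 6.1]))   # -> dog
```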


1.3 Artificial Neural Networks (ANN)

An artificial neural network (ANN), or neural network, is a class of machine learning models that is inspired
by the human brain. A deep neural network (DNN) is an ANN with many layers and therefore is described
as “deep”. Just like other machine learning models, DNN can learn from data to perform intelligent tasks.
Human intelligence originates from the human brain, which consists of numerous connected neurons. At a
very high level, neural networks resemble the human brain because they consist of many connected
perceptrons. A DNN accepts input through the input layer, computations take place throughout the
hidden layers, and the output is delivered via the output layer (Figure 1.8).

Figure 1.7: Human brain, neuron and perceptron

Figure 1.8: An artificial neural network (ANN)
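The forward pass suggested by Figure 1.8 can be sketched directly: each perceptron computes a weighted sum of its inputs plus a bias and applies an activation function, and layers of perceptrons are chained from input to output. The weights and biases below are arbitrary illustration values, not a trained model.

```python
# Sketch of a forward pass through a tiny neural network:
# input layer (2 values) -> hidden layer (2 perceptrons) -> output layer (1).
import math

def perceptron(inputs, weights, bias):
    """Weighted sum plus bias, passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """One layer = several perceptrons reading the same inputs."""
    return [perceptron(inputs, w, b) for w, b in zip(weight_rows, biases)]

x = [0.5, -1.0]                                        # input layer
h = layer(x, [[1.0, -0.5], [0.3, 0.8]], [0.0, 0.1])    # hidden layer
y = layer(h, [[1.2, -0.7]], [0.2])                     # output layer

print(y)   # a single value between 0 and 1
```

Training (as in Section 1.1) would adjust these weights to minimise the error between the generated and reference outputs; a "deep" network simply chains many such hidden layers.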


Notes for teachers:


(Further References)
The link for a video on neural networks: The link for a video on deep learning:
[Link] [Link]

1.4 Generative AI

Generative AI (such as ChatGPT and Midjourney) refers to a subfield of AI where complex neural networks
(named “Transformers”) can be used to generate new content, such as text, images, audio, code, music,
videos, etc. Recent breakthroughs in generative AI have the potential to revolutionize many fields such as
arts, design and entertainment.
ChatGPT belongs to the GPT (Generative Pre-trained Transformer) family of language models.
ChatGPT can understand human inputs (referred to as “prompts”) and respond with generated, human-like
responses in real time. With the help of ChatGPT, generative agents can simulate human-like behaviour and
interact in a virtual town, similar to the game The Sims.

Figure 1.9: Generative agents


Please scan the QR code to play with the demo of generative agents:
[Link]


Notes for teachers:


Reference for generative agents:
Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative agents:
Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442.
In the following, you will be referred to a URL to try out both ChatGPT and Midjourney. Please pay attention
to its phrasing and also the accuracy of the information it provides.
Please scan the QR code to access ChatGPT:
[Link]

Task 5: Ask ChatGPT about Generative AI


*Please note that not all information provided by ChatGPT is accurate.
Please ask ChatGPT the following questions:
• What is generative AI?
• What are some examples of generative AI?
• How does generative AI work?
• What are some technology breakthroughs that are important for generative AI?
• What are some use cases for generative AI?
• What are the benefits and limitations of generative AI?

Task 6: Ask ChatGPT about ChatGPT


*Please note that not all information provided by ChatGPT is accurate.
Please ask ChatGPT the following questions:
• What is ChatGPT?
• How does ChatGPT work?
• What is a transformer in AI?
• What is a large language model?
• What are some use cases for ChatGPT?

Another popular example of a generative AI application is Midjourney.


Please scan the QR code to access Midjourney:
[Link]


Or you can use Craiyon if you do not have access to Midjourney:


[Link]

Task 7: Generate an Image Based on a Prompt


Please input the following prompt in Midjourney and see what you can generate.
/imagine cartoon character gummy rabbits having a picnic, concept art, chibi
You will get something like:

Figure 1.10: Image generated by Midjourney


Or you can try a different prompt with the help of the following website.
Please scan the QR code to access the Midjourney prompt helper:
[Link]


More example prompts and generated images:

• Prompt: cartoon character gummy rabbits having a picnic:: comicbook drawing::1 --v 4
• Prompt: cartoon character gummy rabbits having a picnic:: dots::1 glowing::1 --v 4
• Prompt: cartoon character gummy rabbits having a picnic:: dots::1 glowing::1 --v 4

[Generated images not shown]

Learning Summary

• AI model training and testing: AI can learn from labelled training data to produce a trained model for a
given task, and apply the trained model on new testing data.
• The quality of training data is important: A well-trained AI model requires a large amount of data,
accurately labelled (i.e. with low noise), and well-balanced to cover different variations in the data. The
quality of the training data affects the performance of the AI model.
• Artificial neural network: A class of machine learning models that is inspired by the human brain. A deep
neural network (DNN) is an ANN with many layers. Just like other machine learning models, a DNN
can learn from data to perform intelligent tasks.
• Generative AI refers to a subfield of AI where complex neural networks (named “Transformers”) can be
used to generate new content, such as text, images, audio, code, music, videos, etc.


Unit 1 - Assessment Exercise

A. Fill in the blanks

1. The quality of the training data is important. The performance of a model may typically degrade with
small training datasets, noisy training datasets and unbalanced training datasets.
2. A perceptron in a neural network resembles a neuron in the human brain.

B. Multiple choice questions

1. Which of the following is the model used in testing? (B)

A. Initial model (M0)


B. Trained model (M*) with optimal performance
C. The model after first-round training (M1)
D. The model after second-round training (M2)

2. Which of the following is the most likely to IMPROVE model performance? (A)

A. Larger amounts of good-quality training data


B. More noise in the training data
C. Smaller training dataset
D. Using fictitious data for training

3. Which of the following does NOT belong to artificial neural networks? (B)

A. Input layer
B. Neurons
C. Output layer
D. Hidden layer

C. Short questions

1. How are AI, machine learning, and deep learning related?


AI is the technology that enables computers or machines to simulate human intelligence.
Machine learning is a subfield of AI, with the aim to automatically “learn” models from data and use
these models to perform tasks which require “intelligence”.
Deep learning is a subfield of machine learning, with the aim of using a kind of model known as
artificial neural networks to “learn” from data. These neural networks have many layers and are
therefore described as “deep”.


2. Give an example of a real-life situation where it is difficult to train a machine learning model that
performs well.
A student wants to train a classification model to recognize different kinds of birds. However, the student
is unable to find a sufficient number of clear photos of rare bird species. The model he / she trains is
probably unable to recognize the rare bird species well.


Unit 2 – AI Ethical Issues

Lesson Plan
Expected lesson time: 35 mins

After the lesson, students should be able to:


A. Knowledge:
1. Understand the five common AI ethical principles (namely, Transparency, Justice and Fairness,
Beneficence, Responsibility and Privacy).

B. Skills:
1. Apply the AI ethical principles and identify the possible risks in the use of AI technologies.

C. Attitudes:
1. Be aware of possible risks of AI technologies and applications.

Topic | *Task | Time (minutes in class / pre-lesson / after lesson) | Learning Resources
2.1 UNESCO Recommendation on the Ethics of AI | – | 3 mins | –
2.2 AI Principles and Ethical Issues: | | |
2.2.1 Transparency | – | 3 mins | –
2.2.2 Justice and Fairness | 1 | 8 mins | Video and website
2.2.3 Beneficence | 2 | 5 mins | Video and website
2.2.4 Responsibility | 3 | 10 mins | Video and website
2.2.5 Privacy | 4 | 6 mins | –

*Remarks
• Teachers can assign tasks flexibly (pre-lesson or in-class activities), and not all tasks need to be
covered during lesson time.
• Before playing any reference video in a unit, teachers should carefully review its content to ensure its
suitability for teaching purposes.


Unit 2 - AI Ethical Issues

2.1 UNESCO Recommendation on the Ethics of AI

AI is developing rapidly and shaping our future. The most important consideration in the development of AI
is to ensure sustainable benefits -- to be “human-friendly” and “earth-friendly”. Recently, the United Nations
Educational, Scientific and Cultural Organization (UNESCO) developed a guideline on ethical AI and
provided recommendations on AI ethical principles. The recommendations also outline concrete policy
actions to achieve the goal of developing ethical AI systems.

In the next section, we will have an in-depth discussion of issues related to five selected AI ethical principles
-- transparency, justice and fairness, beneficence, responsibility and privacy.

2.2 AI Principles and Ethical Issues

2.2.1 Transparency: Transparency in AI focuses on ensuring that AI-enabled processing in applications is


interpretable by and explainable to users.

Figure 2.1: Image recognition of a banana, or a banana with a sticker.


However, AI models are complex and often lack the transparency needed to trace or correct decision errors.
The above is an example from image recognition. As shown in Figure 2.1, if we show a banana alone to the
image recognition model, it will recognize it as a banana. However, if we place a colorful sticker next to the
banana, the model outputs the label “toaster”. Presently, the image recognition model lacks transparency,
and it is difficult to explain why adding a sticker to the input image can cause this change. To counter these
problems, the development of transparent AI models will be a key direction for the future.

2.2.2 Justice and Fairness: The development of AI technologies and their applications must ensure
equality, inclusion and diversity. “Justice and fairness” can be considered as the rules of fair play. AI
should promote justice and seek to eliminate all types of discrimination.

Task 1: AI for Screening Online Video Interviews


AI is often used to screen online interview videos for hiring staff. It analyzes how you speak, the words you
use and your emotional responses. Please read the materials and answer the following questions.

Figure 2.2: AI for screening online video interviews

[Link] cons-of-using-ai-in-recruiting/
[Link] 55985943


What is/are the advantage(s) or disadvantage(s) of using AI for interviewing in hiring?


Advantage(s):
Screening of a large number of resumes can be done more efficiently.
AI can act like an HR professional, using facial expression recognition to assess candidates.

Disadvantage(s):
AI may follow a set of patterns in an interview dialog and may not be able to conduct a dialog with

sufficient variability based on the inputs from the applicant.

AI algorithms can be biased (e.g. gender bias or age bias) and AI models can make mistakes.

People going through the hiring process may not feel comfortable with a non-human interviewer.

Notes for teachers:


References:
[Link]

2.2.3 Beneficence: “Beneficence” refers to the development and application of AI technologies that must
not do any harm. AI should promote well-being, preserve dignity, and enhance sustainability. AI should
ultimately be developed for the common good and benefit of all humanity – “do good and do no harm”.


Task 2: Airport Security Robots


AI can be used for security or military purposes. Please read the materials and answer the questions in the
following section.
Airport Security Robots

[Link]

Figure 2.3: Airport security robots

1. What are the functions of these robots?


Airport security robots can help maintain airport and flight security to protect all passengers.

2. What good does this do for society? Why?


The security robots are good for our society, because they help to enhance airport security.


Notes for teachers:


References:
Video: Airport security robots
[Link]


2.2.4 Responsibility: “Responsibility” refers to the development and application of AI technologies that
must uphold accountability and honesty. The development of AI systems should align with fundamental
human values and ensure the well-being of humanity.

Task 3: Ethical Dilemmas Applied to Self-driving Cars


Real-world applications of AI may involve autonomous decision making, for example, how to avoid accidents
in a self-driving car. Consider the classic trolley problem with a thought experiment that involves a moral
dilemma - where an onlooker has the choice to save 5 people in danger of being hit by a trolley, by diverting
the trolley to kill just 1 person. What should be the trade-off in the onlooker’s decision? Can there be
sacrifices that are acceptable? If similar types of decisions need to be made by the self-driving car, what kind
of moral choices should be programmed into the AI models? Please read the following materials and answer
the questions below.

Figure 2.4: The trolley problem faced by a self-driving car (image source: [Link], link)


Resources:
• Understanding Self-Driving Cars Under AI Technology—From Ethics to Privacy Issues: [Link]
• The trolley problem: [Link]/words-at-play/trolley-problem-moral-philosophy-ethics
• Moral Machine: [Link]

Notes for teachers:


References:
• The trolley problem: [Link] (v=bOpf6KcWYyw)
• MIT moral machines experiment: [Link] (v=XCO8ET66xE4&t=44s)
• The ethical dilemma of self-driving cars: [Link] (v=ixIoDYVfKA0&t=3s)

1. Moral decisions need to be programmed for self-driving cars. What should those decisions be?

Some decisions may include:


1. Choices of action versus inaction in different situations.
2. When lives must be sacrificed, what may be the considerations in making the moral decision?

E.g. consider humans versus pets, women versus men, healthy versus sick, young versus old,
high social status versus low social status, more lives versus fewer lives.

2. Who should be responsible for making the decisions?

Possible parties may include AI programmers (who built the AI models making the decisions);

manufacturers of self-driving cars; drivers of the car; and policy makers.


2.2.5 Privacy: While developing AI technologies and incorporating them into real applications, sensitive data
must be protected and secured. AI systems create new privacy and ethical challenges. AI and machine
learning consume massive volumes of data in order to learn and improve the AI model’s decision-making
process. However, there are numerous examples in which the power of AI algorithms results in privacy
issues and ethical dilemmas. The principle of privacy should be to protect and secure sensitive data.

Task 4: Data Collected by Mobile Devices


Every time we use our mobile devices, we are releasing a lot of our personal data.

Figure 2.5: Mobile devices and data collection.

1. What types of personal data can your phone collect about you?

Name; website visited; email address and other contact details; geolocation; social network contacts.

2. When does your phone collect your data?

When I am clicking or typing; speaking; browsing or purchasing; taking or uploading photos and videos;
contacting friends, etc.


Learning Summary

• AI can help (i.e. do good) and AI can hurt (i.e. do harm). We need to consider possible benefits versus
harm in AI applications.
• In November 2021, UNESCO’s Recommendation on the Ethics of Artificial Intelligence was adopted by
the 193 Member States as the first global standard.
• The ethical principles raise awareness of possible risks of AI technologies and applications and guide the
ethical development and deployment of AI to benefit humanity.


Unit 2 - Assessment Exercise

A. Fill in the blanks

1. Accidents involving self-driving cars raise concerns about accountability, which is related to the AI
ethical principle of responsibility .

B. Multiple choice questions

1. Which of the following is an action in response to the principle of justice and fairness? (A)

A. Balance the data from female and male engineers to mitigate gender discrimination in AI hiring
assistants.
B. Make sure the applications do not bring any harm to people.
C. Include illustrations and explanations of AI applications and products to help users better
understand them.
D. Protect users’ personal information.

2. What is/are the possible outcome(s) for users in allowing virtual assistants to collect their personal
information? (C)

(1) The virtual assistants will be able to provide more personalised recommendations for their users.
(2) The AI technologies supporting the virtual assistant applications will be able to infer the users’ daily
routines based on their geolocation information.
(3) Their personal information, if insufficiently protected, may be easily accessed by third parties
without their knowledge.

A. (1) only
B. (2) and (3)
C. (1), (2) and (3)
D. None of the above

C. Short questions

1. How can we balance the need for more accurate algorithms with the need for transparency towards
people who are being affected by these algorithms?

We should sacrifice accuracy for transparency, if necessary.


Unit 3 – Computer Vision (II)

Lesson Plan
Expected lesson time: 35 mins

After the lesson, students should be able to:


A. Knowledge:
1. Learn about the basic concepts of digital images and the pipelines of image classification.
2. Understand how AI classifies images using Convolutional Neural Network (CNN).

B. Skills:
1. Identify the potential and limitations of CV technology.

Topic | *Task | Time (in class / pre-lesson / after lesson) | Learning Resources

3.1 How do Computers See Images? | 1 & 2 | 10 mins | Website
3.2 What is Image Classification? | 3 | 5 mins | Website
3.3 How do Humans Classify Images? | 4 | 5 mins |
3.4 How does AI Classify Images? | | 15 mins |

*Remarks
 Teachers can assign tasks flexibly (pre-lesson, in-class, or assessment), and not all tasks need to be
covered during lesson time.
 Before playing any reference video in a unit, teachers should carefully review its content to ensure its
suitability for teaching purposes.


Unit 3 – Computer Vision (II)

3.1 How do Computers See Images?

To human eyes, images are composed of shapes and colors. However, computers can only process numeric
values.

Pixels
If you zoom closely into an image, you will see that it is made up of squares at different positions and each
square only has one color. These squares are the smallest unit of an image and are called pixels. Figure 3.1
(left side) is an image of a parrot with 700×700 pixels. We further converted it into 36×36 pixels (Figure 3.1,
middle) and 18×18 pixels (Figure 3.1, right side) by averaging the colors so that you can see a pixel more
clearly.

Figure 3.1: Image of a parrot represented with different number of pixels

Colors
Colors can be represented as numbers.

A grayscale image is made up of pixels with different shades of gray varying from black (represented as the
number 0) to white (represented as the number 255).

A color image is made up of pixels with a red component (R), a green component (G), and a blue
component (B). Each component can vary and is represented as a number from 0 to 255 (inclusive). A
combination of different RGB components becomes different colors.
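The idea that a color image is simply a grid of RGB numbers can be sketched in a few lines of Python. The pixel values below are made-up toy values for illustration, not taken from any figure in this booklet:

```python
# A tiny 2x2 color image represented as numbers: one (R, G, B) triple
# per pixel, where each component ranges from 0 (none) to 255 (full).
image = [
    [(255, 0, 0), (0, 255, 0)],      # row 1: a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],  # row 2: a blue pixel, a white pixel
]

height = len(image)
width = len(image[0])
components = 3  # R, G and B

# Total numbers needed to store the image:
total = height * width * components
print(total)  # 2 x 2 x 3 = 12 numbers
```

The same counting rule gives the figures in the text: a 3×3 image needs 3×3×3 = 27 numbers, and a 700×700 image needs 700×700×3 = 1,470,000 numbers.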


Task 1: Representing Colors as Numbers


Please scan the QR code to try an online color picker. You can change the RGB values and see how the color
changes. Or you can pick a color and see its RGB values.

Figure 3.2: Red, green and blue components of colors

Notes for teachers:


The link for the QR code: [Link]


Task 2: Representing an Image as Numbers


Now we will further convert the image into 3×3 pixels. Please scan the QR code to try an online image color
picker. For each pixel, please find its R, G, and B values. Then please put all the R, G, B values at the
corresponding locations in the R component table, G component table and B component table.
Notes for teachers:
The link for the QR code:
[Link]

Figure 3.3: Representing an image as numbers

These are actually the numbers that represent this 3×3 image in a computer. You can see that when an image
is represented by its RGB components, an image with 3×3 pixels is stored as 3×3×3 = 27 numbers. Can you
calculate how many numbers there are to represent the original image of 700×700 pixels (Figure 3.1,
left)?

700×700×3 = 1,470,000

Notes for teachers:


3 (height) ×3 (width) ×3 (RGB components) = 27 numbers
Or you can count, in Figure 3.3, 27 numbers are used to represent the 3×3 image.
700 (height) × 700 (width) × 3 (RGB components) = 1,470,000 numbers

3.2 What is Image Classification?

One popular task in Computer Vision (CV) is image classification. In Unit 1 of Booklet 1 entitled
“Introduction to AI”, the AI model which can classify an input image as “rock”, “paper” or “scissors” is an
image classification model. Image classification is the task of classifying an image to be a class among a set
of pre-defined classes. For example, if we have many images of either an apple, an orange, or a banana, a
trained AI model can classify them accordingly.

Figure 3.4: Image classification

Task 3: Fish or Not Fish?


In this demo, an AI robot can perform an image classification task of identifying whether an image shows a
fish or not, based on examples provided by the students.
Notes for teachers:
Please scan the QR code, click try now, click continue for the demo.
The link for the QR code: [Link]

Figure 3.5: Classify whether an image shows a fish or not



3.3 How do Humans Classify Images?

Task 4: Distinguishing between Birds and Zebras in the Images

Figure 3.6: Images of birds and zebras

How do we classify all these images as either showing birds or zebras?

Images (a), (d), (f) and (h) show birds because a bird usually has a sharp beak and two legs.
Images (b), (c), (e) and (g) show zebras because they usually have black-and-white stripes on their skin
and each zebra has four legs.

We can classify objects using their distinctive characteristics and these characteristics are called features.
Features that help classify an image of a bird versus a zebra include: birds usually have sharp beaks, while
mouths of zebras are round. Also, zebras have a special black-and-white stripe pattern, while birds generally
do not. A third point is that each zebra has four legs, while each bird has two legs. A fourth point is that each
bird has two wings, while zebras do not have wings.


3.4 How does AI Classify Images?

As you may imagine, if we can develop an AI model to extract the features used by humans to classify images,
then the AI model should also be able to perform image classification tasks. However, it is generally very
difficult to extract the exact features used by humans.

For example, it is difficult to specify precisely what the feathers of birds look like. In Figure 3.6, the feathers
of the birds may be very different: (a) has long feathers, (d) has colourful patterns, textures of feathers of (f)
can hardly be seen, and feathers of (h) look fluffy.

Convolutional Neural Networks (CNN)


Instead, we can use deep learning to automatically capture some features that can be used by AI for image
classification. As we learned earlier in Booklet 1, deep learning is the technology behind many powerful AI
applications nowadays, including CV.
A trained deep learning model which can perform a specific task with a respectable success rate usually can
capture the appropriate features from the input. Some features that are used by AI may look like features that
human use. Other features that are used by AI cannot be understood by humans, but they are useful for the
AI models in making decisions, such as distinguishing among objects from different classes.

The Convolutional Neural Network (CNN) is a class of deep learning models which is frequently used in
CV tasks. Figure 3.7 illustrates a CNN. In a CNN, we may have multiple convolutional layers (i.e. dark
green blocks in Figure 3.7) and they are important for capturing the features from the input. This CNN
example takes an image as the input and can output the predicted class of the image.

Figure 3.7: Illustration of a CNN.


Features in a CNN
A CNN can have multiple convolutional layers and each convolutional layer can have multiple filters.
Different filters can learn to capture different features during training. Figure 3.8 shows some images after
being processed by a convolutional layer of the CNN (please refer to Figure 3.7, second layer counted from
the input side), and are sampled from 3 randomly selected (out of a total of 64) filters in that layer.

Figure 3.8: Features from different input images, extracted by three randomly selected filters in a
convolutional layer (layer 2) of a trained CNN
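To give a concrete feel for what a single filter does, the Python sketch below applies a hand-written 3×3 vertical-edge filter to a tiny grayscale image. The image values and the kernel here are illustrative choices, not the actual filters learned by the CNN in Figure 3.7 (learned filters are set automatically during training):

```python
# A small grayscale image (values 0-255): dark on the left, bright on
# the right, with a vertical edge in the middle.
image = [
    [0, 0, 0, 0, 255, 255, 255, 255],
    [0, 0, 0, 0, 255, 255, 255, 255],
    [0, 0, 0, 0, 255, 255, 255, 255],
]

# A 3x3 vertical-edge filter: it responds strongly where pixel values
# change from left to right, i.e. at a vertical edge.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, ker):
    """Slide the 3x3 kernel over the image (no padding, stride 1)."""
    out = []
    for i in range(len(img) - 2):
        row = []
        for j in range(len(img[0]) - 2):
            s = sum(ker[a][b] * img[i + a][j + b]
                    for a in range(3) for b in range(3))
            row.append(s)
        out.append(row)
    return out

feature_map = convolve(image, kernel)
print(feature_map)  # [[0, 0, 765, 765, 0, 0]] - large responses only around the edge
```

The feature map is zero over the flat dark and bright regions and large only near the edge: this is the sense in which a convolutional filter "captures" a low-level feature.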


Figure 3.9: Features from different input images, extracted by a randomly selected filter for each of the
earlier to later convolutional layers (layer 2, layer 5, layer 9, layer 13, and layer 17) in a trained CNN

Earlier convolutional layers look for features in a smaller region in the image (i.e. lower-level features). For
example, in Figure 3.9, the image in row 2 column 2 shows a filter in an early convolutional layer (i.e. layer
2 of the CNN in Figure 3.7) which captures the detailed texture of the peacock’s tail. Later convolutional
layers look for features in a larger region in the image (i.e. higher-level features). For example, in Figure
3.9, the image in row 4 column 5 shows a filter in a late convolutional layer (i.e. layer 13 of the CNN in Figure
3.7) which captures abstract features such as the shape of the zebra.

Using these features at different levels, the CNN can decide whether the image shows a bird versus a zebra
based on output quantitative values that are interpreted as probabilities. For example, the model output
specifies that for the input image in Figure 3.7, there is a probability of 97% that it shows a bird and a
probability of 3% that it shows a zebra.
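The conversion from a network's raw output scores into probabilities such as 97% and 3% is typically done with the softmax function. The sketch below illustrates the idea in Python; the input scores are made-up values chosen so the result roughly matches the example above:

```python
import math

# Softmax turns raw output scores (logits) into probabilities: each
# score is exponentiated, then divided by the sum, so all results are
# positive and add up to 1.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative scores for the classes ["bird", "zebra"]:
probs = softmax([3.5, 0.0])
print([round(p, 2) for p in probs])  # roughly [0.97, 0.03]
```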


Learning Summary

• An image is made up of pixels and each pixel only has one color. Each color can be represented as one
number (for a grayscale image) or three numbers (RGB components of a color image).

• Image classification is the task of classifying an image as one of a set of pre-defined classes.

• Humans can classify images by looking for features such as striped patterns of a zebra or wings of a bird.
Features are the distinctive characteristics of an object.

• AI can classify images by looking for features. A frequently used method is a class of deep learning
models called Convolutional Neural Networks (CNNs). Features used by AI may differ from features
used by humans, but they are useful for making decisions such as performing image classification tasks.

• Earlier convolutional layers look for features in a smaller region in the image (i.e. lower-level features).
Later convolutional layers look for features in a larger region in the image (i.e. higher-level features).


Unit 3 - Assessment Exercise

A. Fill in the blanks

1. A color image is made up of pixels with a red component (R), a green component (G) , and a
blue component (B).
2. Image classification is the task of classifying an image into a fixed set of pre-defined classes.
3. Features are the distinctive characteristics of an object.

B. Multiple-choice questions

1. Which of the following statements is NOT correct? (A)

A. Features used by AI in image classification are the same as features used by humans.
B. In an image classification task, the possible classes are pre-defined.
C. Convolutional layers are important for capturing features.
D. Early convolutional layers look for features in a small region in the image.

C. Short questions

1. Please describe an application of image classification.


We can train an image classification model to classify whether the soil for growing a plant is too dry or not,

in order to determine if watering is needed.

We can train an image classification model to classify if a hand gesture from a human represents a

“rock”, “paper” or “scissors”, so that the AI can play the game Rock-Paper-Scissors with the human.


Unit 4 – Computer Speech & Language (II)

Lesson Plan
Expected lesson time: 35 mins

After the lesson, students should be able to:


A. Knowledge:
1. Learn how Automatic Speech Recognition (ASR) transcribes speech into text and its evaluation.
2. Be aware of how computers understand words through Natural Language Processing (NLP).
B. Skills:
1. Apply NLP, such as generating an anime video from an image and a script.

Topic | *Task | Time (in class / pre-lesson / after lesson) | Learning Resources

4.1 Basics of ASR | 1 | 7 mins | Video and online application
4.2 ASR Pipeline | | 5 mins |
4.3 ASR Evaluation | 2 & 3 | 5 mins |
4.4 How do Computers Understand Words? | 4 | 5 mins | Online application
4.5 Applications of NLP | 5, 6 & 7 | 13 mins | Online application

*Remarks
 Teachers can assign tasks flexibly (pre-lesson, in-class, or assessment), and not all tasks need to be
covered during lesson time.
 Before playing any reference video in a unit, teachers should carefully review its content to ensure its
suitability for teaching purposes.


Unit 4 - Computer Speech & Language (II)

The ability to process human language is considered to be a sign of intelligence. Technologies that can
process speech input and generate speech output are core to AI. Previously we have learned how AI
generates speech output with text-to-speech (TTS) technologies. In this unit, let us look at how AI “listens”
to speech input with automatic speech recognition technologies and how AI “understands” and “generates”
natural language with natural language processing (NLP).

Part A - Automatic Speech Recognition (ASR)

Automatic speech recognition (ASR) is a technology that enables computers to recognise and transcribe
human speech into text.

4.1 Basics of ASR

ASR is widely used in our daily lives as shown below in some common applications.

Application More information


Dictation system

(Left) The user says -- “dear tom comma new paragraph how are you”.
(Right) The system recognizes the speech and converts it into text, as well as
executing instructions for punctuation, like “comma” and “new paragraph”, and
inserting capitalization automatically.

Computer-aided Allow users to practise speaking through the AI-enabled language learning system
language learning and get instant feedback to facilitate learning.
system
Notes for teachers:
References:

[Link]


Personal virtual assistant

(Left) The user says, “set timer for ten minutes” to a phone.
(Right) The system recognizes the speech, launches the timer application,
and sets it for 10 minutes.

CUHK disordered speech recognizer (assistive technology)

Recognizes speech from a subject with speech disorders.

Link

The performance of ASR is affected by many factors, such as environmental characteristics (e.g. background
noise), quality of recording devices, etc., as shown in the table below:

Factors Examples
Speaker Style of speaking: fast, slow, spontaneous, enunciated, slurred, emotional, etc.
characteristics Spoken vocabulary: proper nouns, colloquial expressions
Pausing: hesitations, phrasal breaks, ends of sentences
Accent American, Australian, British accents, etc.


Task 1: Factors Affecting the Accuracy of ASR


In the following table, we present two ASR engines – ASR 1 and ASR 2. We also present a clean input speech
wave and a noisy input speech wave. If you open the ASR 1 application, please select the language option
“Chinese (Cantonese, Traditional)”. Then upload the clean input speech wave and examine the generated
output. Afterwards, upload the noisy input speech wave and examine the generated output. Now, open the
ASR 2 application and select the language option “廣東話 (香港)”. Please repeat the actions in uploading
the two input speech waves, one by one, and in each case, record the output you get.

Testing Audio | ASR | Output of ASR

Clean (Link) | ASR 1 (Link) | 每個星期三晚上我都要去上搏擊操課。
Clean (Link) | ASR 2 (Link) | 每個星期三晚上我都要去上搏擊操課。
Noisy (Link) | ASR 1 (Link) | 每個星期三晚上,我都要去上搏擊催貨。誒?
Noisy (Link) | ASR 2 (Link) | 每個星期三晚上我都要去上搏擊槍貨。


4.2 ASR Pipeline

A speech waveform is recorded using a microphone connected to the sound card of a computer, and is
transformed into a digital signal. This is input into an AI model that is trained to perform ASR. We can adopt
different output units for training the model. Here are some possible output units:
 English words, such as “the North wind”
 English phonemes, such as “/ðə nɔrθ wɪnd/” – A phoneme is the smallest unit of sound that can
distinguish one word from another, e.g. “pat” (/pæt/) versus “pet” (/pɛt/)
 Chinese characters, e.g. “香港是我家”
 Cantonese syllables in Jyutping “/hoeng1 gong2 si6 ngo5 gaa1/”
 Putonghua syllables in Pinyin “/xiang1 gang3 shi4 wo3 jia1/”

Note that for the Chinese language, the recognizer may output only the syllable, or the syllable with a tone.
Figure 4.1 provides an illustration of the ASR pipeline.

Figure 4.1: ASR pipeline

In order to recognize the different sounds as phonemes or syllables (in Pinyin or Jyutping), the ASR model
needs to capture acoustic information. Furthermore, in order to recognize words or characters, the ASR
model needs to capture linguistic information (covering the vocabulary and grammar). For example, “right”
and “write” have the same sounds in their pronunciations. In order for the ASR model to generate the correct
word in the output, it needs to capture contextual information from surrounding words in the sentence.

The ASR pipeline can be extended to recognize different characteristics from the speech signal, e.g. the gender,
age, or identity of the speaker, the language spoken, the accent of the speech, etc.


4.3 ASR Evaluation

Now that we understand how ASR works, the next question to ask is how to measure performance. This is
known as performance evaluation.

English ASR Evaluation


Imagine that someone said “Is the baby crying now?” If the ASR result is “Is the bay bee crying?”, we see
that there are errors. The model substituted “bay” for “baby”, inserted “bee” before the word “crying” and
deleted the word “now”. This can be seen by aligning the two sentences:

is the baby *** crying now


is the bay bee crying ---

The word error rate (WER) is calculated by:

WER = (number of substitutions + number of deletions + number of insertions) / (number of words in the input)

Here, there is 1 substitution error, 1 deletion error and 1 insertion error, out of a total of 5 words in the
input. Hence:
WER = (1 + 1 + 1) / 5 = 0.6 (or 60%)
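For teachers who would like to demonstrate the calculation programmatically, the following Python sketch computes the error rate by aligning the reference ("what was said") with the ASR output using edit distance, found with dynamic programming, which is a standard approach. The same function gives the character error rate if lists of characters are passed instead of lists of words:

```python
# Error rate = minimum number of substitutions, deletions and
# insertions needed to turn the reference into the hypothesis,
# divided by the number of tokens in the reference.
def error_rate(reference, hypothesis):
    n, m = len(reference), len(hypothesis)
    # d[i][j] = minimum edits to turn the first i reference tokens
    # into the first j hypothesis tokens
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i          # i deletions
    for j in range(m + 1):
        d[0][j] = j          # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[n][m] / n

ref = "is the baby crying now".split()
hyp = "is the bay bee crying".split()
print(error_rate(ref, hyp))  # 3 errors / 5 words = 0.6
```

For Chinese CER, pass `list("唯我們暫時唔指証他")` and `list("為我們暫時指証他了")` instead of word lists.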

Chinese ASR Evaluation


Since Chinese is written as characters, we often use character error rate (CER) for Chinese ASR evaluation.
This is because there may be uncertainty in determining the word sequence from a character sequence, e.g.
“上海市長江大橋” may be divided into two reasonable word sequences. The calculation of the character
error rate is similar to the word error rate, with the only difference being the use of a character, rather than a
word, as the unit. Imagine that someone said, “唯我們暫時唔指証他”. If the ASR result is “為我們
暫時指証他了”, we can see that there are recognition errors: the model substituted “為” for “唯”, deleted
the character “唔”, and inserted “了” at the end. This can be seen by aligning the two sentences:

唯我們暫時唔指証他**
為我們暫時--指証他了

Task 2: Possible Word Sequences that Fit a Character Sequence

For the given character sequence, “上海市長江大橋”, please try to segment it into a sequence of Chinese
words. What may be the possible Chinese word sequence(s)?

Suggested solutions:

1. 上海市│長江大橋
2. 上海市│長江│大橋
3. 上海市長│江大橋
4. 上海│市長│江大橋
5. 上海│市長│江│大橋


Task 3: Character Error Rate (CER)

Please fill in table below and calculate the CER based on the ASR result: “為我們暫時指証他了” for the
speech “唯我們暫時唔指証他”.

Error Number of error(s)


Substitution 1
Deletion 1
Insertion 1

Number of characters 9
CER = (1 substitution + 1 deletion + 1 insertion) / 9 (number of characters in the input)
= 3/9 ≈ 0.33, i.e. 33%

Learning Summary of Part A

 ASR aims to transcribe speech into text.


 The approach to ASR may also be extended to recognize other characteristics in speech, e.g. speaker
characteristics, the language spoken, the accent in speech, etc.
 The performance of ASR is affected by the quality of the input speech, such as loudness, environmental
noise, speaking styles (e.g. enunciation versus mumbling), etc.
 The ASR model needs to learn acoustic information (relating to the sounds) and linguistic information
(relating to vocabulary and grammar).
 The performance of ASR may be evaluated by using word error rate (e.g. for English speech) and character
error rate (e.g. for Chinese speech).


Part B - Natural Language Processing

Natural language processing (NLP) is a branch of AI that helps computers to understand, interpret and
process human language. Applications of NLP include plagiarism detection, text generation, text
summarisation, question answering, news recommendation, machine translation and grammar checking.

4.4 How do Computers Understand Words?

In the previous unit, we learned that computers can only process numeric values and images are represented
as numbers for computers. Similarly, we can represent words quantitatively, in order to be processed by
computers.

Figure 4.2: Words represented quantitatively as word embeddings.


[Reference: NTT Technology Review 15(11), 2017]

Word Embedding
Word embedding is an approach that involves complex computation to represent each word as a sequence of
numbers (which we call a vector). Word embedding vectors can be trained using large-scale (e.g. web-scale)
text data. Using the quantitative values in word embeddings, computers can calculate the meanings of words
and the relationships between words. The overall idea is illustrated in Figure 4.2. The use of word embeddings
enables computers to compute how strongly two words are associated with each other – for example, words
that have similar meanings will have a small distance between them in the word embedding space; and the
relationships among words may be reflected through angles, as illustrated in Figure 4.3.
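The idea of measuring word similarity from embedding vectors can be illustrated with cosine similarity. The three-dimensional vectors below are made-up toy values; real embeddings are trained from large text corpora and typically have hundreds of dimensions:

```python
import math

# Toy word embeddings (illustrative values only).
embeddings = {
    "king":  [0.8, 0.7, 0.1],
    "queen": [0.7, 0.8, 0.1],
    "apple": [0.1, 0.0, 0.9],
}

# Cosine similarity: close to 1.0 means the vectors point the same
# way (related meanings); values near 0 mean little relationship.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # about 0.99 (similar)
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # about 0.18 (not similar)
```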

Figure 4.3: The similarity and relationship between words can be reflected through word embeddings.

Notes for Teachers:


Reference for the figure 4.3:
[Link]

Task 4: Word Association Games


This task shows a game that can rank a list of words based on their association with the word you typed in
response to a prompt. You can win points if you are able to quickly think of a word that has close association.
Please scan the QR code or click the link to play the word association games.
[Link]

Figure 4.4: Word association game


4.5 Applications of NLP

The following are some powerful NLP applications for you to try.

Task 5: Generate Lyrics


ChatGPT is a state-of-the-art NLP application. It can perform NLP tasks such as text generation, text
summarisation, question answering, language translation, etc. Let’s ask ChatGPT to generate some lyrics.

Website to use ChatGPT:


[Link]

Prompt to use ChatGPT to generate lyrics on e.g., the rubber ducks:


Can you write me a song with the following keywords: rubber duck, double ducks, Victoria Harbour,
bathtub, bath, joy, happiness, friends, double duck is double luck, get connected, together

Figure 4.5: Rubber ducks (image generated by Midjourney)


Examples of songs generated by ChatGPT:

Song 1

(Verse 1)
Rubber duck, floating in the bay,
Victoria Harbour, is where it plays,
Double ducks, now in sight,
Turning the harbour into a giant bathtub delight.
(Chorus)
Bath time joy, happiness surrounds,
Friends together, there's no better sound,
Double duck is double luck,
Get connected, let's all stay stuck.
(Verse 2)
Tamar Park, where they reside,
Central and Western, by their side,
Kowloon landmarks, they greet with glee,
Cultural Centre, Avenue of Stars, and K11 Musea, you'll see.
(Chorus)
Bath time joy, happiness surrounds,
Friends together, there's no better sound,
Double duck is double luck,
Get connected, let's all stay stuck.
(Verse 3)
Ngong Ping 360, the bubble bath ride,
Flying Rubber Duck, by your side,
Star Ferry, duck-themed trip,
Drenched in happiness, we'll all let it rip.
(Chorus)
Bath time joy, happiness surrounds,
Friends together, there's no better sound,
Double duck is double luck,
Get connected, let's all stay stuck.
(Outro)
Rubber duck, double ducks, we'll never part,
Together forever, in our hearts,
Bath time joy, happiness found,
Friends together, forever bound.

Song 2

(Verse 1)
In the land of Victoria Harbour, where the water's blue,
There's a little secret that I'll share with you.
Grab your rubber duck, and I'll bring mine too,
We'll jump into the bathtub and watch our worries undo.
(Chorus)
Double ducks, double ducks, bring joy to our faces,
In the bath, we find our happy places.
Double duck is double luck, we'll splash and play,
Get connected, together, let's seize the day.
(Verse 2)
With a quack and a splash, our happiness grows,
We'll float down the river wherever it goes.
Our friends on the shore will join in the fun,
Together we'll bask in the warm setting sun.
(Chorus)
Double ducks, double ducks, bring joy to our faces,
In the bath, we find our happy places.
Double duck is double luck, we'll splash and play,
Get connected, together, let's seize the day.
(Bridge)
As the water dances, we'll sing and we'll laugh,
There's nothing quite like a good, warm bath.
Our hearts overflow with joy and delight,
Together we'll chase away the dark night.
(Chorus)
Double ducks, double ducks, bring joy to our faces,
In the bath, we find our happy places.
Double duck is double luck, we'll splash and play,
Get connected, together, let's seize the day.
(Outro)
So take my hand, and let's dive on in,
To a world of happiness, where we all win.
With our rubber ducks, and our hearts so light,
Together we'll make everything feel so right.
(Double Ducks, Double Ducks)
(Double Duck is Double Luck)

Task 6: Generate Poems


Besides generating lyrics, ChatGPT can also generate poems. Let’s try generating poems on a specific topic,
starting each line with a specific word.
Prompt to use ChatGPT to generate poems on your school, e.g. The Chinese University of Hong Kong,
starting with the word “quiet” at the beginning of every line:
Can you write a poem on The Chinese University of Hong Kong using the word “quiet” at the beginning of
every line?

Figure 4.6: A girl writing poems (image generated by Midjourney)


An example poem generated by ChatGPT:

Quiet whispers of ambition,
Quiet halls that hold tradition,
Quiet scholars in deep thought,
Quiet knowledge, dearly sought.

Quiet mornings 'neath the sun,
Quiet campus, day's begun,
Quiet pathways lined with trees,
Quiet minds, like gentle breeze.

Quiet gardens, verdant spaces,
Quiet wisdom in their graces,
Quiet stories yet untold,
Quiet futures to unfold.

Quiet footsteps echo far,
Quiet under moon and star,
Quiet moments, hearts unite,
Quiet dreams in Hong Kong's night.

Quiet pride, a legacy,
Quiet strength, diversity,
Quiet voices, visions sewn,
Quiet growth, as seeds are sown.

Quiet courage, hope, and fire,
Quiet hands that lift us higher,
Quiet bonds, a firm foundation,
Quiet heart, The Chinese University of Hong Kong's aspiration.


Task 7: Generating Chinese Couplets


Besides generating lyrics and poems, ChatGPT can also generate Chinese couplets. Let’s have a try.
Prompt to use ChatGPT to generate Chinese couplets: Can you generate the Chinese couplets for “詩意悠悠
樂自然”?


Figure 4.7: Generate Chinese couplets with ChatGPT


Examples:

First line: 詩意悠悠樂自然
Option 1: 詩意悠悠樂自然,山水畫卷心中藏。
Option 2: 詩意悠悠樂自然,風吹草低鳥婉轉。
Option 3: 詩意悠悠樂自然,氣象萬千情韻橫。

First line: 天生我材必有用
Option 1: 天生我材必有用,千錘百煉鑄英雄。
Option 2: 天生我材必有用,努力拼搏創未來。
Option 3: 天生我材必有用,春華秋實展宏圖。

First line: 星光璀璨月華明
Option 1: 星光璀璨月華明,天外仙樂夢中聽。
Option 2: 星光璀璨月華明,夜色如水夢裡行。
Option 3: 星光璀璨月華明,人間美景賞不盡。


The above task is empowered by AI-generated content. Besides natural language, AI can also generate many
other kinds of content, e.g. translations, code, music, etc. This is referred to as “Generative AI”. Later in the
projects, we will also show you how to generate a talking avatar that can speak about a topic of your choice!

Notes for Teachers:


Before using the Poe platform, please remind students of the following: 1) The Poe platform requires an email
address to register for an account and to log in; 2) Be aware of what information Poe will collect and how it
will use the information you provide.

Learning Summary of Part B


 Natural language processing (NLP) is a branch of AI that helps computers to understand, interpret
and manipulate human language.
 Applications of NLP include plagiarism detection, text generation, text summarisation, question
answering, news recommendation, machine translation and grammar checking.
 One way for computers to understand words is word embedding. With word embedding, words with
related meanings will have a similar representation.


Unit 4 - Assessment Exercise

A. Fill in the blanks

1. Automatic speech recognition (ASR) aims to transcribe speech into text .


2. The factors affecting the performance of an ASR system include environmental noise, quality of the
recording device , speaking style , accents in speech, etc.
3. Chinese speech can be transcribed as syllables carrying tones , as well as characters .
4. The approach to ASR can be extended to recognize the speaker’s characteristics, the language spoken,
the accent, or the environmental conditions of the recording.
5. The performance of ASR may be evaluated in terms of word error rate or character error rate .
6. Natural language processing is a branch of AI that helps computers to understand, interpret and
manipulate human language.

B. Multiple choice questions

1. The performance of ASR is affected by many factors. To which kind of factor do “Pauses” belong? (A)

A. Speaker characteristics
B. Accents
C. Environmental factors
D. Recording quality

2. Which of the following element(s) can be found in human speech, including the Chinese language? (D)

A. Tones
B. Phrases
C. Words
D. (A), (B) and (C)

3. A/An _____ is the smallest unit of sound in a language that distinguishes one word from another. (C)

A. Tone
B. Vowel
C. Phoneme
D. Alphabet

4. What kind of information is considered as linguistic information? (C)

I. Vocabulary
II. Grammar
III. Loudness

A. I and III
B. II and III
C. I and II
D. I, II, and III

5. Which of the following elements is not needed for evaluating the performance of a speech recognizer?
(C)
A. Substitution errors
B. Insertion errors
C. Completion errors
D. Deletion errors

C. Short questions

1. What may be the possible output(s) in a speech recognition system?


Words, characters, syllables, phonemes

2. Give TWO examples of the use of ASR in our daily lives.


Fast food ordering / ticket buying system / reception robot


Unit 5 – AI in Robotic Reasoning (II)

Lesson Plan
Expected lesson time: 35 mins

After the lesson, students should be able to:


A. Knowledge:
1. Understand how a robot exhibits different levels of reasoning depending on its design, coding and
intended applications

B. Skills:
1. Indicate the particulars of three types of reasoning, namely Skill-based reasoning, Rule-based
reasoning, and Knowledge-based reasoning.

Topic | *Task | Time (minutes in class / pre-lesson / after lesson) | *Learning Resources
5.1 AI Robots and Levels of Reasoning | – | 5 mins |
5.2 A Self-Driving Car Approaching a Traffic Light | 1 | 6 mins | Video game
 | 2 | 8 mins |
 | 3 | 8 mins |
 | 4 | 8 mins |

*Remarks
 Teachers can assign tasks flexibly (pre-lesson or in-class activities), and not all tasks need to be
covered in lesson time.
 Prior to playing any reference video in a unit, it is important for teachers to carefully review the content
to ensure its suitability for teaching purposes before proceeding with playing it.


Unit 5 – AI in Robotic Reasoning (II)

5.1 AI Robots and Levels of Reasoning

A robot is a machine, typically programmable by a computer, that can execute specific tasks automatically
with speed, precision and repeatability.

Similar to humans, a robot performing a task needs to repeatedly go through a three-step process until the task
is complete: sensing – observing the situation at hand; reasoning – deciding the next action to take; and acting
– executing the decision. Among these three, reasoning is the part that contributes most to the perceived
“intelligence” of a robot.

Human reasoning can mainly be categorized into three levels:


1. Skill-based Reasoning: Actions derived based on intuitive sensory-motor behavior without much
conscious thinking.
2. Rule-based Reasoning: Actions derived based on instructions, rules or patterns gained from prior
experience.
3. Knowledge-based Reasoning: Actions derived from detailed analysis of the current situation, information
gained from prior experience, knowledge about the process at hand, and the objective(s) to be achieved.

Robots may be designed and coded to incorporate different levels of reasoning depending on their intended
applications and desired capabilities. Among them, AI robots are considered to be those that make decisions
largely based on Knowledge-based reasoning. They are capable of learning from past experience and adapting
to uncertain circumstances, and hence can better handle complex and dynamic tasks. In return, AI robots
require more powerful coding and machine learning algorithms in their microprocessors, more data from past
processes to learn from, and more advanced sensors and actuators to support task execution.

5.2 A Self-Driving Car Approaching a Traffic Light

Take the example of a self-driving car approaching a traffic light: it needs to decide whether or not to brake.
When the signal is red, the car should brake and stop. When the signal is green, the car can continue to move
forward. The ambiguous situation arises when the light turns from green to amber: the car needs to decide
whether to brake or to go forward. Either way, the goal of the self-driving car is not to violate any traffic rule,
i.e. the car should go past the traffic light, or come to a stop in front of it, before the signal turns from amber
to red.


Figure 5.1 Traffic Light Decision Simulator

In this Unit, students will use a Traffic Light Decision Simulator to see how a self-driving car makes braking
decisions under the three levels of reasoning and observe the respective outcomes. Figure 5.1 shows the screen
of the Simulator with the car approaching a traffic light which is in amber color. This is the situation under
which the car should make a decision on braking or not. Specifically, a traffic light stop line (in red) is shown.
If the car decides to brake, the car should stop in front of the stop line when the traffic light turns red. It will
be a violation if the car stops after going over the stop line. If the car decides not to brake, the car should go
past the stop line while the traffic light is still showing the amber signal. It will be a violation if the car passes
the stop line after the traffic light turns red. A violation is recorded as a failed attempt in the Simulator.
Moreover, as in a real-life scenario, a driver should not stop the car too far away from the traffic light.
Applying this concept to the Simulator, the self-driving car should stop within the stopping zone, which is
between the green and red lines shown in Fig. 5.1.

Please download the simulator via this link.

Task 1: Braking Decision by Skill-based Reasoning


Figure 5.2 shows the user interface for Skill-based reasoning. Here, students are to execute the braking
decision mechanically when the car reaches the Decision Line (shown in Green). If the traffic light is showing
a green signal, no action is to be applied and the car keeps going. If the traffic light is showing a red or amber
signal, students need to press and hold the “B” key on their keyboards to start full braking until the car stops.
Note that the speed of the car is randomly set for each run but will not play any part in the braking decision in
this case.
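The mechanical decision described above can be sketched as a few lines of Python. The function name and return values here are illustrative only and are not part of the Simulator.

```python
def skill_based_decision(light_color):
    """Skill-based reasoning at the Decision Line: the decision depends
    solely on the traffic light color, ignoring the car's speed."""
    if light_color == "green":
        return "no braking"   # keep going
    else:                     # amber or red
        return "full braking" # press and hold "B" until the car stops

print(skill_based_decision("green"))  # no braking
print(skill_based_decision("amber"))  # full braking
```

Note how the function has no speed parameter at all, which is exactly the limitation students are asked to identify in question 3 below.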


Decision Line

Figure 5.2: User interface of the Simulator for Skill-based reasoning

Students are to conduct the braking experiments using the Simulator and answer the questions.

1. Conduct 20 runs with the Simulator and note the failed/successful result in each case.

Number of Successful Attempts =


Number of Failed Attempts =

2. Calculate the percentage of successful cases.


Number of Successful attempts
× 100% =
20

3. What important information do you think Skill-based reasoning has omitted in its decision process?
The speed of the car will be a highly relevant parameter. Weather condition is also
something that has strong effect on the braking performance.
Both are not included in Skill-based Reasoning above.

Notes for Teachers:

In this case, a braking attempt will have a high chance of success should the randomly assigned starting speed
of the car happen to be on the low side, and the traffic light also happen to show an amber signal when the
car reaches the Green line. Given that a sample of 20 runs is statistically small, some students may encounter
more than their fair share of such runs in their braking attempts and end up with high percentages of success.

Task 2: Braking Decision by Rule-based Reasoning


In Rule-based reasoning, the car will determine its braking action based on a set of pre-set rules that take into
consideration its current speed and distance away from the traffic light. The braking process will also start as
soon as the amber light is on and does not wait until reaching the decision line as in the Skill-based reasoning.
The pre-set rules can be viewed as driving patterns one would extract from his/her own experience in past
handling of traffic light braking scenarios.

Figure 5.3 shows the user interface of the Simulator for Rule-based reasoning. The Simulator has been
embedded with a set of 4 rules for the braking decision once the car observes that the amber light is on:

Rule #1: If the car is moving at slow speed (≤ 20 km/h) and is at a short
distance (< 10 m) away from the traffic light, then apply half-braking;
Rule #2: If the car is moving at slow speed (≤ 20 km/h) and is at a long
distance (≥ 10 m) away from the traffic light, then apply no braking;
Rule #3: If the car is moving at high speed (> 20 km/h) and is at a short
distance (< 10 m) away from the traffic light, then apply full-braking;
Rule #4: If the car is moving at high speed (> 20 km/h) and is at a long
distance (≥ 10 m) away from the traffic light, then apply half-braking.

As the vehicle approaches the traffic light in amber, braking decisions are made automatically according to
the stored rules. Depending on the driving speed and distance from the traffic light, a different level of
braking (zero, half or full) is applied to the vehicle according to which stored rule is activated at the time. To
count as a successful attempt, the vehicle should stop between the green and red lines.
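The four pre-set rules can be expressed as a short Python function. This is only a sketch using the thresholds stated in the rules; the Simulator's own internal code is not shown in this module.

```python
def rule_based_decision(speed_kmh, distance_m):
    """Return the braking level once the amber light is observed,
    following the four pre-set rules (thresholds: 20 km/h, 10 m)."""
    if speed_kmh <= 20 and distance_m < 10:
        return "half braking"   # Rule #1: slow and close
    elif speed_kmh <= 20 and distance_m >= 10:
        return "no braking"     # Rule #2: slow and far
    elif speed_kmh > 20 and distance_m < 10:
        return "full braking"   # Rule #3: fast and close
    else:
        return "half braking"   # Rule #4: fast and far

print(rule_based_decision(15, 5))   # half braking (Rule #1)
print(rule_based_decision(30, 20))  # half braking (Rule #4)
```

Changing the two threshold constants corresponds to the Simulator's "Change Rule-Set" button, and adding a third speed or distance band would grow the function to nine branches, as discussed in the teacher notes below.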


Figure 5.3: User interface of the Simulator for Rule-based reasoning

Students are to conduct the braking experiments using the Simulator and answer the follow-up questions.

1. Conduct 20 runs of braking with the Simulator under Rule-based reasoning using the pre-set rules and
note the failed/successful result in each case. At the end, record the following numbers:

Number of Successful Attempts =


Number of Failed Attempts =

2. Calculate the percentage of successful cases.

Number of Successful Attempts


× 100% =
20


3. Can you make sense of Rule#2 and #3 of the pre-set rules?

Rule #2 makes sense because the car is at an appreciable distance from the traffic light and the speed is
not high. No action is needed until the car comes closer to the traffic light, when Rule #1 will be
triggered to activate braking.
Rule #3 makes sense because the car is at high speed and the distance is short; full braking is applied
to result in the car stopping in front of the traffic light in compliance with the traffic rule.

4. Press the “Reset” button, then press the “Change Weather” button and conduct 20 runs of braking under
wet and rainy condition using the pre-set rules in Rule-based reasoning in the Simulator. Calculate the
percentage of successful cases.

Number of Successful attempts


× 100% =
20

5. Compare the performance of pre-set rules under the two different weather conditions. How would you
change the settings of the pre-set rules to improve the inferior one? You may verify your modifications
with the simulator.
In general, the pre-set rules yield more successful attempts in dry and clear weather. Since the brakes are
less effective in rain, modifications such as applying full-braking when the speed is low and the distance is
short, or applying full-braking when the speed is high and the distance is long, can be considered.

Notes for Teachers:

The Simulator has another built-in scenario for students to test the pre-set rules. It provides a “Change Weather”
button to change the weather from dry and clear to wet and rainy. Students can then observe how the pre-set
rules, designed for good performance under one kind of weather condition, would perform under a different
kind.
Also, the built-in pre-set rules in the Simulator use the threshold of 20 km/h to distinguish between low and
high speed, and the threshold of 10 m to distinguish between short and long distance. The Simulator provides
students the choice to design their own rules by varying these threshold values with the button “Change
Rule-Set.” Students can test how their own rule set performs compared to the stored pre-set rules in the
Simulator.
Moreover, instead of low/high speed and short/long distance, one may even add finer characterizations of low,
medium and high speed, and short, medium and long distance through additional threshold values to provide
a finer handling of braking scenarios. This results in 9 rules instead of the current 4, and an increase in the
computational load of running more rules.


Task 3: Braking Decision by Knowledge-based Reasoning


In Knowledge-based reasoning, the decision is reached from a host of information pertinent to the application. In
order to stop at a traffic light, we need to control the braking process using parameters such as the braking
force applied, the speed of the vehicle and the brake-to-stop distance. Additional insights may be drawn from
the car’s working conditions, the weight of the vehicle and passengers, real-time weather and road conditions,
and the knowledge of how these may affect braking performance.
In this Task, students are invited to manually execute a number of braking runs, then generate an AI model
for the traffic light decision based on machine learning of the recorded data. Students may vary the strength of
the applied braking force during the runs by tapping the “B” key on their keyboards. Tapping the key slowly
implies low braking force; tapping quickly means stronger braking force; and continuously pressing the “B”
key means full braking. No tapping means no braking is applied. The Simulator will automatically record
the data after each run for training. As in Task 2 above, students may also change the weather condition and
see how their AI model trained under the dry and clear condition would perform under the rainy and wet
condition. Figure 5.4 shows the Simulator user interface for Knowledge-based reasoning.
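The idea of learning a braking decision from recorded runs can be sketched in Python with a simple 1-nearest-neighbour rule. This is only a stand-in for the Simulator's actual machine learning method, which is not specified in this module, and the recorded data points below are made up for illustration.

```python
import math

# Hypothetical recorded runs: (speed km/h, distance m) -> braking level applied
training_data = [
    ((10, 5),  "half"),
    ((10, 20), "none"),
    ((40, 8),  "full"),
    ((40, 25), "half"),
]

def predict_braking(speed, distance):
    """Copy the braking level of the most similar recorded run
    (1-nearest-neighbour in the speed-distance plane)."""
    def dist(sample):
        (s, d), _ = sample
        return math.hypot(s - speed, d - distance)
    _, label = min(training_data, key=dist)
    return label

# A new situation resembling the first recorded run gets a similar decision
print(predict_braking(12, 6))
```

Because the model only imitates the runs it has seen, data collected in dry weather will transfer poorly to wet weather, which is exactly what students observe in steps 5 to 7 below.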

Figure 5.4: User interface of the Simulator for Knowledge-based reasoning


Students are to conduct the following using the Simulator and answer the follow-up questions.
1. Press the “Collect Data” button to initiate the data collection mode and manually conduct 10 runs of traffic
light braking on the Simulator. In this case, students can use the speed and distance of the car to decide on
the proper strength of the braking forces to apply for the runs.


2. Press the “Train Model” button to generate the AI model.

3. When training is completed, press “Evaluation Model” button. The trained AI model is now in control of
the braking process. Conduct 20 runs and note the failed/successful result.
Number of Successful Attempts =

Number of Failed Attempts =

4. Calculate the percentage of successful cases.


Number of Successful attempts
× 100% =
20

5. Press the “Change Weather” button and conduct 20 runs of braking under wet and rainy condition in the
Simulator and note the failed/successful result.
Number of Successful Attempts =
Number of Failed Attempts =

6. Calculate the percentage of successful cases.


Number of Successful attempts
× 100% =
20

7. Compare the percentage of successful cases under dry and clear condition with that of the wet and rainy
condition. Can you explain the difference between them?
In general, the AI model performs better in dry and clear condition. This is because it is
trained solely with data under dry and clear condition in the first place. It is understandable
that the AI model will perform less effectively when being applied to a different weather
condition the data of which were not included in the training.

8. Can you suggest additional information that may influence the braking decision?
This is an open-ended question. All reasonable answers are acceptable. Examples include:
the time taken for the traffic light to go from amber to red; the lighting condition (daytime/
nighttime); the performance of the on-board camera; the total weight of the car;
the condition of the tires, etc. In real-life situations, we will also have other cars and
pedestrians on the road.


Task 4: Comparison between Three Levels of Reasoning


From the above tasks, answer the following questions.
1. For the traffic light problem under consideration here, how are the braking decisions triggered for Skill-
based, Rule-based and Knowledge-based reasoning? Connect the dots with lines.

Skill-based reasoning        ●    ● Triggered upon the amber signal and taking into
                                    effect (through machine learning) the car’s past
                                    braking performance and underlying weather/road
                                    condition, etc.

Rule-based reasoning         ●    ● Triggered upon observing the amber signal at a
                                    certain place

Knowledge-based reasoning    ●    ● Triggered upon observing the amber signal when
                                    the car is within a certain distance to the
                                    traffic light

2. Fill in the following table comparing the particulars of the three types of reasoning.

Type of reasoning | Information needed for decision | Levels of braking applied | Equipment required | Abilities to learn/adapt to changing environment | % of success
Skill-based | Traffic light signal; knowing when the car passes the Decision Line | No braking; Full braking | Basic | Low | Low
Rule-based | Traffic light signal; the car’s speed and distance from the traffic light | No braking; Half braking; Full braking | More than basic | Low/Moderate | Low/Moderate
Knowledge-based | Traffic light signal; the car’s speed and distance from the traffic light, the car’s past braking performance, weather/road conditions, etc. | Continuous levels of braking | Advanced | High | High (with a high-quality training dataset)

Notes for Teachers:


In Skill-based reasoning, the self-driving car instinctively decides its course of action by checking the
traffic light condition at a designated location (the Decision Line). The decision of the car to brake
or not depends solely on the traffic light color at that moment, i.e. it applies full braking if it does not see a
green traffic light and no braking if it does. The situation is shown in Figure 5.5. In this sense, Skill-based
reasoning does not allow room to include other relevant information (such as the speed of the car at the time)
in its decision.
Compared with Skill-based reasoning, Rule-based reasoning involves more “thinking” during the reasoning
process and provides a decision that includes more information (speed and distance) about the situation at
hand. This is shown in Figure 5.6. However, Rule-based reasoning also places more requirements on the
sensors and actuators available. In this example, we need sensors capable of providing data on the car’s speed
and distance from the traffic light (not needed in Skill-based reasoning), as well as a braking system that can
provide half-braking in addition to just the full braking used in Skill-based reasoning.

At the Decision Line:
See amber or red traffic light, apply brake
See green traffic light, no braking

Decision Line

Figure 5.5: Skill-based reasoning


[Diagram: the car’s speed and distance from the traffic light at the moment the traffic light turns amber are
fed into the adopted Rule Set, which outputs the braking action to apply.]

Figure 5.6: Rule-based reasoning

[Diagram: traffic light color, distance to other vehicles, distance to the traffic stopping line, weather and
road condition, and learning from past braking performance all feed into the braking decision.]

Figure 5.7: Knowledge-based reasoning consolidates all information to come up with the decision on braking
action


Notes for Teachers:


In Knowledge-based reasoning, the decision is reached from a host of pertinent information, contributing to a
well-considered and encompassing braking decision for the car to maximize the objective of adhering to the
traffic rules for the situation at hand. This is illustrated in Figure 5.7.
Note that Knowledge-based reasoning is usually facilitated through the help of advanced instruments, e.g. an
AI-enabled vision camera. As an example, Figure 5.8 shows the possible information obtained from such a
camera. They include the color of the traffic light(s) ahead, the vehicles in the surrounding area, as well the
relative distances of these vehicles from the car, etc. Information as such serves to greatly help the task of
conducting Knowledge-based reasoning in the micro-processor. This points to the fact that sophisticated
sensory and actuating instruments are needed as one goes from Skill-based to Rule-based to Knowledge-based
reasoning. It also explains why Knowledge-based reasoning harbors the capabilities of handling complex and
dynamic situations involving other vehicles and even pedestrians on the road.

Figure 5.8: Information generated by on-board AI-enabled camera system in real-time traffic situation
(Extracted from Plainsight video: [Link])


Learning Summary

[Figure: a spectrum from Skill-based Reasoning through Rule-based Reasoning to Knowledge-based
Reasoning.
Basic Automation (Skill-based / Rule-based): basic instruments; low computational load; lacks learning and
adaptive abilities; straightforward operations.
AI Robot (Knowledge-based): advanced instruments; high computational load; abilities to learn and adapt;
handles highly complex and dynamic tasks.]

Figure 5.9: Learning summary
Similar to humans, a robot performing a task needs to repeatedly go through the three-step process of sensing,
reasoning and acting, and reasoning is the part that contributes most to the perceived “intelligence” of a
robot.
Human reasoning can mainly be categorized into three levels: Skill-based reasoning, Rule-based reasoning,
and Knowledge-based reasoning.
Robots may exhibit different levels of reasoning depending on their intended applications, design and coding.
With the current state of technology, AI robots are generally considered to be those that make decisions largely
based on Knowledge-based reasoning.
This Unit uses the traffic light braking decision of a self-driving car as example to demonstrate the workings
of the three levels of reasoning:
 For Skill-based reasoning, the decision to brake or not depends solely on the traffic light color at that
moment when the car reaches a certain decision point.
 For Rule-based reasoning, the car makes decision by following a set of preset rules (or
instructions/patterns) drawn upon previous driving experience.
 For Knowledge-based reasoning, the decision is made based on a host of information, including the
derived correlation of relevant parameters from past braking data, coupled with domain knowledge of
the inner workings of the automobile and an understanding of the effects of other factors, such as
lighting and road conditions, all subject to the objective of adhering to traffic rules.

AI robots are capable of learning from past experience and adapting to uncertain circumstances, and hence
can handle complex and dynamic tasks better. In return, they require more powerful coding and machine
learning algorithms in the microprocessor, big data from previous operations to learn from, and more advanced
sensors and actuators to support task execution.
The above learning summary is illustrated in Figure 5.9.


Unit 5 - Assessment Exercise

A. Fill in the blanks

1. Reasoning is the process of ____arriving at decisions______ based on the observed information at hand.
2. In __Rule-based__ reasoning, we can increase the number of instructions to allow the robot to handle the
situation with more precision.

B. Multiple choice questions

1. Which of the following statements is INCORRECT? (B)

A. To bring “intelligence” to robots, we can add human reasoning capabilities to their program.
B. Since Skill-based reasoning cannot handle complicated tasks, we should not use it in any robot.
C. In Rule-based reasoning, we can add more rules so that the robot can handle the task more precisely.
D. Knowledge-based reasoning makes use of domain knowledge of the task to come up with more
adaptive decisions.

C. Short questions

1. Which reasoning method enables AI to learn from past data and establish relationships between relevant
parameters for better solution(s)?
Knowledge-based reasoning.


Unit 6 – Future of Work (I)

Lesson Plan
Expected lesson time: 35 mins

After the lesson, students should be able to:


A. Knowledge:
1. Understand how technology changes the nature of work.
2. Learn about the characteristics of the four industrial revolutions.

B. Attitudes:
1. Be aware of the relationship between AI and work and develop a positive attitude to face the future
of work.

Topic | *Task | Time (minutes in class / pre-lesson / after lesson) | Learning Resources
6.1 Past, Present and Future Jobs | 1&2 | 10 mins | Article
6.2 Impact of Industrial Revolutions on Jobs | 3&4 | 10 mins | Article
6.3 Future of Work with AI | 5&6 | 15 mins | Article

*Remarks
 Teachers can assign tasks flexibly (pre-lesson, in-class, or assessment), and not all tasks need to be
covered in lesson time.
 Prior to playing any reference video in a unit, it is important for teachers to carefully review the content
to ensure its suitability for teaching purposes before proceeding with playing it.


Unit 6 - Future of Work (I)

6.1 Past, Present and Future Jobs

AI is currently replacing humans in many tasks, and such technological automation is displacing many jobs.
Have you heard of these jobs (see Figure 6.1)?
AI, automation, and innovations are changing the nature of work as they re-invent business processes,
operations and their execution. It is important to understand the future of work in different sectors.

Task 1: Identify Past, Present and Future Jobs


Which of the following jobs were common in the past? Which are common in the present? Which will be
common in the future? Please put the labels in the table.

Figure 6.1: Different jobs

Past A, D, F, H
Present B, G
Future C, E

The nature of jobs keeps changing over time. Technology is a driver of such changes. Some jobs no
longer exist and have been replaced. We should not take the jobs we have now for granted or expect that they
will last forever.


Task 2: Transformations in the Nature of Different Jobs


List the nature of the transitions in the following jobs, in the past, present and future.

Job | Past | Present | Future
E.g. Cashier | Marked goods sold by hand on a piece of paper | Humans scan barcodes on goods | Humans occasionally help customers using self-checkout machines
Driver of a Vehicle | Vehicle powered by animals (e.g. horses, oxen) and driven by humans | Vehicle powered by petroleum or electricity and driven by humans | Vehicle powered by petroleum or electricity and driven by AI
Security Guard | Human security guards | Human security guards monitor multiple areas with the aid of closed-circuit television (CCTV) or cameras | Surveillance by AI to identify suspicious activities

6.2 Impact of Industrial Revolutions on Jobs

Task 3: The Four Industrial Revolutions


Have you heard about the four industrial revolutions? They occurred when innovative technologies were
developed and widely applied in industries.
Please scan the QR codes below for references about the four industrial revolutions and complete the table:
What is Industry 4.0? The fourth industrial revolution
[Link]
revolution/#industry%204.0

(English version only)


Event Characteristics
First  Technological breakthroughs: steam engines and metal forging
Industrial Revolution  Manufacturing processes became mechanized
(18th century)
Second  Technological breakthroughs: electricity
Industrial Revolution  Standardization, mass production, and industrialization
(19th century)  Mechanical production grew in speed
Third  Technological breakthroughs: computers, internet, and software
Industrial Revolution  Software system carried out management processes
(20th century)
Fourth  Technological breakthroughs: AI, big data, cloud computing, machine
Industrial Revolution learning, Internet of Things (IoT), biotechnology, nanotechnology, etc.
(Today)  Development of smart manufacturing and autonomous systems

Notes for teachers


Reference:
[Link]

Task 4: Jobs and the Four Industrial Revolutions


Technology advancements in the four industrial revolutions have caused the disappearance of some jobs, and
the creation of other jobs. According to the table in Task 3, can you try to match the period with the job
emerging in the period?


Notes for teachers:


References:
The Stages of Industrial Revolution and
Its Impact on Jobs The “Black Gang”
[Link] [Link]
industrial-revolution-and-its-impact-on-jobs/ black-gang

6.3 Future of Work with AI

People have adapted to the changes brought by each industrial revolution. As jobs continue to be reshaped
and changed by the emergence of new technologies, we should maintain a positive, inquisitive attitude and be
prepared to learn new knowledge and acquire new skills. This will increase our competitiveness in the job
market. Jobs related to developing and deploying new AI technologies are growing. New opportunities will
increasingly emerge.

AI offers automation which can hopefully free up humans from jobs that are Dull (i.e. repetitive and tedious),
Dirty, Dangerous and Dear (i.e. automation can do it faster and cheaper). Examples include cleaning, data
entry, inspecting pollutants, etc.
Please scan the QR code to learn more about jobs AI may replace.

 The 7 jobs AI will most likely replace in 2022 – Automation is Here


[Link]
2022/

 How many jobs have been replaced by technology?


[Link]

 Will robots take my job?


[Link]


Task 5: Jobs and Chatbots


A chatbot is an AI programme which aims to engage in conversations with its users. Towards the end of
2022, a chatbot named ChatGPT was released by OpenAI; below we have listed some of its functions:
 Solve tricky math problems step-by-step
 Get relationship advice
 Write code
 Analyse market trends
 Write essays on almost any topic, e.g. children’s book
 Summarise text
 Explain complex concepts
 Translation
 ……

While this chatbot can be a valuable tool for various industries, it is also important to consider both pros and
cons of this technology.

What will happen if a chatbot like this is applied in the workplace?


It is possible that it may replace some jobs, e.g. in writing, coding, translation, teaching. At the same
time, the AI technology may serve as a tool that can help writers, coders, translators, teachers work more
efficiently.

Notes for teachers:


References:
ChatGPT may be coming for our jobs.
11 Things You Can Do With
Introducing ChatGPT Here are the 10 roles that AI is most
ChatGPT
likely to replace
[Link] [Link] [Link]
pt/ gs-you-can-do-with-chatgpt/ gpt-jobs-at-risk-replacement-artificial-
intelligence-ai-labor-trends-2023-02


Task 6: Possible Advantages of Using AI in the Workplace


Should AI systems be used in the workplace? What are the potential advantages that AI may offer? Below
we have listed some statements which try to describe the potential advantages of using AI in the workplace.
Please indicate if you agree (✓) or disagree (✗), and give your reasons.

Statement about how AI may help in the workplace | Agree/Disagree? (✓/✗) | Reasons
Reduce workload for humans | ✓ | AI systems can reduce the workload for humans. AI can replace tasks which are labour-intensive, repetitive and with low skill requirements. Humans will have more time to work on more interesting and even challenging tasks.
Understand what is important to humans | ✗ | AI is a technology and unlike humans, it does not have a sense of right versus wrong, good versus bad, ethical versus unethical.
Make faster decisions | ✓ | AI systems can make faster decisions with well-trained models and powerful computing resources.
Benefit people | ✓ | AI systems can potentially benefit people. AI can facilitate decision-making, enhance efficiency, reduce human errors and assist humans in our daily lives. For example, AI can assist with cancer detection from medical images, and AI security systems can be used in 24-hour CCTV.
Improve performance | ✓ | The increasing use of AI will help generate data for training better AI models. This may, in turn, encourage even greater use of AI and incentivize more people to learn about AI technologies.
AI can do everything | ✗ | Most AI systems have specific capabilities, because AI models are trained to focus on a targeted task. For example, AlphaGo knows how to play the game of Go, but not other board games. An AI system trained to diagnose one type of cancer from medical images may not be able to diagnose other types of cancers.

Learning Summary

• AI is reshaping the future of work across different industries.


• The fourth industrial revolution is defined as a combination of advances in AI, robotics, the internet of
things (IoT), and other new technologies.
• Keep learning and stay positive when facing an AI-driven future of work.


Unit 6 - Assessment Exercise

A. Fill in the blanks

1. The combination of advances in AI, robotics, the internet of things (IoT), genetic engineering, quantum
computing, and other new technologies have brought about the fourth industrial revolution.

B. Multiple choice questions

1. Which of the following statements is true? (D)

A. All jobs will continue to exist as they are now and will not change.
B. AI will replace all human jobs.
C. When introducing AI technologies in the workplace, productivity is more important than ensuring
ethical use.
D. AI may replace some jobs and assist humans in other jobs.

2. Please match the job with the corresponding period: (C)

Job Period
1. AI engineers i. First industrial revolution
2. Computer programmers ii. Second industrial revolution
3. Electricians iii. Third industrial revolution
4. Firemen of steam engines iv. Fourth industrial revolution

A. 1: i; 2: ii; 3: iii; 4: iv
B. 1: iv; 2: iii; 3: i; 4: ii
C. 1: iv; 2: iii; 3: ii; 4: i
D. 1: iii; 2: iv; 3: i; 4: ii

C. Short questions

1. List one job that is MOST likely to be replaced by AI in the near future.
Driver / security guard / cashier / waiter

2. List one job that is LEAST likely to be replaced by AI in the near future.
Researcher / scientist / educator / artist / journalist


Unit 7 – Societal Impact (I)

Lesson Plan
Expected lesson time: 35 mins

After the lesson, students should be able to:


A. Knowledge:
1. Learn about the benefits and harms of AI in our daily lives.

B. Skills:
1. Identify examples of social good brought about by AI.
2. Understand and apply the Five Steps of Design Thinking.

Topic | *Task | Time (minutes in class / pre-lesson / after lesson) | Learning Resources
7.1 What is Social Good? | 1 & 2 | 10 mins | Video
7.2 Examples of AI for Social Good | 3 | 7 mins | —
7.3 Examples of AI becoming a Detriment to Society | — | 8 mins | Online Application
7.4 How to Design AI for Social Good | 4 | 10 mins | Article

*Remarks
• Teachers can assign tasks flexibly (as pre-lesson or in-class activities); not all tasks need to be covered during lesson time.
• Before playing any reference video in a unit, teachers should carefully review its content to ensure it is suitable for teaching purposes.


Unit 7 - Societal Impact (I)

7.1 What is Social Good?

Social good refers to something that benefits the public. Ideally, technologies should promote social good, i.e.
provide benefits and spread them widely. However, this depends on the proper design and management of the
technologies. AI can perform many tasks better than humans, which can be either beneficial or harmful.

Task 1: AI for Social Good


Is AI beneficial for society? ..................................................................................................... Yes / No
Could you give some examples of benefits or harms in using AI?
Potential benefits:
• AI can automate some tasks that are Dull, Dirty, Dangerous and Dear (please see Unit 6).
• Human error can be avoided when no human intervention is involved.
• AI is available 24/7.
Potential harms:
• Decisions made by AI may not always be correct (or may not always align with human values).
• AI is trained on Big Data and may involve privacy violations.
• AI may lead to job loss because of automation.
• AI may be misused to generate fake videos and fake news.
• AI may enable the development of autonomous weapons.

Notes for teachers:


References:
Google - How AI works in everyday life | Google AI
[Link]

UNICRI (United Nations Interregional Crime and Justice Research Institute)
on "Addressing the Risks and Benefits of Artificial Intelligence (AI)"
[Link]


Task 2: The TrailGuard Artificial Intelligence Camera

Please scan the QR code to watch the video and discuss the following questions
in a group:
[Link]

(Video Source: Intel Corporation)

1. What is the problem that needs to be solved? How is it solved?

The TrailGuard Artificial Intelligence camera detects possible poachers and alerts park rangers in near
real-time, allowing rangers to take action before animals are harmed.

2. What AI technologies are being used?

The TrailGuard Artificial Intelligence camera uses AI to identify the images with important information,
such as images with poachers. The AI-equipped camera is also small and easy to hide, allowing rangers
to identify possible poachers.

3. Is it a good design? Why?

Yes, this camera can help reduce poaching or, no, the detection accuracy is not high enough
(open discussion).


7.2 Examples of AI for Social Good

Task 3: More Examples of AI for Social Good


Below are three examples of AI for social good. How do they bring benefits to society?
Examples | Benefits to society
1. AI in COVID-19 | Helps doctors to detect lesions in COVID-19 patients
2. Chatbot "Bonny" | Helps to search for application forms and facilitates access to government websites
3. Digital mental screening | Identifies students who need help

AI in COVID-19
In 2021, a research team from the Chinese University of Hong Kong developed an AI system to detect
COVID-19 automatically, quickly and accurately based on 3D chest CT images. During COVID-19, the
workload to analyse and interpret medical image data has been unprecedentedly large and urgent. Such an AI
system can effectively facilitate the management of COVID-19.

Figure 7.1: AI facilitates COVID-19 medical image analysis


Notes for teachers:
Reference:
Dou, Q., So, T.Y., Jiang, M. et al. Federated deep learning for detecting COVID-19 lung abnormalities in
CT: a privacy-preserving multinational validation study. npj Digit. Med. 4, 60 (2021).
[Link]


Chatbot “Bonny”
“Bonny” is a chatbot launched in December 2019 on the website of the HKSAR Government. It helps users
conduct searches for government e-services, application forms and related URL links. “Bonny” is able to
respond in both Chinese and English and provide relevant information through an online chat box.

Figure 7.2: Chatbot “Bonny”

Digital Mental Health Screening for Secondary Schools


“Know My Students” is a digital mental health screening program developed by a local clinical psychologist.
It is a digital conversation tool that applies AI technologies to help schools identify and support students who
are struggling with their mental health.

Figure 7.3: Digital health screening for secondary school students (source: [Link])

7.3 Examples of AI becoming a Detriment to Society

In previous units, we have experienced the power of ChatGPT.


AI may Lead to Complacency
Do you think AI technologies like ChatGPT will cause students to become complacent and depend completely
on ChatGPT to generate their assignments? Will complacency lead us to give up our ability and control in
finding solutions to complex and difficult problems? Please discuss.

Please scan the QR code to access ChatGPT:


[Link]

Notes for Teachers:


Before using the Poe platform, please remind students of the following: 1) The Poe platform requires an email
address to register for an account and to log in; 2) Be clear about what information Poe will collect, and how
it will use the information you provide.

AI may Threaten Jobs


Also, do you think AI technologies like ChatGPT and Midjourney ([Link]) will lead to the
loss of many jobs? In which jobs do you think AI will replace humans? (Please refer to Booklet 2 Unit 6
and Booklet 3 Unit 4 Future of Work for more discussions on how AI may threaten our jobs)

Figure 7.4: Will AI replace humans in graphics design? (Image generated by Midjourney)

AI may Collect Sensitive Information


Users must be very careful when using AI, because any data you provide or upload may be kept and used for
other purposes. Please read the news about a company's workers who used AI to help them with tasks and
unintentionally leaked the company's top secrets.

[Link]

Question: Can you think of other ways in which AI may bring detriment to society?


Notes for Teachers:


[Link]

7.4 How to Design AI for Social Good

AI is often regarded as a key contributor to the fourth industrial revolution. The disruption from an industrial
revolution tends to cause many people to lose their jobs. Hence, we must prepare ourselves well for the future
as we live in the age of AI. To master AI technologies, it is important to have a knowledge of mathematics
and science and to develop strong critical and creative thinking skills, especially in design thinking.

Design thinking is the creative use of existing knowledge and/or technologies to address emerging problems.
Design thinking creates various ways of using AI technologies; designing for the benefit of others will help
us develop AI for the right reasons and become better people. We will learn two types of design thinking
approaches: the five steps of design thinking and the double diamond model.

The Five Steps of Design Thinking


The first design thinking approach we are going to learn is the five steps of design thinking. The process can
be divided into five key steps: Empathize, Define, Ideate, Prototype and Test; see Figure 7.5.

Figure 7.5: The Five Steps of Design Thinking of AI

Please scan the QR code for more details about the five steps of design thinking:
5 Steps of the Design Thinking Process: A Step-by-Step Guide
[Link]

Notes for teachers:


References:
What Is Design Thinking?
[Link]


Task 4: Improve Swimming Pool Safety by Adopting the Five Steps of Design Thinking
Now that we have learnt about the Five Steps of Design Thinking, we can apply it in a real-life situation –
swimming pool safety. Using the design thinking approach, we will generate a real problem statement and a
feasible solution in order to achieve social good with AI.

Step 1: Empathize

Figure 7.6: Empathize step

What are your team's concerns?

We are concerned about the loss of lives in swimming pools because of the lack of lifeguards.

Step 2: Define

Figure 7.7: Define step


Who are affected and how are they affected?

Swimmers:
• People swimming in the swimming pool.
• Sometimes accidents can happen even when there are lifeguards.
• When there are no lifeguards, people suffering from cramps could drown.
Lifeguards:
• People watching over the safety of swimmers.
• Lifeguards can be exhausted or distracted.
• Private swimming pools may not have regular lifeguards.
• Even when there are lifeguards, a lifeguard cannot keep track of multiple swimmers and may miss distressed swimmers.

Step 3: Ideate
How do you use AI to solve problems?
• When should AI technologies be used to solve defined problems?
• Suggest how AI technologies can solve the problems.
• Imagine how the application of AI technologies can solve problems.
(Hints: What kind of functions should AI have for solving the problems?)

• Visual recognition technologies can be applied.
• Installed visual recognition systems can recognise people who enter the pool area.
• The system can then track people who are in the pool and alert lifeguards when:
  o Swimmers' heads are under water for a certain length of time (e.g. an unusually long time).
  o Swimmers show abnormal swimming behaviours.
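The alert rule described above can be sketched in a few lines of code. This is an illustrative sketch only: the 20-second threshold and the per-swimmer timing data are assumptions made for this example, and it presumes a separate vision model already reports how long each swimmer's head has been continuously under water.

```python
# Rule-based alert layer for the AI lifeguard idea. We assume (hypothetically)
# that a visual-recognition model supplies, for each tracked swimmer, the
# number of seconds their head has been continuously submerged.

SUBMERSION_LIMIT_SECONDS = 20  # invented threshold; a real system would tune this

def swimmers_to_alert(submersion_times, limit=SUBMERSION_LIMIT_SECONDS):
    """Return IDs of swimmers whose heads have been under water too long.

    submersion_times: dict mapping swimmer ID -> seconds continuously submerged.
    """
    return [sid for sid, seconds in submersion_times.items() if seconds >= limit]

# Example: swimmer "B" has been submerged for 35 s and triggers an alert.
tracked = {"A": 3, "B": 35, "C": 0}
print(swimmers_to_alert(tracked))  # -> ['B']
```

A real prototype would add the second condition from the Ideate step (abnormal swimming behaviour), which needs a trained model rather than a simple rule.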

Step 4: Prototype
When you design an AI lifeguard system,
• What sensory inputs does the system require?
• To train the AI system, what data do you need and where can you obtain the data? What are the challenges here?
• How do you implement your design?

• Install some cameras with audio receivers that cover the entire pool area.
• Teach the machine to identify heads under water.
• Teach the machine to distinguish between abnormal swim patterns and normal swim patterns.


Step 5: Test
Propose how you would test your ideas. (Hints: Would it be better to install the system under water? Could
it be used for coaching as well?)

• Hypothesize methods to test the system.
• Test the system over a trial period, e.g. one year, in multiple locations to gather more data.
• Adjust the algorithm.

Learning Summary
• AI may bring benefits or cause harm to humans, so we must use the technologies carefully.
• AI technologies can provide new ways to promote social good and improve the quality of people's lives.
• Design thinking is a possible approach to devise AI-enabled solutions in an attempt to solve (complex) problems.
• The five steps of design thinking: empathize, define, ideate, prototype and test.


Unit 7 - Assessment Exercise

A. Fill in the blanks

1. Social good refers to things or events that can benefit the public widely.
2. In the Five Steps of Design Thinking, the first step is to empathize , which aims to understand the
problem.
3. In the Five Steps of Design Thinking, the fourth step is to prototype , which aims to make a sample
solution that users can experiment on.
4. A good AI design should give due consideration to ethical use and possible social impact.

B. Multiple choice questions

1. What is the last step in the Five Steps of Design Thinking? (D)

A. Empathize
B. Define
C. Prototype
D. Test

2. What can AI technologies do for us in the future? (C)

A. AI can help improve our lives and support us in our work


B. AI may present risks and even dangers to humans
C. Both A and B
D. None of the above

C. Short questions

1. List three examples of AI for social good.


Lesion detection/chatbot Bonny/digital mental health screening

2. Describe the Five Steps of Design Thinking.


Empathize / Define / Ideate / Prototype / Test


Unit 8 - Group Project Design, Development and Presentations (I)

8.1 Suggested Scenarios

(I) Crowd Control Applications


On a school open day, there are many booths for visitors to visit. Some booths are more popular than others.
An AI-based application can be used to inform visitors, with live updates, of how busy the booths are.
Examples of AI training data: Images of humans.
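As a hedged illustration of the reporting step of such an application: assume a person-detection model has already counted the visitors at each booth, and map the raw counts to a busyness label that can be displayed to visitors. The thresholds and booth names below are invented for illustration only.

```python
# Map per-booth visitor counts (assumed to come from a person-detection model)
# to a simple busyness label for display on the open-day information screen.

def busyness_label(count, busy_at=10, crowded_at=25):
    """Classify a visitor count using two hypothetical thresholds."""
    if count >= crowded_at:
        return "very crowded"
    if count >= busy_at:
        return "busy"
    return "quiet"

booth_counts = {"Robotics": 30, "Art": 12, "Chess": 4}
report = {booth: busyness_label(n) for booth, n in booth_counts.items()}
print(report)  # -> {'Robotics': 'very crowded', 'Art': 'busy', 'Chess': 'quiet'}
```

The hard part of the project is the counting itself (the AI model trained on images of humans); this sketch only shows how the counts could be turned into user-facing updates.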

(II) Handwritten Digits Recognition


An AI-based application can be used to (i) determine whether the Chinese handwriting of a primary school
student is correct, OR (ii) identify whether a student has special educational needs.
Examples of AI training data: Chinese handwriting samples by others.
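To illustrate the core idea of recognition from training data, here is a deliberately tiny sketch: each "handwriting sample" is a 3x3 grid of black/white pixels, and a new sample is labelled with the class of its nearest training example. The patterns and labels are invented for illustration; a real project would train a model on many real handwriting images.

```python
# Toy nearest-neighbour recogniser: a sample is a 3x3 grid of 0/1 pixels
# flattened to a 9-element tuple. Training patterns below are invented.

TRAINING_DATA = [
    ((1, 1, 1,
      1, 0, 1,
      1, 1, 1), "0"),
    ((0, 1, 0,
      0, 1, 0,
      0, 1, 0), "1"),
]

def hamming(a, b):
    """Count the pixels where two samples differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(sample):
    """Label a sample with the class of its closest training example."""
    return min(TRAINING_DATA, key=lambda item: hamming(sample, item[0]))[1]

# A slightly smudged "1" (one wrong pixel) is still recognised as "1".
print(classify((0, 1, 0, 0, 1, 1, 0, 1, 0)))  # -> 1
```

This also shows why training data matters: the recogniser can only label classes it has seen examples of, echoing the "AI can do everything" misconception discussed in Unit 6.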

(III) Fire Detection and Localization Using Surveillance Camera


An AI-based application can be developed to detect whether there is a fire, using computer vision OR audio.
Examples of AI training data: Images of fire.
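One possible first baseline, before training a full computer-vision model, is a simple colour heuristic: flag a frame when the fraction of flame-coloured pixels is high. This is an assumption made for illustration, not the project's required method, and the colour thresholds below are invented.

```python
# Naive fire heuristic: a pixel "looks like flame" when it is strongly red,
# moderately green and weakly blue; a frame is flagged when enough of its
# pixels match. A trained model on labelled fire images would replace this.

def looks_like_flame(pixel):
    r, g, b = pixel
    return r > 180 and g > 60 and b < 100 and r > g > b

def frame_has_fire(pixels, min_fraction=0.05):
    """pixels: list of (r, g, b) tuples for one video frame."""
    if not pixels:
        return False
    flame = sum(looks_like_flame(p) for p in pixels)
    return flame / len(pixels) >= min_fraction

dark_frame = [(10, 10, 10)] * 100
fiery_frame = [(10, 10, 10)] * 90 + [(230, 120, 30)] * 10
print(frame_has_fire(dark_frame), frame_has_fire(fiery_frame))  # -> False True
```

Comparing this heuristic's mistakes (e.g. sunsets, orange clothing) with a trained model's results is a natural way for students to discuss why learning from data beats hand-written rules.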

8.2 Project Design and Development

Stage 1: Ask — Research your users’ needs


1.1 Who are the users?
1.2 What do they need (reasons)?

Stage 2: Define — State your users’ needs and problems


2.1 When do they need the AI-based product?
2.2 Why do they need the product?

Stage 3: Ideate — Challenge assumptions and create ideas


3.1 What AI technologies will be used?
3.2 What are the training data? (please provide some samples)
3.3 How do you collect the data?
3.4 How to avoid human bias?
3.5 What are the ethical considerations?


Summary for Stages 1 - 3:


Pinpoint a problem and its solution for a specific group of users. For example, we want to design an AI-based
yoga instructor for secondary school students to coach them in doing yoga at home, because the users can
then do more exercise.

We want to design ___________________(product) for _________________ (users) to ______________

(action) because ___________________________________. We use __________________ (data) to train

the model, and consider the bias situation: ____________________ .

Stage 4: Prototype — Start to create solutions


4.1 Provide screenshots and their descriptions
4.2 Input (testing data) and output. Explain how the model recognizes the testing data. For example,
shape of the pictures in the data or rules in the models.

Stage 5: Reflection — Try your solutions out


5.1 What is the feedback from the users?
5.2 How does the solution benefit the society / community?
5.3 How, and what, would you improve in the current solution?


Notes for teachers:


Marking Scheme
Scores:
4 — all criteria in the relevant category (listed below) are met.
3 — missing one of the criteria from the score-4 list.
2 — missing two of the criteria from the score-4 list.
1 — missing three or more of the criteria from the score-4 list, OR does not attempt to answer, OR the response is not related.

Criteria for a score of 4:

Ask / Define
• clearly state the problem in their own words
• list all constraints
• list all materials available and provide an explanation for their use
• list all information/ideas gained through research

Ideate
• clearly state the AI technologies needed
• clearly present and explain the training data needed
• clearly describe the data collection process
• clearly explain how to avoid human bias
• list ethical considerations

Prototype
• clearly present and demonstrate the product
• clearly present the testing data
• clearly present the output

Reflection
• clearly present feedback from the users
• clearly explain how the solution benefits the society/community
• clearly present and explain how to improve their solution
• suggest an alternative solution

Presentation
• clearly present the problem
• clearly use AI technologies to explain the solutions
• clearly present the training and testing data
• clearly present the feedback from the users
• clearly present the improvement plan

