PS (WEA0208)
ARTIFICIAL INTELLIGENCE: A THREAT OR A TOOL?
Let's draw the scenario: you wake up, reach for your phone, as I'm sure most of you
do, check your email, and find that
your job's gone overnight. Not because of underperformance or because there's
some global crisis, but because some piece of technology does it faster, cheaper,
and doesn't even need a coffee break. It's not just you. That artist you stream on
Spotify? No longer human. Your physician? Automated. Your attorney? An AI in a
suit. Your child's teacher? A virtual voice, 24/7 at your beck and call. Sounds like
sci-fi? Think again. It's 2025. This is the world we live in today.
A very good morning, everyone. My name is Hugo Chow Wen Jun. Today, I
would like to ask us all a crucial question: is Artificial Intelligence an existential threat
to our world, or is it the greatest driver of progress we've ever invented? Honestly,
there's no tidy answer—but we can't afford to ignore the debate.
So, let's talk about AI. It's not a buzzword: these systems are reshaping the way
businesses operate. AI learns, adapts, solves complex
challenges, handles communication, translates, and even designs entire business
strategies. The rate of progress? Off the charts. We’re seeing tools like ChatGPT,
Midjourney, Gemini, Claude—these aren’t just trends, they’re operational in millions
of organizations right now. AI is everywhere: powering our classrooms, streamlining
healthcare, optimizing government operations, embedded in our devices, and even
running aspects of our homes.
To begin with, when used wisely, AI is a genuine force multiplier, and I
mean a real force multiplier. Why do I say so? In medicine, it's diagnosing disease
with unprecedented speed and precision. Radiology? AI outperforms some specialists
in interpreting scans. In agriculture, it's optimizing yields and helping to feed a
growing population. For disaster response, AI is anticipating risks, providing
actionable intelligence so that organizations and communities alike can respond
before catastrophe hits. In education, AI platforms are individualizing learning,
breaking down barriers, and making knowledge accessible to those who were left
behind.
The thing is, these aren't just incremental benefits—they're facilitating
operational excellence and competitive advantage. AI is helping businesses
minimize costly errors, boost productivity, and even solve monumental problems like
climate change. Framed in this way, AI isn't just another tool; it's a strategic asset.
But let us not get carried away. There is a downside—a set of risks and
uncertainties that everyone would be wise not to ignore. The upheaval is coming,
and the stakes are high.
First and foremost, let's talk about work. The World Economic Forum projects that
over 80 million jobs could vanish by 2030. Not just factory work, but lorry drivers, legal
secretaries, graphic designers, customer service representatives—even computer
programmers and reporters. The more sophisticated the machines get, the more
likely people are to become obsolete. And even when new jobs are created, the
transition could leave millions behind, especially those with little access to education
or retraining.
Now, don't even get me started on discrimination and bias. AI learns from
data. But what if that data is terrible? Racist? Sexist? Historically unfair? AI
systems have already shown racial bias in facial recognition and disparate treatment
in applicant-tracking software. These systems can subtly encode discrimination
without us even noticing, because we assume machines are "neutral." But they aren't.
They're a reflection of us—our flaws, our inequalities, our blind spots.
Apart from that, and most disquietingly, AI can be employed to deceive.
Deepfakes, cloned voices, fake news: all engineered to pass as truth and distort
reality. In the wrong hands, AI is an instrument of manipulation. Think of an election
hijacked by thousands
of AI-rendered false videos. Think of a confidence trick in which your mother over the
phone is not your mother but a chunk of software designed to mimic her. We already
see the seeds of that being planted. The more advanced the AI, the harder it
becomes to separate what is real from what is fabricated.
Even the pioneers of AI—the ones creating these tools themselves—have
sounded the warning. In 2023, a group of AI leaders signed an open statement
warning that unchecked AI poses risks on the scale of pandemics and nuclear war.
When the very scientists building these systems urge us to tread carefully, should
we not listen?
So, is AI a threat? Or a technology of progress?
The truth is, it's both. And that is exactly why we must proceed with caution. AI is
not bad in itself. But it is powerful. And power without responsibility has never had a
good ending in human history. Think of AI as fire. In a fireplace, it warms a house.
But without control, it burns the house down. The issue is in how we use it, who is
controlling it, and if we develop ethical boundaries before it's too late.
So yes, there is something for each of us to do. Use AI responsibly. Don't rely
on it to replace your brain; use it to augment your brain. Question what you find on
the internet. Check your sources. Raise your voices for fairness, privacy, and
accountability. The destiny of AI shouldn't be set in Silicon Valley boardrooms alone.
It should be set by students, educators, workers, voters—all of us.
Finally, I conclude with this message:
Artificial Intelligence won't decide whether it will become a threat or an asset. We
will.
It is not the machine that decides—it is the man behind the code.
Let's be the ones who do not simply wonder what AI will be able to do…
but ask why we are doing it, and for whom.
That shall be all from me, thank you for your time.