The meaning of “AI”

There are different kinds of buzzwords.

Some buzzwords are useful. They take a concept that would otherwise require a sentence of explanation and package it up into a single word or phrase. Back in the day, “ajax” was a pretty good buzzword.

Some buzzwords are worse than useless. This is when a word or phrase lacks definition. You could say this buzzword in a meeting with five people, and they’d all understand five different meanings. Back in the day, “web 2.0” was a classic example of a bad buzzword—for some people it meant a business model; for others it meant rounded corners and gradients.

The worst kind of buzzwords are the ones that actively set out to obfuscate any actual meaning. “The cloud” is a classic example. It sounds cooler than saying “a server in Virginia”, but it also sounds like the exact opposite of what it actually is. Great for marketing. Terrible for understanding.

“AI” is definitely not a good buzzword. But I can’t quite decide if it’s merely a bad buzzword like “web 2.0” or a truly terrible buzzword like “the cloud”.

The biggest problem with the phrase “AI” is that there’s a name collision.

For years, the term “AI” has been used in science-fiction. HAL 9000. Skynet. Examples of artificial general intelligence.

Now the term “AI” is also used to describe large language models. But there is no connection between this use of the term “AI” and the science-fictional usage.

This leads to the ludicrous situation of otherwise-rational people wanting to discuss the dangers of “AI”, but instead of talking about the rampant exploitation and energy usage endemic to current large language models, they want to spend the time talking about the sci-fi scenarios of runaway “AI”.

To understand how ridiculous this is, I’d like you to imagine if we had started using a different buzzword in another setting…

Suppose that when ride-sharing companies like Uber and Lyft were starting out, they had decided to label their services as Time Travel. From a marketing point of view, it even makes sense—they get you from point A to point B lickety-split.

Now imagine if otherwise-sensible people began to sound the alarm about the potential harms of Time Travel. Given the explosive growth we’ve seen in this sector, sooner or later they’ll be able to get you to point B before you’ve even left point A. There could be terrible consequences from that—we’ve all seen the sci-fi scenarios where this happens.

Meanwhile the actual present-day harms of ride-sharing services around worker exploitation would be relegated to the sidelines. Clearly that isn’t as important as the existential threat posed by Time Travel.

It sounds ludicrous, right? It defies common sense. Just because a vehicle can get you somewhere fast today doesn’t mean it’s inevitably going to be able to break the laws of physics any day now, simply because it’s called Time Travel.

And yet that is exactly the nonsense we’re being fed about large language models. We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

It’s almost as if the labelling of the current technologies was more about marketing than accuracy.


Responses

Luke Dorny

@adactio oh, and by the way, AI is a horrible buzzword, if only because my mother-in-law wonders why the news headlines keep talking about “this Al character. And who is he?!” Heh. Time for some serifs. Or a Display sans with more i/l distinction.

# Posted by Luke Dorny on Tuesday, November 12th, 2024 at 3:46pm

feliks

@adactio > For years, the term “AI” has been used in science-fiction.

unfortunately the article misses the origin and widespread use in scientific writing. only later the term became popular in scifi and then marketing

> talking about the sci-fi scenarios of runaway “AI”.

this is talked about in the scientific context of AGI and is not a concept emerging from scifi. the public has a hard time discerning between AI and AGI

i still agree that the adoption of the term by marketing is misleading

# Posted by feliks on Tuesday, November 12th, 2024 at 3:52pm

Thomas Vander Wal

@adactio This reads like you have been listening to many of my conversations, “what do you mean by AI?” It is like saying vehicle when thinking scooter or van and tasks that can or can’t be accomplished.

Many throwing about the term AI have no clue what they are really talking about and assigning attributes to types of AI that it isn’t remotely capable of doing.

adactio.com

A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:

Robin takes a fair and balanced look at the ethics of using large language models.

That’s how it came across to me: fair and balanced.

Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?

Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).

Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:

There is no path from language modelling to super-science.

Robin responded pointing out that some things that we currently have would have seemed like science fiction a few years ago, right?

Well, no. Baldur debunks that in a post called Now I’m disappointed.

(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might be in disagreement.)

Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.

In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.

Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.

Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.

I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.

Michelle also weighs in, pointing out the flaw in Robin’s thinking:

AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.

LLMs are not this.

In other words, we’ve got a language collision:

We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Boom!

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

You know what? I could quote every single line. Just go read the whole thing. Please.

# Friday, February 14th, 2025 at 5:07pm



