Coattails

When I talk about large language models, I make sure to call them large language models, not “AI”. I know it’s a lost battle, but the terminology matters to me.

The term “AI” can encompass everything from a series of if/else statements right up to Skynet and HAL 9000. I’ve written about this naming collision before.

It’s not just that the term “AI” isn’t useful; it’s so broad as to be actively duplicitous. While talking about one thing—like, say, large language models—you can point to a completely different thing—like, say, machine learning or computer vision—and claim that they’re basically the same because they’re both labelled “AI”.

If a news outlet runs a story about machine learning in the context of disease prevention or archaeology, the headline will inevitably contain the phrase “AI”. That story will then gleefully be used by slopagandists looking to inflate the usefulness of large language models.

Conflating these different technologies is the fallacy at the heart of Robin Sloan’s faulty logic:

If these machines churn through all media, and then, in their deployment, discover several superconductors and cure all cancers, I’d say, okay … we’re good.

John Scalzi recently wrote:

“AI” is mostly a marketing phrase for a bunch of different processes and tools which in a different era would have been called “machine learning” or “neural networks” or something else now horribly unsexy.

But I’ve noticed something recently. More than once I’ve seen genuinely useful services refer to their technology as “traditional machine learning”.

First off, I find that endearing. Like machine learning is akin to organic farming or hand-crafted furniture.

Secondly, perhaps it points to a parting of the ways between machine learning and large language models.

Up until now it may have been mutually beneficial for them to share the same marketing term, but with the bubble about to burst, anything to do with large language models might become toxic by association, including the term “AI”. Hence the desire to shake the large language model grifters from the coattails of machine learning and computer vision.

Related posts

Uses

Large language models are big messy brushes, not scalpels.

Tools

A large language model is as neutral as an AK-47.

Codewashing

Whether you’re generating slop or code, underneath it’s the same shoggoth with a smiley face.

Denial

The best of the web is under continuous attack from the technology that powers your generative “AI” tools.

Design processing

Three designers I know have been writing about large language models.

Related links

The Future of Software Development is Software Developers – Codemanship’s Blog

The hard part of computer programming isn’t expressing what we want the machine to do in code. The hard part is turning human thinking – with all its wooliness and ambiguity and contradictions – into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a programming language.

That was the hard part when programmers were punching holes in cards. It was the hard part when they were typing COBOL code. It was the hard part when they were bringing Visual Basic GUIs to life (presumably to track the killer’s IP address). And it’s the hard part when they’re prompting language models to predict plausible-looking Python.

The hard part has always been – and likely will continue to be for many years to come – knowing exactly what to ask for.

The Colonization of Confidence – Sightless Scribbles

I love the small web, the clean web. I hate tech bloat.

And LLMs are the ultimate bloat.

So much truth in one story:

They built a machine to gentrify the English language.

They have built a machine that weaponizes mediocrity and sells it as perfection.

They are strip-mining your confidence to sell you back a synthetic version of it.

Dissent | blarg

I suppose it’s not clear to me what a ‘good’ window into unreliable, systemically toxic systems accomplishes, or how it changes anything that matters for the better, or what that idea even means at all. I don’t understand how “ethical AI” isn’t just “clean coal” or “natural gas.” The power of normalization as four generations are raised breathing low doses of aerosolized neurotoxins; the alternative was called “unleaded”, but the poison was called “regular gas”.

There’s a real technology here, somewhere. Stochastic pattern recognition seems like a powerful tool for solving some problems. But solving a problem starts at the problem, not working backwards from the tools.

AI CEO – Replace Your Boss Before They Replace You

Delivering total nonsense, with complete confidence.

The Jeopardy Phenomenon – Chris Coyier

AI has the Jeopardy Phenomenon too.

If you use it to generate code that is outside your expertise, you are likely to think it’s all well and good, especially if it seems to work at first pop. But if you’re intimately familiar with the technology or the code around the code it’s generating, there is a good chance you’ll be like hey! that’s not quite right!

Not just code. I’m astounded by the cognitive dissonance displayed by people who say “I asked an LLM about {topic I’m familiar with}, and here’s all the things it got wrong” who then proceed to say “It was really useful when I asked an LLM for advice on {topic I’m not familiar with, hence why I’m asking an LLM for advice}.”

Like, if you know that the results are super dodgy for your own area of expertise, why would you think they’d be any better for, I don’t know, restaurant recommendations in a city you’ve never been to?

Previously on this day

23 years ago I wrote More pierced buildings

I spotted another modified building on my way out of Brighton station yesterday.

24 years ago I wrote Impress women with Perl

This is just about the cutest story I’ve come across in quite a while. Apparently, the way to a woman’s heart is through Perl.