Unsaid

I went to the UX Brighton conference yesterday.

The quality of the presentations was really good this year, probably the best yet. Usually there are one or two stand-out speakers (like Tom Kerwin last year), but this year, the standard felt very high to me.

But…

The theme of the conference was UX and “AI”, and I’ve never been more disappointed by what wasn’t said at a conference.

Not a single speaker addressed where the training data for current large language models comes from (it comes from scraping other people’s copyrighted creative works).

Not a single speaker addressed the energy requirements for current large language models (the requirements are absolutely mahoosive—not just for the training, but for each and every query).

My charitable reading of the situation yesterday was that every speaker assumed that someone else would cover those issues.

The less charitable reading is that this was a deliberate decision.

Whenever the issue of ethics came up, it was only ever in relation to how we might use these tools: considering user needs, being transparent, all that good stuff. But never once did the question arise of whether it’s ethical to even use these tools.

In fact, the message was often the opposite: words like “responsibility” and “duty” came up, but only in the admonition that UX designers have a responsibility and duty to use these tools! And if that carrot didn’t work, there’s always the stick of scaring you into using these tools for fear of being left behind and having a machine replace you.

I was left feeling somewhat depressed about the deliberately narrow focus. Maggie’s talk was the only one that dealt with any externalities, looking at how the firehose of slop is blasting away at society. But again, the focus was only ever on how these tools are used or abused; nobody addressed the possibility of deliberately choosing not to use them.

If audience members weren’t yet using generative tools in their daily work, the assumption was that they were lagging behind and it was only a matter of time before they’d get on board the hype train. There was no room for the idea that someone might examine the roots of these tools and make a conscious choice not to fund their development.

There’s a quote by Finnish architect Eliel Saarinen that UX designers like repeating:

Always design a thing by considering it in its next larger context. A chair in a room, a room in a house, a house in an environment, an environment in a city plan.

But none of the speakers at UX Brighton chose to examine the larger context of the tools they were encouraging us to use.

One speaker told us “Be curious!”, but clearly that curiosity should not extend to the foundations of the tools themselves. Ignore what’s behind the curtain. Instead look at all the cool stuff we can do now. Don’t worry about the fact that everything you do with these tools is built on a bedrock of exploitation and environmental harm. We should instead blithely build a new generation of user interfaces on the burial ground of human culture.

Whenever I get into a discussion about these issues, it always seems to come back ’round to whether these tools are actually any good or not. People point to the genuinely useful tasks they can accomplish. But that’s not my issue. There are absolutely smart and efficient ways to use large language models—in some situations, it’s like suddenly having a superpower. But as Molly White puts it:

The benefits, though extant, seem to pale in comparison to the costs.

There are no ethical uses of current large language models.

And if you believe that the ethical issues will somehow be ironed out in future iterations, then that’s all the more reason to stop using the current crop of exploitative large language models.

Anyway, like I said, all the talks at UX Brighton were very good. But I wish that just one of them had addressed the underlying questions that any good UX designer should ask: “Where did this data come from? What are the second-order effects of deploying this technology?”

Having a talk on those topics would’ve been nice, but I would’ve settled for having five minutes of one talk, or even one minute. But there was nothing.

There’s one possible explanation for this glaring absence that’s quite depressing to consider. It may be that these topics weren’t covered because there’s an assumption that everybody already knows about them, and frankly, doesn’t care.

To use an outdated movie reference, imagine a raving Charlton Heston shouting that “Soylent Green is people!”, only to be met with indifference. “Everyone knows Soylent Green is people. So what?”


Responses

Dawn Ahukanna

@adactio That “stay in my lane” siloed approach I’ve observed digital designers adopt regarding ethics, even before AI tech reared its malevolent “head”.

Chatting with a friend working in the AI space, they made the observation that people in general, not just designers, don’t get involved in the ethics or environmental costs of the AI technology.

I’m perplexed why not but attribute it to the “howification” approach of over-indexing on how the tech works, not why it works or was made.

stefan [🏰👻][☎️😬]

@adactio @heinz “if you believe that the ethical issues will somehow be ironed out in future iterations, then that’s all the more reason to stop using the current crop of exploitative large language models”

Is this individualization of the problem, like we also (wrongly) did with climate change issues?

Just a thought said out loud, not 100% sure either.

rem

@adactio opened my phone with the specific intent to see if you had written this (and pleased, though unsurprised, that you had!)

# Posted by rem on Saturday, November 2nd, 2024 at 12:14pm

KryptykPhysh

@adactio I really enjoyed this article and it is exactly the sort of point that needs to be raised, talked about, considered and used by people to decide how they wish to interact with these technologies.

Thanks for that.

Benjamin Parry

@adactio To your point about people knowing but not caring, there’s a recent 404 Media piece about GAI generated images of Hurricane Helene victims that people know are fake but convey how they feel. We have entered the ‘Fuck it’ era of AI. Genuinely terrifying where this might take humanity. https://www.404media.co/hurricane-helene-and-the-fuck-it-era-of-ai-generated-slop/


Morgan Davis

@adactio

What happens if the AI hype money dries up as we begin to appreciate its actual (and limited) value and the extreme costs to support it? Devs may have to resort to what we did before: scrape and curate answers ourselves. Will the new devs even have that skill in a post-hype AI world?

As the AI tools get better, I’m seeing a lot of younger/framework-dependent devs throwing in the towel, giving up on a coding career, citing the historical fate of blacksmiths.

Who will be left?

arestelle

@adactio I very much and very depressingly keep getting the impression that “everyone” doesn’t care in much the same way that “everyone” doesn’t care about COVID. Far from everyone, but drown out, ignore, and treat as crazy those of us who do.

# Posted by arestelle on Saturday, November 2nd, 2024 at 4:47pm

Wulfy

@adactio Did you design your blog to be unreadable on purpose?

# Posted by Wulfy on Saturday, November 2nd, 2024 at 5:02pm

Wulfy

@adactio

Firefox on Android.

132.0 (Build #2016051567), hg-0e15e2edd460+GV: 132.0-20241021175835AS: 132.0.1

The only extension is ublock origin with default settings.

# Posted by Wulfy on Saturday, November 2nd, 2024 at 5:14pm

Wulfy

@adactio No idea what’s going on… Render tests pass

# Posted by Wulfy on Saturday, November 2nd, 2024 at 5:23pm

Trebach

@adactio From the third paragraph of the site: “This pragmatic approach is designed to alleviate concerns about AI, offering a clearer vision of how AI can enhance UX practice.”

It was never going to be said. The whole thing was slanted towards using AI.

# Posted by Trebach on Saturday, November 2nd, 2024 at 6:44pm

Eugene Parnell

@adactio @emilymbender This is such a cogent and articulate commentary on a lot of issues I’m involved in as a UX designer at a large software company. I’m circulating it around my team. Thank you for saying things that need to be said.

Karen Mardahl

@adactio Check out Noz Urbina’s site where he wants to start discussions about what he calls “truth collapse”. He has been talking with content developers/technical communicators about this, but all are welcome. He hopes to hear from people who want to actually do something about these ethical issues. https://truthcollapse.com

He just gave the kind of talk you were hoping for at the recent LavaCon conference in Portland, Oregon. He got a standing ovation.


Heinz Wittenbrink

I think that this is a different case. Developers and designers are responsible for the features they implement. What I observe at the moment is a kind of mass adoption of large language models, not only by developers but also in educational institutions like my former employer, the FH Joanneum. People seem to be happy to embrace a game changer because it promises to give them an advantage. This uncritical attitude from people who are supposed to evaluate features and innovations supports the hegemony of a few large corporations. That is much more than a simple consumer decision.

https://wittenbrink.net/17929-2/

#LargeLanguageModels


Jared White ✊ RESIST 🙅🏻‍♂️

@adactio Given how hard it’s been to get folks in natural environments to care about how their consumer behavior may impact the health of those environments, I suppose it should come as no surprise that there’s so much apathy among users of digital environments.

“This is bad for the environment!” just isn’t a winning message. Things will have to get really, really bad for people to advocate for change. And so it is with AI. The Web may have to get so much worse before it can get better.

Stephen Sauceda

Do you remember when you were just starting out as a developer? Remember that feeling you had when you FINALLY got some code to do The Thing ™️ you wanted it to do? That “drunk with power” moment that had you believing you could build anything? That feeling that made you think “this is what I want to do for a living”?

That’s the exact same feeling A.I. is giving to a lot of non-coders right now.

But There is a Difference

When we had that feeling, it was (likely) because we finally understood the code that we wrote. That understanding is a necessity, most of the time, in order to get code working.

“My code doesn’t work and I don’t know why - my code does work and I don’t know why” jokes aside.

When a non-coder gets ChatGPT or whatever to spit out some code, I would venture to say that, most of the time, they don’t understand it. It was just handed to them with (presumably) a set of instructions on how to run it. They got the baby without the labor. In my experience, a lot of the time they even require a dev to run it for them (if they’re trying to integrate it into existing code), fix it if it doesn’t actually work, deploy it and all of that.

But when it does result in something that “works”, it’s still that same feeling.

What It Could Mean

In my opinion, it means that, just as is wont to happen with new “real” developers, the Dunning–Kruger effect comes into play. And I swear, I don’t mean that as a slight at all, to anyone. It’s simply a common phase a lot of us go through as we learn new things, myself included.

So that sense of empowerment, combined with (perhaps) some overconfidence, and with the notorious hype cycles that tech goes through, can lead to some bad opinions and decisions.

“A.I. is the new ‘pivot to video’.” (Stephen Sauceda, @stephensauceda, on Threads)

More and more “simple” code could start being produced by the non-coders of an organization as a means to not “bother” an actual on-staff developer. Before long, the question could start being asked “do we need this many developers if A.I. can do so much of it for us?”

How We Earn Our Keep

Because we, as developers, understand code, we should be able to build some wild shit with A.I. We should be able to write code that helps train and tune models to do things casual users (and probably even “power users”) of A.I. simply can’t.

Everyone wants a chatbot, apparently. This, in my opinion, is the most boring and generic use of the technology. But they can be quick to spin up and seen as a somewhat easy win. They are also (again, imo) a consequence of a lack of understanding of the technology, a lack of imagination, or some combination of the two. And, as I’ve said before:

Engineers have a unique perspective on products because we actually build them. We know the technical limitations. We know the caveats. We know what’s actually possible.

The way I see developers protecting themselves from A.I. “taking their job” or whatever, is to understand the technology and show people the kind of stuff it could/”should” be used for. Simple development tasks are quickly becoming table stakes for A.I. so the way we show our value is by doing the “Holy shit!”-style stuff with it.

The Caveat

The problem with more deeply understanding the technology is knowing that, in its current (and maybe forever) state, it is inherently an unethical technology. Bias, accountability, copyright, environmental issues. The list goes on and on.

And a seeming theme in tech is to ignore these issues. Whether it’s ignorance (willful or not), an expectation that “it’s not my problem to solve”, or a ride on yet another hype train in tech, people are rushing to include (and advocate for) A.I. solutions and features in pretty much every product they produce lately, mostly in search of a buck.

I Don’t Know What to Do

It’s a number of different catch-22s for developers. We’re “fighting” with people experiencing the same feeling of empowerment that got us into this industry to begin with. For self-preservation, we have to understand and demonstrate how the technology can be used “in the right hands”. But, for a socially and morally conscious developer, are there really any “right hands”?

It’s an interesting time. “Interesting” isn’t the right word, but it’s the first word that comes to mind.

Stephen Sauceda Nov 9, 2024

Hidde

@adactio I wasn’t sure how to do it in my BT talk, as I was the last and I didn’t want to send people home with a downer, but even then it was the quickest section to write tbh.

Might write it up in a blog post so that speakers can easily find all the ways in which these tools are problematic in one spot (or is it too optimistic to think they would?)

# Posted by Hidde on Saturday, November 9th, 2024 at 8:14pm


