The AI Great Leap Forward

In 1958, Mao ordered every village in China to produce steel. Farmers melted down their cooking pots in backyard furnaces and reported spectacular numbers. The steel was useless. The crops rotted. Thirty million people starved.

In 2026, every other company is issuing top-down mandates on AI transformation.

Same energy.

AI Might Be Our Best Shot At Taking Back The Open Web | Techdirt

Not sure I buy the argument here, though I do very much look forward to local language models getting better so we can ditch the predatory peddlers of today’s slop. But this trip down memory lane to the early web of the 1990s could’ve been describing my own experience:

But the thing I do remember was the first time I came across Derek Powazek’s Fray online magazine. It was the first time I had seen a website look beautiful. This was without CSS and without Javascript. I still remember quite clearly an “issue” of Fray that used frames to create some kind of “doors” you could slide open to reveal an article inside.

Fray was what made me want to make websites:

I distinctly remember sites like prehensile tales, 0sil8 and the inimitable Fray triggering something in my brain that made me realise what it was I wanted to do with my life.

I used AI. It worked. I hated it.: Taggart Tech

There’s a fundamental problem with these tools beyond the capacity of any deployment strategy to solve: the tool requires expertise to validate, but its use diminishes expertise and stunts its growth. How does one become an expert? There are no shortcuts; there is only continuous hard work and dedication. I was once told of writing that great writers learn how to break the rules in new and ingenious ways by first learning the rules.

But how is a new developer meant to learn the rules if their day-to-day work is nothing but the babysitting of models? How will they gain the hard-won experience that allows a human in the loop to be a useful safeguard?

These models alter cognition in ways deleterious to human prosperity. In other words, for as much output as they provide, they take something important from us.

The End : Focal Curve

I can’t remember the last time a blog post resonated with me this much.

Craig’s criteria on his job search:

  • One: fuck offices
  • Two: fuck AI
  • Three: fuck React

And his conclusion:

Fuck work

Flood fill vs. the magic circle

Eleven years ago, I wrote:

Sometimes I consider the explosive growth of computation and think that strong AI is a near-term inevitability.

Then I remember printers.

That was just a brainfart, but Robin tackles it seriously in his thoughtful essay.

A pleasing image: if indeed AI automation does not flood fill the physical world, it will be because the humble paper jam stood in its way.

Software cannot, in fact, eat this world. Software can reflect it; encroach upon it; more than anything, distract us from it. But the real physical world is indigestible.

Working with agents doesn’t feel like flow — Bill de hÓra

Related to Matt’s thoughts:

…working with agents feels much less like classic deep work, and much more like playing a game. Not to say the work is frivolous—it’s just because it feels like I’m in a game loop.

Flow, at least in the usual sense for me, feels smooth and continuous. The work and your attention starts to line up so cleanly that the experience becomes frictionless. You disappear into the work and meld with it. One notable aspect of flow has been I lose track of time. Working with agents on the other hand, is not like that at all. It’s highly engaging, but in a more jagged, reactive way. I’m focused, but not settled. I’m absorbed, but not merged with the task. I’m paying close attention the whole time, but the attention is dynamic and tactical rather than continuous. I don’t lose track of time at all.

Stop Sloppypasta: Don’t paste raw LLM output at people

slop·py·pas·ta n. Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

Generative AI vegetarianism | Sean Boots

Generative AI vegetarianism, simply put, is avoiding generative AI tools as much as you can in your day-to-day life.

I work, I think? - Annotated

This is about something that’s already happening, that doesn’t show up in employment figures: the quiet destruction of the feedback loop that turns inexperienced people into competent ones. The process by which you get something wrong, feel it, understand why, and become slightly less wrong next time. It’s unglamorous and it’s slow and it’s the only way it’s ever worked.

AI short-circuits that learning completely. Not maliciously. Just structurally. When you can generate something that looks right without doing the thinking, you will (most people, most people being me, will, most of the time, under pressure, with a deadline) and the muscle that thinking would have built never develops.

your ai slop bores me

Mutually assured Mechanical Turk.

This is genuinely much more interesting and wholesome than a chat interface powered by a large language model.

I am in an abusive relationship with the technology industry

The cognitive overload of AI trying to Make You More Productive™️ whilst you’re actually trying to be productive is so shockingly absurd. And yet, we are being made to feel like we are stagnating, being left behind, not good enough, that we are luddites should we not adopt this imposing technology. We are being told we’re missing out, even though we’re probably doing just fine. The technology is gaslighting us.

LLMs Are Antithetical to Writing and Humanity

If you’re dyslexic and just trying to communicate more clearly in writing, or you’ve got a bullshit job and you just want to get your bullshit job’s bullshit tasks out of the way so you can move on to more meaningful endeavors, or at least move past the day-to-day slog that permeates your workday and serves no real purpose other than to pay the bills, then I cede; I cannot fault you.

But if, say, you’re a “writer” and you’re using an LLM to “help you” “write” or “think” because it’s easier and takes less time and thought, then I stand my ground; I can and do fault you.

The nature of the job

Large language models help you build the thing faster, which is the primary end goal for your company but only sometimes for you. My primary goal might be to build the thing faster, but it also might be to learn something durably, to enjoy the work, to look forward to Monday.

I don’t like the mental fragility of not fully understanding how my own code works, where AI-generated code is “mine” in that it’s attributed to me in the git blame and I’m its maintainer going forward.

Webspace Invaders · Matthias Ott

There’s a power imbalance at work here that’s hard to ignore. Large “AI” companies, the ones with billions in venture capital, send their bots to harvest free content. Not only from big publishers or Wikipedia, but from small, independent websites, too. But we, the people running these sites – often as passion projects, as ways to freely share what we’ve learned, as digital gardens we tend in our spare time – we’re the ones paying for the bandwidth and server resources to handle all those additional requests while those companies profit from the training data they extract. It’s an asymmetric battle: small systems absorbing the demands generated at an entirely different, industrial scale.

I guess I kinda get why people hate AI

To be clear, I think AI will be ultimately extremely helpful. I still am using it on my projects. I am going to use it at my next job. I, personally, don’t hate AI.

But I can’t deny that the vibes right now are awful.

Not just bad, awful. It’s not just the “chat we’re cooked you’re the permanent underclass” stuff influencers say. It’s not just the “everybody is fucked” hyperbole CEOs spout. It’s the actual, day-to-day experience with the technology. I’m a programmer—AI actually helps me a lot. But for normal people, their interactions are profoundly more negative, and none of the people behind this technology seem to care.

blakewatson.com - I used Claude Code and GSD to build the accessibility tool I’ve always wanted

You know my thoughts on generative tools based on large language models, but this example of personal empowerment is undeniably liberating.

The Mythology Of Conscious AI

This superb essay by Anil Seth won the 2025 Berggruen Prize Essay Competition.

The future history of AI is not yet written. There is no inevitability to the directions AI might yet take. To think otherwise is to be overly constrained by our conceptual inheritance, weighed down by the baggage of bad science fiction and submissive to the self-serving narrative of tech companies laboring to make it to the next financial quarter. Time is short, but collectively we can still decide which kinds of AI we really want and which we really don’t.

Training your replacement | Go Make Things

I’ve had a lot of people recently tell me AI is “inevitable.” That this is “the future” and “we all better get used to it.”

For the last decade, I’ve had a lot of people tell me the same thing about React.

And over that decade of React being “the future” and “inevitable,” I worked on many, many projects without it. I’ve built a thriving career.

AI feels like that in many ways. It also feels different in that non-technical people also won’t shut the fuck up about it.

A considered approach to generative AI in front-end… | Clearleft

A thoughtful approach from Sam:

  1. Use AI only for tasks you already know how to do, on occasions when the time that would be spent completing the task can be better spent on other problems.
  2. When using AI, provide the chosen tool with something you’ve made as an input along with a specific prompt.
  3. Always comprehensively review the output from an AI tool for quality.