Link tags: tool

‘An Overwhelmingly Negative And Demoralizing Force’: What It’s Like Working For A Company That’s Forcing AI On Its Developers - Aftermath

Grim reading from the games industry, especially if you work at Shopify, where the CEO has just mandated that you have to use this shite.

AI ambivalence | Read the Tea Leaves

Here’s the main problem I’ve found with generative AI, and with “vibe coding” in general: it completely sucks out the joy of software development for me.

I hate the way they’ve taken over the software industry, I hate how they make me feel while I’m using them, and I hate the human-intelligence-insulting postulation that a glorified Excel spreadsheet can do what I can but better.

Poisoning Well: HeydonWorks

Heydon is employing a different tactic to what I’m doing to sabotage large language model crawlers. These bots don’t respect the nofollow rel value …so now they pay the price.

Raising my own middle finger to LLM manufacturers will achieve little on its own. If doing this even works at all. But if lots of writers put something similar in place, I wonder what the effect would be. Maybe we would start seeing more—and more obvious—gibberish emerging in generative AI output. Perhaps LLM owners would start to think twice about disrespecting the nofollow protocol.
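If you're wondering what that tactic might look like in practice, here's a minimal sketch of the general idea: a decoy URL, linked with rel="nofollow", that serves a scrambled copy of the real text. The /poison route and the word-shuffling mangler are my own illustration, not Heydon's actual implementation.

```python
# A sketch of the "poisoning" tactic: the real article lives at one URL,
# while a nofollow link points at a decoy URL serving a scrambled version.
# Polite crawlers never follow the decoy; LLM scrapers that ignore
# nofollow ingest gibberish. All names here are illustrative.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

ARTICLE = "Large language model crawlers are overwhelming small sites."

def mangle(text: str, seed: int = 42) -> str:
    """Shuffle the words so the decoy reads as plausible-looking nonsense."""
    words = text.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

PAGE = """<html><body>
<article>{body}</article>
<a href="/poison" rel="nofollow">archive</a>
</body></html>"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Humans and well-behaved bots get the real text; anything that
        # follows the nofollow link to /poison gets the mangled copy.
        body = mangle(ARTICLE) if self.path == "/poison" else ARTICLE
        payload = PAGE.format(body=body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```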

Open source devs say AI crawlers dominate traffic, forcing blocks on entire countries - Ars Technica

As it currently stands, both the rapid growth of AI-generated content overwhelming online spaces and aggressive web-crawling practices by AI firms threaten the sustainability of essential online resources. The current approach taken by some large AI companies—extracting vast amounts of data from open-source projects without clear consent or compensation—risks severely damaging the very digital ecosystem on which these AI models depend.

Go To Hellman: AI bots are destroying Open Access

AI companies with billions to burn are hard at work destroying the websites of libraries, archives, non-profit organizations, and scholarly publishers, anyone who is working to make quality information universally available on the internet.

FOSS infrastructure is under attack by AI companies

More on how large language bots are DDOSing the web:

LLM scrapers are taking down FOSS projects’ infrastructure, and it’s getting worse.

Please stop externalizing your costs directly into my face

Over the past few months, instead of working on our priorities at SourceHut, I have spent anywhere from 20-100% of my time in any given week mitigating hyper-aggressive LLM crawlers at scale.

This matches my experience with The Session. In fact, while I had this article open in a tab, I had to go deal with a tsunami of large language model bots. It’s really fucking depressing.

Please stop legitimizing LLMs or AI image generators or GitHub Copilot or any of this garbage. I am begging you to stop using them, stop talking about them, stop making new ones, just stop. If blasting CO2 into the air and ruining all of our freshwater and traumatizing cheap laborers and making every sysadmin you know miserable and ripping off code and books and art at scale and ruining our fucking democracy isn’t enough for you to leave this shit alone, what is?

Make stuff, on your own, first | Sean Voisen

AI can be incredibly useful when deployed skillfully in creative endeavors—as an ideation partner, as a scaffolding tool, by eliminating tedious tasks, etc.—but anyone making anything truly good with it is probably somebody who could already make something good first without it.

Another uncalled-for blog post about the ethics of using AI | Clagnut by Richard Rutter

This is a really thoughtful piece by Rich, who’s got conflicted feelings about large language models in the design process. I suspect a lot of people can relate to this.

What I do know is that I find LLMs useful on occasion, but every time I use one I die a little inside.

In the way

This sums up my experience of companies trying to inject AI into the products I use to communicate with other people. It’s always just in the way, making stupid suggestions.

“Wait, not like that”: Free and open access in the age of generative AI

Anyone at an AI company who stops to think for half a second should be able to recognize they have a vampiric relationship with the commons. While they rely on these repositories for their sustenance, their adversarial and disrespectful relationships with creators reduce the incentives for anyone to make their work publicly available going forward (freely licensed or otherwise). They drain resources from maintainers of those common repositories often without any compensation.

Even if AI companies don’t care about the benefit to the common good, it shouldn’t be hard for them to understand that by bleeding these projects dry, they are destroying their own food supply.

And yet many AI companies seem to give very little thought to this, seemingly looking only at the months in front of them rather than operating on years-long timescales. (Though perhaps anyone who has observed AI companies’ activities more generally will be unsurprised to see that they do not act as though they believe their businesses will be sustainable on the order of years.)

It would be very wise for these companies to immediately begin prioritizing the ongoing health of the commons, so that they do not wind up strangling their golden goose. It would also be very wise for the rest of us to not rely on AI companies to suddenly, miraculously come to their senses or develop a conscience en masse.

Instead, we must ensure that mechanisms are in place to force AI companies to engage with these repositories on their creators’ terms.

Hallucinations in code are the least dangerous form of LLM mistakes

The moment you run LLM generated code, any hallucinated methods will be instantly obvious: you’ll get an error. You can fix that yourself or you can feed the error back into the LLM and watch it correct itself.

Compare this to hallucinations in regular prose, where you need a critical eye, strong intuitions, and well-developed fact-checking skills to avoid sharing information that’s incorrect and directly harmful to your reputation.

With code you get a powerful form of fact checking for free. Run the code, see if it works.
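Here’s a contrived illustration of that feedback loop. The add_days method below is exactly the kind of plausible-looking API a model might hallucinate; it doesn’t exist, so running the code surfaces the mistake immediately.

```python
# Why hallucinated methods fail loudly: datetime.date has no add_days
# method, so the "LLM-suggested" line raises AttributeError the moment
# it runs. The method name is a plausible hallucination, not a real API.
from datetime import date, timedelta

today = date.today()

try:
    next_week = today.add_days(7)  # hallucinated method: fails instantly
except AttributeError as error:
    print(f"Caught the hallucination: {error}")
    next_week = today + timedelta(days=7)  # the real, working idiom

print(next_week)
```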

Severance Is the Future Tech Bros Want - Reactor

The tech bros advocating for generative AI to take over art are at the same level of cultural refinement as the characters in Severance. They’re creating apps to summarize books for people, tweeting from accounts with Greek statue profile pictures.

GenAI would automate Lumon’s cultural mission, allowing humans to sever themselves from the production of art and culture.

Anchor position tool

This is a great little helper for understanding anchor positioning in CSS.

Chrome-only for now.

Generative AI use and human agency

You do not have to use generative AI.

AI itself cannot be held to account.

If you use AI, you are the one who is accountable for whatever you produce with it.

There are contexts in which it is immoral to use generative AI.

Correcting or fact checking generative AI may take longer than just doing a task yourself, or with conventional AI tools.

You do not have to use generative AI.

The Generative AI Con

I Feel Like I’m Going Insane

Everywhere you look, the media is telling you that OpenAI and their ilk are the future, that they’re building “advanced artificial intelligence” that can take “human-like actions,” but when you look at any of this shit for more than two seconds it’s abundantly clear that it absolutely isn’t and absolutely can’t.

Despite the hype, the marketing, the tens of thousands of media articles, the trillions of dollars in market capitalization, none of this feels real, or at least real enough to sustain this miserable, specious bubble.

We are in the midst of a group delusion — a consequence of an economy ruled by people that do not participate in labor of any kind outside of sending and receiving emails and going to lunches that last several hours — where the people with the money do not understand or care about human beings.

Their narrative is built on a mixture of hysteria, hype, and deeply cynical hope in the hearts of men that dream of automating away jobs that they would never, ever do themselves.

Generative AI is a financial, ecological and social time bomb, and I believe that it’s fundamentally damaging the relationship between the tech industry and society, while also shining a glaring, blinding light on the disconnection between the powerful and regular people. The fact that Sam Altman can ship such mediocre software and get more coverage and attention than every meaningful scientific breakthrough of the last five years combined is a sign that our society is sick, our media is broken, and that the tech industry thinks we’re all fucking morons.

AI is Stifling Tech Adoption | Vale.Rocks

Want to use all those great features that have been landing in browsers over the past year or two? View transitions! Scroll-driven animations! So much more!

Well, your coding co-pilot is not going to be of any help.

Large language models, especially those on the scale of many of the most accessible, popular hosted options, take humongous datasets and long periods to train. By the time everything has been scraped and a dataset has been built, the set is on some level already obsolete. Then, before a model can reach the hands of consumers, time must be taken to train and evaluate it, and then even more to finally deploy it.

Once it has finally been released, it usually remains stagnant in terms of having its knowledge updated. This creates an AI knowledge gap: a period between the present and the model’s training cutoff. Any technology that emerges within that gap is one the model cannot help users adopt, which disincentivises its use.

So we get this instead:

I’ve anecdotally noticed that many AI tools have a ‘preference’ for React and Tailwind when asked to tackle a web-based task, or even to create any app involving an interface at all.

Tech continues to be political | Miriam Eric Suzanne

Being “in tech” in 2025 is depressing, and if I’m going to stick around, I need to remember why I’m here.

This. A million times, this.

I urge you to read what Miriam has written here. She has articulated everything I’ve been feeling.

I don’t know how to participate in a community that so eagerly brushes aside the active and intentional/foundational harms of a technology. In return for what? Faster copypasta? Automation tools being rebranded as an “agentic” web? Assurance that we won’t be left behind?

Is it okay?

Robin takes a fair and balanced look at the ethics of using large language models.