Journal tags: html


The datalist element on iOS 26

The datalist element is all fucked up on iOS. Again.

I haven’t “upgraded” my iPhone to iOS 26 and I have no plans to. The whole Liquid Glass thing is literally off-putting. So I wouldn’t have known about the latest regression in Safari if a friend hadn’t texted me about the problem.

He was trying to do a search on The Session. He was looking for the tune, The Road To Town. He started typing this into the form on the home page of the site. He got as far as “The Road To”. That’s when the entire input was obscured by a suggestion from the associated datalist.

A screenshot of The Session on an iPhone during a search on the homepage. The search input is completely obscured by the text: The Road To Lisdoonvarna.

This is incredibly annoying and seems to be a pattern of behaviour for Safari. Features are supported …technically. But the implementation is so buggy as to be unusable.

I’ll probably have to do some user-agent sniffing, which I hate. And it won’t be enough to just sniff for Safari on iOS 26. Remember that every browser on iOS is just WebKit in a trenchcoat.

Time to file a bug and then wait God knows how long for an update to get rolled out.

Update: I filed a bug, but in the meantime it looks like user-agent sniffing is going to be impossible.

Installing web apps

Safari, Chrome, and Edge all allow you to install websites as though they’re apps.

On mobile Safari, this is done with the “Add to home screen” option that’s buried deep in the “share” menu, making it all but useless.

On the desktop, this is “Add to dock” in Safari, or “Install” in Chrome or Edge.

Firefox doesn’t offer this functionality, which is a shame. Firefox is my browser of choice but they decided a while back to completely abandon progressive web apps (though they might reverse that decision soon).

Anyway, being able to install websites as apps is fantastic! I’ve got a number of these “apps” in my dock: Mastodon, Bluesky, Instagram, The Session, Google Calendar, Google Meet. They all behave just like native apps. I can’t even tell which browser I used to initially install them.

If you’d like to prompt users to install your website as an app, there’s not much you can do other than show them how to do it. But that might be about to change…

I’ve been eagerly watching the proposal for a Web Install API. This would allow authors to put a button on a page that, when clicked, would trigger the installation process (the user would still need to confirm this, of course).

Right now it’s a JavaScript API called navigator.install, but there’s talk of having a declarative version too. Personally, I think this would be an ideal job for an invoker command. Making a whole new install element seems ludicrously over-engineered to me when button invoketarget="share" is right there.

Microsoft recently announced that they’d be testing the JavaScript API in an origin trial. I immediately signed up The Session for the trial. Then I updated the site to output the appropriate HTTP header.

You still need to mess around in the browser configs to test this locally. Go to edge://flags or chrome://flags/ and search for ‘Web App Installation API’, enable it and restart.

I’m now using this API on the homepage of The Session. Unsurprisingly, I’ve wrapped up the functionality into an HTML web component that I call button-install.

Here’s the code. You use it like this:

<button-install>
  <button>Install the app</button>
</button-install>

Use whatever text you like inside the button.

I wasn’t sure whether to keep the button element in the regular DOM or generate it in the Shadow DOM of the custom element. Seeing as the button requires JavaScript to do anything, the Shadow DOM option would make sense. As Tess put it, Shadow DOM is for hiding your shame—the bits of your interface that depend on JavaScript.

In the end I decided to stick with a regular button element within the custom element, but I take steps to remove it when it’s not necessary.

There’s a potential issue in having an element that could self-destruct if the browser doesn’t cut the mustard. There might be a flash of seeing the button before it gets removed. That could even cause a nasty layout shift.

So far I haven’t seen this problem myself but I should probably use something like Scott’s CSS in reverse: fade in the button with a little delay (during which time the button might end up getting removed anyway).

My connectedCallback method starts by finding the button nested in the custom element:

class ButtonInstall extends HTMLElement {
  connectedCallback () {
    this.button = this.querySelector('button');
    …
  }
customElements.define('button-install', ButtonInstall);

If the navigator.install method doesn’t exist, remove the button.

if (!navigator.install) {
  this.button.remove();
  return;
}

If the current display-mode is standalone, then the site has already been installed, so remove the button.

if (window.matchMedia('(display-mode: standalone)').matches) {
  this.button.remove();
  return;
}

As an extra measure, I could also use the display-mode media query in CSS to hide the button:

@media (display-mode: standalone) {
  button-install button {
    display: none;
  }
}

If the button has survived these tests, I can wire it up to the navigator.install method:

this.button.addEventListener('click', async (ev) => {
  await navigator.install();
});

That’s all I’m doing for now. I’m not doing any try/catch stuff to handle all the permutations of what might happen next. I just hand it over to the browser from there.
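Putting those pieces together, the whole component boils down to something like this (a condensed sketch of the steps above, not necessarily a verbatim copy of the code I’ve linked to):

class ButtonInstall extends HTMLElement {
  connectedCallback () {
    this.button = this.querySelector('button');
    // No install API in this browser: remove the button.
    if (!navigator.install) {
      this.button.remove();
      return;
    }
    // Already running as an installed app: remove the button.
    if (window.matchMedia('(display-mode: standalone)').matches) {
      this.button.remove();
      return;
    }
    // Otherwise hand the click over to the browser.
    this.button.addEventListener('click', async () => {
      await navigator.install();
    });
  }
}
customElements.define('button-install', ButtonInstall);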

Feel free to use this code if you want. Adjust the code as needed. If your manifest file says display: fullscreen you’ll need to change the test in the JavaScript accordingly.

Oh, and make sure your site already has a manifest file that has an id field in it. That’s required for navigator.install to work.

Bóthar

England is criss-crossed by routes that were originally laid down by the Romans. When it came time to construct modern roads, it often made sense to use these existing routes rather than trying to designate entirely new ones. So some of the roads in England are like an early kind of desire path.

Desire paths are something of a cliché in the UX world. They’re the perfect metaphor for user-centred design; instead of trying to make people take a pre-defined route, let them take the route that’s easiest for them and then codify that route.

This idea was enshrined into the very design principles of HTML as “pave the cowpaths”:

When a practice is already widespread among authors, consider adopting it rather than forbidding it or inventing something new.

Ireland never had any Roman roads. But it’s always had plenty of cowpaths.

The Irish word for cow is bó.

The Irish word for road is bóthar, which literally means “cowpath”.

The cowpaths were paved in both the landscape and the language.

Speaking at Web Day Out

Half of the line-up of speakers for Web Day Out is already on the site. One more is already confirmed.

I’m ridiculously excited about the way the line-up is taking shape, and judging by the zippiness of ticket sales, so are lots of my peers. Seriously, don’t wait to get your ticket or you might end up missing out completely.

I’ve already got a shortlist of other people I could imagine on the line-up, but I’m open to more suggestions. If you’d like to speak at Web Day Out—or you know someone you think would be great—send an email to jeremy@clearleft.com

I won’t be checking my work email while I’m away on holiday next week but it would be lovely to come back to an inbox of exciting suggestions.

A couple of pointers…

I’d rather not have too many people like me on the line-up. White dudes are already over-represented in this industry, especially at conferences.

If you’ve never given a talk before, don’t worry. I’d love to help you put your talk together and coach you in presenting it. I have some experience in this area.

No product pitches. That includes JavaScript frameworks and CSS libraries.

If I get even a whiff of “AI”, your proposal doesn’t stand a chance. There are many, many, many other events that are only too happy to have wall-to-wall talks about …that sort of thing.

If you end up speaking at Web Day Out you will, of course, be paid. We will, of course, cover travel and accommodation too. We can’t afford the travel costs of bringing anyone in from outside Europe though (and we’d like to keep the carbon footprint of the event as small as possible).

Web Day Out has an opinionated agenda all about showing what’s possible in web browsers today. Some potential topics include:

The emphasis should be on using stuff in production rather than theoretical demos.

If you’ve got a case study about using the web platform—perhaps migrating away from a framework-driven approach—that would fit the bill perfectly.

How’s all that sounding? Know someone who could deliver the goods? Let me know!

Announcing Web Day Out

I’m going to cut right to the chase: Clearleft is putting on a brand new conference in 2026. It’s called Web Day Out. It’ll be on Thursday, March 12th right here in Brighton. Tickets are just £225+VAT. You should be there!

If you’ve ever been to Responsive Day Out or Patterns Day, the format will be familiar to you. There are going to be eight 30-minute talks. Bam! Bam! Bam!

Like those other one-day conferences, this one has a laser-sharp focus.

Web Day Out is all about what you can do in web browsers today. You can expect talks that showcase hands-on practical uses for the latest advances in HTML, CSS, and JavaScript APIs. There will be no talks about libraries, frameworks or build tools, and I can guarantee there will be absolutely no so-called “AI”.

As you might have gathered, this is an opinionated conference.

If you care about performance, accessibility, and progressive enhancement, Web Day Out is the event for you.

Or if you’ve been living in React-land but starting to feel that maybe you’re missing out on what’s been shipping in web browsers, Web Day Out is the event for you too. And I’m not talking about cute demos here. This is very much about shipping to production.

I’ve got half of the line-up assembled already:

Jemima’s talk gives you a flavour of what to expect at Web Day Out:

In this talk, we’ll take a look at how to use HTML and CSS to build simpler alternatives to popular JavaScript components such as accordions, modals, scroll transitions, carousels etc We’ll also take a look at the performance and accessibility benefits and real-life applications and use-cases of these components.

Web Day Out will be in The Studio Theatre of the Brighton Dome, which is a fantastic intimate venue. That means that places are limited, so get your ticket now!

Streamlining HTML web components

If you’re a front-end developer and you don’t read Chris Ferdinandi’s blog, you should change that right now. Add that RSS feed to your feed reader of choice!

Lately he’s been posting about some of the thinking behind his Kelp UI library. That includes some great nuggets of wisdom around HTML web components.

First of all, he pointed out that web components don’t need a constructor(). This was news to me. I thought custom elements had to include this incantation at the start:

constructor () {
  super();
}

But it turns out that if all you’re doing is calling super(), you can omit the whole thing and it’ll be done for you.
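In other words, a bare-bones custom element (the name here is made up) can be as short as this:

class LikeButton extends HTMLElement {
  // No constructor needed: the default one calls super() for us.
  connectedCallback () {
    this.addEventListener('click', () => console.log('Enhanced!'));
  }
}
customElements.define('like-button', LikeButton);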

I immediately refactored all the web components I’m using on The Session. While I was at it, I implemented Chris’s bulletproof web component loading.

Now technically, I don’t need to do this. I’m linking to my JavaScript at the bottom of every page so I know it’s going to load after the HTML. But I don’t like having that assumption baked into my code.

For any of my custom elements that reference other elements in the DOM—using, say, document.querySelector()—I updated the connectedCallback() method to use Chris’s technique.
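The gist, as I understand it, is to avoid doing any setup until the document has finished loading. Here’s a rough sketch of that idea (not Chris’s exact code, and the element name is just a placeholder):

class MyComponent extends HTMLElement {
  connectedCallback () {
    // If the HTML is still being parsed, wait for it to finish
    // before looking for other elements in the DOM.
    if (document.readyState === 'loading') {
      document.addEventListener('DOMContentLoaded', () => this.init());
      return;
    }
    this.init();
  }
  init () {
    // Safe to use document.querySelector() for other elements here.
  }
}
customElements.define('my-component', MyComponent);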

It turned out that there weren’t that many of my custom elements that were doing that. Because HTML web components are wrapped around existing markup, the contents of the custom element are usually what matters (rather than other elements on the same page).

I guess that’s another unexpected benefit to HTML web components. Because they’ve already got their own bit of DOM inside them, you don’t need to worry about when you load your markup and when you load your JavaScript.

And no faffing about with the dark arts of the Shadow DOM either.

Awareness

Today is Global Accessibility Awareness Day:

The purpose of GAAD is to get everyone talking, thinking and learning about digital access and inclusion, and the more than One Billion people with disabilities/impairments.

Awareness is good. It’s necessary. But it’s not sufficient.

Accessibility, like sustainability and equality, is the kind of thing that most businesses will put at the end of sentences that begin “We are committed to…”

It’s what happens next that matters. How does that declared commitment—that awareness—turn into action?

In the worst-case scenario, an organisation might reach for an accessibility overlay. Who can blame them? They care about accessibility. They want to do something. This is something.

Good intentions alone can result in an inaccessible website. That’s why I think there’s another level of awareness that’s equally important. Designers and developers need to be aware of what they can actually do in service of accessibility.

Fortunately that’s not an onerous expectation. It doesn’t take long to grasp the importance of having good colour contrast or using the right HTML elements.

An awareness of HTML is like a superpower when it comes to accessibility. Like I wrote in the foreword to the Web Accessibility Cookbook by O’Reilly:

It’s supposed to be an accessibility cookbook but it’s also one of the best HTML tutorials you’ll ever find. Come for the accessibility recipe; stay for the deep understanding of markup.

The challenge is that HTML is hidden. Like Cassie said in the accessibility episode of The Clearleft Podcast:

You get JavaScript errors if you do that wrong and you can see if your CSS is broken, but you don’t really have that with accessibility. It’s not as obvious when you’ve got something wrong.

We are biased towards what we can see—hierarchy, layout, imagery, widgets. Those are the outputs. When it comes to accessibility, what matters is how those outputs are generated. Is that button actually a button element or is it a div? Is that heading actually an h1 or is it another div?

This isn’t about the semantics of HTML. This is about the UX of HTML:

Instead of explaining the meaning of a certain element, I show them what it does.

That’s the kind of awareness I’m talking about.

One way of gaining this awareness is to get a feel for using a screen reader.

The name is a bit of a misnomer. Reading the text on screen is the least important thing that the software does. The really important thing that a screen reader does is convey the structure of what’s on screen.

Friend of Clearleft, Jamie Knight very generously spent an hour of his time this week showing everyone the basics of using VoiceOver on a Mac (there’s a great short video by Ethan that also covers this).

Using the rotor, everyone was able to explore what’s under the hood of a web page; all the headings, the text of all the links, the different regions of the page.

That’s not going to turn anyone into an accessibility expert overnight, but it gave everyone an awareness of how much the HTML matters.

Mind you, accessibility is a much bigger field than just screen readers.

Fred recently hosted a terrific panel called Is neurodiversity the next frontier of accessibility in UX design?—well worth a watch!

One of those panelists—Craig Abbott—is speaking on day two of UX London next month. His talk has the magnificent title, Accessibility is a design problem:

I spend a bit of time covering some misconceptions about accessibility, who is responsible for it, and why it’s important that we design for it up front. It also includes real-world examples where design has impacted accessibility, before moving onto lots of practical guidance on what to be aware of and how to design for many different accessibility issues.

Get yourself a ticket and get ready for some practical accessibility awareness.

Command and control

I’ve been banging on for a while now about how much I’d like a declarative option for the Web Share API. I was thinking that the type attribute on the button element would be a good candidate for this (there’s prior art in the way we extended the type attribute on the input element for HTML5).

I wrote about the reason for a share button type as well as creating a polyfill. I also wrote about how this idea would work for other button types: fullscreen, print, copy to clipboard, that sort of thing.

Since then, I’ve been very interested in the idea of “invokers” being pursued by the Open UI group. Rather than extending the type attribute, they’ve been looking at adding a new attribute. Initially it was called invoketarget (so something like button invoketarget="share").

Things have been rolling along and invoketarget has now become the command attribute (there’s also a new commandfor attribute that points to another element by its ID). Here’s a list of potential values for the command attribute on a button element.

Right now they’re focusing on providing declarative options for launching dialogs and other popovers. That’s already shipping.

The next step is to use command and commandfor for controlling audio and video, as well as some form controls. I very much approve! I love the idea of being able to build and style a fully-featured media player without any JavaScript.

I’m hoping that after that we’ll see the command attribute get expanded to cover JavaScript APIs that require a user interaction. These seem like the ideal candidates:

There’s also scope for declarative options for navigating the browser’s history stack:

  • button command="back"
  • button command="forward"
  • button command="refresh"

Whatever happens next, I’m very glad to see that so much thinking is being applied to declarative solutions for common interface patterns.

Making the website for Research By The Sea

UX London isn’t the only event from Clearleft coming your way in 2025. There’s a brand new spin-off event dedicated to user research happening in February. It’s called Research By The Sea.

I’m not curating this one, though I will be hosting it. The curation is being carried out most excellently by Benjamin, who has written more about how he’s doing it:

We’ve invited some of the best thinkers and doers from the research space to explore how researchers might respond to today’s most gnarly and pressing problems. They’ll challenge current perspectives, tools, practices and thinking styles, and provide practical steps for getting started today to shape a better tomorrow.

If that sounds like your cup of tea, you should put February 27th 2025 in your calendar and grab yourself a ticket.

Although I’m not involved in curating the line-up for the event, I offered Benjamin my swor… my web dev skillz. I made the website for Research By The Sea and I really enjoyed doing it!

These one-day events are a great chance to have a bit of fun with the website. I wrote about how enjoyable it was making the website for this year’s Patterns Day:

I felt like I was truly designing in the browser. Adjusting spacing, playing around with layout, and all that squishy stuff. Some of the best results came from happy accidents—the way that certain elements behaved at certain screen sizes would lead me into little experiments that yielded interesting results.

I took the same approach with Research By The Sea. I had a design language to work with, based on UX London, but with more of a playful, brighter feel. The idea was that the website (and the event) should feel connected to UX London, while also being its own thing.

I kept the typography of the UX London site more or less intact. The page structure is also very similar. That was my foundation. From there I was free to explore some other directions.

I took the opportunity to explore some new features of CSS. But before I talk about the newer stuff, I want to mention the bits of CSS that I don’t consider new. These are the things that are just the way things are done ‘round here.

Custom properties. They’ve been around for years now, and they’re such a life-saver, especially on a project like this where I’m messing around with type, colour, and spacing. Even on a small site like this, it’s still worth having a section at the start where you define your custom properties.

Logical properties. Again, they’ve been around for years. At this point I’ve trained my brain to use them by default. Now when I see a left, right, width or height in a style sheet, it looks like a bug to me.

Fluid type. It’s kind of a natural extension of responsive design to me. If a website’s typography doesn’t adjust to my viewport, it feels slightly broken. On this project I used Utopia because I wanted different type scales as the viewport increased. On other projects I’ve just used one clamp declaration on the body element, which can also get the job done.

Okay, so those are the things that feel standard to me. So what could I play around with that was new?

View transitions. So easy! Just point to an element on two different pages and say “Hey, do a magic move!” You can see this in action with the logo as you move from the homepage to, say, the venue page. I’ve also added view transitions to the speaker headshots on the homepage so that when you click through to their full page, you get a nice swoosh.

Unless, like me, you’re using Firefox. In that case, you won’t see any view transitions. That’s okay. They are very much an enhancement. Speaking of which…

Scroll-driven animations. You’ll only get these in Chromium browsers right now, but again, they’re an enhancement. I’ve got multiple background images—a bunch of cute SVG shapes. I’m using scroll-driven animations to change the background positions and sizes as you scroll. It’s a bit silly, but hopefully kind of cute.

You might be wondering how I calculated the movements of each background image. Good question. I basically just messed around with the values. I had fun! But imagine what an actually-skilled interaction designer could do.

That brings up an interesting observation about both view transitions and scroll-driven animations: Figma will not help you here. You need to be in a web browser with dev tools popped open. You’ve got to roll up your sleeves and get your hands into the machine. I know that sounds intimidating, but it’s also surprisingly enjoyable and empowering.

Oh, and I made sure to wrap both the view transitions and the scroll-driven animations in a prefers-reduced-motion: no-preference @media query.

I’m pleased with how the website turned out. It feels fun. More importantly, it feels fast. There is zero JavaScript. That’s the main reason why it’s very, very performant (and accessible).

Smooth transitions across pages; smooth animations as you scroll: it’s great what you can do with just HTML and CSS.

Archives

Speaking of serendipity, not long after I wrote about making a static archive of The Session for people to download and share, I came across a piece by Alex Chan about using static websites for tiny archives.

The use-case is slightly different—this is about personal archives, like paperwork, screenshots, and bookmarks. But we both came up with the same process:

I’m deliberately going low-scale, low-tech. There’s no web server, no build system, no dependencies, and no JavaScript frameworks.

And we share the same hope:

Because this system has no moving parts, and it’s just files on a disk, I hope it will last a long time.

You should read the whole thing, where Alex describes all the other approaches they took before settling on plain ol’ HTML files in a folder:

HTML is low maintenance, it’s flexible, and it’s not going anywhere. It’s the foundation of the entire web, and pretty much every modern computer has a web browser that can render HTML pages. These files will be usable for a very long time – probably decades, if not more.

I’m enjoying this approach, so I’m going to keep using it. What I particularly like is that the maintenance burden has been essentially zero – once I set up the initial site structure, I haven’t had to do anything to keep it working.

They also talk about digital preservation:

I’d love to see static websites get more use as a preservation tool.

I concur! And it’s particularly interesting for Alex to be making this observation in the context of working with the Flickr Foundation. That’s where they’re experimenting with the concept of a data lifeboat:

What should we do when a digital service sinks?

This is something that George spoke about at the final dConstruct in 2022. You can listen to the talk on the dConstruct archive.

Train coding

When I went up to London for the State of the Browser conference last month, I shared the train journey with Remy.

I always like getting together with Remy. We usually end up discussing sci-fi books we’re reading, commiserating with one another about conference-organising, discussing the minutiae of browser APIs, or talking about the big-picture vision of the World Wide Web.

On this train ride we ended up talking about the march of time and how death comes for us all …and our websites.

Take The Session, for example. It’s been running for two and a half decades in one form or another. I plan to keep it running for many more decades to come. But I’m the weak link in that plan.

If I get hit by a bus tomorrow, The Session will keep running. The hosting is paid up for a while. The domain name is registered for as long as possible. But inevitably things will need to be updated. Even if no new features get added to the site, someone’s got to install updates to keep the underlying software safe and secure.

Remy and I discussed the long-term prospects for widening out the admin work to more people. But we also discussed smaller steps I could take in the meantime.

Like, there’s the actual content of the website. Now, I currently share exports from the database every week in JSON, CSV, and SQLite. That’s good. But you need to be a tech nerd to do anything useful with that data.

The more I talked about it with Remy, the more I realised that HTML would be the most useful format for the most people.

There’s a cute acronym in the world of digital preservation: LOCKSS. Lots Of Copies Keep Stuff Safe. If there were multiple copies of The Session’s content out there in the world, then I’d have a nice little insurance policy against some future catastrophe befalling the live site.

With the seed of the idea planted in my head, I waited until I had some time to dive in and see if this was doable.

Fortunately I had plenty of opportunity to do just that on some other train rides. When I was in Spain and France recently, I spent hours and hours on trains. For some reason, I find train journeys very conducive to coding, especially if you don’t need an internet connection.

By the time I was back home, the code was done. Here’s the result:

The Session archive: a static copy of the content on thesession.org.

If you want to grab a copy for yourself, go ahead and download this .zip file. Be warned that it’s quite large! The .zip file is over two gigabytes in size and the unzipped collection of web pages is almost ten gigabytes. I plan to update the content every week or so.

I’ve put a copy up on Netlify and I’m serving it from the subdomain archive.thesession.org if you want to check out the results without downloading the whole thing.

Because this is a collection of static files, there’s no search. But you can use your browser’s “Find in Page” feature to search within the (very long) index pages of each section of the site.

You don’t need a web server to click around between the pages: they should all work straight from your file system. Double-clicking any HTML file should give you a starting point.

I wanted to reduce the dependencies on each page to as close to zero as I could. All the CSS is embedded in the page. Likewise with most of the JavaScript (you’ll still need an internet connection to get audio playback and dynamic maps). This keeps the individual pages nice and self-contained. That means they can be shared around (as an email attachment, for example).

I’ve shared this project with the community on The Session and people are into it. If nothing else, it could be handy to have an offline copy of the site’s content on your hard drive for those situations when you can’t access the site itself.

Preventing automated sign-ups

The Session goes through periods of getting spammed with automated sign-ups. I’m not sure why. It’s not like they do anything with the accounts. They’re just created and then they sit there (until I delete them).

In the past I’ve dealt with them in an ad-hoc way. If the sign-ups were all coming from the same IP addresses, I could block them. If the sign-ups showed some pattern in the usernames or emails, I could use that to block them.

Recently though, there was a spate of sign-ups that didn’t have any patterns, all coming from different IP addresses.

I decided it was time to knuckle down and figure out a way to prevent automated sign-ups.

I knew what I didn’t want to do. I didn’t want to put any obstacles in the way of genuine sign-ups. There’d be no CAPTCHAs or other “prove you’re a human” shite. That’s the airport security model: inconvenience everyone to stop a tiny number of bad actors.

The first step I took was the bare minimum. I added two form fields—called “wheat” and “chaff”—that are randomly generated every time the sign-up form is loaded. There’s a connection between those two fields that I can check on the server.

Here’s how I’m generating the fields in PHP:

$saltstring = 'A string known only to me.';
$wheat = base64_encode(openssl_random_pseudo_bytes(16));
$chaff = password_hash($saltstring.$wheat, PASSWORD_BCRYPT);

See how the fields are generated from a combination of random bytes and a string of characters never revealed on the client? To keep it from going stale, this string—the salt—includes something related to the current date.

Now when the form is submitted, I can check to see if the relationship holds true:

if (!password_verify($saltstring.$_POST['wheat'], $_POST['chaff'])) {
    // Spammer!
}

That’s just the first line of defence. After thinking about it for a while, I came to the conclusion that it wasn’t enough to just generate some random form field values; I needed to generate random form field names.

Previously, the names for the form fields were easily-guessable: “username”, “password”, “email”. What I needed to do was generate unique form field names every time the sign-up page was loaded.

First of all, I create a one-time password:

$otp = base64_encode(openssl_random_pseudo_bytes(16));

Now I generate form field names by hashing that random value with known strings (“username”, “password”, “email”) together with a salt string known only to me.

$otp_hashed_for_username = md5($saltstring.'username'.$otp);
$otp_hashed_for_password = md5($saltstring.'password'.$otp);
$otp_hashed_for_email = md5($saltstring.'email'.$otp);

Those are all used for form field names on the client, like this:

<input type="text" name="<?php echo $otp_hashed_for_username; ?>">
<input type="password" name="<?php echo $otp_hashed_for_password; ?>">
<input type="email" name="<?php echo $otp_hashed_for_email; ?>">

(Remember, the name—or the ID—of the form field makes no difference to semantics or accessibility; the accessible name is derived from the associated label element.)

The one-time password also becomes a form field on the client:

<input type="hidden" name="otp" value="<?php echo $otp; ?>">

When the form is submitted, I use the value of that form field along with the salt string to recreate the field names:

$otp_hashed_for_username = md5($saltstring.'username'.$_POST['otp']);
$otp_hashed_for_password = md5($saltstring.'password'.$_POST['otp']);
$otp_hashed_for_email = md5($saltstring.'email'.$_POST['otp']);

If those form fields don’t exist, the sign-up is rejected.

As an added extra, I leave hidden honeypot fields named “username”, “password”, and “email”. If any of those fields are filled out, the sign-up is rejected.

I put that code live and the automated sign-ups stopped straight away.

It’s not entirely foolproof. It would be possible to create an automated sign-up system that grabs the names of the form fields from the sign-up form each time. But this puts enough friction in the way to make automated sign-ups a pain.

You can view source on the sign-up page to see what the form fields are like.

I used the same technique on the contact page to prevent automated spam there too.

The datalist element on iOS

The datalist element is good. It was a bit bumpy there for a while, but browser implementations have improved over time. Now it’s by far the simplest and most robust way to create an autocompleting combobox widget.

Hook up an input element with a datalist element using the list and id attributes and you’re done. You can even use a bit of Ajax to dynamically update the option elements inside the datalist in response to the user’s input. The browser takes care of all the interaction. If you try to roll your own combobox implementation, it’s almost certainly going to involve a lot of JavaScript and still probably won’t account for all use cases.
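That dynamic updating can be as little as a fetch and a swap of the option elements. Here’s a rough sketch, assuming a made-up /suggestions endpoint that returns an array of strings as JSON, and a datalist with the id “tunes”:

const input = document.querySelector('input[list="tunes"]');
const datalist = document.querySelector('#tunes');

input.addEventListener('input', async () => {
  // Made-up endpoint: swap in whatever your server provides.
  const response = await fetch('/suggestions?q=' + encodeURIComponent(input.value));
  const suggestions = await response.json();
  // Replace the existing options with the new suggestions.
  datalist.replaceChildren(...suggestions.map((text) => new Option(text)));
});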

Safari on iOS—and therefore all browsers on iOS—didn’t support datalist for quite a while. But once it finally shipped, it worked really nicely. The options showed up just like autocomplete suggestions above the keyboard.

But that broke a while back.

The suggestions still appeared, but if you tapped on one of them, nothing happened. The input element didn’t get updated. You had to tap on a little downward arrow inside the input in order to see the list of options.

That was really frustrating for anybody on iOS using The Session. By far the most common task on the site is searching for a tune, something that’s greatly (progressively) enhanced with a dynamically-updating datalist.

I just updated to iOS 18 specifically to see if this bug has been fixed, and it has:

Fixed updating the input value when selecting an option from a datalist element.

Hallelujah!

But now there’s some additional behaviour that’s a little weird.

As well as showing the options in the autocomplete list above the keyboard, Safari on iOS—and therefore all browsers on iOS—also pops up the options as a list (as if you had tapped on that downward arrow). If the list is more than a few options long, it completely obscures the input element you’re typing into!

I’m not sure if this is a bug or if it’s the intended behaviour. It feels like a bug, but I don’t know if I should file something.

For now, I’ve updated the datalist elements on The Session to only ever hold three option elements in order to minimise the problem. Seeing as the autosuggest list above the keyboard only ever shows a maximum of three suggestions anyway, this feels like a reasonable compromise.

Manual ’till it hurts

I’ve been going buildless—or as Brad crudely puts it, raw-dogging websites—on a few projects recently. Not just obviously simple things like Clearleft’s Browser Support page, but sites like:

They also have 0 dependencies.

Like Max says:

Funnily enough, many build tools advertise their superior “Developer Experience” (DX). For my money, there’s no better DX than shipping code straight to the browser and not having to worry about some cryptic node_modules error in between.

Making websites without a build step is a gift to your future self. When you open that project six months or a year or two years later, there’ll be no faffing about with npm updates, installs, or vulnerabilities.

Need to edit the CSS? You edit the CSS. Need to change the markup? You change the markup.

It’s remarkably freeing. It’s also very, very performant.

If you’re thinking that your next project couldn’t possibly be made without a build step, let me tell you about a phrase I first heard in the indie web community: “Manual ‘till it hurts”. It’s basically a two-step process:

  1. Start doing what you need to do by hand.
  2. When that becomes unworkable, introduce some kind of automation.

It’s remarkable how often you never reach step two.

I’m not saying premature optimisation is the root of all evil. I’m just saying it’s premature.

Start simple. Get more complex if and when you need to.

You might never need to.

My approach to HTML web components

I’ve been deep-diving into HTML web components over the past few weeks. I decided to refactor the JavaScript on The Session to use custom elements wherever it made sense.

I really enjoyed doing this, even though the end result for users is exactly the same as before. This was one of those refactors that was for me, and also for future me. The front-end codebase looks a lot more understandable and therefore maintainable.

Most of the JavaScript on The Session is good ol’ DOM scripting. Listen for events; when an event happens, make some update to some element. It’s the kind of stuff we might have used jQuery for in the past.

Chris invoked Betteridge’s law of headlines recently by asking Will Web Components replace React and Vue? I agree with his assessment. The reactivity you get with full-on frameworks isn’t something that web components offer. But I do think web components can replace jQuery and other approaches to scripting the DOM.

I’ve written about my preferred way to do DOM scripting: event.target.closest. One of the advantages to that approach is that even if the DOM gets updated—perhaps via Ajax—the event listening will still work.
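The pattern looks something like this (a generic sketch; the selector is made up):

// One listener on the document handles clicks for every matching
// button, including any buttons added later via Ajax.
document.addEventListener('click', (event) => {
  const button = event.target.closest('button.copy');
  if (!button) return;
  // …do something with the button that was clicked…
});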

Well, this is exactly the kind of thing that custom elements take care of for you. The connectedCallback method gets fired whenever an instance of the custom element is added to the document, regardless of whether that’s in the initial page load or later in an Ajax update.

So my client-side scripting style has updated over time:

  1. Adding event handlers directly to elements.
  2. Adding event handlers to the document and using event.target.closest.
  3. Wrapping elements in a web component that handles the event listening.

None of these progressions were particularly ground-breaking or allowed me to do anything I couldn’t do previously. But each progression improved the resilience and maintainability of my code.

Like Chris, I’m using web components to progressively enhance what’s already in the markup. In fact, looking at the code that Chris is sharing, I think we may be writing some very similar web components!

A few patterns have emerged for me…

Naming custom elements

Naming things is famously hard. Every time you make a new custom element you have to give it a name that includes a hyphen. I settled on the convention of using the first part of the name to echo the element being enhanced.

If I’m adding an enhancement to a button element, I’ll wrap it in a custom element that starts with button-. I’ve now got custom elements like button-geolocate, button-confirm, button-clipboard and so on.

Likewise if the custom element is enhancing a link, it will begin with a-. If it’s enhancing a form, it will begin with form-.

The name of the custom element tells me how it’s expected to be used. If I find myself wrapping a div with button-geolocate I shouldn’t be surprised when it doesn’t work.

Naming attributes

You can use any attributes you want on a web component. You made up the name of the custom element and you can make up the names of the attributes too.

I’m a little nervous about this. What if HTML ends up with a new global attribute in the future that clashes with something I’ve invented? It’s unlikely but it still makes me wary.

So I use data- attributes. I’ve already got a hyphen in the name of my custom element, so it makes sense to have hyphens in my attributes too. And by using data- attributes, the browser gives me automatic reflection of the value in the dataset property.

Instead of getting a value with this.getAttribute('maximum') I get to use this.dataset.maximum. Nice and neat.
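So the markup and the script line up like this (a made-up element and attribute, just to show the reflection):

// <button-example data-maximum="5"> … </button-example>
class ButtonExample extends HTMLElement {
  connectedCallback () {
    // data-maximum is reflected automatically as this.dataset.maximum
    const maximum = Number(this.dataset.maximum);
    console.log(maximum); // 5
  }
}
customElements.define('button-example', ButtonExample);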

The single responsibility principle

My favourite web components aren’t all-singing, all-dancing powerhouses. Rather they do one thing, often a very simple thing.

Here are some examples:

  • Jason’s aria-collapsable for toggling the display of one element when you click on another.
  • David’s play-button for adding a play button to an audio or video element.
  • Chris’s ajax-form for sending a form via Ajax instead of a full page refresh.
  • Jim’s user-avatar for adding a tooltip to an image.
  • Zach’s table-saw for making tables responsive.

All of those are HTML web components in that they extend your existing markup rather than JavaScript web components that are used to replace HTML. All of those are also unambitious by design. They each do one thing and one thing only.

But what if my web component needs to do two things?

I make two web components.

The beauty of custom elements is that they can be used just like regular HTML elements. And the beauty of HTML is that it’s composable.

What if you’ve got some text that you want to be a level-three heading and also a link? You don’t bemoan the lack of an element that does both things. You wrap an a element in an h3 element.

The same goes for custom elements. If I find myself adding multiple behaviours to a single custom element, I stop and ask myself if this should be multiple custom elements instead.

Take some of those button- elements I mentioned earlier. One of them copies text to the clipboard, button-clipboard. Another throws up a confirmation dialog to complete an action, button-confirm. Suppose I want users to confirm when they’re copying something to their clipboard (not a realistic example, I admit). I don’t have to create a new hybrid web component. Instead I wrap the button in the two existing custom elements.

Rather than having a few powerful web components, I like having lots of simple web components. The power comes with how they’re combined. Like Unix pipes. And it has the added benefit of stopping my code getting too complex and hard to understand.

Communicating across components

Okay, so I’ve broken all of my behavioural enhancements down into single-responsibility web components. But what if one web component needs to have awareness of something that happens in another web component?

Here’s an example from The Session: the results page when you search for sessions in London.

There’s a map. That’s one web component. There’s a list of locations. That’s another web component. There are links for traversing backwards and forwards through the locations via Ajax. Those links are in web components too.

I want the map to update when the list of locations changes. Where should that logic live? How do I get the list of locations to communicate with the map?

Events!

When a list of locations is added to the document, it emits a custom event that bubbles all the way up. In fact, that’s all this component does.

You can call the event anything you want. It could be a newLocations event. That event is dispatched in the connectedCallback of the component.

Meanwhile in the map component, an event listener listens for any newLocations events on the document. When that event handler is triggered, the map updates.
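In code, the two halves of that conversation look something like this (a simplified sketch with made-up element names, not the exact code on the site):

// The component that lists locations announces its arrival…
class LocationList extends HTMLElement {
  connectedCallback () {
    this.dispatchEvent(new CustomEvent('newLocations', {
      bubbles: true
    }));
  }
}
customElements.define('location-list', LocationList);

// …and the map component listens for that announcement.
class LocationMap extends HTMLElement {
  connectedCallback () {
    document.addEventListener('newLocations', () => {
      // Update the map here.
    });
  }
}
customElements.define('location-map', LocationMap);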

The web component that lists locations has no idea that there’s a map on the same page. It doesn’t need to. It just needs to dispatch its event, no questions asked.

There’s nothing specific to web components here. Event-driven programming is a tried and tested approach. It’s just a little easier to do thanks to the connectedCallback method.

I’m documenting all this here as a snapshot of my current thinking on HTML web components when it comes to:

  • naming custom elements,
  • naming attributes,
  • the single responsibility principle, and
  • communicating across components.

I may well end up changing my approach again in the future. For now though, these ideas are serving me well.

Displaying HTML web components

Those HTML web components I made for date inputs are very simple. All they do is slightly extend the behaviour of the existing input elements.

This would be the ideal use-case for the is attribute:

<input is="input-date-future" type="date">

Alas, Apple have gone on record to say that they will never ship support for customized built-in elements.

So instead we have to make HTML web components by wrapping existing elements in new custom elements:

<input-date-future>
  <input type="date">
</input-date-future>

The end result is the same. Mostly.

Because there’s now an additional element in the DOM, there could be unexpected styling implications. Like, suppose the original element was a direct child of a flex or grid container. Now that will no longer be true.

So something I’ve started doing with HTML web components like these is adding something like this inside the connectedCallback method:

connectedCallback() {
  this.style.display = 'contents';
  …
}

This tells the browser that, as far as styling is concerned, there’s nothing to see here. Move along.

Or you could (and probably should) do it in your stylesheet instead:

input-date-future {
  display: contents;
}

Just to be clear, you should only use display: contents if your HTML web component is augmenting what’s within it. If you add any behaviours or styling to the custom element itself, then don’t add this style declaration.

It’s a bit of a hack to work around the lack of universal support for the is attribute, but it’ll do.

Pickin’ dates on iOS

This is a little follow-up to my post about web components for date inputs.

If you try the demo on iOS it doesn’t work. There’s nothing stopping you selecting any date.

That’s nothing to do with the web components. It turns out that Safari on iOS doesn’t support min and max on date inputs. This is also true of any other browser on iOS because they’re all just Safari in a trenchcoat …for now.

I was surprised — input type="date" has been around for a long time now. I mean, it’s not the end of the world. You’d have to do validation on inputted dates on the server anyway, but it sure would be nice for the user experience of filling in forms.

Alas, it doesn’t look like this is something on the interop radar.

What really surprised me was looking at Can I Use. That shows Safari on iOS as fully supporting date inputs.

Maybe it’s just semantic nitpickery on my part but I would consider that the lack of support for the min and max attributes means that date inputs are partially supported.

Can I Use gets its data from here. I guess I need to study the governance rules and try to figure out how to submit a pull request to update the currently incorrect information.

Pickin’ dates

I had the opportunity to trim some code from The Session recently. That’s always a good feeling.

In this case, it was a progressive enhancement pattern that was no longer needed. Kind of like removing a polyfill.

There are a couple of places on the site where you can input a date. This is exactly what input type="date" is for. But when I was making the interface, the support for this type of input was patchy.

So instead the interface used three select dropdowns: one for days, one for months, and one for years. Then I did a bit of feature detection and if the browser supported input type="date", I replaced the three selects with one date input.

It was a little fiddly but it worked.

Fast forward to today and input type="date" is supported across the board. So I threw away the JavaScript and updated the HTML to use date inputs by default. Nice!

I was discussing date inputs recently when I was talking to students in Amsterdam:

They’re given a PDF inheritance-tax form and told to convert it for the web.

That form included dates. The dates were all in the past so the students wanted to be able to set a max value on the datepicker. Ideally that should be done on the server, but it would be nice if you could easily do it in the browser too.

Wouldn’t it be nice if you could specify past dates like this?

<input type="date" max="today">

Or for future dates:

<input type="date" min="today">

Alas, no such syntactic sugar exists in HTML so we need to use JavaScript.

This seems like an ideal use-case for HTML web components:

Instead of all-singing, all-dancing web components, it feels a lot more elegant to use web components to augment your existing markup with just enough extra behaviour.

In this case, it would be nice to augment an existing input type="date" element. Something like this:

 <input-date-past>
   <input type="date">
 </input-date-past>

Here’s the JavaScript that does the augmentation:

 customElements.define('input-date-past', class extends HTMLElement {
     constructor() {
         super();
     }
     connectedCallback() {
         this.querySelector('input[type="date"]').setAttribute('max', new Date().toISOString().substring(0,10));
     }
 });

That’s it.

Here’s a CodePen where you can see it in action along with another HTML web component for future dates called, you guessed it, input-date-future.

See the Pen Date input HTML web components by Jeremy Keith (@adactio) on CodePen.
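For reference, the future-dates version is the mirror image, setting min instead of max. Something like this:

customElements.define('input-date-future', class extends HTMLElement {
    connectedCallback() {
        // Today becomes the earliest selectable date.
        this.querySelector('input[type="date"]').setAttribute('min', new Date().toISOString().substring(0,10));
    }
});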

Schooltijd

I was in Amsterdam last week. Usually I’m in that city for an event like the excellent CSS Day. Not this time. I was there as a guest of Vasilis. He invited me over to bother his students at the CMD (Communications and Multimedia Design) school.

There’s a specific module his students are partaking in that’s right up my alley. They’re given a PDF inheritance-tax form and told to convert it for the web.

Yes, all the excitement of taxes combined with the thrilling world of web forms.

Seriously though, I genuinely get excited by the potential for progressive enhancement here. Sure, there’s the obvious approach of building in layers; HTML first, then CSS, then a sprinkling of JavaScript. But there’s also so much potential for enhancement within each layer.

Got your form fields marked up with the right input types? Great! Now what about autocomplete, inputmode, or pattern attributes?

Got your styles all looking good on the screen? Great! Now what about print styles?

Got form validation working? Great! Now how might you use local storage to save data locally?
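That last one can start really small. Here’s a sketch of the idea (the storage key is made up):

const form = document.querySelector('form');

// Save the form data every time the user changes something.
form.addEventListener('input', () => {
  const data = Object.fromEntries(new FormData(form));
  localStorage.setItem('form-data', JSON.stringify(data));
});

// Restore any previously-saved values when the page loads.
const saved = JSON.parse(localStorage.getItem('form-data') || '{}');
for (const [name, value] of Object.entries(saved)) {
  const field = form.elements[name];
  if (field) {
    field.value = value;
  }
}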

As well as taking this practical module, most of the students were also taking a different module looking at creative uses of CSS, like making digital fireworks, or creating works of art with a single div. It was fascinating to see how the different students responded to the different tasks. Some people loved the creative coding and dreaded the progressive enhancement. For others it was exactly the opposite.

Having to switch gears between modules reminded me of switching between prototypes and production:

Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.

Here’s something I noticed: the students love using :has() in CSS. That’s so great to see! Whereas I might think about how to do something for a few minutes before I think of reaching for :has(), for them it’s front of mind. I’m jealous!

In general, their challenges weren’t with the vocabulary or syntax of HTML, CSS, and JavaScript. The more universal problem was project management. Where to start? What order to do things in? How long to spend on different tasks?

If you can get good at dealing with those questions and not getting overwhelmed, then the specifics of the actual coding will be easier to handle.

This was particularly apparent when it came to JavaScript, the layer of the web stack that was scariest for many of the students.

I encouraged them to break their JavaScript enhancements into two tasks: what you want to do, and how you then execute that.

Start by writing out the logic of your script not in JavaScript, but in whatever language you’re most comfortable with: English, Dutch, whatever. In the course of writing this down, you’ll discover and solve some logical issues. You can also run your plain-language plan past a peer to sense-check it.

It’s only then that you move on to translating your logic into JavaScript. Under each line of English or Dutch, write the corresponding JavaScript. You might as well put // in front of the plain-language sentence while you’re at it to make it a comment—now you’ve got documentation baked in.
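For example:

// Find the form on the page
const form = document.querySelector('form');
// When the form is submitted
form.addEventListener('submit', (event) => {
  // Stop the browser from reloading the page
  event.preventDefault();
  // Check that the date that was entered isn't in the future
  // …and so on, one line of English or Dutch, one line of JavaScript.
});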

You’ll still run into problems at this point, but they’ll be the manageable problems of syntax and typos.

So in the end, it wasn’t my knowledge of specific HTML, CSS, or JavaScript APIs that proved most useful to pass on to the students. It was advice like that around how to approach HTML, CSS, or JavaScript.

I also learned a lot during my time at the school. I had some very inspiring conversations with the web developers of tomorrow. And I was really impressed by how much the students got done just in the three days I was hanging around.

I’d love to do it again sometime.

button invoketarget=”share”

I’ve written quite a bit about how I’d like to see a declarative version of the Web Share API. My current proposal involves extending the type attribute on the button element to support a value of “share”.

Well, maybe a different attribute will end up accomplishing what I want! Check out the explainer for the “invokers” proposal over on Open UI. The idea is to extend the button element with a few more attributes.

The initial work revolves around declaratively opening and closing a dialog, which is an excellent use case and definitely where the effort should be focused to begin with.

But there’s also investigation underway into how this could be a way of providing declarative access to JavaScript APIs. The initial proposal is already discussing the fullscreen API, and how it might be invoked like this:

button invoketarget="toggleFullscreen"

They’re also looking into a copy-to-clipboard action. It’s not much of a stretch to go from that to:

button invoketarget="share"

Remember, these are APIs that require a user interaction anyway so extending the button element makes a lot of sense.

I’ll be watching this proposal keenly. I’d love to see some of these JavaScript cowpaths paved with a nice smooth coating of declarative goodness.