
Wednesday session
Interesting—this is exactly the same framing I used to talk about design systems a few years ago.
We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
A fascinating interactive journey through biometrics using your face.
How a writing system went from being a dream (literally) to a reality, codified in unicode.
This is easily my favourite use of a machine learning algorithm.
A fascinating crowdsourced project. You can read the backstory in this article in Wired magazine.
This explains rubber ducking.
Speaking out loud is not only a medium of communication, but a technology of thinking: it encourages the formation and processing of thoughts.
The removal of all friction shouldn’t be a goal. Making things easy and making things hard should be a design tool, employed to aid the end user towards their loftiest goals.
Years ago, the world of web standards was split. Two groups—the W3C and the WHATWG—were working on the next iteration of HTML. They had different ideas about the nature of standardisation.
Broadly speaking, the W3C followed a specification-first approach: figure out what should be implemented before anything gets built. From this perspective, specs can be seen as blueprints for browsers to work from.
The WHATWG, by contrast, were implementation led. The way they saw it, there was no point specifying something if browsers weren’t going to implement it. Instead, specs are there to document existing behaviour in browsers.
I’m over-generalising somewhat in my descriptions there, but the point is that there was an ideological difference of opinion around what standards bodies should do.
This always reminded me of a similar ideological conflict when it comes to language usage.
Language prescriptivists attempt to define rules about what’s right or wrong in a language. Rules like “never end a sentence with a preposition.” Prescriptivists are generally fighting a losing battle and spend most of their time bemoaning the decline of their language because people aren’t following the rules.
Language descriptivists work the exact opposite way. They see their job as documenting existing language usage instead of defining it. Lexicographers—like Merriam-Webster or the Oxford English Dictionary—receive complaints from angry prescriptivists when dictionaries document usage like “literally” meaning “figuratively”.
Dictionaries are descriptive, not prescriptive.
I’ve seen the prescriptive/descriptive divide somewhere else too. I’ve seen it in the world of design systems.
Jordan Moore talks about intentional and emergent design systems:
There appear to be two competing approaches in designing design systems.
An intentional design system. The flavour and framework may vary, but the approach generally consists of: design system first → design/build solutions.
An emergent design system. This approach is much closer to the user needs end of the scale by beginning with creative solutions before deriving patterns and systems (i.e. the system emerges from real, coded scenarios).
An intentional design system is prescriptive. An emergent design system is descriptive.
I think we can learn from the worlds of web standards and dictionaries here. A prescriptive approach might give you a beautiful design system, but if it doesn’t reflect the actual product, it’s fiction. A descriptive approach might give a design system with imperfections and annoying flaws, but at least it will be accurate.
I think it’s more important for a design system to be accurate than beautiful.
As Matthew Ström says, you should start with the design system you already have:
Instead of drawing a whole new set of components, start with the components you already have in production. Document them meticulously. Create a single source of truth for design, warts and all.
Over the last fifty years, we have come to recognize that the fuel of our civilizational expansion has become the main driver of our extinction, and that of many of the species we share the planet with. We are now coming to realize that this is as true of our cognitive infrastructure. Something is out of sync, felt everywhere: something amiss in the temporal order, and it is as related to political and technological shifts, shifts in our own cognition and attention, as it is to climatic ones. To think clearly in such times requires an intersectional understanding of time itself, a way of thinking that escapes the cognitive traps, ancient and modern, into which we too easily fall. Because our technologies, the infrastructures we have built to escape our past, have turned instead to cancelling our future.
James writes beautifully about rates of change.
The greatest trick our utility-directed technologies have performed is to constantly pull us out of time: to distract us from the here and now, to treat time as a kind of fossil fuel which can be endlessly extracted in the service of a utopian future which never quite arrives. If information is the new oil, we are already, in the hyper-accelerated way of present things, well into the fracking age, with tremors shuddering through the landscape and the tap water on fire. But this is not enough; it will never be enough. We must be displaced utterly in time, caught up in endless imaginings of the future while endlessly neglecting the lessons and potential actions of the present moment.
When in doubt, label your icons.
When not in doubt, you probably should be.
I’ve been playing around with the newly-released Squoosh, the spiritual successor to Jake’s SVGOMG. You can drag images into the browser window, and eyeball the changes that any optimisations might make.
On a project that Cassie is working on, it worked really well for optimising some JPEGs. But there were a few images that would require a bit more fine-grained control of the optimisations. Specifically, pictures with human faces in them.
I’ve written about this before. If there’s a human face in an image, I open that image in a graphics editing tool like Photoshop, select everything but the face, and add a bit of blur. Because humans are hard-wired to focus on faces, we’ll notice any jaggy artifacts on a face, but we’re far less likely to notice jagginess in background imagery: walls, materials, clothing, etc.
On the face of it (hah!), a browser-based tool like Squoosh wouldn’t be able to optimise for faces, but then Cassie pointed out something really interesting…
When we were both at FFConf on Friday, there was a great talk by Eleanor Haproff on machine learning with JavaScript. It turns out there are plenty of smart toolkits out there, and one of them is facial recognition. So I wonder if it’s possible to build an in-browser tool with this workflow:
Detect whether there’s a face in the image.
If there is, select everything that isn’t the face.
Apply a slight blur to the selection.
Pass the image on to be optimised.
Maybe the selecting/blurring part would need canvas? I don’t know.
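For what it’s worth, here’s a rough sketch of how that might hang together. It assumes the experimental FaceDetector from the Shape Detection API, which currently only exists behind a flag in some browsers, so a library like face-api.js would be the more portable choice; canvas does the selecting and blurring.

// A sketch, not a working product: blur the whole image, then
// redraw each detected face region unblurred, so only the
// background stays soft. FaceDetector is experimental.
async function blurAllButFaces(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext('2d');

  // Step one: draw the entire image with a slight blur.
  ctx.filter = 'blur(2px)';
  ctx.drawImage(img, 0, 0);

  // Step two: detect faces and redraw each bounding box sharp.
  const faces = await new FaceDetector().detect(img);
  ctx.filter = 'none';
  for (const { boundingBox: b } of faces) {
    ctx.drawImage(img, b.x, b.y, b.width, b.height,
                       b.x, b.y, b.width, b.height);
  }

  // Step three: hand the result off to an optimiser like Squoosh.
  return new Promise((resolve) => canvas.toBlob(resolve, 'image/jpeg'));
}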
Anyway, I thought this was a brilliant bit of synthesis from Cassie, and now I’ve got two questions: could this actually be built, and is anyone already doing it?
Spot-on take by Ted Chiang:
I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations.
Related: if you want to see the paperclip maximiser in action, just look at the humans destroying the planet by mining bitcoin.
From Scott McCloud to responsive design, Dave is pondering our assumptions about screen real estate:
As the amount of information increases, removing details reduces information density and thereby increases comprehension.
It reminds me of Edward Tufte’s data-ink ratio.
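For reference, Tufte’s ratio is simply the share of a graphic’s ink that actually encodes data:

\[ \text{data-ink ratio} = \frac{\text{data-ink}}{\text{total ink used to print the graphic}} \]

The closer it gets to 1, the less decoration is diluting the data.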
After Clearleft’s recent rebranding, I’m really interested in Happy Cog’s redesign process:
In the near future we’ll be rolling out a new website, followed by a rebrand of Cognition, our blog. As the identity is tested against applications, much of what’s here may change. Nothing is set in stone.
Some interesting insights from usability and accessibility testing at the Co-op.
We used ‘nesting’ to reduce the amount of information on the page when the user first reaches it. When the user chooses an option, we ask for any other details at that point rather than having all the questions on the page at once.
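As a rough illustration of that nesting pattern (my own hypothetical markup and field names, not the Co-op’s actual code), the follow-up questions stay hidden until the relevant option is picked:

// Assumed markup: a radio group named "contact" plus a fieldset
// of follow-up questions with id="contact-details" that has the
// `hidden` attribute set by default.
const followUp = document.querySelector('#contact-details');
document.querySelectorAll('input[name="contact"]').forEach((radio) => {
  radio.addEventListener('change', () => {
    // Only surface the extra questions once the user has chosen
    // the option those questions relate to.
    followUp.hidden = radio.value !== 'phone';
  });
});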
Shamefully, I haven’t been doing one-to-ones with my front-end dev colleagues at Clearleft, but I’m planning to change that. This short list of starter questions from Lara will prove very useful indeed.
Ryan describes the research and process behind putting together a device lab for Happy Cog in Austin. Good stuff, with handy links gathered together at the end.
Hexadecimal colours and their corresponding dictionary definitions. Cute.