Sanborn Fire Maps
A complete digital archive of the famous typography from the Sanborn Fire Insurance Maps
The lettering really is lovely!
If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.
UX London isn’t the only event from Clearleft coming your way in 2025. There’s a brand new spin-off event dedicated to user research happening in February. It’s called Research By The Sea.
I’m not curating this one, though I will be hosting it. The curation is being carried out most excellently by Benjamin, who has written more about how he’s doing it:
We’ve invited some of the best thinkers and doers from the research space to explore how researchers might respond to today’s most gnarly and pressing problems. They’ll challenge current perspectives, tools, practices and thinking styles, and provide practical steps for getting started today to shape a better tomorrow.
If that sounds like your cup of tea, you should put February 27th 2025 in your calendar and grab yourself a ticket.
Although I’m not involved in curating the line-up for the event, I offered Benjamin my swor… my web dev skillz. I made the website for Research By The Sea and I really enjoyed doing it!
These one-day events are a great chance to have a bit of fun with the website. I wrote about how enjoyable it was making the website for this year’s Patterns Day:
I felt like I was truly designing in the browser. Adjusting spacing, playing around with layout, and all that squishy stuff. Some of the best results came from happy accidents—the way that certain elements behaved at certain screen sizes would lead me into little experiments that yielded interesting results.
I took the same approach with Research By The Sea. I had a design language to work with, based on UX London, but with more of a playful, brighter feel. The idea was that the website (and the event) should feel connected to UX London, while also being its own thing.
I kept the typography of the UX London site more or less intact. The page structure is also very similar. That was my foundation. From there I was free to explore some other directions.
I took the opportunity to explore some new features of CSS. But before I talk about the newer stuff, I want to mention the bits of CSS that I don’t consider new. These are the things that are just the way things are done ‘round here.
Custom properties. They’ve been around for years now, and they’re such a life-saver, especially on a project like this where I’m messing around with type, colour, and spacing. Even on a small site like this, it’s still worth having a section at the start where you define your custom properties.
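Here’s a minimal sketch of the kind of section I mean—the property names and values are purely illustrative, not the actual ones from the site:

```css
/* Illustrative only: a block of custom properties defined up front
   for type, colour, and spacing, then used throughout the style sheet */
:root {
  --color-ink: #1a1a2e;
  --color-accent: #ff6b9d;
  --space-s: 0.75rem;
  --space-m: 1.5rem;
  --font-display: "Some Display Face", sans-serif;
}

h1 {
  color: var(--color-accent);
  font-family: var(--font-display);
  margin-block-end: var(--space-m);
}
```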
Logical properties. Again, they’ve been around for years. At this point I’ve trained my brain to use them by default. Now when I see a left, right, width or height in a style sheet, it looks like a bug to me.
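By way of illustration, the mental substitution goes something like this (the class names are made up):

```css
/* Physical properties like these now read like bugs to me… */
.card-old {
  margin-left: 1rem;
  max-width: 40em;
}

/* …whereas the logical equivalents adapt to writing mode and direction */
.card {
  margin-inline-start: 1rem;
  max-inline-size: 40em;
}
```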
Fluid type. It’s kind of a natural extension of responsive design to me. If a website’s typography doesn’t adjust to my viewport, it feels slightly broken. On this project I used Utopia because I wanted different type scales as the viewport increased. On other projects I’ve just used one clamp declaration on the body element, which can also get the job done.
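That simpler approach looks something like this—the numbers here are placeholders rather than values from any real project:

```css
/* Illustrative only: font size scales smoothly with the viewport,
   never dropping below 1rem or growing beyond 1.25rem */
body {
  font-size: clamp(1rem, 0.75rem + 1vw, 1.25rem);
}
```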
Okay, so those are the things that feel standard to me. So what could I play around with that was new?
View transitions. So easy! Just point to an element on two different pages and say “Hey, do a magic move!” You can see this in action with the logo as you move from the homepage to, say, the venue page. I’ve also added view transitions to the speaker headshots on the homepage so that when you click through to their full page, you get a nice swoosh.
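If you’re curious, the gist of it is something like this—the selector and the transition name are invented for illustration, not the site’s actual code:

```css
/* Opt in to cross-document view transitions */
@view-transition {
  navigation: auto;
}

/* Give the logo the same view-transition-name on both pages
   and the browser handles the “magic move” between them */
.site-logo {
  view-transition-name: logo;
}
```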
Unless, like me, you’re using Firefox. In that case, you won’t see any view transitions. That’s okay. They are very much an enhancement. Speaking of which…
Scroll-driven animations. You’ll only get these in Chromium browsers right now, but again, they’re an enhancement. I’ve got multiple background images—a bunch of cute SVG shapes. I’m using scroll-driven animations to change the background positions and sizes as you scroll. It’s a bit silly, but hopefully kind of cute.
You might be wondering how I calculated the movements of each background image. Good question. I basically just messed around with the values. I had fun! But imagine what an actually-skilled interaction designer could do.
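To give a rough idea, here’s a very simplified sketch with a single background image—the keyframe values and the animation name are made up, and right now this only does anything in Chromium browsers:

```css
/* Illustrative only: shift the background as the document scrolls */
@keyframes drift {
  from {
    background-position: 0% 0%;
    background-size: 20rem;
  }
  to {
    background-position: 100% 50%;
    background-size: 30rem;
  }
}

body {
  animation-name: drift;
  animation-duration: auto; /* let the scroll timeline define the duration */
  animation-timing-function: linear;
  animation-fill-mode: both;
  /* Tie the animation’s progress to how far the document has scrolled */
  animation-timeline: scroll(root);
}
```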
That brings up an interesting observation about both view transitions and scroll-driven animations: Figma will not help you here. You need to be in a web browser with dev tools popped open. You’ve got to roll up your sleeves and get your hands into the machine. I know that sounds intimidating, but it’s also surprisingly enjoyable and empowering.
Oh, and I made sure to wrap both the view transitions and the scroll-driven animations in a prefers-reduced-motion: no-preference @media query.
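Something along these lines—again, the selectors and names are illustrative rather than the real thing:

```css
/* Only opt in to the fancy motion when the visitor
   hasn’t asked for reduced motion */
@media (prefers-reduced-motion: no-preference) {
  .site-logo {
    view-transition-name: logo;
  }

  body {
    animation-name: drift;
    animation-timeline: scroll(root);
  }
}
```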
I’m pleased with how the website turned out. It feels fun. More importantly, it feels fast. There is zero JavaScript. That’s the main reason why it’s very, very performant (and accessible).
Smooth transitions across pages; smooth animations as you scroll: it’s great what you can do with just HTML and CSS.
Speaking of serendipity, not long after I wrote about making a static archive of The Session for people to download and share, I came across a piece by Alex Chan about using static websites for tiny archives.
The use-case is slightly different—this is about personal archives, like paperwork, screenshots, and bookmarks. But we both came up with the same process:
I’m deliberately going low-scale, low-tech. There’s no web server, no build system, no dependencies, and no JavaScript frameworks.
And we share the same hope:
Because this system has no moving parts, and it’s just files on a disk, I hope it will last a long time.
You should read the whole thing, where Alex describes all the other approaches they took before settling on plain ol’ HTML files in a folder:
HTML is low maintenance, it’s flexible, and it’s not going anywhere. It’s the foundation of the entire web, and pretty much every modern computer has a web browser that can render HTML pages. These files will be usable for a very long time – probably decades, if not more.
I’m enjoying this approach, so I’m going to keep using it. What I particularly like is that the maintenance burden has been essentially zero – once I set up the initial site structure, I haven’t had to do anything to keep it working.
They also talk about digital preservation:
I’d love to see static websites get more use as a preservation tool.
I concur! And it’s particularly interesting for Alex to be making this observation in the context of working with the Flickr Foundation. That’s where they’re experimenting with the concept of a data lifeboat:
What should we do when a digital service sinks?
This is something that George spoke about at the final dConstruct in 2022. You can listen to the talk on the dConstruct archive.
When I went up to London for the State of the Browser conference last month, I shared the train journey with Remy.
I always like getting together with Remy. We usually end up discussing sci-fi books we’re reading, commiserating with one another about conference-organising, discussing the minutiae of browser APIs, or talking about the big-picture vision of the World Wide Web.
On this train ride we ended up talking about the march of time and how death comes for us all …and our websites.
Take The Session, for example. It’s been running for two and a half decades in one form or another. I plan to keep it running for many more decades to come. But I’m the weak link in that plan.
If I get hit by a bus tomorrow, The Session will keep running. The hosting is paid up for a while. The domain name is registered for as long as possible. But inevitably things will need to be updated. Even if no new features get added to the site, someone’s got to install updates to keep the underlying software safe and secure.
Remy and I discussed the long-term prospects for widening out the admin work to more people. But we also discussed smaller steps I could take in the meantime.
Like, there’s the actual content of the website. Now, I currently share exports from the database every week in JSON, CSV, and SQLite. That’s good. But you need to be a tech nerd to do anything useful with that data.
The more I talked about it with Remy, the more I realised that HTML would be the most useful format for the most people.
There’s a cute acronym in the world of digital preservation: LOCKSS. Lots Of Copies Keep Stuff Safe. If there were multiple copies of The Session’s content out there in the world, then I’d have a nice little insurance policy against some future catastrophe befalling the live site.
With the seed of the idea planted in my head, I waited until I had some time to dive in and see if this was doable.
Fortunately I had plenty of opportunity to do just that on some other train rides. When I was in Spain and France recently, I spent hours and hours on trains. For some reason, I find train journeys very conducive to coding, especially if you don’t need an internet connection.
By the time I was back home, the code was done. Here’s the result:
The Session archive: a static copy of the content on thesession.org.
If you want to grab a copy for yourself, go ahead and download this .zip file. Be warned that it’s quite large! The .zip file is over two gigabytes in size and the unzipped collection of web pages is almost ten gigabytes. I plan to update the content every week or so.
I’ve put a copy up on Netlify and I’m serving it from the subdomain archive.thesession.org if you want to check out the results without downloading the whole thing.
Because this is a collection of static files, there’s no search. But you can use your browser’s “Find in Page” feature to search within the (very long) index pages of each section of the site.
You don’t need a web server to click around between the pages: they should all work straight from your file system. Double-clicking any HTML file should give you a starting point.
I wanted to reduce the dependencies on each page to as close to zero as I could. All the CSS is embedded in the page. Likewise with most of the JavaScript (you’ll still need an internet connection to get audio playback and dynamic maps). This keeps the individual pages nice and self-contained. That means they can be shared around (as an email attachment, for example).
I’ve shared this project with the community on The Session and people are into it. If nothing else, it could be handy to have an offline copy of the site’s content on your hard drive for those situations when you can’t access the site itself.
What are your own scribbles, your own ordinary plenty, not worth much to you now but that someone in the future may treasure?
At this point, it really does seem like “AI” is “bullshit you don’t need or is done better in other ways, but we’ve just spent literally billions on this so we really need you to use it, even though it’s nowhere as good as what we were already doing,” and everything else is just unsexy functionality that makes what you do marginally easier or better. I’m sorry we live in a world where enshittification is being marketed as The Hot And Sexy Thing, but just because we’re in that world, doesn’t mean you have to accept it.
- People only understand things relative to things they already understand
- People only understand things in context
- People rely on patterns and consistency
- People seek to minimize cognitive load
- People have varying levels of expertise and familiarity
- People are goal-oriented
- People often don’t know what they’re looking for
- Information is more useful when it’s actionable
For many archivists, alarm bells are ringing. Across the world, they are scraping up defunct websites or at-risk data collections to save as much of our digital lives as possible. Others are working on ways to store that data in formats that will last hundreds, perhaps even thousands, of years.
A library of CC-licensed photos.
Next time you’re tempted to use a generative “AI” tool to make an image for a slide deck, use this instead.
Google search is no friend to the indie web:
Well-known brands often see most of their content indexed, while small or unknown bloggers face much stricter selectivity.
There was life before Google search. There will be life after Google search.
Information that you might search for may never appear in Google’s results. Not because it doesn’t exist, but because Google has chosen not to include it.
David is on board. Who else?
I was having a discussion with some of my peers a little while back. We were collectively commenting on the state of education and documentation for front-end development.
A lot of the old stalwarts have fallen by the wayside of late. CSS Tricks hasn’t been the same since it got bought out by Digital Ocean. A List Apart goes through fallow periods. Even the Mozilla Developer Network is looking to squander its trust by adding inaccurate “content” generated by a large language model.
The most obvious solution is to start up a brand new resource for front-end developers. But there are two problems with that:
I actually think there are plenty of good articles and resources on front-end development being published. But they’re not being published in any one specific place. People are publishing them on their own websites.
Ahmed, Josh, Stephanie, Andy, Lea, Rachel, Robin, Michelle …I could go on, but you get the picture.
All this wonderful stuff is distributed across the web. If you have a well-stocked RSS reader, you’re all set. But if you’re new to front-end development, how do you know where to find this stuff? I don’t think you can rely on search, unless you have a taste for slop.
I think the solution lies not with some hand-wavey “AI” algorithm that burns a forest for every query. I think the solution lies with human curation.
I take inspiration from Phil’s fantastic project, ooh.directory. Imagine taking that idea of categorisation and applying it to front-end dev resources.
Whether it’s a post on web.dev, Smashing Magazine, or someone’s personal site, it could be included and categorised appropriately.
Now, there would still be a lot of work involved, especially in listing and categorising the articles that are already out there, but it wouldn’t be nearly as much work as trying to create those articles from scratch.
I don’t know what the categories should be. Does it make sense to have top-level categories for HTML, CSS, and JavaScript, with sub-directories within them? Or does it make more sense to categorise by topics like accessibility, animation, and so on?
And this being the web, there’s no reason why one article couldn’t be tagged to simultaneously live in multiple categories.
There’s plenty of meaty information architecture work to be done. And there’d be no shortage of ongoing work to handle new submissions.
A stretch goal could be the creation of “playlists” of hand-picked articles. “Want to get started with CSS grid layout? Read that article over there, watch this YouTube video, and study this page on MDN.”
What do you think? Does this one-stop shop of hyperlinks sound like it would be useful? Does it sound feasible?
I’m just throwing this out there. I’d love it if someone were to run with it.
The whole point of the web is that we’re not supposed to be dependent on any one company or person or community to make it all work and the only reason why we trusted Google is because the analytics money flowed in our direction. Now that it doesn’t, the whole internet feels unstable. As if all these websites and publishers had set up shop perilously on the edge of an active volcano.
But that instability was always there.
There was life before Google search. There will be life after Google search.
Google is not a huge source of traffic and visibility. I get most of my visits from RSS readers, other people’s links including fellow bloggers, or websites like Hacker News. It’s hard to tell at this point since I don’t track anything, but that’s an educated guess.
Removing my website from Google would have very little impact, so I was wondering if I should just do it.
My phone rang today. I didn’t recognise the number so although I pressed the big button to answer the call, I didn’t say anything.
I didn’t say anything because usually when I get a call from a number I don’t know, it’s some automated spam. If I say nothing, the spam voice doesn’t activate.
But sometimes it’s not a spam call. Sometimes after a few seconds of silence a human at the other end of the call will say “Hello?” in an uncertain tone. That’s the point when I respond with a cheery “Hello!” of my own and feel bad for making this person endure those awkward seconds of silence.
Those spam calls have made me so suspicious that real people end up paying the price. False positives caught in my spam-detection filter.
Now it’s happening on the web.
I wrote about how Google search, Bing, and the Mozilla Developer Network are squandering trust:
Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.
But it’s not just limited to specific companies. I’ve noticed more and more suspicion related to any online activity.
I’ve seen members of a community site jump to the conclusion that a new member’s pattern of behaviour was a sure sign that this was a spambot. But it could just as easily have been the behaviour of someone who isn’t neurotypical or who doesn’t speak English as their first language.
Jessica was looking at some pictures on an AirBnB listing recently and found herself examining some photos that seemed a little too good to be true, questioning whether they were in fact output by some generative tool.
Every email that lands in my inbox is like a little mini Turing test. Did a human write this?
Our guard is up. Our filters are activated. Our default mode is suspicion.
This is most apparent with web search. We’ve always needed to filter search results through our own personal lenses, but now it’s like playing whack-a-mole. First we have to find workarounds for avoiding slop, and then when we click through to a web page, we have to evaluate whether it’s been generated by some SEO spammer making full use of the new breed of content-production tools.
There’s been a lot of hand-wringing about how this could spell doom for the web. I don’t think that’s necessarily true. It might well spell doom for web search, but I’m okay with that.
Back before its enshittification—an enshittification that started even before all the recent AI slop—Google solved the problem of accurate web searching with its PageRank algorithm. Before that, the only way to get to trusted information was to rely on humans.
Humans made directories like Yahoo! or DMOZ where they categorised links. Humans wrote blog posts where they linked to something that they, a human, vouched for as being genuinely interesting.
There was life before Google search. There will be life after Google search.
Look, there’s even a new directory devoted to cataloging blogs: websites made by humans. Life finds a way.
All of the spam and slop that’s making us so suspicious may end up giving us a new appreciation for human curation.
It wouldn’t be a straightforward transition to move away from search. It would be uncomfortable. It would require behaviour change. People don’t like change. But when needs must, people adapt.
The first bit of behaviour change might be a rediscovery of bookmarks. It used to be that when you found a source you trusted, you bookmarked it. Browsers still have bookmarking functionality but most people rely on search. Maybe it’s time for a bookmarking revival.
A step up from that would be using a feed reader. In many ways, a feed reader is a collection of bookmarks, but all of the bookmarks get polled regularly to see if there are any updates. I love using my feed reader. Everything I’ve subscribed to in there is made by humans.
The ultimate bookmark is an icon on the homescreen of your phone or in the dock of your desktop device. A human source you trust so much that you want it to be as accessible as any app.
Right now the discovery mechanism for that is woeful. I really want that to change. I want a web that empowers people to connect with other people they trust, without any intermediary gatekeepers.
The evangelists of large language models (who may coincidentally have invested heavily in the technology) like to proclaim that a slop-filled future is inevitable, as though we have no choice, as though we must simply accept enshittification as though it were a force of nature.
But we can always walk away.
An interesting idea from Tantek for an offline page that links off to an archived copy of the URL you’re trying to reach—useful for when your site goes down (though not for when the user’s internet connection is down).
In their rush to cram in “AI” “features”, it seems to me that many companies don’t actually understand why people use their products.
Google is acting as though its greatest asset is its search engine. Same with Bing.
Mozilla Developer Network is acting as though its greatest asset is its documentation. Same with Stack Overflow.
But their greatest asset is actually trust.
If I use a search engine I need to be able to trust that the filtering is good. If I look up documentation I need to trust that the information is good. I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”
“But”, the apologists will respond, “the results are mostly correct! The documentation is mostly true!”
Sure, but as Terence puts it:
The intern who files most things perfectly but has, more than once, tipped an entire cup of coffee into the filing cabinet is going to be remembered as “that klutzy intern we had to fire.”
Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.
I am honestly astonished that so many companies don’t seem to realise what they’re destroying.