JSX.lol - Does anybody actually like React?
I’m subscribing to this RSS feed.
Put the kettle on; it’s another epic data-driven screed from Alex. The footnotes on this would be a regular post on any other blog (and yes, even the footnotes have footnotes).
This is a spot-on description of the difference between back-end development and front-end development:
Code that runs on the server can be fully costed. Performance and availability of server-side systems are under the control of the provisioning organisation, and latency can be actively managed by developers and DevOps engineers.
Code that runs on the client, by contrast, is running on The Devil’s Computer. Nothing about the experienced latency, client resources, or even available APIs are under the developer’s control.
Client-side web development is perhaps best conceived of as influence-oriented programming. Once code has left the datacenter, all a web developer can do is send thoughts and prayers.
As a result, an unreasonably effective strategy is to send less code. In practice, this means favouring HTML and CSS over JavaScript, as they degrade gracefully and feature higher compression ratios. Declarative forms generate more functional UI per byte sent. These improvements in resilience and reductions in costs are beneficial in compounding ways over a site’s lifetime.
Unfortunately, this is what all of the internet is right now: social media, owned by large corporations that make changes to them to limit or suppress your speech, in order to make themselves more attractive to advertisers or just pursue their owners’ ends. Even the best Twitter alternatives, like Bluesky, aren’t immune to any of this—the more you centralize onto one single website, the more power that website has over you and what you post there. More than just moving to another website, we need more websites.
I am going to continue to write this newsletter. I am going to spend hours and hours poring over old books and mailing lists and archived sites. And lifeless AI machines will come along and slurp up that information for their own profit. And I will underperform on algorithms. My posts will be too long, or too dense, or not long enough.
And I don’t care. I’m contributing to the free web.
Last year I described how I syndicate my posts to different social networks.
Back then my approach to syndicating to Bluesky was to piggy-back off my micro.blog account (which is really just the RSS feed of my notes):
Micro.blog can also cross-post to other services. One of those services is Bluesky. I gave permission to micro.blog to syndicate to Bluesky so now my notes show up there too.
It worked well enough, but it wasn’t real-time and I didn’t have much control over the formatting. As Bluesky is having quite a moment right now, I decided to upgrade my syndication strategy and use the Bluesky API.
Here’s how it works…
First you need to generate an app password. You’ll need this so that you can generate a token. You need the token so you can generate …just kidding; the chain of generated gobbledegook stops there.
Here’s the PHP I’m using to generate a token. You’ll need your Bluesky handle and the app password you generated.
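The shape of it is a single POST to the com.atproto.server.createSession endpoint. A minimal sketch, not the exact code (the function and variable names are illustrative):

```php
<?php
// Minimal sketch: exchange a Bluesky handle and an app password
// for a session token via com.atproto.server.createSession.
function getBlueskyToken(string $handle, string $appPassword): array {
    $ch = curl_init('https://bsky.social/xrpc/com.atproto.server.createSession');
    curl_setopt_array($ch, [
        CURLOPT_POST => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
        CURLOPT_POSTFIELDS => json_encode([
            'identifier' => $handle,
            'password' => $appPassword,
        ]),
    ]);
    $session = json_decode(curl_exec($ch), true);
    curl_close($ch);
    // accessJwt is the token; did is the account identifier.
    // You need both for the next step.
    return ['token' => $session['accessJwt'], 'did' => $session['did']];
}
```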
Now that I’ve got a token, I can send a post. Here’s the PHP I’m using.
There’s some extra code in there to spot URLs and turn them into links. Bluesky has a very weird way of doing this.
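In outline it’s another authenticated POST, this time to com.atproto.repo.createRecord. A sketch, again with illustrative names; the facets array is that weird way of doing links, with each URL described by its byte offsets in the text:

```php
<?php
// Minimal sketch: create a post via com.atproto.repo.createRecord.
// $did and $token come from the createSession call above.
function postToBluesky(string $did, string $token, string $text): array {
    $record = [
        '$type' => 'app.bsky.feed.post',
        'text' => $text,
        'createdAt' => gmdate('Y-m-d\TH:i:s\Z'),
    ];
    // Bluesky doesn't auto-link URLs: each one needs a "facet"
    // pointing at a *byte* range of the text. Handily,
    // PREG_OFFSET_CAPTURE reports byte offsets.
    if (preg_match_all('/https?:\/\/\S+/', $text, $matches, PREG_OFFSET_CAPTURE)) {
        foreach ($matches[0] as [$url, $byteStart]) {
            $record['facets'][] = [
                'index' => [
                    'byteStart' => $byteStart,
                    'byteEnd' => $byteStart + strlen($url),
                ],
                'features' => [[
                    '$type' => 'app.bsky.richtext.facet#link',
                    'uri' => $url,
                ]],
            ];
        }
    }
    $ch = curl_init('https://bsky.social/xrpc/com.atproto.repo.createRecord');
    curl_setopt_array($ch, [
        CURLOPT_POST => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER => [
            'Content-Type: application/json',
            'Authorization: Bearer ' . $token,
        ],
        CURLOPT_POSTFIELDS => json_encode([
            'repo' => $did,
            'collection' => 'app.bsky.feed.post',
            'record' => $record,
        ]),
    ]);
    $response = json_decode(curl_exec($ch), true);
    curl_close($ch);
    // The response includes the uri of the new post,
    // handy for linking back to it from the canonical post.
    return $response;
}
```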
It didn’t take too long to get posting working. After some more tinkering I got images working too. Now I can post straight from my website to my Bluesky profile. The Bluesky API returns an ID for the post that I’ve created there so I can link to it from the canonical post here on my website.
I’ve updated my posting interface to add a toggle for Bluesky right alongside the toggle for Mastodon. There used to be a toggle for Twitter. That’s long gone.
Now when I post a note to my website, I can choose if I want to send a copy to Mastodon or Bluesky or both.
One day Bluesky will go away. It won’t matter much to me. My website will still be here.
The same small dataset visualised in a hundred different ways, with notes on the strengths and weaknesses of each one.
There are different kinds of buzzwords.
Some buzzwords are useful. They take a concept that would otherwise require a sentence of explanation and package it up into a single word or phrase. Back in the day, “ajax” was a pretty good buzzword.
Some buzzwords are worse than useless. This is when a word or phrase lacks definition. You could say this buzzword in a meeting with five people, and they’d all understand five different meanings. Back in the day, “web 2.0” was a classic example of a bad buzzword—for some people it meant a business model; for others it meant rounded corners and gradients.
The worst kind of buzzwords are the ones that actively set out to obfuscate any actual meaning. “The cloud” is a classic example. It sounds cooler than saying “a server in Virginia”, but it also sounds like the exact opposite of what it actually is. Great for marketing. Terrible for understanding.
“AI” is definitely not a good buzzword. But I can’t quite decide if it’s merely a bad buzzword like “web 2.0” or a truly terrible buzzword like “the cloud”.
The biggest problem with the phrase “AI” is that there’s a name collision.
For years, the term “AI” has been used in science fiction. HAL 9000. Skynet. Examples of artificial general intelligence.
Now the term “AI” is also used to describe large language models. But there is no connection between this use of the term “AI” and the science fictional usage.
This leads to the ludicrous situation of otherwise-rational people wanting to discuss the dangers of “AI”, but instead of talking about the rampant exploitation and energy usage endemic to current large language models, they want to spend the time talking about the sci-fi scenarios of runaway “AI”.
To understand how ridiculous this is, I’d like you to imagine if we had started using a different buzzword in another setting…
Suppose that when ride-sharing companies like Uber and Lyft were starting out, they had decided to label their services as Time Travel. From a marketing point of view, it even makes sense—they get you from point A to point B lickety-split.
Now imagine if otherwise-sensible people began to sound the alarm about the potential harms of Time Travel. Given the explosive growth we’ve seen in this sector, sooner or later they’ll be able to get you to point B before you’ve even left point A. There could be terrible consequences from that—we’ve all seen the sci-fi scenarios where this happens.
Meanwhile the actual present-day harms of ride-sharing services around worker exploitation would be relegated to the sidelines. Clearly that isn’t as important as the existential threat posed by Time Travel.
It sounds ludicrous, right? It defies common sense. Just because a vehicle can get you somewhere fast today doesn’t mean it’s inevitably going to be able to break the laws of physics any day now, simply because it’s called Time Travel.
And yet that is exactly the nonsense we’re being fed about large language models. We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.
This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.
It’s almost as if the labelling of the current technologies was more about marketing than accuracy.
The slides from a lovely talk by Ana with an important message:
By having your own personal website you are as indie web as it gets. That’s right. Whether you participate in the IndieWeb community or not: by having your own personal website you are as indie web as it gets.
I read Madeline Miller’s Circe last year. I loved it. It was my favourite fiction book I read that year.
Reading Circe kicked off a bit of a reading spree for me. I sought out other retellings of Greek myths. There’s no shortage of good books out there from Pat Barker, Natalie Haynes, Jennifer Saint, Claire Heywood, Claire North, and more.
The obvious difference between these retellings and the older accounts by Homer, Ovid and the lads is to re-centre the women in these stories. There’s a rich seam of narratives to be mined between the lines of the Greek myths.
But what’s fascinating to me is to see how these modern interpretations differ from one another. Sometimes I’ll finish one book, then pick up another that tells the same story from a very different angle.
The biggest difference I’ve noticed is the presence or absence of supernatural intervention. Some of these writers tell their stories with gods and goddesses front and centre. Others tell the very same stories as realistic accounts without any magic.
Take Perseus. Please.
The excellent Stone Blind by Natalie Haynes tells the story of Medusa. There’s magic a-plenty. In fact, Perseus himself is little more than a clueless bumbler who wouldn’t last a minute without divine intervention.
The Shadow Of Perseus by Claire Heywood also tells Medusa’s story. But this time there’s no magic whatsoever. The narrative is driven not by gods and goddesses, but by the force of toxic masculinity.
Pat Barker tells the story of the Trojan war in her Women Of Troy series. She keeps it grounded and gritty. When Natalie Haynes tells the same story in A Thousand Ships, the people in it are little more than playthings of the gods.
Then there are the books with just a light touch of the supernatural. While Madeline Miller’s Circe was necessarily imbued with magic, her first novel The Song Of Achilles keeps it mostly under wraps. The supernatural is there, but it doesn’t propel the narrative.
Claire North has a trilogy of books called the Songs of Penelope, retelling the Odyssey from Penelope’s perspective (like Margaret Atwood did in The Penelopiad). On the face of it, these seem to fall on the supernatural side; each book is narrated by a different deity. But the gods are strangely powerless. Everyone believes in them, but they themselves behave in a non-interventionist way. As though they didn’t exist at all.
It makes me wonder what it would be like to have other shared myths retold with or without magic.
How would the Marvel universe look if it were grounded in reality? Can you retell Harry Potter as the goings-on at a cult school for the delusional? What would Star Wars be like without the Force? (although I guess Andor already answers that one)
Anyway, if you’re interested in reading some modern takes on Greek myths, here’s a list of books for you:
It’d be best to publish your work in some evergreen space where you control the domain and URL. Then publish on masto-sky-formerly-known-as-linked-don and any place you share and comment on.
You don’t have to change the world with every post. You might publish a quick thought or two that helps encourage someone else to try something new, listen to a new song, or binge-watch a new series.
Also, developers:
Write and publish before you write your own static site generator or perfect blogging platform. We have lost billions of good writers to this side quest because they spend all their time working on the platform instead of writing.
Designers, the same advice applies to you: write first, come up with that perfect design later.
Logical properties, container queries, :has, :is, :where, min(), max(), clamp(), nesting, cascade layers, subgrid, and more.
I like the approach here: logical properties and sensible default type and spacing.
I really liked this short story.
UX London isn’t the only event from Clearleft coming your way in 2025. There’s a brand new spin-off event dedicated to user research happening in February. It’s called Research By The Sea.
I’m not curating this one, though I will be hosting it. The curation is being carried out most excellently by Benjamin, who has written more about how he’s doing it:
We’ve invited some of the best thinkers and doers in the research space to explore how researchers might respond to today’s most gnarly and pressing problems. They’ll challenge current perspectives, tools, practices and thinking styles, and provide practical steps for getting started today to shape a better tomorrow.
If that sounds like your cup of tea, you should put February 27th 2025 in your calendar and grab yourself a ticket.
Although I’m not involved in curating the line-up for the event, I offered Benjamin my swor… my web dev skillz. I made the website for Research By The Sea and I really enjoyed doing it!
These one-day events are a great chance to have a bit of fun with the website. I wrote about how enjoyable it was making the website for this year’s Patterns Day:
I felt like I was truly designing in the browser. Adjusting spacing, playing around with layout, and all that squishy stuff. Some of the best results came from happy accidents—the way that certain elements behaved at certain screen sizes would lead me into little experiments that yielded interesting results.
I took the same approach with Research By The Sea. I had a design language to work with, based on UX London, but with more of a playful, brighter feel. The idea was that the website (and the event) should feel connected to UX London, while also being its own thing.
I kept the typography of the UX London site more or less intact. The page structure is also very similar. That was my foundation. From there I was free to explore some other directions.
I took the opportunity to explore some new features of CSS. But before I talk about the newer stuff, I want to mention the bits of CSS that I don’t consider new. These are the things that are just the way things are done ‘round here.
Custom properties. They’ve been around for years now, and they’re such a life-saver, especially on a project like this where I’m messing around with type, colour, and spacing. Even on a small site like this, it’s still worth having a section at the start where you define your custom properties.
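That section can be as simple as a :root rule full of design tokens. A sketch, with placeholder values rather than the site’s actual ones:

```css
/* Define the design tokens once, up front… */
:root {
  --color-brand: #0b6e4f;
  --font-display: Georgia, serif;
  --spacing: 1.5rem;
}

/* …then refer to them everywhere else. */
h1 {
  color: var(--color-brand);
  font-family: var(--font-display);
  margin-block-end: var(--spacing);
}
```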
Logical properties. Again, they’ve been around for years. At this point I’ve trained my brain to use them by default. Now when I see a left, right, width, or height in a style sheet, it looks like a bug to me.
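The mapping is mostly one-to-one; each physical property has a writing-mode-aware equivalent. For example:

```css
/* Physical properties are tied to the screen's left/right/top/bottom… */
.card {
  margin-left: 1rem;
  max-width: 40em;
}

/* …while logical properties follow the writing mode of the document. */
.card {
  margin-inline-start: 1rem;
  max-inline-size: 40em;
}
```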
Fluid type. It’s kind of a natural extension of responsive design to me. If a website’s typography doesn’t adjust to my viewport, it feels slightly broken. On this project I used Utopia because I wanted different type scales as the viewport increased. On other projects I’ve just used one clamp() declaration on the body element, which can also get the job done.
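That single-declaration approach looks something like this (the numbers here are arbitrary):

```css
/* Fluid type in one declaration: the font size scales with the
   viewport, but never dips below 1rem or grows beyond 1.25rem. */
body {
  font-size: clamp(1rem, 0.75rem + 1vw, 1.25rem);
}
```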
Okay, so those are the things that feel standard to me. So what could I play around with that was new?
View transitions. So easy! Just point to an element on two different pages and say “Hey, do a magic move!” You can see this in action with the logo as you move from the homepage to, say, the venue page. I’ve also added view transitions to the speaker headshots on the homepage so that when you click through to their full page, you get a nice swoosh.
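The CSS for a cross-document transition like that really is tiny. A sketch (the class name is illustrative):

```css
/* Opt both pages in to cross-document view transitions. */
@view-transition {
  navigation: auto;
}

/* Any element with the same view-transition-name on both pages
   gets morphed from its old position to its new one. */
.logo {
  view-transition-name: logo;
}
```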
Unless, like me, you’re using Firefox. In that case, you won’t see any view transitions. That’s okay. They are very much an enhancement. Speaking of which…
Scroll-driven animations. You’ll only get these in Chromium browsers right now, but again, they’re an enhancement. I’ve got multiple background images—a bunch of cute SVG shapes. I’m using scroll-driven animations to change the background positions and sizes as you scroll. It’s a bit silly, but hopefully kind of cute.
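Here’s the general shape of it, with invented values rather than the ones on the site:

```css
/* Shift a background image's position and size as the page scrolls. */
@keyframes drift {
  to {
    background-position: 100% 100%;
    background-size: 150%;
  }
}

body {
  animation: drift linear both;
  /* Drive the animation with scroll progress instead of time.
     (The shorthand above must come first: it resets animation-timeline.) */
  animation-timeline: scroll(root);
}
```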
You might be wondering how I calculated the movements of each background image. Good question. I basically just messed around with the values. I had fun! But imagine what an actually-skilled interaction designer could do.
That brings up an interesting observation about both view transitions and scroll-driven animations: Figma will not help you here. You need to be in a web browser with dev tools popped open. You’ve got to roll up your sleeves and get your hands into the machine. I know that sounds intimidating, but it’s also surprisingly enjoyable and empowering.
Oh, and I made sure to wrap both the view transitions and the scroll-driven animations in a prefers-reduced-motion: no-preference @media query.
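So the motion-related rules all live inside something like this:

```css
/* Only people who haven't asked for reduced motion get the fancy stuff. */
@media (prefers-reduced-motion: no-preference) {
  .logo {
    view-transition-name: logo;
  }
  body {
    animation: drift linear both;
    animation-timeline: scroll(root);
  }
}
```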
I’m pleased with how the website turned out. It feels fun. More importantly, it feels fast. There is zero JavaScript. That’s the main reason why it’s very, very performant (and accessible).
Smooth transitions across pages; smooth animations as you scroll: it’s great what you can do with just HTML and CSS.
I described using my feed reader like this:
I would hate if catching up on RSS feeds felt like catching up on email.
Instead it’s like this:
When I open my RSS reader to catch up on the feeds I’m subscribed to, it doesn’t feel like opening my email client. It feels more like opening a book.
It also feels different to social media. Like Lucy Bellwood says:
I have a richer picture of the group of people in my feed reader than I did of the people I regularly interacted with on social media platforms like Instagram.
There’s also the blessed lack of any algorithm:
Because blogs are much quieter than social media, there’s also the ability to switch off that awareness that Someone Is Always Watching.
Cory Doctorow has been praising the merits of RSS:
This conduit is anti-lock-in, it works for nearly the whole internet. It is surveillance-resistant, far more accessible than the web or any mobile app interface.
Like Lucy, he emphasises the lack of algorithm:
By default, you’ll get everything as it appears, in reverse-chronological order.
Does that remind you of anything? Right: this is how social media used to work, before it was enshittified. You can single-handedly disenshittify your experience of virtually the entire web, just by switching to RSS, traveling back in time to the days when Facebook and Twitter were more interested in showing you the things you asked to see, rather than the ads and boosted content someone else would pay to cram into your eyeballs.
The only algorithm at work in my feed reader—or on Mastodon—is good old-fashioned serendipity, when posts just happen to rhyme or resonate. Like this morning, when I read this from Alice:
There is no better feeling than walking along, lost in my own thoughts, and feeling a small hand slip into mine. There you are. Here I am. I love you, you silly goose.
And then I read this from Denise:
I pass a mother and daughter, holding hands. The little girl is wearing a sequinned covered jacket. She looks up at her mother who says “…And the sun’s going to come out and you’re just going to shine and shine and shine.”
This is a neat project from Dries:
This project is driven by my curiosity about making websites and web hosting more environmentally friendly, even on a small scale. It’s also a chance to explore a local-first approach: to show that hosting a personal website on your own internet connection at home can often be enough for small sites. This aligns with my commitment to both the Open Web and the IndieWeb.
At its heart, this project is about learning and contributing to a conversation on a greener, local-first future for the web.
“And so what we did is we started looking at, internally, all of the places where we’re using web technology — so all of our internal web UIs — and realized that they were just really unacceptably slow.”
Why were they slow? The answer: React.
“We realized that our performance, especially on low-end machines, was really terrible — and that was because we had adopted this React framework, and we had used React in probably one of the worst ways possible.”
This short essay by Richard Feynman is quite a dose of perspective on a Monday morning.
While I’ve grown more cynical about much of tech, movements like the Indieweb and the Fediverse remind me that the ideals I once loved, and that spirit of the early web, aren’t lost. They’re evolving, just like everything else.