Ancient Web Browsers | tweedy
This is an archive of the very earliest Web browsers — the true pioneers, the Old Gods, the Ancients:
WorldWideWeb, LineMode, Viola, Erwise, Midas, TkWWW, Samba, Lynx, w3, FineWWW
At the start of this month I was in Amsterdam for a series of back-to-back events: Indie Web Camp Amsterdam, View Source, and Fronteers. That last one was where Remy and I debuted a talk we’d been working on.
The Fronteers folk have been quick off the mark so the video is already available. I’ve also published the text of the talk here:
How We Built The World Wide Web In Five Days
This was a fun talk to put together. The first challenge was figuring out the right format for a two-person talk. It quickly became clear that Remy’s focus would be on the events of the five days we spent at CERN, whereas my focus would be on the history of computing, hypertext, and networks leading up to the creation of the web.
Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.
That worked remarkably well. The talk starts with me describing the creation of CERN in the 1950s. Then Remy talks about the first day of the hack week. I then talk about events in the 1960s. Remy talks about the second day at CERN. This continues until we join up about halfway through the talk: I’ve arrived at the moment that Tim Berners-Lee first published the proposal for the World Wide Web, and Remy has arrived at the point of having running code.
At this point, the presentation switches gears and turns into a demo. I do not have the fortitude to do a live demo, so this was all down to Remy. He did it flawlessly. I have so much respect for people brave enough to do live demos, and do them well.
But the talk doesn’t finish there. There’s a coda about our return to CERN a month after the initial hack week. This was an opportunity for both of us to close out the talk with our hopes and dreams for the World Wide Web.
I know I’m biased, but I thought the structure of the presentation worked really well: two interweaving timelines culminating in a demo and finishing with the big picture.
There was a forcing function on preparing this presentation: Remy was moving house, and I was already going to be away speaking at some other events. That limited the amount of time we could be in the same place to practice the talk. In the end, I think that might have helped us make the most of that time.
We were both feeling the pressure to tell this story well—it means so much to us. Personally, I found that presenting with Remy made me up my game. Like I said:
It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.
This talk could have easily turned into a boring slideshow of “what we did on our holidays”, but I think we managed to successfully avoid that trap. We’re both proud of this talk and we’d love to give it again some time. If you’d like it at your event, get in touch.
In the meantime, you can read the text, watch the video, or look at the slides (but the slides really don’t make much sense in isolation).
This talk about recreating the first ever web browser was a joint presentation with Remy Sharp, delivered at the Fronteers conference in Amsterdam in October 2019.
Our story begins with the Big Bang.
This sets a chain of events in motion that gives us elementary particles, then more complex particles like atoms, which form stars and planets, including our own, on which life evolves, which brings us to the recent past when this whole process results in the universe generating a way of looking at itself: physicists.
A physicist is the atom’s way of knowing about atoms.
By the end of World War Two, physicists in Europe were in short supply. If they hadn’t already fled during Hitler’s rise to power, they were now being actively wooed away to the United States.
To counteract this brain drain, a coalition of countries forms the European Organization for Nuclear Research, or to use its French acronym, CERN.
They get some land in a suburb of Geneva on the border between Switzerland and France, where they set about smashing particles together and recreating the conditions that existed at the birth of the universe.
Every year, CERN is host to thousands of scientists who come to run their experiments.
Fast forward to February 2019, when a group of nine of us were invited to CERN as an elite group of hackers to recreate a different experiment.
We are there to recreate a piece of software first published 30 years ago. Given this goal, we need to answer some important questions first:
The software is so old that it doesn’t run on any modern machines, so we have a NeXT machine specially shipped from the nearby museum. This is no ordinary machine. It was one of only two NeXT machines that existed at CERN in the late 80s.
Now we have the machine to run this special software.
By some fluke the good people of the web have captured several different versions of this software and published them on GitHub.
So we selected the oldest version we could find. We download it from GitHub to our computers. Now we have to transfer it to the NeXT machine.
Except there’s no USB drive. It didn’t exist. CD ROM? Floppy drive? The NeXT computer had a “floptical drive”—bespoke to NeXT computers—all very well, but in 2019 we don’t have those drives.
To transfer the software from our machines to the NeXT machine, we needed to use the network.
In 1957, J.C.R. Licklider was the first person to publicly demonstrate the idea of time sharing: linking one computer to another.
Six years later, he expanded on the idea in a memo that described an Intergalactic Computer Network.
By this time, he was working at ARPA: the Department of Defense’s Advanced Research Projects Agency. They were very interested in the idea of linking computers together, for very practical reasons.
America’s military communications had a top-down command-and-control structure. That was a single point of failure. One pre-emptive strike and it’s game over.
The solution was to create a decentralised network of computers that used Paul Baran’s brilliant idea of packet switching to move information around the network without any central authority.
This idea led to the creation of the ARPANET. Initially it connected a few universities. The ARPANET grew until it wasn’t just computers at each endpoint; it was entire networks. It was turning into a network of networks …an internetwork, or internet, for short. In order for these networks to play nicely with one another, they needed to agree on using the same set of protocols for packet switching.
Bob Kahn and Vint Cerf crafted the simplest possible set of low-level protocols: the Transmission Control Protocol and the Internet Protocol. TCP/IP.
TCP/IP is deliberately dumb. It doesn’t care about the contents of the packets of data being passed around the internet. People were then free to create more task-specific protocols to sit on top of TCP/IP.
There are protocols specifically for email, for example. Gopher is another example of a bespoke protocol. And there’s the File Transfer Protocol, or FTP.
Back in our war room in 2019, we finally work out that we can use FTP to get the software across. FTP is an arcane protocol, but we can agree that it will work across the two eras.
We do have to manually install FTP servers onto our machines, though: FTP doesn’t ship with new machines because it’s generally considered insecure.
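FTP is a handy illustration of what it means for a protocol to sit on top of TCP/IP: TCP just moves bytes between two machines, and the protocol is simply an agreed, text-based conversation carried in those bytes. Here’s a minimal Node.js sketch of that idea (not part of the project; ftp.example.com is a placeholder, not a real server we used):

const net = require('net');

// Open a raw TCP connection to an FTP server's control port (21).
// The server sends a text greeting; we reply with a plain-text FTP command.
const socket = net.connect(21, 'ftp.example.com', () => {
  // These bytes only mean something because both ends agree to speak FTP.
  socket.write('USER anonymous\r\n');
});

socket.on('data', (chunk) => process.stdout.write(chunk.toString()));
socket.on('end', () => console.log('[server closed the connection]'));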
Now we finally have the software installed on the NeXT computer and we’re able to run the application.
We double-click the shaded-looking, partly hand-drawn icon with a lightning bolt on it, and we wait…
Once the software’s finally running, we’re able to see that it looks a bit like an ancient word processor. We can read, edit and open documents. There are some basic styles and lots of heavy margins. There’s some super weird menu navigation in place.
But there’s something different about this software. Something that makes this more than just a word processor.
These documents, they have links…
Ted Nelson is fond of coining neologisms. You can thank him for words like “intertwingled” and “teledildonics”.
He also coined the word “hypertext” in 1963. It is defined by what it is not.
Hypertext is text which is not constrained to be linear.
Ever played a “choose your own adventure” book? That’s hypertext. You can jump from one point in the book to a different point that has its own unique identifier.
The idea of hypertext predates the word. In 1945, Vannevar Bush published a visionary article in The Atlantic Monthly called As We May Think.
He imagines a mechanical device built into a desk that can summon reams of information stored on microfilm, allowing the user to create “associative trails” as they make connections between different concepts. He calls it the Memex.
Also in 1945, a young American named Douglas Engelbart has been drafted into the navy and is shipping out to the Pacific to fight against Japan. Literally as the ship is leaving the harbour, word comes through that the war is over. He still gets shipped out to the Philippines, but now he’s spending his time lounging in a hut reading magazines. That’s how he comes to read Vannevar Bush’s Memex article, which lodges in his brain.
Douglas Engelbart decides to dedicate his life to building the computer equivalent of the Memex.
On December 9th, 1968, he unveils his oNLine System—NLS—in a public demonstration. Not only does he have a working implementation of hypertext, he also shows collaborative real-time editing, windows, graphics, and oh yeah—for this demo, he invents the mouse.
It truly is The Mother of All Demos.
There were a number of other attempts at creating hypertext systems. In 1980, a young computer scientist named Tim Berners-Lee found himself working at CERN, where scientists were having a heck of a time just keeping track of information.
He created a system somewhat like Apple’s HyperCard, but with clickable links. He named it ENQUIRE, after a Victorian book of manners called Enquire Within Upon Everything.
ENQUIRE didn’t work out, but Tim Berners-Lee didn’t give up on the problem of managing information at CERN. He thinks about all the work done before: Vannevar Bush’s Memex; Ted Nelson’s Xanadu project; Douglas Engelbart’s oNLine System.
A lot of hypertext ideas really are similar to a choose-your-own-adventure: jumping around from point to point within a book. But what if, instead of imagining a hypertext book, we could have a hypertext library? Then you could jump from one point in a book to a different point in a different book in a completely different part of the library.
In other words, what if you took the world of hypertext and the world of networks, and you smashed them together?
On the 12th of March, 1989, Tim Berners-Lee circulates the first draft of a document titled Information Management: A Proposal.
The diagrams are incomprehensible. But his supervisor at CERN, Mike Sendall, sees the potential. He reads the proposal and scrawls these words across the top: “vague, but exciting.”
Tim Berners-Lee gets the go-ahead to spend some time on this project. And he gets the budget for a nice shiny NeXT machine. With the support of his colleague Robert Cailliau, Berners-Lee sets about making his theoretical project a reality. They kick around a few ideas for the name.
They thought of calling it The Mesh. They thought of calling it The Information Mine, but Tim rejected that, knowing that whatever they called it, the words would be abbreviated to letters, and The Information Mine would’ve seemed quite egotistical.
So, even though it’s only going to exist on one single computer to begin with, and even though the letters of the abbreviation take longer to say than the words being abbreviated, they call it …the World Wide Web.
As Robert Cailliau told us, they were thinking “Well, we can always change it later.”
Tim Berners-Lee brainstorms a new protocol for hypertext called the HyperText Transfer Protocol—HTTP.
He thinks about a format for hypertext called the Hypertext Markup Language—HTML.
He comes up with an addressing scheme that uses Universal Document Identifiers—UDIs, later renamed to URIs, and later renamed again to URLs.
But he needs to put it all together into running code. And so Tim Berners-Lee sets about writing a piece of software…
At that stage, 30 years ago, Tim Berners-Lee’s document is just a proposal. It’s just theory. So he needs to build a prototype to actually demonstrate how the World Wide Web would work.
The NeXT computer is the perfect ground for rapid software development because the NeXT operating system ships with a program called NSBuilder.
NSBuilder is software to build software. In fact, the “NS” (meaning NeXTSTEP) can be found in existing software today: you’ll find references to NSText in Safari and Mac developer documentation.
Tim Berners-Lee, using NSBuilder, was able to create a working prototype of this software in just six weeks.
He called it: WorldWideWeb.
We finally have the software working the way it ran 30 years ago.
But our project is to replicate this browser so that you can try it out, and see how web pages look through the lens of 1990.
So we enter some of our blog URLs: https://remysharp.com and https://adactio.com
But HTTPS doesn’t work. There was no HTTPS. There’s no HTTP/2. HTTP/1.0 hadn’t even been invented.
So I make a proxy. Effectively a monster-in-the-middle attack on all web requests, stripping the SSL layer and then returning the HTML over the HTTP 0.9 protocol.
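Remy’s actual proxy isn’t reproduced here, but the core idea can be sketched in a few lines of Node.js: accept a bare HTTP 0.9-style request, fetch the real page over HTTPS, and write back only the body, with no status line and no headers, just as an HTTP 0.9 server would have done. (The host-in-the-path convention is an assumption made for this example, not necessarily how our proxy addressed sites.)

const net = require('net');
const https = require('https');

// An HTTP 0.9 request is a single line, e.g. "GET /path": no headers, no version.
const server = net.createServer((socket) => {
  socket.once('data', (chunk) => {
    const path = chunk.toString().trim().split(' ')[1] || '/';
    // Assumed convention: the first path segment names the site to proxy,
    // e.g. "GET /adactio.com/about"
    const [, host, ...rest] = path.split('/');
    if (!host) return socket.end();
    https.get({ host, path: '/' + rest.join('/') }, (res) => {
      // Return just the body bytes; closing the socket ends the "response".
      res.pipe(socket);
    }).on('error', () => socket.end());
  });
});

server.listen(8080);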
And finally, we see…
We see junk.
We can see the text content of the website, but there are a lot of junk HTML tags being spat out onto the screen, particularly at the start of the document.
<h1> <h2> <h3> <h4> <h5> <h6>
<ol> <ul> <li> <p>
These tags are probably very familiar to you. You recognise this language, right?
That’s right. It’s SGML.
SGML is the successor to GML, which supposedly stands for Generalised Markup Language. But that may well be a backronym. The format was created by Goldfarb, Mosher, and Lorie: G, M, L.
SGML is supposed to be short for Standard Generalised Markup Language.
A flavour of SGML was already being used at CERN when Tim Berners-Lee was working on his World Wide Web project. Rather than create a whole new format from scratch, he repurposed what people were already familiar with. This was his HyperText Markup Language, HTML.
One thing he did add was a tag called A, for anchor. Its href attribute is short for “hypertext reference”. Plop a URL in there and you’ve got a link.
The hypertext community thought this was a terrible way to make links.
They believed that two-way linking was vital. With two-way linking, the linked resource connects back to where the link originates. So if the linked resource moved, the link would stay intact.
That’s not the case with the World Wide Web. If the linked resource moves, the link is broken.
Perhaps you’ve experienced broken links?
When Tim Berners-Lee wrote the code for his WorldWideWeb browser, there was a grand total of 26 tags in HTML. I know that we’d refer to them as elements today, but that term wasn’t being used back then.
Now there are well over 100 elements in HTML. The reason why the language has been able to expand so much is down to the way web browsers today treat unknown elements: ignore any opening and closing tags you don’t recognise and only render the text in between them.
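You can see this forgiving behaviour for yourself in any modern browser’s console; a quick illustration (not part of our project’s code):

// Unknown elements are kept in the parsed DOM rather than being rejected;
// when rendered, only their contents show up on the page.
const doc = new DOMParser().parseFromString(
  '<p>Hello <blink>world</blink> and <made-up-tag>beyond</made-up-tag></p>',
  'text/html'
);
console.log(doc.body.textContent); // "Hello world and beyond"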
The parsing algorithm was brittle (when compared to modern parsers). There’s no DOM tree being built up. Indeed, the DOM didn’t exist.
Remember that WorldWideWeb was a browser that effectively smooshed together a word processor and network requests; the styling method was based (mostly) around adding margins as the tags were parsed.
Kimberly Blessing was digging through the original 7344 lines of code for the WorldWideWeb source. She found the code that could explain why we were seeing junk.
In this case, when the parser encountered <link rel="…">, it stepped through it character by character.
First it sees the <. “Yes, a tag; let’s slurp it up.”
Then it reads li, and the parser is thinking, “This looks like a list item, good stuff.”
Then it encounters the n (of link) and, excusing the parsing algorithm because it was the first, it would abort the style it was about to apply and promptly spit out the rest of the content on screen, having already swallowed up the first four characters: <lin.
So everything from k rel="stylesheet" href="..."> onwards ends up on screen as junk.
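To make that failure mode concrete, here’s a rough JavaScript caricature of the behaviour described above (the original browser was written in Objective-C; this is an illustration, not the real parser):

// Match the tag name character by character against the tags the browser
// knows about, and bail out as soon as a character stops matching, dumping
// everything that hasn't been swallowed yet as plain text.
const knownTags = ['p', 'li', 'ol', 'ul', 'h1', 'h2', 'h3'];

function parseTag(input) {
  let name = '';
  for (const char of input.slice(1)) { // skip the leading "<"
    const candidate = name + char;
    if (knownTags.some((tag) => tag.startsWith(candidate))) {
      name = candidate; // still looks like a known tag ("l", then "li"...)
    } else {
      // "lin" matches nothing, so give up: "<lin" has been swallowed and
      // the rest is spat out on screen.
      return { matched: null, junk: input.slice(name.length + 2) };
    }
  }
  return { matched: name, junk: '' };
}

console.log(parseTag('<link rel="stylesheet" href="style.css">'));
// → { matched: null, junk: 'k rel="stylesheet" href="style.css">' }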
With that, we made the executive design decision to strip out any elements that were unknown to the original WorldWideWeb browser: link, script, video and img (there was, of course, no image support in the world’s first browser).
This is the first little cheat we applied, so that the page would be more pleasing to you, the visitor of our emulator. Otherwise you’d be presented with a lot of scary looking junk.
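The stripping itself can be as blunt as a couple of regular expressions run over the HTML before it reaches the emulator. This is a simplified sketch of the idea, not the project’s actual implementation:

// Remove the elements the 1990 parser can't handle: paired tags go along
// with their contents, void tags like <img> are removed on their own.
const unsupported = ['link', 'script', 'video', 'img'];

function stripUnsupported(html) {
  return unsupported.reduce((out, tag) => {
    const paired = new RegExp(`<${tag}[^>]*>[\\s\\S]*?</${tag}>`, 'gi');
    const single = new RegExp(`<${tag}[^>]*/?>`, 'gi');
    return out.replace(paired, '').replace(single, '');
  }, html);
}

console.log(stripUnsupported('<p>Hi<img src="x.png"><script>alert(1)</script></p>'));
// → "<p>Hi</p>"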
So now we have all the reference material we need to be able to replicate this browser.
So off we go.
While Remy set about recreating the functionality of the WorldWideWeb browser, Angela was recreating the user interface using CSS.
Inputs. Buttons. Icons. Menus. All with the exact borders, highlights and shadows used in the UI of the NeXT operating system, including having the scrollbar on the left side of windows.
Meanwhile the rest of us were putting together an explanatory website to give some backstory to what we were doing. I spent most of my time working on a timeline showing thirty years before and thirty years after the original proposal for the web.
The WorldWideWeb browser inherited fonts from the NeXTSTEP operating system. It mostly used Helvetica and a font called Ohlfs (created by Keith Ohlfs). Helvetica is ubiquitous but Ohlfs was never seen outside of a NeXT machine.
Our teammates Mark and Brian were obsessed with accurately recreating the typography. We couldn’t use modern fonts, which are vector based. We need pixeliness.
So Mark and Brian took a screenshot of the NeXT machine’s alphabet. With help from afar from font designer David Jonathan Ross, they traced each square pixel in a vector program and then exported that as a web font. Now we’ve got a web font that deliberately isn’t anti-aliased. It’s a vector format that recreates the look of a bitmap.
Put the pixelly font together with the CSS interface elements and you’ve got something that really looks like the old WorldWideWeb programme.
This is the final product of our work at CERN that week. A fully working WorldWideWeb emulator giving a reasonably close experience of what it was like to surf the web as if it were 30 years ago.
This is entirely in the browser and was written using:
These tools weren’t chosen because they were necessarily the best tools for the job, but rather because they were the tools I knew well enough to speed up my development process.
We worked hard to replicate the look and feel as much as we could. We even replicated typos found throughout the WorldWideWeb app:
An excercise in global information availability
Why don’t we see how it looks…
There’s a kind of irony in this, in that it relies heavily on JavaScript. In fact, there’s nothing there other than JavaScript. But of course the WorldWideWeb browser couldn’t deal with JavaScript—JavaScript hadn’t been invented yet. So the one URL that definitely wouldn’t work in this emulator is …the emulator itself.
(Which Jeremy was blaming me for.)
This is what you see when you visit the WorldWideWeb browser for the first time. We can see we are welcomed by the universe of hypertext. We’ve got these menus over here that you can drag off and open panels (I always thought this was an ordering bug but the operating system actually works like this).
We’ll go ahead and open the Fronteers website. I go to “Document” and then I go to “Open from full document reference” (because the word URL didn’t exist). I’m going to pop the Fronteers URL in here. And there it is. We’ve got the Fronteers website. Looks pretty good. (One of my favourite UI bits is this scrollbar on the left hand side instead of the right.)
We can follow the links. Actually one of my favourite features that was in this original browser that we replicated was this “Navigate” menu. I’ve just opened the first link in the document, but I can click on “Next”, and “Next” a bunch of times and it will cycle through each one of the links on the page that I launched from and let me read all the pages that the Fronteers site links to (which I really like). I can go backwards and forwards, and so on.
One thing you might have already noticed is that there are no URLs here. And in fact, view source was considered a kind of diagnostic option and was very, very tucked away. The reason for this is that URLs—and the source HTML or SGML—were considered ugly and potentially a bad user experience.
But there’s one thing about navigating here that’s different. To open this link, I had to double-click.
The WorldWideWeb browser was more of a prototype than anything else. It demonstrated the potential of the World Wide Web project, but it only worked on NeXT machines.
To show how the World Wide Web could work on any computer, the second ever web browser was the Line Mode Browser, coded by Nicola Pellow. It had a very basic text interface—no clicking on links—but it could be installed anywhere.
Lots of other geeks and nerds were working on their own web browsers, but it was Marc Andreessen’s Mosaic browser that really blew the doors open for the web. It had a nice usable interface, and it (unilaterally) introduced the innovation of images on the web.
Andreessen went on to found Netscape. The World Wide Web took off at an unprecedented rate. Microsoft brought out their Internet Explorer browser and started trying to catch up with Netscape. We had the browser wars. Later we got even more browsers, like Safari and Chrome, while Netscape morphed into Firefox and Internet Explorer morphed into Edge. And the rest is history.
But all of these browsers were missing something that was in the original WorldWideWeb browser.
The reason I have to double-click on these links is that, when I do a single click, it actually places the cursor. The cursor is blinking there on “Fronteers.” And the reason I can place the cursor is because I can edit the document.
I see Fronteers here is missing a heading. We want to welcome you all:
Welkom
We want to make that a heading. Let’s style that. It’s a heading.
So the browser was meant to edit documents. Let’s put a bit of text here:
Great talks from Remy and Jeremy
(forget about everyone else). Now if I want to create a link, I’ll go ahead and navigate to Jeremy’s site, https://adactio.com. I’m going to do “Link”, then “Mark all”, which is a way of copying the URL to that window. Then I go back to the Fronteers website, select “Jeremy”, and then do “Link to marked.” Now I can double-click on Jeremy’s name and it will open up his website.
I can save this document as well. I’m going to call it fronteers.html.
Let’s do a hard reboot—a browser refresh. I come back to my machine a couple of days later, “Ah, the Fronteers page!”. I’m going to open that again, and it linked to that really handsome guy in the sprite shirt. And yes, the links still work.
In fact, this documentation that you see when the WorldWideWeb browser launches was written, styled, and linked using the WorldWideWeb browser. The WorldWideWeb browser was for a web that you could read and write.
But this didn’t survive. It was a hurdle that was too tricky to propose or implement across the different types of servers that existed and for the upcoming browsers that were on the horizon.
And so it wasn’t standardised and doesn’t exist today.
But this is an important lesson from the time: reducing complexity increases the chances of mass adoption.
In the end, simplicity wins.
I think that’s a pattern we see over and over again, not just in the history of the web, but before the web. Simplicity wins.
Ted Nelson famously thinks, to this day, that the World Wide Web is weak sauce. It didn’t try to solve complex problems, like handling micro-payments, right out of the gate.
As we saw, the hypertext community thought that one-way linking was ridiculous. But simplicity does win out.
Unfortunately that’s why browsers ended up just being browsers. We got some of the functionality back with wikis, content management systems, and social media to a certain extent. But I think it’s still a bit of a shame that when I want to browse a web page, I’m using one piece of software—the browser—but when I want to make a web page, I’m using another piece of software (or multiple pieces of software) to get something on to the web.
I feel like we lost something.
We head home after a week of hacking.
We were all invited back in March earlier this year for the Web@30 event that was taking place to celebrate the web but also Sir Tim Berners-Lee.
A few of us, Jeremy, Martin, and myself, went back to CERN for the first leg of the event. There was even a video showing off our work as part of the main conference. Jeremy and I even chased Tim Berners-Lee back to London, to the Science Museum, like obsessive web fanboys. It was a lot of fun!
The night before I got a message from Jean-François Groff, pictured here on the right. JF Groff joined Tim Berners-Lee 30 years ago and created libwww (a precursor to libcurl).
The message read:
Sitting with Tim right now. He loves your browser!
Crushed it.
It’s amazing that we were able to pull this off in a week just with text editors and information that’s freely available. It’s mind boggling how much we can do today and how far it can reach. And it all started on that NeXTSTEP machine 30 years ago.
What I really loved about this project was working with this brilliantly old technology, digging around at the birth of browsers and the web.
I wouldn’t be stood here today, if it weren’t for the web.
I wouldn’t even know Jeremy, if it weren’t for the web.
I wouldn’t have a career, if it weren’t for the web.
I loved seeing how such old technology, the original WorldWideWeb browser, was still able to render my blog. Because I put content first and deliver markup from the server, the page rendered. HTML really is backward compatible.
HTML and HTTP are just text. Nothing terribly fancy. Dare I say, beautifully simple, and as we said before, simplicity wins the day.
This same simplicity is what allows us all to have the chance for an equal voice. The web allows us to freely publish our thoughts and experiences. We have to fight to protect that kind of web.
And we’ve got to work at keeping it simple.
When we returned to CERN for the 30th anniversary celebrations, one of the other people there was the journalist Zeynep Tufekci.
She was on a panel along with Tim Berners-Lee, Robert Cailliau, Jean-François Groff, and Lou Montulli. At the end of the panel discussion, she was asked:
What would you tell the next generation about how to use this wonderful tool?
She replied:
If you have something wonderful, if you do not defend it, you will lose it.
If you do not defend the magic and the things that make it wonderful, it’s just not going to stay magical by itself.
Defend the simplicity and resilience that’s so central to the web.
I don’t know about you, but I often feel that just trying to make a web page has become far too complicated. But this is complexity that we have chosen with our tools, processes, and assumptions. We’ve buried the magic. The magic of linking web pages together. The magic of a working global hypertext system, where nobody needs to ask for permission to publish.
Tim Berners-Lee prototyped the first web browser, but the subsequent world wide web wasn’t created by any one person. It was created by everyone. That. Is. Magical.
I don’t want the web to become a place where only an elite priesthood get to experience the magic of creation. I’m going to fight to defend the openness of the world wide web. This is for everyone. Not just for everyone to use; it’s for everyone to create.
Back in the late 2000s, I used to go to Copenhagen every year for an event called Reboot. It was a fun, eclectic mix of talks and discussions, but alas, the last one was over a decade ago.
It was organised by Thomas Madsen-Mygdal. I hadn’t seen Thomas in years, but then, earlier this year, our paths crossed when I was back at CERN for the 30th anniversary of the web. He got a real kick out of the browser recreation project I was part of.
A few months ago, I got an email from Thomas about the new event he’s running in Copenhagen called Techfestival. He was wondering if there was some way of making the WorldWideWeb project part of the event. We ended up settling on having a stand—a modern computer running a modern web browser running a recreation of the first ever web browser from almost three decades ago.
So I showed up at Techfestival and found that the computer had been set up in a Shoreditchian shipping container. I wasn’t exactly sure what I was supposed to do, so I just hung around nearby until someone wandering by would pause and start tentatively approaching the stand.
“Would you like to try the time machine?” I asked. Nobody refused the offer. I explained that they were looking at a recreation of the world’s first web browser, and then showed them how they could enter a URL to see how the oldest web browser would render a modern website.
Lots of people entered facebook.com or google.com, but some people had their own websites, either personal or for their business. They enjoyed seeing how well (or not) their pages held up. They’d take photos of the screen.
People asked lots of questions, which I really enjoyed answering. After a while, I was able to spot the themes that came up frequently. Some people were confusing the origin story of the internet with the origin story of the web, so I was more than happy to go into detail on either or both.
The experience helped me clarify in my own mind what was exciting and interesting about the birth of the web—how much has changed, and how much has stayed the same.
All of this is very useful fodder for a conference talk I’m putting together. This will be a joint talk with Remy at the Fronteers conference in Amsterdam in a couple of weeks. We’re calling the talk How We Built the World Wide Web in Five Days:
The World Wide Web turned 30 years old this year. To mark the occasion, a motley group of web nerds gathered at CERN, the birthplace of the web, to build a time machine. The first ever web browser was, confusingly, called WorldWideWeb. What if we could recreate the experience of using it …but within a modern browser! Join (Je)Remy on a journey through time and space and code as they excavate the foundations of Tim Berners-Lee’s gloriously ambitious and hacky hypertext system that went on to conquer the world.
Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.
We’ve been honing the material and doing some run-throughs at the Clearleft HQ at 68 Middle Street this week. The talk has a somewhat unusual structure with two converging timelines. I think it’s going to work really well, but I won’t know until we actually deliver the talk in Amsterdam. I’m excited—and a bit nervous—about it.
Whether it’s in a shipping container in Copenhagen or on a stage in Amsterdam, I’m starting to realise just how much I enjoy talking about web history.
Martin gives a personal history of his time at the two CERN hack projects …and also provides a short history of the universe.
This is the lovely little film about our WorldWideWeb hack project. It was shown yesterday at CERN during the Web@30 celebrations. That was quite a special moment.
Remy’s keeping a list of hyperlinks to stories covering our recent hack week at CERN.
Here’s the source code for the WorldWideWeb project we did at CERN.
Prompted by our time at CERN, Remy ponders why web browsers (quite quickly) diverged from the original vision of being read/write software.
This is a lovely write-up of the WorldWideWeb hack week at CERN:
The Web is a success story in open standards, natural and by-design progressive enhancement, and the future-proof archivability of human-readable code.
Recreating the original WorldWideWeb browser was an exercise in digital archeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:
#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */
Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.
Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.
So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.
The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.
Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.
The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.
So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.
I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.
Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.
I marked up each timeline as an ordered list of h-events:
<li class="h-event y1968">
<a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
<time class="dt-start" datetime="1968-12-09">1968</time>
<abbr class="p-name" title="oN-Line System">NLS</abbr>
</a>
</li>
With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally, one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see an explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.
The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:
.y1968 {
left: calc((1968 - 1959) * (100%/30) - 5em);
}
For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:
.y2014 {
right: calc((2019 - 2014) * (100%/30) - 5em);
}
(Each h-event has a width of 5em, so that’s where the extra bit at the end comes from.)
I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.
As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.
I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:
1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.
1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.
1982: Literary Machines, Ted Nelson’s book which was on our desk at all times
I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.
The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application but I was just glad to be able to contribute a little something to the project.
The US Mission to the UN in Geneva came by to visit us during our hackweek at CERN.
“Our hope is that over the next few days we are going to recreate the experience of what it would be like using that browser, but doing it in a way that anyone using a modern web browser can experience,” explains team member Jeremy Keith. The aim is to “give people the feeling of what it would have been like, in terms of how it looked, how it felt, the fonts, the rendering, the windows, how you navigated from link to link.”
Here’s the CERN write-up of our week of hacking to produce the recreation of the WorldWideWeb browser.
Nine people came together at CERN for five days and made something amazing. I still can’t quite believe it.
Coming into this, I thought it was hugely ambitious to try to not only recreate the experience of using the first ever web browser (called WorldWideWeb, later Nexus), but to also try to document the historical context of the time. Now that it’s all done, I’m somewhat astounded that we managed to achieve both.
Want to see the final result? Here you go:
That’s the website we built. The call to action is hard to miss:
Behold! A simulation of using the first ever web browser, recreated inside your web browser.
Now you could try clicking around on the links on the opening document—remembering that you need to double-click on links to activate them—but you’ll quickly find that most of them don’t work. They’re long gone. So it’s probably going to be more fun to open a new page to use as your starting point. Here’s how you do that:
1. Select Document from the menu options on the left.
2. Select Open from full document reference.
3. Type in a URL, e.g. https://adactio.com
4. Press the Open button.
You are now surfing the web through a decades-old interface. Double click on a link to open it. You’ll notice that it opens in a new window. You’ll also notice that there’s no way of seeing the current URL. Back then, the idea was that you would navigate primarily by clicking on links, creating your own “associative trails”, as first envisioned by Vannevar Bush.
But the WorldWideWeb application wasn’t just a browser. It was a Hypermedia Browser/Editor.
1. From the Document menu you opened, select New file…
2. Name the new file test.html
3. From the WorldWideWeb menu, select Links.
4. Select Mark all from the Links menu.
5. Go back to the test.html document, and highlight a piece of text.
6. Select Link to marked from the Links menu.
If you want, you can even save the hypertext document you created. Under the Document menu there’s an option to Save a copy offline (this is the one place where the wording of the menu item isn’t exactly what was in the original WorldWideWeb application). Save the file so you can open it up in a text editor and see what the markup would’ve looked like.
I don’t know about you, but I find this utterly immersive and fascinating. Imagine what it must’ve been like to browse, create, and edit like this. Hypertext existed before the web, but it was confined to your local hard drive. Here, for the first time, you could create links across networks!
After five days time-travelling back thirty years, I have a new-found appreciation for what Tim Berners-Lee created. But equally, I’m in awe of what my friends created thirty years later.
Remy did all the JavaScript for the recreated browser …in just five days!
Kimberly was absolutely amazing, diving deep into the original source code of the application on the NeXT machine we borrowed. She uncovered some real gems.
Of course Mark wanted to make sure the font was as accurate as possible. He and Brian went down quite a rabbit hole, and with remote help from David Jonathan Ross, they ended up recreating entire families of fonts.
John exhaustively documented UI patterns that Angela turned into marvelous HTML and CSS.
Through it all, Craig and Martin put together the accompanying website. Personally, I think the website is freaking awesome—it’s packed with fascinating information! Check out the family tree of browsers that Craig made.
Photos from earlier this week:
In a small room in CERN’s Data Center, an international group of nine developers is taking a plunge back in time to the beginnings of the World Wide Web. Their aim is to enable the whole world to experience what the web looked like viewed within the very first browser developed by Tim Berners-Lee.
We got the band back together.
In September of 2013, I had the great pleasure and privilege of going to CERN with a bunch of very smart people. I’m not sure how I managed to slip by. We were there to recreate the experience of using the line-mode browser. As I wrote at the time:
Just to be clear, the line-mode browser wasn’t the world’s first web browser. That honour goes to Tim Berners-Lee’s WorldWideWeb programme. But whereas WorldWideWeb only ran on NeXT machines, the line-mode browser worked cross-platform and was, therefore, instrumental in demonstrating the power of the web as a universally-accessible medium.
In the run-up to the 30th anniversary of the original (vague but exciting) proposal for what would become the World Wide Web, we’ve been invited back to try to recreate the experience of using that first web browser, the one that only ever ran on NeXT machines.
I missed the first day due to travel madness—flying back from Interaction 19 in Seattle during snowmageddon to Heathrow and then to Geneva—but by the time I arrived, my hackmates had already made a great start in identifying the objectives:
- Give people an understanding of the user experience of the WorldWideWeb browser.
- Demonstrate that a read/write philosophy was there from the beginning.
- Give context—what was going on at the time?
That second point is crucial. WorldWideWeb wasn’t just a web browser; it was a browser/editor. That’s by far the biggest change in terms of the original vision of the web and what we ended up getting from Mosaic onwards.
Remy is working hard on the first point. He documented the first day and now on the second day, he’s made enormous progress already.
I’m focusing on point number three. I want to show the historical context for the World Wide Web. Here’s my plan…
Seeing as we’re coming up on the thirtieth anniversary, I thought it would be interesting to take the year of the proposal (1989) and look back in a time cone of thirty years previous to that at the influences on Tim Berners-Lee. I also want to look at what has happened with the web in the thirty years since the proposal. So the date of the proposal will be a centre point, with the timespan of 1959-1989 converging on it from the past, and the timespan of 1989-2019 diverging from it into the future. I hope it could make for a nice visualisation. Maybe I could try to get it to look like data from a particle collision.
We’re here till the weekend and everyone else has already made tremendous progress. Kimberly has been hacking the Gibson …well, that’s what it looked like when she was deep in the code of the NeXT machine we’ve borrowed from Musée Bolo (merci beaucoup!).
We took a little time out for a tour of the data centre. Oh, and at lunch time, we sat with Robert Cailliau and grilled him with questions about the birth of the web. Quite a day!
Now it’s time for me to hit the hay and prepare for another day of hacking in this extraordinary place.