this is aaronland

a tale of gummy snakes (and spunk)

it's not magic – finding our way back to “just f*cking do it” from “move fast and break things”

I had the great privilege of being invited to deliver one of the four keynote speeches at the Australian Centre for the Moving Image's (ACMI) 2025 Future of Art, Culture and Technology Symposium (FACT). This is what I said. This is long, like close to 11,000 words long. I did spend a non-zero amount of time running it through a large language model asking for summaries and zippy hot takes (designed to arouse emotions, no less). This is still what I would say.

If you are reading this that means most (though probably not all) of the typos should have been caught but I have not finished adding all the links to this text and there are a lot of links to add.

Thank you very much for having me. Thank you to ACMI for inviting me to speak. Thank you, in the audience, for coming to listen. In the interest of time and because I am often prone to elaborate digressions I am going to stick pretty close to my notes and I’d like to start with a trio of quotes.

The first is from the British designer Jack Schulze, who said:

No one cares what you do unless you think about it. No one cares what you think unless you do it.

The second quote I believe to be by the English writer Zadie Smith, who said:

Anything which is presented as an inevitability is political.

I have had some difficulty tracking down the exact attribution for this quote but Zadie Smith is a good writer and a sharp thinker so I choose to let her have it.

The third quote is from the American sociologist Tressie McMillan Cottom:

We probably need to give up the transactional nature of our hope and do the thing that needs to be done because it needs to be done.

This was a comment made in the context of how to continue the struggle for civil rights in the United States following the 2024 election but I think it’s generally good advice, applicable to most things in life.

Don’t worry. I hear your silent groans and I share them. There is a lot to get through so this part of the talk will be mercifully brief.

If I studied anything at all it was painting at a time when the web started to happen and selling paintings on the internet seemed like a viable alternative to shackling myself to the gallery system or living from one government grant to another. It quickly became clear that I had a vested interest in understanding how this thing called the internet worked. So, in the end, not a lot of painting but a whole lot of experience working on the internet.

First, at a mom-and-pop internet service provider on a small island servicing a year-round population of about 20,000 and a summer-time population of 150,000. Out of necessity we were one of the first ISPs to develop a comprehensive suite of web applications for customers to manage their own accounts online since we couldn’t afford the time or staff to perform those tasks over the phone.

In 2004 I joined Flickr, the global photo-sharing service with millions of users hosting billions of photos. Flickr was the poster-child for Web 2.0 and, against some pretty steep odds over the years, is still with us today. It just celebrated its 21st birthday earlier this week. I could spend this entire talk simply cataloging, never mind detailing, everything we did at Flickr. So I am not going to.

From Flickr I went to Stamen Design, a client-services studio that pioneered data visualization and digital maps. Stamen has done work with a long list of names you’ve heard of and some of that work has been accessioned into the collections of MoMA and the Smithsonian. Stamen is where everyone learned that I am not a client-services person but not before we launched prettymaps, a mapping project that aimed to push online mapping and client-side rendering as far as they could go in service of the idea that data could, and should, be treated like a bolt of fabric from which a universe of fashions might be cut.

In 2012, I went to the Smithsonian Cooper Hewitt National Design Museum where, along with a few other things, I helped design, develop and manufacture the Pen (a custom electronic hardware device given to every user) and all its associated software systems.

From 2015 to 2018 I worked at Mapzen, a mapping startup with the ambitious, bordering on crazy, goal of recreating the entirety of the Google Maps stack using openly licensed software and data. I helped design and architect the foundations for an openly licensed gazetteer of all the places on the planet, which continues to operate today.

Since then I have been at the San Francisco International Airport Museum, the world’s only fully accredited museum in an airport. Started in the early 1980s, the museum has mounted over 2,000 exhibitions and has a permanent aviation collection of 160,000 objects. With SFO’s annual foot-traffic approaching 58 million people, and on target for 74 million annually in the future, we are literally the world’s busiest museum. Even if only ten percent of those passengers see or visit the museum we have numbers on par with MoMA, the Met and the Louvre. At fifteen to twenty percent we easily outpace them. So we are the world’s busiest museum that no one knows about. My role is to figure out how we might use the internet to address this dilemma.

To put that in slightly more concrete terms we are trying to use the internet and the web, by which I mean cheap storage, fast retrieval, affordable compute and a global network, to connect everything across the history of the airport itself in such a way that there is no part of visiting the airport – from the gate you are standing at, to the airline you are flying, to the aircraft itself, to the place you are going to or coming from – which doesn’t have a straight line back to the museum’s collections and programming.

These are the sizes of the teams, at their highest number, that I have worked with directly on these projects. They were, in fact, often smaller.

There is a healthy debate about whether any, or all, of these projects were understaffed. I tend to think they were, but not by much. The biggest failure in the staffing models here is that they make transitioning from one group of principals to another difficult.

This is a problem which becomes more acute when you are operating at the scale and velocity of something like Flickr. That was a problem which eventually manifested itself and it was not pretty. But we also scaled the thing to over 5 billion photos with an engineering staff of no more than 15 before that happened.

We can debate whether or not these teams should have been two or three times as large but the point I want to make is that none of them needed to be tens, let alone hundreds, of times their size.

If you remember nothing else from this talk, remember that these numbers are what “possible” – not easy but possible – looks like.

Before we go any further and while the prettymaps project is still front of mind I want to share a quick anecdote.

Stamen made a series of prettymaps prints with a company called 20x200 including one of New York City. One day I got a waiver request from the producers of a television show called Suits asking whether they could use that print as background art on their sets. I thought about it for a minute, signed the waiver and forgot about it.

Last year I finally watched all nine seasons of Suits and was delighted to see prettymaps show up in the break room of the legal firm, where the series is set, sometime around season two. It’s the orange amorphous blob seen in the middle of this screenshot.

This screenshot is also of one of only a handful of times they managed to hang the print right-side up throughout the course of the show.

I like to think of this as a cautionary and humbling tale about that other amorphous blob we call “success” or “making it”. Hold on to that idea. As this talk goes on you might think it is about one thing or another but it is really about how we measure and value success.

The common theme, throughout all of this work I’ve done – the larger motivation and the measure of success beyond any given day-to-day task or even any given project – has been the idea of creating conditions where things may be revisited.

There is not enough time to discuss all of that work in detail so instead I am going to focus on just four projects and I am going to try to be quick about it.

I am proud of the work I have done and I can speak about it at length, with anyone who wants to after this talk, but there is a larger conversation which brackets these projects that I think is important, that is more important, to address than the handful of things I have done over the years.

In 2012 I moved to New York City to join Seb Chan and the Digital and Emerging Media team at the Cooper Hewitt.

Up to that point I had been growing more and more involved with the museum sector but always as an outsider and without any of the responsibility. The chance to work with Seb, and with Bill Moggridge, was a chance to roll up my sleeves and try to pitch in.

It might seem like a provocation to want to talk about the Pen ten years after its launch and five years after it was pulled from the floor and at ACMI of all places. It is not.

The reason for talking about the Pen is less about the Pen itself than a whole series of beliefs that the Pen forced us to give voice to as it was developed. Also, the Pen and the Lens are really just two sides of the same coin.

Before we built the Pen we reimagined the museum’s online presence, specifically centered on the collection.

One day someone from the museum asked me what we were trying to accomplish with all this work and I said that the ultimate goal was for people to link to us. Cooper Hewitt was, and still is, the “national design museum” but when people link to an Eames chair, for example, they link to Wikipedia. Wikipedia has earned those links fair and square.

But we should aspire to people linking to us in the same way.

Ultimately what we were trying to do was ensure that every object in the collection had a stable, permanent and reliable home on the web. We were trying to use the web to give these objects, which may or may not have had any meaningful metadata or imagery associated with them, weight and mass in a universe such that other things might be able to start orbiting around them.

We were not specific in how that behavior ought to manifest itself – it would be different for different people and it could be as simple as sharing a link – but we were clear that these behaviors should be possible.

We were designing for recall.

We were designing for recall because without that recall nothing about the Pen, about the ability to tap, collect and revisit objects seen during a visit to the museum, would work.

We were also designing for recall as a way to get out of people’s way. We were designing the Pen as a way to allow people to visit the museum without spending all their time figuring out how to remember their visit.

After the second or third use, the Pen was meant to dissolve into normalcy. The Pen was meant to enable an experience – remembering – not to be the experience itself.

For those of you who have never seen or heard of the Pen this is what it looked like. It was a custom-built capacitive stylus with a battery life of 28 days and an NFC reader at one end which visitors used to save items throughout the museum. We built this from scratch and many people will tell you it was an expensive mistake. I am not one of those people. I don’t think it was a mistake nor do I think it was overly expensive, not when you compare it to anything else the museum sector has done and certainly not when you amortize the cost and learnings over time.

The museum did have to assume the overall management role for the project 18 months before it launched, though.

That was not part of the plan and it was very, very complicated. This is the “how” of what we did. These were all the moving parts. We did not anticipate having to assume these additional responsibilities and this amount of cat-herding but no one else, and certainly not any one of these individual vendors in isolation, was coming to save us.

If that was the how this was the “why” or the “what” of what we were trying to build. It was much less complicated than the actual implementation details.

This is the piece that I worked on day-to-day. This is what powered the Pen.

This was, in fact, the collection website. The two were functionally indistinguishable because they were, in fact, the same thing.

This is what the external vendors worked on. They were free to operate as needed within the scope of their work but ultimately that work would be layered, and dependent, on our work.

Put another way: Our future work would not be dependent on their past work. Our ability to change would not be limited by their inability or unwillingness to change.

The Pen was always a controversial project inside the museum and, ultimately, it did not survive everything that followed in the wake of the COVID-19 pandemic.

That does not mean the Pen was not popular with people. It was incredibly popular with people, just not people at the museum.

This is borne out in both the usage numbers and the conversion rates seen first at Cooper Hewitt and subsequently at ACMI with the launch of the Lens. There is a demonstrable appetite on the part of our visitors for this sort of thing.

In 2016 the Cooper Hewitt traveled the Pen to London.

In many ways this is what I am most proud of precisely because, having left the museum and moved back to the West Coast in 2015, I had nothing to do with it save for answering some technical questions and offering encouragement.

That was the larger meta project of the Pen. To put in place the human, technical and operational (meaning financial) capacity for the work to outlast and evolve beyond the team which created it.

After both Seb and I left the Cooper Hewitt the people who were still there took all of the work which had been done in the service of the reopening, and which as a matter of expediency had been purpose-fit for that reopening, and retrofit it into a white box implementation tailored for the London Biennale.

Cooper Hewitt’s contribution to that event was to allow visitors to use the Pen to collect (and revisit) works from all the other participating institutions and to develop a customized version of all the software and visitor services systems necessary for the Pen to work.

They did this in-house and on staff-time, with a single external contractor, in between all their other responsibilities, and that work took a fraction of the amount of time the Pen took to develop.

To those of you in the museum sector, I invite you to take a moment to imagine the size of the change order from an external vendor to do that same work.

As mentioned, in 2015 I returned to the West Coast, and to the private sector, working for a mapping startup called Mapzen. We were trying to build the totality of the Google Maps pyramid, which is really an iceberg, using only openly licensed data and software.

We were operating as a research and development project inside Samsung. For them Mapzen was a hedge against the annual licensing fees they paid for Google services, like Maps, on their phones.

Earlier, I mentioned the notion of stability and permanence for collection records on the web. That’s what I did at Mapzen but for places. Basically, the marriage of openly licensed coordinate data and the idea of stable, permanent identifiers on the web not so much for geographies but for places.

The work centered on the idea that we can usually agree a place exists even if we disagree what its boundaries are. Or in the case of history, precisely because those boundaries and those ideas of place change over time.

The point was to give that agreement that something exists stability and permanence rather than any specific details which may or may not be in debate. We give the thing we agree on weight and mass in the universe which then allows all the disagreements to orbit it independent of interpretation.

We called this work a “gazetteer” which is just a fancy name for a phone book of places, rather than people. We called this gazetteer Who’s On First because I am a child. Speaking of children, if you or your kids have ever used the mapping functionality of Snapchat it is full of Who’s On First data. We use Who’s On First data extensively at SFO Museum.
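To make that a little more concrete, here is a toy sketch of what a gazetteer record might look like. It is modelled loosely on a Who’s On First record, which is just a GeoJSON Feature, but the specific ID values and the parent relationship shown here are illustrative rather than authoritative, and the real schema carries many more properties.

```python
# A toy sketch of a gazetteer record, loosely in the spirit of Who's On First:
# a GeoJSON Feature whose permanent numeric identifier is the thing everyone
# agrees on, while names, boundaries and other details remain free to change.
# The ID values and the parent relationship below are illustrative.
record = {
    "type": "Feature",
    "properties": {
        "wof:id": 85922583,           # the stable, permanent identifier
        "wof:name": "San Francisco",  # a name, which can be disputed or change
        "wof:placetype": "locality",  # what kind of place this is
        "wof:parent_id": 85688637,    # the (illustrative) region it belongs to
    },
    "geometry": {
        "type": "Point",
        "coordinates": [-122.4194, 37.7749],  # a centroid; boundaries are a separate argument
    },
}

# Anything that wants to orbit this place (a photo, an exhibition, a flight)
# only needs to hold on to the identifier, not the geometry.
print(record["properties"]["wof:id"])
```

The important part is not the schema. The important part is that the identifier is the contract and everything else is allowed to be wrong, incomplete or argued about.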

All the work we did at Mapzen was governed by something I referred to as the reset to zero.

Imagine you and I have a business that depends on a third-party company, for example Mapzen, to provide mapping services. As is often the case, and was the case for Mapzen in 2018, that company goes out of business. The next day you and I come in to the office, look at each other and then ask: “Do you have the Google Maps account information or do I?”

That is the reset to zero. The idea that you have no choice but to return to a vendor you don’t really want to use or whose cost is prohibitive but for which there is no alternative because the alternative simply vanished overnight.

It would be nice to believe that Mapzen shut down with, say, a reset to five. The reality is that Mapzen shut down with about a reset to two, maybe three in some cases.

Which is to say almost all the projects we worked on at Mapzen are still out there, are still being developed as open source projects and a few as commercial ventures in their own right. None of these projects are push-button easy enough to spin up without effort but if you were inclined you could run them on your own servers today.

That feels like a kind of slow and grinding progress. It was made possible by a very deliberate sensibility, and an understanding of the fragility of the work we were doing, which informed how that work was done.

We built the work in such a way that it might be revisited, even in the event of failure.

In 2018 I joined SFO Museum. On my first day there I was walking the terminals with the deputy director and made a casual joke about how slowly things happen in museums, what we in the business affectionately refer to as “museum time”. They looked at me and, with a straight face, ventured that “airport time” was probably even slower than “museum time”.

In that moment I wondered whether I had made a terrible, terrible mistake. But I also think it’s a good story because an airport forces a museum to think about time in a way that museums usually don’t, and don’t want to.

Which is a convenient segue to a project I started working on, two years ago, to develop an in-house system for doing interactive wayfinding and routing at SFO Museum.

We are doing this using a very simple network graph of nodes and waypoints mapping galleries and public art works to common areas throughout the airport.

We are not trying to create a network that a robot could use to deliver coffee to you at your gate but something simple enough that you could use to get to the middle of a boarding area and then look around for a painting.
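To make that concrete, here is a toy sketch of the kind of graph I mean. The waypoint names and connections are invented for the sake of the example; the real thing is bigger but not fundamentally more complicated than this.

```python
from collections import deque

# A toy sketch of a wayfinding graph: named waypoints (gates, galleries,
# common areas) and the connections between them. The names and connections
# here are invented for illustration.
graph = {
    "hotel_lobby": ["security_a"],
    "security_a": ["hotel_lobby", "gallery_2b", "boarding_area_b"],
    "gallery_2b": ["security_a", "boarding_area_b"],
    "boarding_area_b": ["security_a", "gallery_2b", "gate_b25"],
    "gate_b25": ["boarding_area_b"],
}

def route(start, end):
    """Breadth-first search for the shortest list of waypoints between two nodes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for next_stop in graph.get(path[-1], []):
            if next_stop not in seen:
                seen.add(next_stop)
                queue.append(path + [next_stop])
    return None

print(route("hotel_lobby", "gate_b25"))
# ['hotel_lobby', 'security_a', 'boarding_area_b', 'gate_b25']
```

That is the whole trick: enough structure to say “head towards boarding area B and then look around” and nothing more.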

The reason this work is important for SFO Museum is that if few people understand that there is a museum at the airport, even fewer understand that there are two dozen galleries and over a hundred public art works located throughout the airport.

The airport itself is, in fact, the museum.

The wayfinding project is about developing the infrastructure and the interfaces to show people all the things they didn’t even know they didn’t see on their way to catch a flight.

Did you know there are 47 museum related things to see between the hotel bar and gate B25?

Did you know that there are prints by Richard Diebenkorn in both the hotel lobby and in the waiting area for Gate B3, around the corner from the Vietnamese sandwich place?

Almost no one does.

These are the things we are trying to surface for passengers.

If few people understand that there is a museum at the airport and fewer still understand that there are two dozen galleries throughout the terminals almost no one knows there have been even more galleries over time or that we have mounted over 2,000 exhibitions since 1980.

This is our challenge.

We are competing for people’s attention as they are racing through the airport trying to catch a flight or managing any number of other distractions.

We will never win that competition. Ever.

Which means that we need to think about how we communicate these things to passengers not on our terms or on our schedules but on theirs.

To that end we have been experimenting with generating on-demand publications, from wayfinding routes, that people can print or download to their mobile devices to look at on the plane, after they’ve left the airport or before they’ve arrived.

The idea being that this is the sort of thing a teacher might print for students in advance of a school visit or something you might send to friends or family who you know will have a long layover at the airport.

We also generate these publications as e-books which are better suited for mobile devices.

The thinking here is that when you finally arrive at your gate and have managed to catch your breath you can request one of these books and download it before you get on the plane.

We would like nothing more than for you to be able to board an airplane with a 500 page e-book of not just the current exhibitions and public art you didn’t have time to see that day but also everything you didn’t even know exists.

We know and actually expect people to leave the building. That in and of itself is unusual for a museum.

We also know that a meaningful number of people will fly back through, or out of SFO, again.

We enjoy repeat visitation by default, also unusual for a museum, and it is fostering awareness of the museum during those future visits that we are trying to promote.

Because this was a conference whose attendees were largely museum and cultural heritage people, when I said “we enjoy repeat visitation by default” there was an audible gasp from the audience.

We’ve also been experimenting with novel broadcast channels.

In 2025 the reality of social media has turned the promise of social media in to a raging tire fire.

Despite that there is a lot of interesting work being done by a lot of different people to try and recapture some of that promise while guarding against all the awfulness of the moment. This work is still nascent and full of projects with confusing and sometimes stupid names. One of those projects is ActivityPub.

ActivityPub is plumbing. It is plumbing built on top of the web, which coordinates the ability for independently operated services, for example a collection of federated social media services, to exchange messages with each other in a decentralized manner.

If that sounds a bit like how email works that’s because it is. This is often what ActivityPub compares itself to. The salient point being that you don’t have to run your own email server but you can and it will still be able to interoperate with people who don’t.
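To give you a sense of what that plumbing actually looks like, here is a toy sketch of the kind of “actor” document an ActivityPub server publishes for each account. The domain and identifiers are invented; the field names come from the ActivityStreams and ActivityPub specifications.

```python
# A toy sketch of an ActivityPub "actor" document for a bot-style account.
# The domain and IDs are invented; the field names come from the
# ActivityStreams / ActivityPub specifications.
actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://museum.example.org/ap/objects/1511892",
    "type": "Service",  # a bot-style account rather than a Person
    "preferredUsername": "object-1511892",
    "name": "An object from the collection",
    "inbox": "https://museum.example.org/ap/objects/1511892/inbox",
    "outbox": "https://museum.example.org/ap/objects/1511892/outbox",
}
```

Everything else – following, replies, delivery – is a matter of POSTing and GETting documents like this one between servers.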

So SFO Museum has built its own ActivityPub server for publishing social media style messages.

In doing so, it means that we can create (and have created) social media accounts for 60,000 objects in our collection, including this roll of Hello Kitty toilet paper from EVA Air.

If that sounds absurd consider that this is just another, arguably better and more approachable, way in which we lend the works in our collections weight and mass in the universe so that people might orbit them.

Ask yourself which commercial social media service would allow you to create 60,000 accounts on their platform.

Likewise we have created accounts for 6,000 unique aircraft – identified by their tail numbers – that have flown in and out of SFO over the years.

Every day they pick a random flight they have flown and broadcast a message along with something in our collection related to that flight.

And we have done the same again for all the terminals, past and present, at SFO. They broadcast installation photos from random exhibitions that they have hosted.
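To give a sense of the mechanics, here is a toy sketch of what composing one of those daily messages might look like. The tail number, flights and URLs are all invented; the message is wrapped in an ActivityStreams “Create” activity containing a “Note”, which is the same shape of message the rest of the fediverse already understands.

```python
import random
from datetime import datetime, timezone

# A toy sketch of a daily broadcast: pick a random flight for an aircraft and
# wrap it in an ActivityStreams "Create" activity containing a "Note".
# The tail number, flights and URLs below are invented for illustration.
flights = [
    {"date": "1978-06-14", "route": "SFO to HNL", "related": "https://museum.example.org/objects/123"},
    {"date": "1985-03-02", "route": "SFO to SYD", "related": "https://museum.example.org/objects/456"},
]

def daily_note(tail_number):
    flight = random.choice(flights)
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": f"https://museum.example.org/ap/aircraft/{tail_number}",
        "object": {
            "type": "Note",
            "content": f"On {flight['date']} I flew {flight['route']}. "
                       f"Here is something from the collection related to that flight: {flight['related']}",
            "published": datetime.now(timezone.utc).isoformat(),
        },
    }

# Delivering the activity to followers' inboxes is the server's job.
print(daily_note("N12345"))
```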

You might be able to see where all this work is going.

If you look closely you should also be able to see that, for all intents and purposes, all these ActivityPub accounts are both the Pen and the Lens, but better.

That’s a story for another day, though.

In the meantime, we have also been proactive in archiving our traditional social media accounts, particularly our Twitter and Instagram accounts, and hosting them on our own servers.

Bao Li, who manages these accounts, has been doing this work for over twelve years. Their work represents a valuable contribution to the museum’s programming and outreach that is worth preserving. We don’t say that enough about our social media people.

To highlight this work we created a dedicated ActivityPub account to republish a random Instagram post every day providing an avenue in to the museum’s past efforts for people who have only just discovered us.

Where the initial ActivityPub server took about two weeks to implement, stretched over the course of a couple of months, this account and others like it take maybe an hour to launch.

So, like I said, creating conditions where things may be revisited and developing the capacity to allow those conditions to bear fruit over time.

It was suggested to me that it might be useful for me to focus more on practice and less on theory during this talk. If the goal is to create belief around what is possible we do that goal a disservice by not centering it in concrete proofs and examples.

At the same time the risk in simply standing up here and recounting tips, tricks and war-stories is two-fold.

First, it gets kind of boring.

Second, it is ultimately just a superhero story.

As often as not these stories end up being little more than “12 highly successful habits of some guy whose circumstances are not my own”. As a consequence it becomes easier to dismiss anything said as not being applicable beyond the boundaries of the tale.

The rest of this talk is going to be less about the day-to-day tactical strategies necessary to get things done and more about the sensibilities and the means which allow the work to succeed in the first place.

It is a talk about why it is necessary to promote and nurture that work into a broader scaffolding by which the entire notion of a cultural heritage might outlast the reluctance and fickleness of the present.

I have come to think that one of the defining characteristics of the modern condition is the tyranny of the analogue. Not "analog" like media but "analogue" like the linguistic or conceptual parallel.

I chose the phrase "tyranny of the analogue" deliberately to highlight the practice of associating whatever it is we are trying to promote by aligning it to the language most closely tied to the zeitgeist of the moment. But I am not pointing fingers. The worst part, for me, is that almost everyone does it these days regardless of the subject at hand.

In this way everything we speak about starts to take on a uniformity of urgency which aside from being unfair and unrealistic is confusing.

The theme of this symposium is broadly about infrastructure and sustainable systems so I am going to take a moment to try and articulate, clearly, what I mean when I speak about them.

I think infrastructure is a system which makes something else possible. It is something which enables higher order effects, and which, for all intents and purposes, can be taken for granted.

Redundancies are complementary or secondary systems that accompany an infrastructure which are designed to address imagined failures.

Resiliency is the capacity to address and adapt to unimagined failures.

Out of necessity resiliency is less about codification than it is about tolerance and temperament because, in the moment at least, it traffics in the unknown.

But an important characteristic of resiliency is precisely the understanding of what, and not how, something we call “infrastructure” exists to accomplish.

If we take it as a given that systems may break, or at least buckle, under unforeseen circumstances then a truly catastrophic failure is one where accomplishing a task outside of ideal conditions becomes impossible.

In 2025, given the state of the world, we may be forced to accept or at least prioritize the idea that the internet is not as important as some other things which we hold, or need, to be inalienable truths.

The last time I spoke at ACMI, in 2017, I pointed out that all our work on and about the internet is predicated on constant, on-demand electricity. We have seen, though, that in our modern times when a cold war turns hot it isn’t long before the power goes out.

As such, it seems reasonable to say that while the internet may be redundant by design it is becoming less and less clear how resilient it truly is. So, even just as an exercise, it might be worth recognizing the “what” and the “why” of the internet that we have come to value because the “how” is feeling a bit tenuous these days.

In a similar vein, these are three questions I have started asking people in the cultural heritage sector lately.

What does digital mean to you? What is the measure of success for any given initiative labeled “digital”? How long are you willing and able to wait for that success?

You might be looking at these questions and thinking to yourself they are the kinds of questions usually reserved for mission statements, organizational platitudes and the output of large language models.

That is sort of my point. This is how we have typically come to understand these questions and their answers, when in fact they are profoundly important, but meaningfully different, from one organization to another.

I actually think being able to answer these questions, on one’s own terms, is probably the most important thing anyone in the cultural heritage sector can do for themselves, their organizations and their peers.

There are many equally valid answers to these questions but because “digital” has been, and in many cases remains, the zeitgeist around which everything else orbits we end up tossing around the same terms with completely different intentions.

The cultural heritage sector has, of late, had a bad habit of looking to the private sector to help answer these questions. I find fault with this practice not because the private sector has “bad” answers but because their answers are rarely applicable to the cultural heritage sector.

The problem with modeling everything on the private sector is that there are as many companies as not who aren’t thinking about an operational timeline longer than a few years.

Sometimes that is because the marketplace doesn’t respond to whatever a company is offering but just as often it is because a business is built to “flip”, to be sold or acquired by another company who may or may not keep that business running.

If that is your business model, as a private sector venture, then you really don’t care if that’s the same business model as one of your vendors or your suppliers. All you care about is that your vendors don’t flip before you do.

Part of the reason for raising these issues is to distinguish between cultural heritage organizations who see digital initiatives in the same light versus those that don’t. That distinction is important because absent clarity it is almost impossible to talk about how to make digital initiatives sustainable or whether it’s even necessary.

Another important consideration about sustainability, and the mismatch between the private sector and the cultural heritage sector, is hiring and staffing.

There are some enterprises whose size and reach justify their enormous headcount. But as often as not the private sector, and in particular the technology sector, hires as a performative act meant to signal size and strength to investors and competitors. Sometimes people are hired for no other reason than to prevent them from signing with someone else.

The cultural heritage sector is in no position, financially, to parrot these practices but more importantly: They don’t need to.

I take an expansive view of what constitutes “digital”, encompassing automation, storage, retrieval, audience, community, networks and everything in between.

I also subscribe to the idea that success is measured both in the long game and in the systems and structures which enable that long game to be financially and organizationally feasible.

The reason I think this is important is because I believe that the thing which distinguishes “culture” from “entertainment” is the act of revisiting. This distinction is not meant as a value judgment. There is a time and a place for both. The point is simply to recognize what distinguishes one from the other.

Crucially, entertainment becomes culture all the time but it is in the act of revisiting that this transformation occurs. What defines culture, then, is less the subject itself than the act of engaging with, of revisiting, that subject.

As such I believe that the role and the function of cultural heritage institutions is to foster revisiting. If we are not doing that then I think we need to ask ourselves first: Why we exist; second: What distinguishes us from common entertainment in a vast and over-saturated attention economy; and third: Whether we are even remotely prepared to compete in that landscape.

To that end, I understand the web, whose distinguishing characteristic is asynchronous recall on a global scale, as the technology which makes revisiting possible in a way that has genuinely never existed before the web.

That bears repeating: The web is the how which makes the what – revisiting – possible in a way that was previously impossible.

It makes these things possible because the barriers to entry and the costs of distribution are as close to zero as they’ve ever been, certainly when compared to what came before the web.

It’s not a very complicated argument. It is really just a story about means. But it is for exactly this reason that the cultural heritage sector has a vested interest in understanding the web, and the technologies and staffing requirements which enable it, as core elements in the function of our missions rather than just a service we outsource to satisfy the whim and folly of the present.

This is a tricky subject because that whimsy is historically how the cultural heritage sector has approached its relationship with technology. We tend to latch on to whatever is new and shiny, trying to integrate it into our programming, as a way to seem “modern” and “relevant” in the eyes of younger audiences.

But a funny thing happened 30 years ago. The web happened and it was the cool, new shiny thing so the sector quickly became enamored with it. Unlike most of the cool, new shiny things that came before it, though, the web also happened to be the technology most closely aligned with our values and the reasons we tell ourselves, or at least the public, that we exist.

The web makes possible things – like our missions as cultural heritage organizations – which before were impossible or so financially prohibitive as to seem impossible. The web is not just cool, it is viable.

The internet and the web have realized a kind of taken-for-granted normalcy in a remarkably short amount of time. As someone who lived through that transition and who was, and remains, excited about what the internet can make possible I am happy that the post-internet generations are so-called for a reason.

The whole point of being able to take something for granted is not having to suffer through an endless celebration of its very being. The challenge though is that by not doing that, even a little bit, I think we have forgotten those things which distinguished the web and have made it so successful.

The first is its simplicity. At its core it is nothing more than a network of connected, but independent, documents. Although we have engineered an eye-watering amount of complexity into contemporary web browsers, at its core the web is straightforward enough that anyone can write a web page in an afternoon and many people could write a rudimentary web browser in a weekend.
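Just to make the point about simplicity concrete: the transport layer underneath all of this fits in a few lines. Here is a toy, hand-written HTTP request sent over a plain socket, which is more or less where a weekend web browser project would start.

```python
import socket

# A toy demonstration of how small the plumbing underneath a web browser is:
# a hand-written HTTP/1.1 request sent over a plain TCP socket.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.org\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("example.org", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

# Print the status line, e.g. "HTTP/1.1 200 OK". Everything a rudimentary
# browser does after this point is parsing and drawing.
print(response.decode("utf-8", errors="replace").splitlines()[0])
```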

The second reason is more important: The web was given to the world as a gift.

It was given to the world by Tim Berners-Lee, with no licensing restrictions, and it is hard to overstate how important that decision was. Tim Berners-Lee explained how web browsers and web servers should work and communicate with each other and that was about it. Anyone was free to participate on their own terms without the need to ask for permission or blessings and without the need to pay tribute in order to do so.

It is difficult to understand the impact of those actions in a world where the web is taken for granted and where we don’t talk about what things looked like before the web.

Another challenge is that we only have so much time in our lives for foundational myths and the time it takes to re-enforce them before they become trite, tiresome and ultimately tuned out. Stories about absence are a hard sell.

Everyone likes to argue about what does or doesn’t constitute an inflection point in human history. Events which mark a separation between the past and the present, which is usually a shorthand for “things we take for granted”.

Even if we agree that the web, and everything it has made possible, was a genuine inflection point it is still not the new, shiny thing anymore and another characteristic of the modern condition is that we are all addicted to inflection points.

Everyone wants to be present at the moment of creation, to say “I was there” rather than simply inheriting all the scars and war stories of someone else’s past.

One of the ways this dynamic has manifested itself when it comes to the web, and to the internet in general, is the idea that if you share, publish, or do something online and it is not immediately successful or “viral” then it is deemed a failure.

As a result everything we do strives to become its own manufactured inflection point.

Which is insane on simple first principles but also ignores just how difficult and prohibitive it used to be to publish anything with the equivalent reach of the internet before the web.

Massive, worldwide, overnight success – sometimes referred to as hockey stick growth – is a good story. If you happen to be party to it, like I was at Flickr, it can be as rewarding as it is challenging.

It is also not normal.

There are many reasons why it’s not normal but let me focus on just one: Sometimes it simply takes a while for people to respond to an idea.

What the web has made possible are the economics of keeping something, something which has not enjoyed “hockey stick growth”, around long enough for people to warm up to it. Or to survive long past the moment when people may have grown tired of it.

If your goal is to build something which is designed to flip inside of ten years, like many things in the private sector, that may not seem like a very compelling argument.

If, however, your goal is to build something to match the longevity of the cultural heritage sector, to meet the goal of fostering revisiting, or for novel ideas to outlast the reluctance of the present and to do so at a global scale, or really any scale larger than shouting distance, then I will challenge you to find a better vehicle for doing so than the internet, and the web in particular.

In its simplest form the technical architecture of the web consists of a document layer – the stuff you see in a browser – and a transport layer.

The craziest part about the transport layer, which most people don’t see, is that it underpins nearly everything these days whether or not it’s used to deliver web pages. Its simplicity and its open licensing mean it has become the de facto transport layer for nearly everything.

Even when a service or a product develops a custom transport mechanism as often as not they still end up using a web rendering layer for their graphical interfaces.

As a general rule plywood and two-by-fours are the structural underpinnings of nearly everything in the built environment no matter how posh, luxurious or expensive the surface trappings. For example, this building we have all gathered in.

In that same way at least one, but usually both, of the core technologies that define the web still run most of what we call “modern” despite it being old and outdated. Crucially, the reason that plywood, two-by-fours and the web are so important is not because they self-assemble. They don’t.

They are important because the qualities which define them make the process of assembling and, critically, re-assembling affordable in time, money and cognitive overhead. They are the means not the ends.

That may not seem “sexy” or “exciting” until you consider that the alternative, certainly for the cultural heritage sector, is usually vendor lock-in and six-figure change orders.

Everything I’ve worked on, from the work I’ve shown you to the work I haven’t, takes the web as its foundational layer. The web is what makes it possible for teams as small as those I’ve described to do the work they do. The web is also what makes it possible to weather mistakes and bad decisions.

Not web pages and web servers per se, but the ease of production, deployment and distribution paired with licensing, or more importantly not having to worry about the cost and burden of renegotiating the terms of that licensing. This is what allows ideas to be made manifest on a schedule and budget that can make even a failed execution valuable because it proves (or disproves) something which otherwise would have remained a hypothetical.

If there is one software engineering lesson that I took with me from Flickr to the Cooper Hewitt and to every place since it is this: The speed with which a code base can change is the most important consideration, above all else. Which means three things:

First, it is taken as a given that, sooner or later, circumstances will necessitate change. That might be a security issue. That might be features which had previously been deemed unnecessary now being considered the most important thing in the world. It might be, you should be so lucky, sudden and unexpected growth.

Second, time is still a finite constant so when we are talking about the need for change we are gating that change inside a boundary.

Finally, it is more valuable to prioritize temperament over codification.

That last comment is the kind of thing you say when you want to spark a holy war in software engineering and management circles. Lots of people will disagree with me about that and, in some cases, they will be correct.

Most, if not all, product development trends towards commodification, and codifying the boundaries of what something can or can’t do is an important part of that. This is the process of taking the work of an individual and turning it into the work of arbitrary, replaceable cogs.

This is why I have spent as much time as I have trying to drive home the necessity to be able to answer the question of what digital technologies mean, and are meant to accomplish, for you and your organization. If the goal is no more ambitious than something which produces passive income then by all means, codify away.

But if the goal is to develop a capacity which can adapt to the operational, financial and intellectual churn of the cultural heritage sector, not to mention culture itself, then I don’t see how we can not preference temperament.

Which brings us to this guy...

I was told there are some things we don’t talk about here so I am not going to mention he who shall not be named, by name. But I don’t think it’s possible to talk about or understand the technology landscape, and in particular the software development landscape, of the last ten to fifteen years without talking about this guy.

I wake up every morning imagining a world where this guy never happened because I think at its core this guy’s story is impossible to understand as anything but a celebration of the class system, at best, and the caste system at worst.

I know these are fighting words, because a lot of people grew up with this guy, but I don’t really know how else to understand the voluntary submission of one’s self to a magical “sorting hat” except to say it was probably the canary in the coal mine of the artificial intelligence we’ve all fallen into in 2025.

As if that weren’t bad enough the world is then further reduced to wizards and, let’s not mince words, untouchables whose only hope in life is to be graced by the presence and noblesse oblige of their betters, akin to a mortal human in the presence of an invisible Greek god.

His is a story of rote-learning and codification, dressed up as magic and spell-casting, in the service of crystal-palace world views and uniform organizing principles meted out by the benevolence of its practitioners.

To my great surprise there was some genuine uncertainty about who he who shall not be named is...

These are attractive father-knows-best power fantasies for a lot of people but especially people working in technology and so we see the craft and rituals of an anointed tribe of very special people metastasizing into what Kellan Elliott-McCrea, in his excellent series of essays titled Software and its Discontents, calls “aspirational complexity”. He writes:

As an industry we’ve always been enamored with new technology and shiny objects. For years it was almost definitional, otherwise why did you go into this industry? Interestingly, even as the job has mainstreamed, the infatuation with complexity has remained, and even grown.

First, complexity lies at the heart of our industry’s mythologies. New people joining the industry are taught our myths about Google, Facebook, Amazon, and a sense that these companies’ approaches are what software is “supposed to” look like. And fewer and fewer people are in position to have a wide enough scope of responsibility to learn pragmatic counter lessons the hard way.

Second, during the era of abundance, when OpEx was easier to deploy than CapEx, cloud and SaaS exploded. These services come backed with significant marketing budgets whose job is to convince you that you need the complexity. Why deploy a database when you could deploy a non-relational data cluster, why deploy a server, when you could deploy a Kubernetes cluster, why build simple web pages when you could use React...

...We’ve developed an aesthetics of complexity: the sense that a good system is a complex one and the idea if you aren’t on the latest technology you’re wasting your time, and potentially damaging your career.

So, the recent history of technology and software development becomes a story about the short-term over-indulgence and bedazzling of a group of people – “rock stars”, “ninjas” and “10x engineers” – with he who shall not be named as mascot and marketing parrot.

A group of people who we are coming to see were probably also just foot soldiers and cannon fodder for the training sets of machine learning and artificial intelligence projects and the long-term planning of a larger private sector initiative to document, codify and replace most human workers.

In case it’s not clear everything about the last slide, from top to bottom, was a very deliberate provocation. In doing so I am opening myself up to the critique that, in its simplest form, I am just an old man shouting at the sky.

I am not here to deny the future, progress or novelty but I am here to point out that futures are not inevitable and that futures which are marketed to us as inevitabilities ought to be treated as suspect, ought to be treated as political.

I am here to call bullshit on a rhetoric of the future which seems to depend on erasing everything which came before it simply because it is not the future or, to use that most-cherished phrase of the technology class, modern.

I am here to call bullshit on a future which sacrifices the real and genuine advances of the past, things which made a meaningful difference in how we live our lives and what we are able to accomplish, for passing conveniences and novelties whose costs are unknown or, as is often the case in 2025, deliberately obscured.

The reason it is important to speak about these things is to say, and to try to make you believe, that: It doesn’t have to be this way. It never did.

It is easy to look around, in 2025, and feel like the web has been outpaced and in many ways it has. That is the nature of technology but it often feels like the web is gaslighting itself or being made to gaslight itself into irrelevance.

Another way to understand the web, though, is that it represents a tiny break, or at least a quivering, in a long march towards corporate walled gardens and vendor lock-in that started almost as soon as long-distance telecommunications, if not the industrial revolution, became a reality.

It is important to recognize that none of the established media or telecommunications companies imagined the web. If they imagined anything like the web it was a crippled version reflecting the fears and desires of the marketplace rather than anything people actually wanted to use.

That doesn’t mean people didn’t use those offerings but the speed with which they were abandoned the moment the web came along should tell us something.

The web happened to these companies and ever since that moment they have been maneuvering to fence it back in and make it something they can control and shape.

If you are young enough not to have had to live through the insanity that Microsoft inflicted on the world with Internet Explorer 6 then you are lucky.

It means you’ve only had to live through the insanity that Google has wrought with Chrome, the crippling lack of functionality for web applications on iOS devices or the bitter seeds Facebook has sown with React. Or Twitter not allowing links to Instagram, or Instagram to TikTok. The list goes on. The companies come and go, the behavior remains the same.

But it is not enough to simply point fingers and lay blame at the feet of platform vendors.

We are long past the moment where we need to admit that we did this to ourselves, where this is the Faustian bargain with technology platforms whose behavior we have conveniently un-seen in the service of reach, reward and imagined audiences instead of doing the hard work to develop internal capacity. We are being forced to pay for that bargain now.

As part of that reckoning it is worth remembering that the web, the web that was new and shiny and full of promise, never went anywhere. But like plywood it does not self-actualize, nor does it self-discover. And this seems to me to be the elephant in the room.

The reason I keep stressing the importance of understanding and articulating what digital technologies represent for an organization is that I think for many of them those technologies are seen as little more than the cost of getting noticed.

They are the means to mask the stark reality that we don’t know how to stand out, or how to champion our programming in a media landscape that resembles the micro-plastics which occupy every part of the food chain and our bodies now.

And the idea of the web outside of established and reliable discovery channels is a difficult prospect for everyone. If we only imagine the web as the output of benevolent corporate overlords who promise us to not be evil – until they are – then I agree that things might seem a bit grim right now.

The original promise of reach and discoverability – the illusory and often false promise of overnight success, of maybe even going viral – is being undermined on all fronts. The alternatives which are beginning to occupy that space are even less certain and even more transactional in their nature.

If we want any chance of escaping this quicksand I think we are all, collectively, going to have to re-imagine what “discovery” means and how its success is measured in both time and impact. I think we are probably going to need to spend a little more time than we have in the past telling ourselves, our peers and our audiences why the web is different. Why it is special and worth preserving even if it lacks the sparkle, shine and the immediacy of the new because the alternatives are feeling pretty retrograde if not worse.

We “go where the eyeballs are” we tell ourselves, to try to figure out where the cool kids are. And then, in our digital efforts, we try to crash that party like an awkward dad. You won’t believe what happens next.

We do this instead of simply being good at what we do – because we are good at what we do – and in finding ways to make being patient enough for people to discover that work a practical, an operational and financial, reality. And then to make those realities durable enough to foster discovery and revisiting on the time scale of the humanities rather than the marketplace.

In the meantime we risk abandoning the one platform that was, and still is, purpose-built to make those kinds of efforts possible.

This last bit of the talk was choppier than I would have liked. Because I was bumping up against the time for the keynote I did an abbreviated version of the text that follows. In retrospect I should have just stood there and kept talking. My apologies to the Adafruit and Tangara crews. It won't happen again.

A few years ago a group of museum directors asked me what I would say the “future of museums” looked like. What I said then, and what I would still say now, is: go to the new products section of the Adafruit website and extrapolate from there.

Adafruit is a hardware and electronics manufacturer and reseller in New York City started by Limor Fried. They are one of many resellers operating today but what has always set Adafruit apart for me is their commitment to promoting and nurturing a community of makers and tinkerers and to fostering the infrastructure and the open-by-default licensing which makes projects of increasing functionality and complexity both affordable and available to as broad an audience as possible.

To make this idea concrete I want to show you something which finally arrived on my doorstep last month. It traveled all the way from Australia to California and so I kind of feel like I am bringing it home for a visit.

This is the Tangara. It is a contemporary take on the original scroll wheel iPod music player. It was developed by four people, just up the road, in Sydney. Both the hardware and software specifications are open source. It required a round of crowd-funding to cover production costs overseas, but these devices still retail for only $250 USD which means the unit cost, factoring in everything besides materials that is necessary to sell a product in foreign markets complete with regulatory approval, is probably around $175.

To recap: A functional iPod equivalent with 2TB of storage, a screen, an intuitive and much-loved touch-based interface, Bluetooth connectivity, the ability to customize the software itself and without any of the surveillance capitalism which has infected everything else in life. 250 bucks. Four people. As a finished product, it is still rough around the edges but I think this is amazing and this is what I mean when I talk about “extrapolating the future” from places like Adafruit (or Tangara). Good work, Australia!

Now that ACMI has launched the Lens Seb is obliged to argue that it is inherently superior to the Pen which he and I helped launch at Cooper Hewitt. But inasmuch as Cooper Hewitt did the opposite of ACMI and developed an actual hardware device, the great tragedy of the Pen was the failure to do anything more with it once they had. Remember, the Cooper Hewitt white-boxed its entire software stack to travel the Pen with three people and one contractor in less than a year.

If the Tangara is what four random people in Sydney can do in a year and a half, in 2024, then imagine what the Pen might have become, or birthed, after ten years of iterating at the speed of hardware advances and miniaturization. This is what I meant when I said that commodification is probably the one place we should look to, and learn from, the private sector.

Ask yourself what $175 or even $250 buys you from a technology vendor in 2025. In many cases that is probably just the markup on the snacks for the meeting to determine the scope of work for a project.

The reason I want to finish by mentioning the open hardware community is that it feels like one of the very few efforts to expand the promise of the web, to expand it out in to the physical world, without also forfeiting or sacrificing those qualities – openness, simplicity and licensing – which give the web not just its breadth but its depth.

So, finally, in closing I will once again leave you with the words of Tressie McMillan Cottom:

We probably need to give up the transactional nature of our hope and do the thing that needs to be done because it needs to be done.

Thank you.