
Can software be good for us?

We must face a deep challenge to design software for meaningful interaction and time well spent

This was originally called “Dear Zuck (and Facebook Product Teams)” and is written as a letter to Mark Zuckerberg. But it’s for all designers of social software.


Dear Zuck,

A few days ago, you announced that your number one goal for 2018 was to make Facebook “Time Well Spent”.

I was especially pleased, because five years ago I coined this term. It was in a conversation with Tristan Harris, who’s worked tirelessly since to elaborate the concept, turning it into a movement.

Back in 2013, Tristan and I were worried about the entire tech industry, but your News Feed was then (and still is) our best example of what needed to change. And that was before election manipulation, fake news, teen depression & suicide, worries about children’s videos, and so on.

Now you’re worried about these things too. So let’s get practical: how can a company like Facebook be retooled around “meaningful interactions” instead of engagement?

The first step is to understand why it’s hard. Popular articles place the blame in various places: the advertising business model, centralization, tech-bro culture, tech-giant monopolies, or just capitalism-as-usual.

But I don’t think it’s that simple. I think the difficulty with meaningful interaction starts with the nature of software itself. I believe even the most well-intentioned teams, operating in the best possible culture, would still struggle to write software that’s time well spent.

I’ll tell you why I think so, and then give two ways you could respond to this problem. Both require profound changes to how software gets made, changes bigger than any your company has gone through, such as the adoption of machine learning or the transition from web to mobile.

But that’s what it will take.

On Social Software and Meaningful Interactions

Sometimes new social software works out well. Few people would come out to a protest of Wikipedia, Couchsurfing, or Meetup, for example. These products — and the social changes that came with them — were welcomed, even embraced.

[Image caption: A mock-up. This guy was actually protesting something useful.]

But people are less enthusiastic about Facebook, Twitter, the “Fake News” ecosystem, Uber, Airbnb, and even smartphones themselves. Why are reactions to these systems so different? To understand, I think we need the concept of values:

Values: The ideas a person has about how they want to live, especially ideas about what kinds of relationships and what kinds of actions are of lasting importance in their life.¹

Values are like vertebrae: even if you never think about them, you have them, and they structure much of what you do. Values are ideas about the manner in which you want to act, rather than the outcomes you want. Let’s say you’re planning a social event, like the F8 conference you put on each year. You might have a goal in mind, maybe “getting a lot of people to participate”. But while you craft your invitation, you also have a manner in which you pursue that goal—perhaps you want to write honestly or cleverly. We always have a way we want to approach things. Our values.

And here’s the problem: generally speaking, your product (Facebook) makes it more difficult for all of us to live according to our values.

When a person spends hours on News Feed before bed, are they cultivating the type of social relationships they believe in? Are they engaged in acts of personal meaning?

Maybe! Facebook can be used in all sorts of ways. Perhaps, at bedtime, this person was planning a political revolution, or collecting moves for a breakdance video.

[Image caption: These two feel okay about their Facebook binge, I think.]

But many of us wake up the next day feeling like our late-night scrolling was a waste of time. That’s because living according to our values doesn’t happen automatically. Some social environments make being honest more difficult, while others make it easier. The same goes for courage, for creativity, and for every other manner in which a person wants to act or relate to others.

As we’ll see below, social software simplifies and expedites certain social relations and certain actions. If the actions and relations it makes simple don’t match a particular user’s values, then the software makes it harder for that person to live by those values, and leaves them feeling that their time was not well spent.

For example, it may be harder to live by the value of honesty on Instagram, if honest posts get fewer likes. Similarly, a courageous statement on Twitter could lead to harassing replies. On every platform, a person who wants to be attentive to their friends can find themselves in a state of frazzled distraction.

As users, we end up acting and socializing in ways we don’t believe in, and later regret. We act against our values: by putting off our work, by avoiding our feelings, by pandering to other people’s opinions, by joining a hateful mob reacting to the news, and so on.

This is one of the hidden costs of social software. Let’s call it the cost of values-misaligned systems.

Any social environment can be misaligned with our values, but social software makes this misalignment much worse.

How Software Structures Choice

Compared to past social systems — governed by social conventions or laws — software gives less space for personal reinterpretation or disobedience. It tends to code up exactly how we are intended to interact.

Look at social conventions. They certainly shape our lives: teenagers get ostracized for wearing the wrong clothes, adults for spouting unpopular beliefs. But it’s still possible to flout convention. And sometimes it pays off: by operating outside of convention, a person might initiate a new trend or subculture. With software, on the other hand, acting in a way the designers didn’t intend is often impossible: a user can’t sing “Thrift Shop” to a stranger on Tinder, and can’t display their Facebook cover photo at the bottom of the screen. The software has structured the sequence and style with which they interact.²
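To make that concrete, here’s a minimal TypeScript sketch. It’s entirely hypothetical (not Tinder’s or anyone’s real code), but it shows the structural point: the code itself enumerates every interaction a user may have.

```typescript
// A hypothetical sketch, not any real app's code: the type system itself
// enumerates every permitted interaction. Anything outside this union
// (a serenade, a handwritten note) simply cannot occur.
type UserAction =
  | { kind: "swipe"; direction: "left" | "right" }
  | { kind: "superlike"; targetId: string }
  | { kind: "message"; matchId: string; text: string };

function handleAction(action: UserAction): void {
  switch (action.kind) {
    case "swipe":
      // The only vocabulary for expressing interest or disinterest.
      console.log(`Swiped ${action.direction}`);
      break;
    case "superlike":
      console.log(`Superliked ${action.targetId}`);
      break;
    case "message":
      // Messaging exists only after a mutual match; there is no
      // code path for addressing a stranger directly.
      console.log(`Messaged ${action.matchId}: ${action.text}`);
      break;
  }
}
```

A social convention leaves room to act outside it; a closed type like this does not.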

We see something similar if we compare software with laws. Imagine if Twitter were implemented through government regulation: there’d be a law about how many letters you used when you spoke, and an ordinance deciding who wore a checkmark on their face. Imagine bureaucrats deciding who’s visible to the public, and who gets ignored. Could a law make you carry around and display everything you’d recently said?

In practice, laws can’t structure social life that tightly. (Even in the worst dictatorships—when the Nazis had Jews wear stars—they couldn’t ensure complete compliance.) As law, the “Twitter Code” would be impossible to enforce. But as software, it’s impossible not to comply.

Social software is therefore different from laws and social conventions.³ It guides us much more strictly through certain actions and ways of relating. As a result, we have less of a chance to pursue our own values. The coded structure of push notifications makes it harder to prioritize a value of personal focus; the coded structure of likes makes it harder to prioritize independence from others’ opinions; and similar structures interfere with other values, like being honest or kind to people, or being thoughtful.
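Here’s a sketch of how that foreclosure looks in code. Again, these types are hypothetical, not Facebook’s actual API; the point is that options that were never coded do not exist for the user.

```typescript
// Hypothetical types, not any platform's real API. Options that were
// never coded do not exist for the user.

interface FeedItem {
  author: string;
  body: string;
  likeCount: number; // required and always rendered; no per-user opt-out
}

interface NotificationSettings {
  pushEnabled: boolean;
  // The user can tune frequency, but "only when it serves my focus"
  // is not an expressible setting, so that value has no lever here.
  frequency: "realtime" | "daily-digest";
}

function renderFeedItem(item: FeedItem): string {
  // The like count ships with every post, whether or not the reader
  // wants their attention drawn to others' approval.
  return `${item.author}: ${item.body} (${item.likeCount} likes)`;
}
```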

This doesn’t just cause problems for individuals. The social problems I mentioned at the outset (election manipulation, fake news, internet addiction, teen depression & suicide, the mistreatment of children) are fueled by the same thing: the actors are guided along in ways that don’t accord with anyone’s values. (For more about that, see the follow-up essay.)

What to do

Returning to the issues that you, Zuck, are struggling with: since they’re connected to the nature of software, there won’t be an easy fix. But here are two approaches that could work:

  1. In the long term, you (and other technologists) can learn to build software that’s less constraining: software that works more like social conventions, which can be defied, expressively reinterpreted, and remodeled by the user (see the sketch after this list). But realistically, that will take decades of research, innovation, business change, and cultural evolution to achieve.
  2. The only other option (besides rejecting the idea of social software entirely) is to learn a lot about values, and to explicitly redesign everything to be as value-aligned as possible, making room for the huge diversity of values among your users.
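For option #1, here is one speculative sketch of what “software that works more like a convention” could mean structurally. All names are hypothetical: instead of a closed set of actions, the platform exposes a registry that users and communities can extend, the way a convention can be reinvented.

```typescript
// Speculative sketch; all names are hypothetical. Instead of a fixed
// union of actions, users can register new ways of relating that no
// designer anticipated.

interface SocialAction {
  name: string;
  perform: (actor: string, target: string) => string;
}

class ActionRegistry {
  private actions = new Map<string, SocialAction>();

  // Anyone, not only the platform's designers, can add an action.
  register(action: SocialAction): void {
    this.actions.set(action.name, action);
  }

  perform(name: string, actor: string, target: string): string {
    const action = this.actions.get(name);
    if (!action) throw new Error(`No one has invented "${name}" yet`);
    return action.perform(actor, target);
  }
}

// Usage: a user invents "serenade", which the designers never planned for.
const registry = new ActionRegistry();
registry.register({
  name: "serenade",
  perform: (actor, target) => `${actor} sings "Thrift Shop" to ${target}`,
});
console.log(registry.perform("serenade", "alice", "bob"));
```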

I’ve written a guide for you (and for other technologists) if you choose option #2. It means thinking about products in a new way. They must be considered as spaces: virtual places where people struggle to live out the acts and relationships they find meaningful.

Teams must face questions like these:

  • What values do users have?
  • For each such value, are there features of social spaces which make practicing it easier?
  • How do users decide which values to bring into their socializing? How can software support this decision?
  • Are there more or less meaningful kinds of conversations? Is there a way to identify less value-aligned talk?
  • Can we accomplish all of this without imposing our own corporate or personal values?

It may seem impossible, but I believe this focus on values and meaning is part of what’s made Couchsurfing, Meetup, and Wikipedia less objectionable than Facebook has been. In designing their software, these companies focused more on users’ values and less on goals or preferences.

I believe you’re serious about making a time-well-spent Facebook, and serious about addressing the harms to democracy and society. So I assume you’ll try to redesign everything around users’ values. In that case, you’ll want to read the next post carefully, and try out the worksheets and exercises.

Up for the challenge? Read on:
➡️ How to Design Social Systems (Without Causing Depression and War)