Do one thing…

I don't want barely distinguishable tools that are mediocre at everything; I want tools that do one thing and do it well.

I’ve been lamenting the demise of the Unix philosophy: tools should do one thing, and do it well. The ability to connect many small tools is better than having a single tool that does everything poorly.

That philosophy was great, but hasn’t survived into the Web age. Unfortunately, nothing better has come along to replace it. Instead, we have “convergence”: a lot of tools converging on doing all the same things poorly.

The poster child for this blight is Evernote. I started using Evernote because it did an excellent job of solving one problem: I’d take notes at a conference or a meeting, or add someone to my phone list, and then have to distribute those files by hand from my laptop to my desktop, to my tablets, to my phone, and to any and all other machines that I might use. Evernote made that distribution automatic.

But as time has progressed, Evernote has added many other features. Some I might have a use for, but they’re implemented poorly; others I’d rather not have, thank you. I’ve tried sharing Evernote notes with other users: they did a good job of convincing me not to use them. Photos in documents? I really don’t care. When I’m taking notes at a conference, the last thing I’m thinking about is selfies with the speakers. Discussions? No, please no. There are TOO MANY poorly implemented chat services out there. We can discuss my shared note in email. Though, given that it’s a note, not a document, I probably don’t want to share anyway. If I wanted a document, even a simple one, I’d use a tool that was really good at preparing documents. Taking notes and writing aren’t the same, even though they may seem similar. Nor do I want to save my email in Evernote; I’ve never seen, and never expect to see, an email client that didn’t do a perfectly fine job of saving email. Clippings? Maybe. I’ve never particularly wanted to do that; Pinboard, which has stuck to the “do one thing well” philosophy, does a better job of saving links.

While this might sound like an Evernote rant (all right, it is), the problem isn’t just Evernote. Everything is turning into an indistinguishable mush. Gmail was a pretty good Web-based email client, and it does a great job of eliminating spam. But when you add chat, when you add connections to hangouts, when you add interfaces to the calendar, when you add pop-up pictures of your email contacts, it becomes just one more ill-defined mess. Gmail always annoys me with some kind of pop-up that obscures the message I’m trying to read. Google Maps was more useful before it tried to point out restaurants and tourist attractions, and before it filled up with junk snapshots. (BTW, what is “RAT Race Timing?” They’re practically my neighbors.)

I could say the same about just about every tool I use. Whether it’s Skype, Twitter, Google Docs, Flickr, or something else, everything seems to be converging into a single application that does everything, and does all of it poorly. Even Dropbox is getting into the act. Pro tip: Don’t add email, chat, photo sharing, or videoconferencing services to your app. Unless your app is an email client, a chat service, a photo sharing service, or a videoconferencing tool. As Nancy Reagan said, “Just say no.”

There’s a reason for this regression to the mush that doesn’t have to do with the megalomaniacal plans of product managers (“hey, if we add a chat client, we could eat AOL’s lunch”). Unix has pipes, which make it easy to build complex applications from chains of simpler commands. On the Web, nobody may know you’re a dog, but we don’t have pipes, either. There’s no good way to connect one Web application to another. Therefore, everything tends to be monolithic; and in a world of monolithic apps, everyone wants to build their own garden, inevitably with all the features that are in all the other gardens.

What’s wrong with this picture? Why can’t I pipe an email message into an unrelated videoconferencing app? Sharing Google docs works wonderfully: why can’t I just pipe my Evernote note into Gdocs and have done with it? Evernote might think they’re losing out on this deal, but it’s the reverse. Evernote already convinced me not to use their document sharing, so if I write a note that I might eventually share, I make it a Gdoc from the start. We have Web services with APIs; why can’t we use them? IFTTT is headed in the right direction, though it doesn’t quite get me to where I want to be. IFTTT’s biggest weakness is that it requires too much forethought and ceremony. With the Unix command line, you can just say “well, I can grep this, pipe the result into sed, and use wc to tally up the results.” Unix is great for one-time applications that you’ll never use again. The Web isn’t, but it could be. The first person to create a tool that can pipe a table from a browser into a spreadsheet, a Google doc, or even a text file without massive pain will be my hero.
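
To make that concrete, here’s the sort of disposable pipeline I mean (the filename and pattern are hypothetical, of course):

```sh
# one-off pipeline, typed once and never saved: find every line of my
# conference notes that mentions "evernote", strip the leading
# timestamps with sed, and tally the survivors with wc
grep -i 'evernote' notes.txt | sed 's/^[0-9:]* //' | wc -l
```

There’s no Web equivalent of throwing that together in ten seconds and throwing it away.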

I don’t want anyone’s walled garden. I’ve seen what’s inside the walls, and it isn’t a palace; it’s a tenement. I don’t want barely distinguishable tools that are mediocre at everything. I want tools that do one thing, and do it well. And that can be connected to each other to build powerful tools.


  • PaulTopping

    Part of the problem is the OS makers have not really created a platform on which one-purpose tools like the ones you are describing can live. There really is no modern OS equivalent to standard input and output, piping, etc. that makes those Unix tools sing.

    Part of the problem is security. The richer the inter-application interaction allowed between apps, the easier it is for bad guys to exploit them for nefarious purposes.

    Another problem is complexity. Unlike in the Unix days (or with most Linux users now), most people who use computers are not programmers or particularly computer-savvy. I am sure the OS makers fear that rich inter-application interactions will make their devices hard to understand for the typical user. I think there are some good examples of that. Windows apps interact with each other using OLE (Object Linking and Embedding), but it can be difficult for many users to know what is going on. Same with Android’s Intents. When it asks me whether I want some link to go to YouTube once or always, I am always mystified as to the scope of that decision.

    I somewhat agree with your thesis but I also see the reasons why it hasn’t happened.

    • drhowarddrfine

      I think you are pretending “the Unix days” are something in the past when they’re not. As a FreeBSD user, those simple tools still sing a pretty song and I use them all the time, every day, and FreeBSD is as modern an OS as any of them. The same is true for Linux and we write these things without fear of bad guys exploiting anything.

      But these tools have never been intended for non-technical users, so that point is not valid.

      • PaulTopping

        I think you are misunderstanding my point. We all know about Linux and these tools. I’m a programmer and I like them. This post is about whether there could be an analog in the point-and-click world in which most computer users live these days. As you point out, the tools you are talking about are not intended for non-technical users.

  • http://dkretzmann.blogspot.com Doug K

    from 1989, Kuperberg’s law:
    The Law of Software Development and Envelopment at MIT:
    Every program in development at MIT expands until it can read mail.

    The problem is, there isn’t much profit outside the walled garden. If you are an academic trying to do the right thing, you write Unix. If you are a capitalist trying to maximize profits, you do other things including building a walled garden to capture your customers.

    RAT Race Timing is a triathlon and running race timing organization. racingunderground.com are my equivalent neighbours ;-)

    • tom

      Hence the attractiveness of FOSS, such as Unix, Linux, the Internet and the Web. Ever since TBL took the momentous decision to give away the Web for free, parasites have been trying to find ways of making money out of it – even though they contributed absolutely nothing to its creation and improvement. The worst parasites are those who threaten to partition the World Wide Web into a multitude of paid-for walled ghettos. The FOSS movement has come an awful long way – Mike Loukides has just given us a handy requirement for future work.

      • Night Hawk

        “Hence the attractiveness of FOSS, such as Unix, Linux, the Internet and the Web.” FOSS is sadly dying because FOSS is most certainly not AT&T’s Unix Clone with broken SSL Wrappers and weakened security due to a ploy that was hatched at Bell-Labs in the 1970s to doom everybody into buying AT&T’s security upgrades. System-V Unix? No thank you; I’d prefer to use Bell-Labs Unix any day of the week. Let’s look at the CVEs for FreeBSD, OpenBSD or Gnu/Linux … The CVEs that never stop coming. Now let’s go look at Bell-Labs Unix and the one CVE it’s had in its entire lifetime.

        The American government is the only institution in the world that has allowed a fraud of huge and epic proportions to propagate through the entire computer industry. “The worst parasites are those who threaten to partition the World Wide Web into a multitude of paid-for walled ghettos!” No, I would disagree and say the worst parasites are the ones still clinging to AT&T’s Unix clone, supporting a communist programming style of everything must be free (including the never-ending bugs) – like leeches!

        “There really is no modern OS equivalent to standard input and output, piping, etc. that makes those Unix tools sing!” You n00b, who needs a modern equivalent when the existing code-base from 1973 works extremely well? They work so well, in fact, that they bury it behind standards and leave everyone else to scratch their heads, looking at the GNU going “why doesn’t this work?” It doesn’t work because it was never designed to! It was designed from the outset to be a fudge of pure unmaintainable “cruft!” So when you hear people saying “Linux users are extremists!” you can nod your head sagely and say “Yes, they are, and they’re thick as well…”

        Microsoft’s solution? (No) They conspired with AT&T to break the TCP/IP layer. Apple FreeDarwin then? (No) Apple is crapple. Linux then? (No) Linus is a fudge-pecker of epic proportions. BSD then? (No) Just go fsck yourself now!

  • http://broadcast.oreilly.com/david-collier-brown/ davecb

    APIs encourage one to reach out and use pre-existing code, but not to flow one’s processing through a series of steps. You can, but you tend to get something like the C compiler: a “driver” program that calls the steps behind the scenes and concentrates the option-setting and therefore the complexity in one place.

    Personally, I’d love to have a language where I could compose a bracket tournament out of a series of loops running previously-written matches, each writing their winners into the inputs of the matches of the next round (I work for a tournament organizer (;-))

    I know how to implement the run-time, but I sure don’t know how the language itself would look, much less work!
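
    The closest I can get today is plain bash, with a coin flip standing in for a previously-written match (a sketch, not the language I’m wishing for):

    ```sh
    # run a single-elimination bracket: each round's winners file becomes
    # the next round's input. assumes bash (for $RANDOM) and a
    # power-of-two list of entrants in entrants.txt, one per line.
    cp entrants.txt round.txt
    while [ "$(wc -l < round.txt)" -gt 1 ]; do
      paste - - < round.txt | while IFS=$'\t' read -r a b; do
        # stand-in for a real match runner: a coin flip picks the winner
        if [ $((RANDOM % 2)) -eq 0 ]; then echo "$a"; else echo "$b"; fi
      done > next.txt
      mv next.txt round.txt
    done
    echo "Winner: $(cat round.txt)"
    ```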

    • evanplaice

      The API can be defined to be whatever you want it to be.

      What you’re describing is asynchronous processing. Specifically a map/reduce algorithm.

      For instance, you could have a central node that manages state.

      1. At the start of the tournament a message is broadcast by the central node to discover all the potential competitors. Participants register by responding to the broadcast and wait for further instruction.

      2. After a predefined timeout, the central node would map the participants to an in-memory model of the bracket state (incl assignments for the initial bracket) and broadcast the state to all participants.

      3. The central node would then sit and wait for the results. At the end of each match, the participants fire off a request with the result and wait for a response for the next step.

      4. Upon receiving results from all registered participants, the results would be reduced to determine the first-round winners. The remaining competitors would then be mapped to new positions in the next bracket. Finally, the updated state would be broadcast to all the participants.

      Repeat steps 3 & 4 until all brackets are complete and broadcast a final update with the results.

      If you want, you could limit the amount of information sent by only notifying participants of their competitor in the next bracket and waiting until the end to broadcast all the results.

      If the scale is large enough you could implement a truly parallel processing model by delegating the map/reduce to the participant nodes. The results would simply propagate up the tree until it reaches the root node (ie the central node).

      If the tournament is held at one location, you could handle the messaging within the application using simple event callbacks. If it’s geographically dispersed, you can just assign a central server with client nodes representing participants. Then the messaging would take place via simple REST API request/responses.
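
      A hedged sketch of what step 3 might look like from a participant node’s side, with invented endpoints purely for illustration:

      ```sh
      # participant node reports its result, then asks the (hypothetical)
      # central server for its next-round assignment
      curl -s -X POST 'https://tournament.example.com/api/results' \
           -H 'Content-Type: application/json' \
           -d '{"match": 12, "winner": "team-a"}'
      curl -s 'https://tournament.example.com/api/next?team=team-a'
      ```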

    • Steven

      Sounds like you just need something to create nodes, and nodes to poll the ones downstream of them for the winners/losers (or downstream nodes to push notifications). I also do tournament scheduling – but with lots of competitors in multiple teams and various timing constraints, it’s more like a knapsack problem you just have to brute force.

  • robotwatch

    and then there’s systemd.

  • http://stackoverflow.com/users/775516/jason-sebring Jason Sebring

    microservices?

    • http://broadcast.oreilly.com/david-collier-brown/ davecb

      They definitely help, but if and only if you’re trying to figure out how to do a transformation pipeline (like “spell”) by concatenating microservices. Otherwise they just allow smaller components of a does-everything-including-email program. Just FYI, I once did a tiny little send-email library that could be piped to (;-))
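
      For reference, the original spell was exactly that kind of pipeline; roughly (assuming a sorted, lowercased word list at /usr/share/dict/words):

      ```sh
      # words in, unknown words out: split into words, lowercase, dedupe,
      # then drop everything that appears in the system dictionary
      tr -cs 'A-Za-z' '\n' < document.txt \
        | tr 'A-Z' 'a-z' \
        | sort -u \
        | comm -23 - /usr/share/dict/words
      ```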

  • Brian

    The “Unix philosophy” wasn’t killed by the web. If anything, the web has kept it alive a lot longer than it otherwise would have been, with its line-oriented text interfaces, and simple socket interfaces (which are essentially network pipes).

    What beat it was the ITS philosophy, or (depending on your background) the Lisp philosophy. You put everything in one namespace, and let rich objects talk to each other. No need for each program to squeeze its data into line-oriented ASCII, or have its own text parser. Just use a high-level language, with high-level objects!

    Lisp systems on ITS worked this way, so Emacs worked this way, and so did Java, and Javascript, and on down the line.

    The last time I heard anyone really promoting “one thing” was back when some of us thought DocBook had a future (early 2000s?). Of course, the downside is that you needed about 5 completely different (and poorly-documented) programs to get from your DocBook source file to something you could read. But hey, it was plain text, and each program only did one thing! Whatever that’s worth.

    Today the last remnants of the “Unix philosophy” on the web are dying. HTML5 means that webpages are no longer static HTML, but a living DOM that you manipulate with JavaScript (the ITS way), and HTTP/2 means the wire protocol is no longer line-oriented text.

    Good riddance. Those of us who grew up with the old ITS-derived systems saw that they were the future decades ago. Unix pipes of line-based text are not the only way for unrelated programs to interoperate, and not even a very good one.

    • http://catcode.com/ jdeisenberg

      Ignorant question: What does “ITS” stand for?

      • Brian

        Incompatible Timesharing System — it’s the first link at https://en.wikipedia.org/wiki/ITS

        It’s the anti-Unix, and it’s a travesty that everyone knows Unix but nobody seems to know ITS, the other half of our field’s history. There are lots of complaints that we’re forgetting the Unix way, but nobody seems to be celebrating that we’re re-discovering the ITS way.

        • http://catcode.com/ jdeisenberg

          Thank you!

    • ahallock

      Sounds like you’re just salty because you never had the pleasure of using xargs or any of the other amazing command line tools, which are not dying off by any means. The Unix philosophy is still alive and well and really has nothing to do with HTML5, unless you’re piping an HTML5 doc through a command line parser and extracting some data. And we’ve had dynamic HTML pages via JS forever, not just with the advent of HTML5.
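
      (And piping an HTML5 doc through a command-line parser is perfectly doable, for what it’s worth; xmllint ships with libxml2, and the URL here is a stand-in:)

      ```sh
      # pipe a live HTML page through a command-line parser and
      # extract some data: here, the contents of the <title> tag
      curl -s 'https://example.com/' \
        | xmllint --html --xpath '//title/text()' - 2>/dev/null
      ```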

      • Brian

        I don’t know why you think that. I use xargs every day. But that’s only because higher-level tools haven’t quite caught up yet. Nobody is proposing rewriting their Python programs as shell scripts, and not just because performance would be terrible.

        xargs isn’t exactly new, either. Just because an old tool is still getting used doesn’t mean the philosophy that bore it is alive and well. My computer still has a “creat()” syscall but that doesn’t mean anyone still follows the philosophy of using 5-letter function names.

        • Henri

          All functions should be single letters (A-Z). If you require more than 26 functions in your program then it is obviously doing more than one thing well.

      • evanplaice

        “Unless you’re piping an HTML5 doc through a command line parser and extracting some data”

        Actually, I do this on a daily basis.

        I document everything in Markdown because:

        – it’s just plaintext
        – it can be formatted without a ui (context switching to a mouse is distracting)
        – it maps 1:1 with html so I can style/publish it online
        – it’s supported in most editors (incl syntax highlighting)
        – it works on any platform
        – etc…

        From my editor, if I want to see the markdown as HTML, I can trigger a quick keyboard command within my editor to load a preview in a new browser tab.

        I also have a cli tool (ie one that uses the same backend library) where I can launch a preview via the command line.
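
        Something along these lines, with pandoc standing in for the actual converter:

        ```sh
        # markdown in, standalone html out, previewed in the browser;
        # pandoc is a stand-in for whatever converter your editor wraps
        pandoc notes.md -f markdown -t html -s -o /tmp/preview.html
        open /tmp/preview.html   # macOS; use xdg-open on Linux
        ```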

        I use all sorts of other cli tools to do analysis/transformation on source code for a wide range of formats/languages.

        I have watchers that can monitor a directory tree for changes and apply the transformations automatically.

        A lightweight webserver that — with no arguments specified — sets up a directory watcher, spools up a minimal webserver, opens the browser, and injects a snippet to enable live-reload. FYI, live-reload monitors the file tree and auto-refreshes the page when changes are made.

        A task automator where I can define long chains of commands to match the various workflows I use.

        I can trigger Alfred (ie virtual assistant) to launch applications, sleep/reset/shutdown the os, search files, search contacts, check the dictionary, etc…

        Curl was the last holdout, but after encountering some frustration while fetching data via an OAuth API, I discovered the Postman browser extension.

        Anything more complex than simple filesystem commands can easily be done in a REPL where the error outputs are consistent and high quality.

    • evanplaice

      Agreed.

      Pipes do one thing and do one thing well. Limit new developers to a 1980s unix developer mindset.

      A pipe is nothing special, it’s basically functional currying at the filesystem level. The only problem is, any code you write using pipes will be dependent on UNIX-specific tools.

      Hypothetically, let’s say the posix standard tools (ie curl, sed, awk, etc) were all ported to a truly portable scripting language like python or node. The tools would be truly platform-independent. Defining/requiring tools as project-level dependencies would be trivial. The tools could be separated into independent modules and be developed/improved/replaced incrementally to quickly adapt to new technological discoveries. It would finally be possible to build a modular system free of excess bloat and overhead typical of all current operating systems.

      Instead, any OSS project that leverages these tools will also require a complete POSIX environment as a dependency.

      Say goodbye to the ability to compose minimalistic application/modules. So long to building higher-level interface abstractions.

      As a result, the POSIX toolchain hasn’t seen a significant improvement in years. A single monolithic architecture is the only option. What could otherwise have been a thriving ecosystem of experimentation, differentiation, evolution, and advancement is locked into ‘good enough’ by default.

  • randcraw

    Much of the difficulty in translating the Unix model to modern computing is visibility — if a service isn’t immediately visible to the eye, most users won’t think to use it. And if they do recall it, many won’t remember how to access it unless there’s a button.

    The Unix model worked only because users lived long enough in that (stable) ecosystem to learn its tools and infrastructure. With the plethora of different apps and devices and OSes everyone must use daily, and the rapid changes in available services and ways to access them, few folks are willing to invest the time to master their tools.

    For example, who reads the entire owner’s manual when they buy a car any more? Or masters the many varying vicissitudes of OS X vs Linux?

    Product managers know this and recognize that their customers want to be coddled, not enabled. But this does make me wonder… what’s the prospect for spoken interfaces if users must remember a substantial set of verbal commands for services that aren’t visibly apparent?

    • hope none

      No one reads the owner’s manual.

  • http://www.savraj.co.uk/ aerodyno

    people need reasons to get promotions within these organizations, so they hack on additional features.

  • Matt Doar

    Agreed. If an app does everything it probably doesn’t do what I want very well.

    I’m also reminded of Zawinski’s Law

    http://www.catb.org/jargon/html/Z/Zawinskis-Law.html

    “Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.” Coined by Jamie Zawinski (who called it the “Law of Software Envelopment”) to express his belief that all truly useful programs experience pressure to evolve into toolkits and application platforms (the mailer thing, he says, is just a side effect of that). It is commonly cited, though with widely varying degrees of accuracy.

  • evanplaice

    So… What you’re saying is, you’ve never heard of Slack, Zapier, Hubot, etc.

    The web developer world already has its own form of pipes. They’re called APIs.

    It’s not like O’Reilly Radar created Disqus comments. In fact, you probably use dozens of online services every day and don’t even realize it.

    Unix pipes are a prime example of a ‘worse is better’ philosophy that old school devs still cling to for dear life.

    Pipes are the ‘basic bitch’ of APIs. A modern API passes structured data, can support authorization/authentication, can adapt to different circumstances/environments, is platform independent, etc…

    Pipes are a platform specific tool, the practical definition of a walled garden.

    • http://broadcast.oreilly.com/david-collier-brown/ davecb

      Pipes are a particular implementation of production systems, first proposed in 1943 (!) They’re really quite different from APIs, at least as different as batch processing is from conversational systems using an API. I love them both, sorta: on even-numbered days I write and validate APIs and curse pipelines, on odd-numbered days the reverse.
      See also http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA019702

      • evanplaice

        tl;dr: my issue isn’t so much about piping as a processing model, it’s with the limited usefulness of UNIX pipes in particular and the dogmatic support they receive from the OSS community.

        Very good point. I completely agree that — as a generalized architectural model — pipes are extremely useful.

        As far as APIs go, it really depends on the implementation. While it’s true that most APIs are designed to cater to an imperative/synchronous model (ie mutable/stateful), the platforms themselves are adapting to support functional/asynchronous concepts.

        Web development/automation tools are a prime example of piping used at a higher level of abstraction. Consider the following workflows.

        *Development:*

        watch -> transpile -> polyfill -> live-reload

        *Contributing:*

        transpile -> lint -> style-check -> test -> bundle -> commit -> push

        *Integration:*

        pull-request -> continuous integration test -> merge -> release -> deploy

        Internally and externally, the modules can be modeled to process tasks concurrently (ex via map/reduce), chained as a set of transformations (ex filters), or a combination of both.

        In addition, most CLI tools are implemented as a thin layer that simply maps API calls to the command line. Decoupling the API from the CLI means it can be reused as a library for virtually anything.

        For instance, a linter (ie syntax error checker) can be used as:

        – a standalone CLI tool
        – an automation task
        – a compiler/parser extension
        – an editor plugin that checks code as you edit (ie similar to spell check)
        – an online validation API
        – etc…

        Tools like these can easily support the traditional `input/output | input/output` mode of operation **and** they can be used in a much more flexible manner.
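
        ESLint is one concrete linter that works this way; a small sketch of the same engine driven in different contexts (paths are hypothetical):

        ```sh
        # one linter, three contexts: interactive run, pipeline filter,
        # and machine-readable output for some other tool to consume
        eslint src/app.js                         # standalone CLI run
        cat src/app.js | eslint --stdin           # as a filter in a pipeline
        eslint src/ --format json > report.json   # structured output for tooling
        ```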

        The best part is, platform-specific details are abstracted away by the VM so they work on any platform the VM supports. Including platforms (ex embedded) where the operating system is nothing more than a thin hardware abstraction layer.

        Building effective, high quality user applications is a problem of composition (ie how well smaller parts work together) not granularity (ie the scope of functionality).

        The idea that ‘every application in unix will eventually become email’ can be used as an example to demonstrate a lack of creativity/ability within the community to develop higher order systems.

        The ‘UNIX way’ is a single deterministic perspective accepted en masse as a cultural identity. One that, I honestly think, has stunted the growth/progress of those who follow it without question.

        It shouldn’t be “do one thing and do it well”, it should be “build modules that do one thing well and use them to compose applications that create an ecosystem of amazing”.

  • Persons Name

    It’s been attempted a couple times, most obviously by Yahoo Pipes https://en.m.wikipedia.org/wiki/Yahoo!_Pipes

    And then there’s the level above piping plain text… Like PowerShell, which is probably based on some hotness from 1979 that everyone’s forgotten.

    • Brian

      Yahoo Pipes was never meant as a substitute for real pipes. About all it was really good for was stringing RSS feeds together. The whole point of Unix pipes in the shell is that they’re general-purpose.

      PowerShell gets some of the fundamentals wrong, but it’s a lot closer.

  • Markus Sandy

    wtf? ever hear of npm? gulp? what are you talking about?

  • hzhou321

    Wasn’t UNIX itself a monolithic walled garden?

  • http://broadcast.oreilly.com/david-collier-brown/ davecb

    Just for reasons of clarity, I’m going to distinguish pipes and programs under unix from other tools and programs in a web-based system on any platform…
    Unix programs did only one thing to an input that was required to be plain text, and immediately produced an output without having any side-effects on anything else. Pipes were a tool to connect them together and run them concurrently. They had a definite flavor of batch processing, even though some famous ones (spell) were used interactively.

    Web services take a command and some data, and run the command in some environment, usually one with a database, which implements a huge number of individual commands that use the data to change the state of the program (and DBMS), and may or may not return a result.

    They’re addressing different problem spaces, so they’re not going to be very similar.

    My variant of Mike’s question is “how can we invent something as powerful in the web world as pipes were in the command-line world?”
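
    The nearest thing we have today is ad hoc glue: treat each service’s API as a filter and chain them with curl and jq. A sketch, with invented endpoints:

    ```sh
    # a poor man's web pipeline: pull JSON from one service, filter it
    # with jq, and feed each surviving record into a second service
    curl -s 'https://api.example.com/notes' \
      | jq -r '.[] | select(.shared) | .title' \
      | while read -r title; do
          curl -s -X POST 'https://api.example.com/docs' -d "title=$title"
        done
    ```

    It works, but only because both ends happen to speak JSON; there’s still no equivalent of the pipe’s universal byte stream.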

  • mohanarun

    Correction: Do one thing, and let that one thing cover multiple other things. Single interface, multiple sequential tasks completed upon user interaction with single interface. Imagine a Shopping OS that lets you do tasks with Zappos, Fab, American Apparel, NYTimes, General Motors. Or any five brands that you regularly use. Then you can delete American Apparel app, NYTimes app, GM app, Fab app from your smartphone and use this single mobile portal app to interact with all of these as icons.

  • Chris Allen

    Just as you wrote this, I happened to try Insightly – and their approach is exactly what you are looking for. They do CRM and project management, but they don’t do email and document management – for that you have to connect.

    And for connecting on the web, you need to authenticate. In that sense, the glue of the Web that was the pipes of Unix is… OAuth.