“Taco Bell Programming” is the idea that we can solve many of the problems we face as software engineers with clever reconfigurations of the same basic Unix tools. The name comes from the fact that every item on the menu at Taco Bell, a company which generates almost $2 billion in revenue annually, is simply a different configuration of roughly eight ingredients.
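As a hypothetical sketch of the style, in the spirit of the Ted Dziuba post that coined the term (urls.txt and the pages/ directory here are invented for illustration), here is how far a pipeline of stock tools can go toward what might otherwise become a bespoke “crawler” service:

    # Fetch every URL listed in urls.txt, eight downloads at a time,
    # using nothing but xargs and wget.
    mkdir -p pages
    xargs -n1 -P8 wget --quiet --directory-prefix=pages < urls.txt

    # Tally the most common outbound links across everything fetched.
    grep -ohE 'href="[^"]+"' pages/* | sort | uniq -c | sort -rn | head

Every piece is a decades-old, battle-tested tool; the only new “code” is the pipeline arranging them.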
Many people grumble at or outright reject the notion of using proven tools or techniques. It’s boring. It requires investing time to learn at the expense of shipping code. It doesn’t do this one thing that we need it to do. It won’t work for us. For some reason—and I continue to be completely baffled by this—everyone sees their situation as a unique snowflake despite the fact that a million other people have probably done the same thing. It’s a weird form of tunnel vision, and I see it at every level in the organization. I catch myself doing it on occasion too. I think it’s just human nature.
I was able to come to terms with this after internalizing something a colleague once said: you are not paid to write code. You have never been paid to write code. In fact, code is a nasty byproduct of being a software engineer.
Every time you write code or introduce third-party services, you are introducing the possibility of failure into your system.
I think the idea of Taco Bell Programming can be generalized further and has broader implications based on what I see in industry. There are a lot of parallels to be drawn from The Systems Bible by John Gall, which provides valuable commentary on general systems theory. Gall’s Fundamental Theorem of Systems is that new systems mean new problems. I think the same can safely be said of code—more code, more problems. Do it without a new system if you can.
Systems are seductive and engineers in particular seem to have a predisposition for them. They promise to do a job faster, better, and more easily than you could do it by yourself or with a less specialized system. But when you introduce a new system, you introduce new variables, new failure points, and new problems.
But if you set up a system, you are likely to find your time and effort now being consumed in the care and feeding of the system itself. New problems are created by its very presence. Once set up, it won’t go away; it grows and encroaches. It begins to do strange and wonderful things and breaks down in ways you never thought possible. It kicks back, gets in the way, and opposes its own proper function. Your own perspective becomes distorted by being in the system. You become anxious and push on it to make it work. Eventually you come to believe that the misbegotten product it so grudgingly delivers is what you really wanted all the time. At that point, encroachment has become complete. You have become absorbed. You are now a systems person.
The last systems principle we look at is one I find particularly poignant: almost anything is easier to get into than out of. When we introduce new systems, new tools, new lines of code, we’re with them for the long haul. It’s like a baby that doesn’t grow up.
We’re not paid to write code, we’re paid to add value (or reduce cost) to the business. Yet I often see people measuring their worth in code, in systems, in tools—all of the output that’s easy to measure. I see it come at the expense of attending meetings. I see it at the expense of supporting other teams. I see it at the expense of cross-training and personal/professional development. It’s like full-bore coding has become the norm and we’ve given up everything else.
Another area I see this manifest is with the siloing of responsibilities. Product, Platform, Infrastructure, Operations, DevOps, QA—whatever the silos, it’s created a sort of responsibility lethargy. “I’m paid to write software, not tests” or “I’m paid to write features, not deploy and monitor them.” Things of that nature.
I think this is only addressed by stewarding a strong engineering culture and instilling the right values and expectations. For example, engineers should understand that they are not defined by their tools but rather by the problems they solve and, ultimately, the value they add. But it’s important to spell out that this goes beyond things like commits, PRs, and other vanity metrics. We should embrace the principles of systems theory and Taco Bell Programming. New systems or more code should be the last resort, not the first step. Further, we should embody what it really means to be an engineer rather than measuring raw output. You are not paid to write code.
Nice.
Since I quit my first job, started my own company, and became the sole person responsible for tech, I’ve been continuously trying to shift my thinking toward this. It is easy to get caught up in your own grand design – patterns, code principles, etc. – without thinking much about what it actually adds to the business.
As to measuring… It is not easy to measure the value that an engineer adds – definitely not as easy as measuring the value of a salesperson. Lines written or merged pull requests are not a good indication – but they ARE something that we can measure.
“Not a good indication” is an understatement for both of your metrics. Lines written is *negatively* correlated with work quality. Solving the same problem with more lines is worse. Removing lines is an improvement.
Counting merged pull requests just measures whether someone has lots of small tasks or a small number of big ones. While the former is sometimes slightly preferable, it depends on the kind of work being done.
Measuring engineering value and quality is a tough problem. It is going to require a combination of code written, test results, project/product release success, and many other factors.
It’s a problem we’re trying to tackle with our smart assistant for teams – stratejos.
I’ve worked on many production support projects over the last four years. Every project had a batch job infrastructure to load data from external sources into the core system or to export/FTP data to downstream systems.
With one exception, every one of these batch systems involved countless stored procedures, SSIS packages, and .NET executables performing the validation and loading activities.
As a prod support engineer, I would spend most of my time fixing issues in the batch system rather than in the actual core system.
The one exception was a project whose batch system was written entirely in ksh (on AIX) and awk scripts by a single person. The scripts employed simple Unix functions in numerous ways to modify, validate, and load the data. Because the heavy lifting was done by the standard Unix utilities, the actual code for the batch system was extremely small and, as a result, had minimal bugs. The scripts were also blazing fast: where other projects would take 5 hours to load 60 MB of data (involving countless validations, of course), the Unix system would load 2 GB in 1 hour.
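To make that concrete, here is a minimal sketch of the kind of validation step such a system might have contained. The feed name, delimiter, and field layout are all invented for illustration; the point is that a few lines of awk can replace an entire validation executable:

    # Hypothetical feed: pipe-delimited records of id|date|amount.
    # Reject malformed records to stderr, pass clean ones through.
    awk -F'|' '
        NF != 3                       { print "bad field count: " $0 > "/dev/stderr"; next }
        $1 !~ /^[0-9]+$/              { print "bad id: " $0          > "/dev/stderr"; next }
        $3 !~ /^-?[0-9]+(\.[0-9]+)?$/ { print "bad amount: " $0      > "/dev/stderr"; next }
        { print }   # record passed every check
    ' feed.txt > clean.txt

    # The loader (bulk insert, FTP push, whatever comes next) then
    # only ever sees validated records in clean.txt.

Every stage is an ordinary, well-worn Unix tool, which is exactly why there was so little project code left to break.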
I wonder: if a new project were being designed today, would a manager approve plain Unix scripts for the entire system, or insist on flashy Business Intelligence tools?
The Unix solution was faster and required less code than the .NET alternatives, but it also introduced an entirely new ecosystem with many new tools. Does the team now need to be expert in both? Are you going to rewrite all the old .NET code in ksh? This solution was short-sighted: it improved the immediate issue, but it is going to cause a lot of long-term supportability issues.
Well, there is a reason why Taco Bell, while profitable for them, is not a place to eat. There is one thing you are not taking into account, and that is competition. Yes, you can slap something workable together from Lego blocks, be that a Unix subsystem or npm libraries. In fact, thousands do. But in the end, what will differentiate them? Only the talented programmer willing to hack the system for the best will stand out and survive.
That’s why we have fads of no-programming programming coming out of the woodwork every 3–5 years, promising wonders and then disappearing into obscurity. They do what they promise, but they don’t allow 100% control, and that’s why some IT department will be fine with them but a software company won’t: it’s not competitive.
“Well, there is a reason why Taco Bell, while profitable for them, is not a place to eat.”
This is contradictory, no?
I believe that high-level product/solution differentiation is more important here; differentiation for the sake of differentiation at the software level is not an important factor. In fact, if you can do things faster using already-written and debugged libraries/utilities, you are more efficient and may make it to market faster, overcoming your “100% control” (YAGNI) solution and the extra bugs that come along with it. A great example of “not invented here” syndrome.
Is this post related to this one? I found it very similar:
http://widgetsandshit.com/teddziuba/2010/10/taco-bell-programming.html
The responsibility here falls somewhat on the stakeholders of the project in addition to the development team. Specifically, when designing or introducing “enhancements” to LOB (line-of-business) systems, stakeholders can get carried away with implementation specifics, insisting it is “imperative” that they be customized to fit a specific business expectation. An open mind here from project leadership can remove any need to write code in the first place.
I cringe at how often my team or I have to write code to bolt onto a system simply because the stakeholders refuse to consider training users on how to actually use the system as delivered by the vendor.
@Rafael, the “Taco Bell Programming” link at the start of the article references that article.
None of this made sense; this post is silly.