“Taco Bell Programming” is the idea that we can solve many of the problems we face as software engineers with clever reconfigurations of the same basic Unix tools. The name comes from the fact that every item on the menu at Taco Bell, a company that generates almost $2 billion in revenue annually, is simply a different configuration of roughly eight ingredients.
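The spirit of the idea can be shown with a tiny, hypothetical example (the log directory, file, and format below are invented for illustration): a "count errors per day" report needs no new system, just a pipeline of standard tools.

```shell
# Hypothetical task: count ERROR lines per day across a directory of logs.
# No new system needed -- just grep, awk, and sort composed in a pipeline.
mkdir -p /tmp/tbp_logs
printf '2024-01-01 ERROR boom\n2024-01-01 INFO ok\n2024-01-02 ERROR bad\n' \
  > /tmp/tbp_logs/app.log

grep -h 'ERROR' /tmp/tbp_logs/*.log \
  | awk '{counts[$1]++} END {for (d in counts) print d, counts[d]}' \
  | sort
```

Each tool does one job: grep filters, awk aggregates by the date field, sort makes the output deterministic. Swapping any ingredient (a different pattern, a different grouping key) reconfigures the same pipeline for a different problem.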
Many people grumble at or outright reject the notion of using proven tools and techniques. It’s boring. It requires investing time to learn at the expense of shipping code. It doesn’t do this one thing that we need it to do. It won’t work for us. For some reason—and I continue to be completely baffled by this—everyone sees their situation as a unique snowflake despite the fact that a million other people have probably done the same thing. It’s a weird form of tunnel vision, and I see it at every level in the organization. I catch myself doing it on occasion too. I think it’s just human nature.
I was able to come to terms with this once I internalized something a colleague once said: you are not paid to write code. You have never been paid to write code. In fact, code is a nasty byproduct of being a software engineer.
Every time you write code or introduce third-party services, you are introducing the possibility of failure into your system.
I think the idea of Taco Bell Programming can be generalized further and has broader implications based on what I see in industry. There are a lot of parallels to be drawn from The Systems Bible by John Gall, which provides valuable commentary on general systems theory. Gall’s Fundamental Theorem of Systems is that new systems mean new problems. I think the same can safely be said of code—more code, more problems. Do it without a new system if you can.
Systems are seductive and engineers in particular seem to have a predisposition for them. They promise to do a job faster, better, and more easily than you could do it by yourself or with a less specialized system. But when you introduce a new system, you introduce new variables, new failure points, and new problems.
But if you set up a system, you are likely to find your time and effort now being consumed in the care and feeding of the system itself. New problems are created by its very presence. Once set up, it won’t go away; it grows and encroaches. It begins to do strange and wonderful things and breaks down in ways you never thought possible. It kicks back, gets in the way, and opposes its own proper function. Your own perspective becomes distorted by being in the system. You become anxious and push on it to make it work. Eventually you come to believe that the misbegotten product it so grudgingly delivers is what you really wanted all the time. At that point, encroachment has become complete. You have become absorbed. You are now a systems person.
The last systems principle we look at is one I find particularly poignant: almost anything is easier to get into than out of. When we introduce new systems, new tools, new lines of code, we’re with them for the long haul. It’s like a baby that doesn’t grow up.
We’re not paid to write code, we’re paid to add value (or reduce cost) to the business. Yet I often see people measuring their worth in code, in systems, in tools—all of the output that’s easy to measure. I see it come at the expense of attending meetings. I see it at the expense of supporting other teams. I see it at the expense of cross-training and personal/professional development. It’s like full-bore coding has become the norm and we’ve given up everything else.
Another area where I see this manifest is the siloing of responsibilities. Product, Platform, Infrastructure, Operations, DevOps, QA—whatever the silos, they’ve created a sort of responsibility lethargy. “I’m paid to write software, not tests” or “I’m paid to write features, not deploy and monitor them.” Things of that nature.
I think this is only addressed by stewarding a strong engineering culture and instilling the right values and expectations. For example, engineers should understand that they are not defined by their tools but rather by the problems they solve and ultimately the value they add. But it’s important to spell out that this goes beyond things like commits, PRs, and other vanity metrics. We should embrace the principles of systems theory and Taco Bell Programming. New systems or more code should be the last resort, not the first step. Further, we should embody what it really means to be an engineer rather than measuring raw output. You are not paid to write code.
Nice.
Since I quit my first job, started my own company, and became the sole person responsible for tech, I’ve been continuously trying to change my thinking towards this. It is easy to get caught up in your own grand design – with patterns, code principles, and so on – without thinking much about what it actually adds to the business.
As to measuring… It is not easy to measure the value that an engineer adds – definitely not as easy as measuring the value of a salesperson. Lines written or merged pull requests are not good indicators – but they ARE something that we can measure.
“not a good indication” is an understatement for both your metrics. Lines written is *negatively* correlated with work quality. Solving the same problem with more lines is worse. Removing lines is an improvement.
Counting merged pull requests just measures whether someone has lots of small tasks or a small number of big tasks. While the former is sometimes slightly preferable, it depends on the kind of work being done.
“Solving the same problem with more lines is worse. Removing lines is an improvement.” – untrue. You can’t say this categorically. Code should be written as simply as possible, sometimes more lines for clarity and sometimes less. While I agree that it’s generally a good approach to provide solutions with fewer lines, I think the dev needs to be agnostic to the idea of more = bad, less = good.
Less = Flexible, More = Not so Flexible. The more lines of code you introduce, the bigger the chance for potential bugs. My philosophy is to reduce lines of code and write lots more smaller units – in this sense, more does not imply bad, but good. Remember, there were no bugs until someone started writing code. ;)
It is not about number of lines of code, what is important is readability and maintainability of code.
The measurement of engineering value and quality is a tough one. It is going to require the combination of code written, test results, project/product release success as well as many other factors.
It’s a problem we’re trying to tackle with our smart assistant for teams – stratejos.
I’ve worked on many production support projects in the last 4 years. Every project had a batch job infrastructure to load data from external sources into the core system or to export/FTP data to downstream systems.
Except for one project, every one of the batch systems involved countless stored procedures, SSIS packages, and .NET executables performing the validation and loading activities.
As a prod support engineer, I would spend most of my time fixing issues in the batch system rather than the actual core system.
One project, however, had a batch system written entirely in KSH (on AIX) and awk scripts by a single person. The scripts employed simple standard Unix utilities in numerous ways to modify, validate, and load the data. Since the heavy lifting was done by those standard utilities, the actual code for the batch system was extremely small and, as a result, had minimal bugs. The scripts were also blazing fast: where other projects would load 60 MB of data in 5 hours (involving countless validations, of course), the Unix system would load 2 GB in 1 hour.
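A minimal sketch of that validate-and-load pattern, under invented assumptions (the pipe-delimited feed format and the file paths here are hypothetical, not from the project described): a single awk invocation routes each record to a load file or a reject file.

```shell
# Hypothetical feed: pipe-delimited "id|amount|date" records.
printf '1|10.50|2024-01-01\n2|abc|2024-01-02\n3|7.25|2024-01-03\n' > /tmp/feed.dat

# Validate: rows with exactly 3 fields and a numeric amount go to the
# load file; everything else goes to the reject file for review.
awk -F'|' '
  NF == 3 && $2 ~ /^[0-9]+(\.[0-9]+)?$/ { print > "/tmp/feed.good"; next }
  { print > "/tmp/feed.bad" }
' /tmp/feed.dat
```

The entire "batch system" here is a dozen lines of script; awk does the heavy lifting that a chain of stored procedures and SSIS packages would otherwise perform.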
I wonder: nowadays, if a new project were being designed, would a manager approve building the entire system with just Unix scripts, or insist on flashy Business Intelligence tools?
The KSH solution was faster and involved less code than the .NET alternatives, but it also introduced an entirely new ecosystem with many new tools. Does the team now need to be experts in both? Are you going to rewrite all the old .NET code in KSH? This solution was short-sighted: it improved on the immediate issue, but it is going to cause a lot of long-term supportability issues.
From a historical perspective, it is the Windows and .NET toolchain that’s the new kid on the block.
These *nix platforms were designed for developing systems from the very start. And even though I started out developing on Windows, today I and many others are pleased by what this old platform offers right out of the box. Think package management, networking tools, system services, performance monitoring, logging… It’s all been there from the start: well understood, stable, and secure.
Well, there is a reason why Taco Bell, while profitable, is not a place to eat. There is one thing you are not taking into account, and that is competition. Yes, you can slap something workable together from Lego blocks, be that a Unix subsystem or npm libraries. In fact, thousands do. But in the end, what will differentiate them? Only the talented programmer willing to hack the system for the best result will stand out and survive.
That’s why we have fads of no-programming programming coming out of the woodwork every 3–5 years, promising wonders and then disappearing into obscurity. They do what they promise, but they don’t allow 100% control, and that’s why some IT departments will be fine with them, but a software company won’t — it’s not competitive.
“Well there is a reason why Taco Bell while profitable for them is not a place to eat.”
This is contradictory, no?
I believe that high-level product/solution differentiation is more important here; differentiation for the sake of differentiation at the software level is not an important factor. In fact, if you can do things faster using already written and debugged libraries/utilities, you are more efficient and may make it to market faster, overcoming your “100% control” (YAGNI) solution and the extra bugs that come along with it. Great example of ‘not invented here’ syndrome.
Is this post related to this one? I found it very similar:
http://widgetsandshit.com/teddziuba/2010/10/taco-bell-programming.html
The responsibility here falls somewhat on the stakeholders of the project in addition to the development team. Specifically, when designing or introducing “enhancements” to LOB (line-of-business) systems, stakeholders can get carried away with implementation specifics that are “imperative” to customize to fit a specific business expectation. An open mind here from project leadership can remove any need to code in the first place.
I cringe at how often my team or I have to write code to bolt onto a system simply because the stakeholders refuse to consider training users on how to actually use the system as delivered by the vendor.
@Phil – SPOT ON.
I’m on the stakeholder/PO side of a medium-sized product set, and it is unbelievable how obstinate certain stakeholders can be about the simplest of features. “Why can’t we automate?” is something I hear all the time, but as soon as we automate, the question becomes “Why can’t we customize?”
It’s really difficult to explain these sorts of pitfalls to people who really, REALLY don’t want to hear it. I’ve been the perpetrator of your cringe-worthy moments because I can’t get non-technologists to understand the depth of what they’re really asking for (and how little they actually need it).
@Rafael, the “Taco Bell Programming” link at the start of the article references that article.
None of this made sense; this post is silly.
You are paid to add value, code is how you do it. No one but you cares how much code it takes … except perhaps some manager who doesn’t understand value.
This article discusses some real issues. There are two extremes in tech: never change the system, or always use the latest and greatest. I think both extremes can lead us into trouble. If we never found flaws with the older systems, maybe all smartphones would still be the old Windows mobile devices and not iPhones. If we always use the latest version, there might be a security issue that costs a corporation millions.
I think it is easy for us to think our ideas are better than what was done before without having all the info available. It is probably easy to see the flaws in a system that has been worked with for years, and if our job exists to replace the system already in place, we generally have financial motivation to do so.
On new projects I have little knowledge of the team size, work conditions, budget, time, etc. that went into the legacy system (by calling it “legacy” I reinforce my point as well; it’s not always fair to say that about a usable, working system). New development and new ways of thinking can inspire transformations and create amazing new products, and if people didn’t try new systems we would have never had computers in the first place. Sometimes we stumble on old ideas that we think of as “new” because we hadn’t heard or read about them. The reinvention of old ideas can inspire new generations. I think that is why there seems to always be a new language being created over the years. A new set of languages might inspire a new set of programmers, or might give them opportunities. For instance, if someone hasn’t worked with Java or .NET for years, it might be difficult for them to find work with those languages, yet they might be able to find work with some of the newer languages. That’s not to say the new languages don’t try to tackle real issues with other languages; just that different flavors can sometimes provide value outside of a pure technical choice of which is better.
This doesn’t necessarily mean all change is good or done for the right reasons. Sometimes I wish technology and science weren’t ever adulterated by money and politics, so that the best, most efficient choice always won. Sadly, we don’t live in that ideal world, so I have to look for the good in the choices that are made and try to make the best choices given all the variables (some of which might not be just what will make the best product). What I think is the best choice might not be what others think is the best choice; the bigger the system, the more politics and money are involved.
Working on a system by ourselves, we get to make all the choices; where we are too strict and where we are too lenient will be part of the end product. In group settings, we give up control in exchange for help and ideas from others, which can also lead to confrontation and compromise. At its worst, we could have people on the same team actively working against each other. I think of the part of HBO’s Silicon Valley where they break up over tabs versus spaces. At best, we could have the best ideas from a diverse set of people creating an amazing system.
The point about being paid to create solutions, not code, is an excellent one. And, in the same vein, we’re not paid to haul in heavy Java EE stacks and queue systems and whatever, throwing man-years at mundane integration tasks which could be just as easily solved with a few hundred lines of Python or Ruby or JavaScript.
Except sometimes we are. The example above is not hypothetical; it describes a real project that I have had the pleasure of working on. I think what goes wrong in this particular case can be summarized as “enterprise”. The customer of enterprise software has deep pockets because enterprise software is expensive by nature. You have decision makers at the customer end with little motivation for thrift and you have salespeople on our end who will push that fact to the max. So money flows in and the software development organization bloats. There is a bunch of management, the paycheck of each one being justified by the headcount of their little fiefdom. You have architects running rampant, each one throwing their favourite 3rd party platform or subsystem into the soup. You have an abundance of developers churning out code, and let’s face it, not all developers are the brightest bulbs either, but management doesn’t know the difference.
Once you have a setup like this, with enough layers between the code monkey and the customer, the code monkey is essentially paid to write code (or, more precisely, to burn hours that can ultimately be billed to the clueless customer). There is no reward for just solving the problem and making the customer happy. Imagine the collective scream of terror from all of the people who would turn out to be expendable :)
Excellent comment, Martin. Those of us who work for larger companies know that story all too well.
I am paid to write code in what is – for some of you – a low-level language. Yes, if you ask your GPL aunt, there’s a solution for everything in theory, but in practice one often finds that they won’t cover this perfectly reasonable use case in compatibility or performance at scale.
I am unfortunately witnessing a trend where “if this function does 95% of what it’s supposed to do, and its callee does 95%, that’s okay” is acceptable. It’s not an entirely new trend. It’s where all of those IIS “error serving /” pages with full stack traces came from.
I weep for this future of mediocrity.
Whether you’re paid to ‘just’ write code or do more depends on the company you’re working for and its culture.
I’ve encountered both. – Companies where there was a role for every step of the process, where some (non-technical) analyst wrote down what you were supposed to make, sometimes crossing boundaries into the technical domain. Even if there were tremendous mistakes in the analysis, you were only allowed to implement it as written. When the shit hit the fan, it was still the developer’s fault, but they wouldn’t approve it for release if you didn’t implement it exactly as specified, and arguing was of no use. Sometimes there is no time or money for elaborate testing or even documentation. This might, and probably will, explode in your own face some time in the future, but if your manager does not want to give you sufficient time for this and piles on other new and urgent tasks, you don’t always have a choice.
– On the other end, you have purely technical companies with much less structure, where you, as a developer, are more than just a developer: you deliver working, tested solutions, and perhaps also train some people or do some pre-sales activities.
The only danger in technology-driven companies is to get lost in technical details and spending lots of money on design and code quality, but losing sight of functionality and user-friendliness.
But a developer should always do ‘the smart thing’. Like using an existing library if you want to convert, say, a docx file to a PDF, rather than trying to write that code yourself, which could take years to do right(!!), if it is not a core functionality of your application!
And believe me, some try to do this kind of stuff!
This also means: don’t be too lazy and pull in a third-party library for everything, even the easy stuff, since one day you might regret it – for instance, if the open-source team stops supporting the library, or your requirements change slightly from what the library has to offer.
But “developer” is also a term that’s too broad. That’s like saying that someone is a doctor. Is the person a doctor of physics, chemistry, or medicine? Is the person a surgeon or a general practitioner? Specialized in lungs or heart?
Developers too can be just coders and a bit more, like the general practitioner with broad general knowledge. But they can also be ‘technical specialists’, like heart surgeons. They can also be ‘product or domain specialists’, like doctors of medicine working for a research lab: they know the illnesses, they know some techniques, and they try to find a working solution to cure the disease; they might need to write papers, convince colleagues, and do pre-sales to sell it to some pharmaceutical company.
No programmer should care why they are being paid. They should not care what the smart thing is. Tool choice is irrelevant to management. CFOs control the world now. Leave ROI to them. Leave keeping the job interesting to yourself. If the job is not interesting, leave. EOS.
Great post! I just wrote something on the same subject: https://blog.guilhermegarnier.com/2016/12/software-developers-and-digital-artists/