Big Software Firm Bleg

I haven’t yet posted much on AI as Software. But now I’ll say more, as I want to ask a question.

Someday ems may replace humans in most jobs, and my first book talks about how that might change many things. But whether or not ems are the first kind of software to replace humans wholesale in jobs, eventually non-em software may plausibly do this. Such software would replace ems if ems came first, but if not then such software would directly replace humans.

Many people suggest, implicitly or explicitly, that non-em software that takes over most jobs will differ in big ways from the software that we’ve seen over the last seventy years. But they are rarely clear on what exact differences they foresee. So the plan of my project is to just assume our past software experience is a good guide to future software. That is, to predict the future, one may 1) assume current distributions of software features will continue, or 2) project past feature trends into future changes, or 3) combine past software feature correlations with other ways we expect the future to differ.

This effort may encourage others to better clarify how they think future software will differ, and help us to estimate the consequences of such assumptions. It may also help us to more directly understand a software-dominated future, if there are many ways that future software won’t greatly change.

Today, each industry makes a kind of stuff (product or service) we want, or a kind of stuff that helps other industries to make stuff. But while such industries are often dominated by a small number of firms, the economy as a whole is not so dominated. This is mainly because there are so many different industries, and firms suffer when they try to participate in too many industries. Will this lack of concentration continue into a software dominated future?

Today each industry gets a lot of help from humans, and each industry helps to train its humans to better help that industry. In addition, a few special industries, such as schooling and parenting, change humans in more general ways, to help better in a wide range of industries. In a software dominated future, humans are replaced by software, and the schooling and parenting industries are replaced by a general software industry. Industry-independent development of software would happen in the general software industry, while specific adaptations for particular industries would happen within those industries.

If so, the new degree of producer concentration depends on two key factors: what fraction of software development is general as opposed to industry-specific, and how concentrated is this general software industry. Regarding this second factor, it is noteworthy that we now see some pretty big players in the software industry, such as Google, Apple, and Microsoft. And so a key question is the source of this concentration. That is, what exactly are the key advantages of big firms in today’s software market?

There are many possibilities, including patent pools and network effects among customers of key products. Another possibility, however, is one where I expect many of my readers to have relevant personal experience: scale economies in software production. Hence this bleg – a blog post asking a question.

If you are an experienced software professional who has worked both at a big software firm and also in other places, my key question for you is: by how much was your productive efficiency as a software developer increased (or decreased) due to working at a big software firm? That is, how much more could you get done there that wasn’t attributable to having a bigger budget to do more, or to paying more for better people, tools, or resources? Instead, I’m looking for the net increase (or decrease) in your output due to software tools, resources, security, oversight, rules, or collaborators that are more feasible and hence more common at larger firms. Ideally your answer will be in the form of a percentage, such as “I seem to be 10% more productive working at a big software firm.”

Added 3:45p: I meant “productivity” in the economic sense of the inputs required to produce a given output, holding constant the specific kind of output produced. So this kind of productivity should ignore the number of users of the software, and the revenue gained per user. But if big vs small firms tend to make different kinds of software, which have different costs to make, those differences should be taken into account. For example, one should correct for needing more man-hours to add a line of code in a larger system, or in a more secure or reliable system.

  • http://don.geddis.org/ Don Geddis

    My experience suggests the opposite. Small firms are nimble, and can quickly move to exploit new opportunities. Large firms end up with a lot of process, as an attempt from the most senior executives to exert some kind of control over the huge organization. You wind up with numerous policies and procedures that may be useful “on average”, but often are inconvenient for the small, local team. Goals are set and evaluated through internal politics and coalitions, rather than external feedback from the market. Large company resources and profits cause the organization to be fat and happy, rather than lean and hungry. Christensen’s classic The Innovator’s Dilemma details many of the reasons why large, successful firms have a difficult time succeeding with subsequent technology innovation.

    My direct answer to your question, is that I find my software engineering to be *less* productive at a big firm, than it is at a smaller one.

  • Joshua Fox

    In my experience, software development output of the same professionals is about 5x or more higher in a very small company than in a very large one. The business considerations are of course different between a startup agilely looking for product-market fit and a mature company with a product that is selling well but needs incremental development.

  • Thomas_L_Holaday

    Productivity and output may be tricky to measure due to the different sizes of the user bases for small and large software companies. For example, the story of the Windows Shutdown feature told in this blog post …

    https://moishelettvin.blogspot.com/2006/11/windows-shutdown-crapfest.html

    … says that the output for an able developer on the Shutdown feature was a couple hundred lines in a year. That does not sound like many lines, but there were 384 million Vista licenses sold. Should that factor in?

    Linus Torvalds was able to increase his own productivity (as perceived by him) by writing his own version control system (git). This would be difficult for a developer employed by a large firm. It was also difficult for developers employed by large firms to persuade their large employers to move to git.

    In em-world, perhaps all the people mentioned by Moishe Lettvin would be clones of the same programmer, so discussions would be much faster. In em-world, perhaps Linus Torvalds could make ad-hoc copies of himself to develop git while other copies of himself continued to work on the kernel.

    • http://overcomingbias.com RobinHanson

      I mean not to include the number of users in my productivity measure. That is determined by market position, not developer productivity.

  • Chip Morningstar

    On a per developer basis, my experience is that developers are vastly more productive at smaller organizations. Large organizations, however, can exploit developer parallelism on a scale that is not available to smaller ones, so that they can cover a larger area of whatever product space they are going after (doing a better job at producing what marketing folks refer to as the “total product” — covering more edge cases in feature space, providing more documentation, more support, etc.). Consequently, they can produce a more competitive and sometimes superior product even though the developers are individually much less effective.

    Note that a lot of this supporting stuff is not actually produced by software developers, and a lot of it is produced by people who actually *are* developers but aren’t developing what you’d think of as the core of the thing being developed (e.g., infrastructure, tooling, test code, example code, interfaces to legacy systems, etc.). If you want, I suppose you could account for all the stuff produced by this enormous ancillary organization on a per-core-developer basis as part of what they produce and then argue that the latter folks are thus more productive.

    Some invented numbers just to illustrate what I mean: Small company, 10 developers (all of whom are “core developers”), each of whom produces 10 units of output, versus large company, 50 “core developers”, each producing 5 units, plus 1000 other supporting roles, each producing 1 unit, for an aggregate of 25 units of output per “core developer”.
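
    The invented numbers above can be checked with a quick sketch (all figures are the commenter's hypotheticals, not measurements):

    ```python
    # Purely illustrative arithmetic using the invented numbers above.
    small_core_devs, small_units_each = 10, 10    # small firm: 10 core devs, 10 units each
    large_core_devs, large_units_each = 50, 5     # large firm: 50 core devs, 5 units each
    support_staff, support_units_each = 1000, 1   # plus 1000 supporting roles, 1 unit each

    small_per_core = (small_core_devs * small_units_each) / small_core_devs
    large_total = large_core_devs * large_units_each + support_staff * support_units_each
    large_per_core = large_total / large_core_devs

    print(small_per_core)  # 10.0 units per core developer at the small firm
    print(large_per_core)  # 25.0 units per core developer at the large firm
    ```

    So per core developer the large firm looks more productive, even though each individual core developer there produces half as much.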

    Note that for a software product, most of the costs are in the fixed cost part of the cost equation, so a successful company can be relatively insensitive to individual contributor inefficiencies.

  • Oleg Eterevsky

    I work for Google and can try to answer. First, I’ll address the main question of the post (i.e. effect of productivity), and then I’ll speculate a little bit on possible other reasons why successful software companies tend to grow to tens of thousands of employees.

    First of all, instead of describing all of Google’s infrastructure in this comment, I’ll provide a link to an article that has a good description of engineering process and tools in Google: https://arxiv.org/ftp/arxiv/papers/1702/1702.01715.pdf

    There are several positive and negative effects that working within this culture has on productivity. Positive effects are:

    – Uniformity. Standardized building, testing, version control, and programming style make it extremely easy to work and integrate with other libraries, tools, and services written within Google. You can always just click on the link on any function that you use and see how it is implemented. And if you see a bug there, you can just fix it yourself and send the patch for review within minutes. Lately this is partly mirrored by modern open source projects hosted on GitHub, but Google’s codebase is on an entirely different level of uniformity.

    – Existing libraries and services. Whenever you have a problem, it has often already been solved in half a dozen ways. Whether you need to get a list of restaurants near the user’s location, run a spellchecker, or classify an image, there are likely one or two ready solutions which you can integrate within a day or two. And best of all, you will never have problems with incompatibility. It can never happen that the library you want to use is written in Fortran, or is not compatible with the latest version of your Linux distribution.

    – Infrastructure. It is fairly easy to store petabytes of data and to use thousands of CPUs for a computation. 90% of engineers never have to even think about the intricacies of the hardware on which their code runs. It is not uncommon to not know or care on which continent your code is running.

    – People. A few tens of thousands of people is a small enough community that you won’t have any trouble reaching out to any other googler. And this community contains world-class experts on many software-related topics. Writing a question to the appropriate mailing list (or even to a common miscellaneous group) will almost always result in a thoughtful and comprehensive answer.

    Negatives are:

    – Unnecessary complexity. Remember the first positive item, uniformity? It has another side. It means that every binary you produce will, out of the box, support a number of conventions, parameters, and so on. “Hello world”, compiled with Google’s infrastructure, will probably produce a 10+ megabyte binary. Use just a few neat libraries and services and suddenly you have a 700MB binary on your hands. All building and testing infrastructure is in the cloud; you usually can’t build your project on one poor desktop. This makes things slower and harder than they could have been.

    – Bureaucracy. Not as bad as it sounds, and probably not as bad as in some other companies, but still. Each new feature in your project has to be approved by twice or thrice as many people as in a typical startup.

    – Geography. Working with co-workers across the globe means that their working day often starts when yours has already ended.

    I listed just the most obvious positive and negative effects, but one might already see that these features have different effects depending on your project. In general the impact on a small and/or simple project may be negative, while the impact on a big and complex project may be positive, and in some cases not just positive but critical.

    It is hard to tell whether this is the main reason software companies tend to grow so large. One other explanation that seems more likely to me is related to the economics of software and online services. The key difference from traditional industries is that a relatively small team, with relatively small investments, can produce a highly successful product that reaches a lot of customers in a relatively short timespan.

    Imagine for a second that you are producing not software, but shoes. What would it take to sell hundreds of millions of pairs a year across the globe? In all likelihood, it would take decades and a huge amount of money.

    The situation is different when it comes to online services. It is extremely easy to write a prototype of a service like Twitter. Open TechCrunch and you’ll read of a new startup every day. Most of those startups fail miserably, but some succeed, and very shortly have 10-digit valuations. The common wisdom is that you shouldn’t kill the goose that lays golden eggs, but should rather invest in it. This creates a climate in which small successful companies are encouraged to grow as fast as possible.

  • J

    Big companies have an advantage at big problems, and disadvantages at small problems.

    I tend to work on problems that can be solved by a handful of programmers, and was more productive at tiny startups than at big companies. When solving a new problem in software, I’m not comfortable unless I have at least 3 ways to solve a problem (3 different libraries for putting an image onto a screen, for example). This is because the first one or two things I try often don’t work for mysterious reasons (the library isn’t available on my OS version, simply doesn’t work, or has some unexpected shortcoming). Big companies tend to constrain the options to a single preferred solution that may be better supported, but you’re stuck investing a lot more work into it if it doesn’t quite meet your needs.

    Big companies can tackle big problems like search that require huge space, processing, and monitoring resources. They get attention from vendors: I filled in a web form asking for samples of an electronic part, and less than 30 minutes later a rep had personally delivered them to my office. Shipping stuff easily costs 2x more if you don’t have negotiated rates with FedEx.

    Build cycle time is critical to programmer productivity. Typically we only write a few lines of code before we want to see if it’ll compile. On a single-person project from scratch, it can take less than a second to build and run the code. But if you’re working on, say, MS Outlook, build times might be tens of minutes. Not only does that mean the turnaround time is hundreds of times higher, but we can’t keep all that state in our heads while waiting multiple minutes. We get bored and distracted, so the write/compile/run cycle also involves a task switch. Cycle time of more than a few seconds tends to incur this penalty. Most of this hinges on the size of the project you’re working on, but big companies with big applications have build systems optimized for those big apps, so compiling “hello, world” might take many seconds (and just as bad, may have a 2-sigma tail of over a minute). But for a large app, it may have distributed compilation across hundreds of machines, and take only a minute to compile something that would take hours on a single workstation.

    • http://overcomingbias.com RobinHanson

      Yes, the costs of making a big system differ from those of making a small system. That is different from the costs of making a given-size system differing because of the size of the organization.

      • J

        Right, in my last paragraph, my first proposition is that small systems have an advantage in cycle time.

        The second proposition is perhaps harder to see: big companies have infrastructure optimized for big systems, and this makes it harder to build small systems.

        The build, monitoring, and deployment systems may each be a marvel of big-system engineering, more complicated than a small company’s entire operation. Small companies can’t touch them, but nor are they forced to put up with the inertia and long-tail latency that make those systems painful to use in small projects.

  • Brian Slesinsky

    I personally felt most productive working at a small startup with a tight-knit team. However, the startup died and the software was never used, so, technically, my productivity there was zero.

    I’d say this is true in general. The largest waste in software is building things that are then thrown away. This happens both at big and small firms. Also, even for software that is used, some of it gets used a lot more than others. It’s a hit-driven business.

    “[T]he reason we can’t measure [software] productivity is because we can’t measure output.”
    https://martinfowler.com/bliki/CannotMeasureProductivity.html

    Large firms have cancelled projects and failed products too. However, often they’re better at building and maintaining large-scale infrastructure due to a built-in, captive audience. Also, externally, their product announcements get more attention, so they have a built-in advantage when getting users.

  • http://don.geddis.org/ Don Geddis

    On the topic of effective large-scale software development, I might also recommend some of the insights from Eric Raymond in “The Cathedral and the Bazaar” ( http://catb.org/esr/writings/cathedral-bazaar/cathedral-bazaar/ ). He essentially makes the argument for bottom-up distributed software development, instead of traditional top-down monolithic development. (Basically: Microsoft Windows style vs. Linux style.)

  • Ben Morin

    Hey Robin, love the blog. I am not a developer, but was an implementer at Epic (market leader for EHR, large company) and observed the development of our products vs. others.

    The biggest advantage Epic had as a big company was to easily build new products connected to their existing ones. So for example, they quickly took over the software market for hospital pharmacies once they had the market for inpatient care. Same for lab testing, billing, etc. However, they did not have any noticeable advantage on software in completely new areas, for example hospice care, insurance plan management, etc. Most of the new products in these areas were introduced by smaller startups, despite Epic having many more programmers working on them.

    From my perspective, once we moved out of areas that were dealt with by our existing customers, and so lost easy access to customers who could give quick feedback, we fell behind to startups run by people who had worked directly in those areas. I do see slight exceptions in the wider market, for example Microsoft in the 1990s and Google now, who were able to branch out and be successful in unrelated areas (for example, from search algorithms into mobile phone operating systems). Even still, many of the new products that “came out of Epic” actually came from employees leaving to create startups. I see the same with Google, limiting the reach of any one company into multiple areas. I’d expect these trends to continue, so I would not expect a small number of companies to dominate the overall market for software.

  • http://overcomingbias.com RobinHanson

    It seems that so far the answer is that small firms are just much more productive in making small simple systems, but that big firms can be more productive at making big complex systems. So one reason that there are big firms is that there are big complex systems.

    • Petter

      I concur. I am a software developer at Google and when I write a highly-available and global service for millions of users I think I am thousands of percent more productive since many of the tricky problems (security, database, redundancy, backup) are already solved and one can focus on the business logic. I couldn’t do all that from scratch.

      Just trying out something small could be delayed a bit by bureaucracy.

      But today there is intense competition in the cloud space, so key Google technologies like Spanner are available for anyone to rent. So perhaps this was more true 10 years ago.

  • Lord

    The greatest difference is that they work on different problems, and it is impossible to separate value from use. Large firms face large problems and small firms small ones. Large firms can scale resources to the task, but small firms won’t have large problems, and many large problems are simply too large and complex for them to handle. As a result I would say the output is proportional to value, with software just being necessary overhead.

  • Emmett Shear

    I run a large tech organization, so I think about this problem a lot.

    “Scales to lots of users” is a very important and expensive feature to add to any product, so meeting that requirement adds massive drag for any programmer, but comes with the benefit of dramatically increased impact on real users. So if you’re asking “how much software will they write?” the answer is much less; if you’re asking “what will be the impact of that software on users?” the answer is much more.

    The code written at a large company will generally be of higher “quality”, in that much more effort will go into making sure the code has good security qualities, that it localizes well into every language, that it manages accessibility requirements for blind people, that it doesn’t suffer from single points of failure, etc. etc. Whether this should be considered “more productive” is hard to say, it’s critical if you want to scale code up to many users.

    Bottom line: All those software tools, rules, collaborators etc. work to make your code work reliably and securely at scale at big companies, with the cost of slowing you down dramatically for actually getting software written.

    ——

    One really interesting thing is the impact of Google Cloud or AWS. These internet services give a lot of the benefit of working at a large corporation to a software developer: instantly turn on complex and difficult to scale/manage services with great APIs you can develop on top of. The productivity multiplier as those services get better will only go up: cooperating via API is so much more efficient than cooperating via management, it’s not even funny.

    If I had to put forward a reason why we see a tendency to concentration for big tech companies like Google and Facebook, it’s data advantage. The real advantage you have writing software at Facebook is access to that incredible private database of users and relationships, which makes it possible to write software there that you couldn’t possibly write elsewhere. (Microsoft is more of a traditional platform lock-in with apps; Amazon I won’t comment on, as they are my employer.)

  • Joe

    Regarding your update to the post, would you also hold inputs constant? More specifically: might the prominence of some large software firms like Google or Facebook be partly due to their prestige, which enables them to attract the best programmers by bestowing onto them status that smaller firms just can’t offer?

    • J

      If you follow the money, Google is an advertising company. Techies are more attuned to its shiny technical products, but I suspect the shiny things have a lot to do with Larry & Sergey maintaining a majority stake and, rather than retiring when they hit it big, preferring to have armies of top-notch engineers help them make fun shiny things.

      But take away the advertising revenue and all that goes away. So perhaps your “magnet for good programmers” theory is correct, but less to do with prestige and more to do with companies that hit the jackpot with ads or taxis or whatever, and then use high salaries to attract a critical mass of sharp programmers to diversify.

  • J

    > what fraction of software development is general as opposed to industry-specific, and how concentrated is this general software industry.

    This is a hard question. Sounds like you already know enough about software to understand the significance of zero marginal cost. If I get everyone to standardize on metric hex cap screws, I can start a big firm producing them by the millions, and small firms can’t touch my economy of scale. But software is weird: Dan Bernstein can singlehandedly write all the crypto for the entire world: http://www.metzdowd.com/pipermail/cryptography/2016-March/028824.html

    Cloud software is a weird move toward *higher* marginal costs for the producer. It’s easier for the producer to maintain and support, and opens the door to lucrative tracking and advertising. But it’s much harder than just putting a tarball up on GitHub.

    A lot of the comments in this thread reflect that zeitgeist: scaling to a billion users means running complex cloud services, and here big companies have an advantage.

    So that makes it easy to overlook the commoditized software infrastructure underpinning everything from big cloud services to your mobile phone.

    Linux is a big project now with lots of contributors, but it’s not owned by a big firm. Lots of the absolutely essential drivers and libraries — the metric hex cap screws — are maintained by single individuals in their free time.

    So yes, there’s a huge long tail of niche software easy for small firms to serve. But unlike mechanical industries, we shouldn’t expect the commodity building blocks with huge volume to come from a big firm.

    • J

      (continuing) So what’s with big firms, then?

      One answer is that search and amazon and facebook have to be big complicated systems in the cloud. Amazon needs a big warehouse. Google’s web index doesn’t fit on your phone. Facebook puts all your friends in one place. So those are necessarily big firms.

      But more lucrative and less savory are the walled gardens. Windows and Intel x86 dominated for decades because everybody’s apps ran on it. Their biggest strength was also their biggest curse: backwards compatibility kept the customers captive but made it increasingly hard to change anything.

      Apple’s the biggest company in the world because it figured out how to vertically integrate the hardware supply chain, and produce a single model of phone that works well with the software.

      Qualcomm is worth studying: they’re cleaning up in the Android market because phones have a huge mess of radio stacks, each with tons of regulatory and IP constraints, and they’ve been consolidating the ability to put that in a single SoC.

      So I guess my intuition is that a lot of the market for big software companies has to do with friction from IP, regulators, standards bodies, and hardware integration. When that friction can be overcome, some hacker in a basement takes care of it for everyone, and creates a public good that nobody pays attention to because it no longer appears on anybody’s balance sheet.

      • J

        (continuing). But perhaps what you mainly care about are big hypothetical futuristic firms that would compete against ems?

        Perhaps the firms you’re imagining would have limited AIs that do complicated tasks more efficiently than a human. So perhaps a firm solves the self-driving car problem. Or the babysitting problem (keep Junior or great-Grandpa from hurting themselves and provide basic needs). Or writing legal briefs or doing radiology.

        In such cases, one key question is whether they can run on the customer’s hardware. If a service today needs to query a petabyte datastore, it’s going to run in the cloud and be owned by a big company.

        Another barrier is IP. Perhaps that petabyte datastore requires sending cars out to map all the streets in the world, or an army of engineers to regularly tweak the neural net.

        I’m less concerned about the IP in the core algorithm: usually the techniques end up in academic journals and patent applications. Openstreetmap.org works fine, and its rough edges improve over time, but it’s hard to compete with google’s map database.

        So if we project today’s supply market onto the future problem space, my prediction would be lots of open source stuff that lets anybody do some subset of tasks like language translation or image understanding. It took mankind decades to solve the problem, but the solution runs as a tidy library on your phone.

        The messier problems like, say, maintaining a database of all the roads and places in the world, or the computationally expensive problems that have to run in a datacenter, would be the domain of the big firms.

  • http://stochastication.com Nathan Wilson

    Work at one of AppGooAmaFaceSoft. Probably around 50%. In smaller companies, depending on the company, a good 50% of my time would go to deployment, release, testing, and security; that burden is gone due to tooling that automates a lot of the manual work. There is also a gain that is harder to quantify that comes from having consistently more competent peers to review and assist with catching code problems.

  • Vitalik Buterin

    I would argue that a very large source (the main source?) of the marginal productivity gain of working at big firms is the simple fact that big firms have access to a large base of people who are already willing to consume their outputs and do not need additional marketing.

  • Sharper

    A well-known general rule of software development is that as you add programmers to a software development task, the marginal contribution of each programmer decreases, in large part due to communication and coordination overhead.
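
    One standard model of this overhead (cf. Brooks's "The Mythical Man-Month", offered here as an illustration rather than something the commenter cites) treats every pair of programmers as a potential communication channel, so channels grow quadratically while hands grow only linearly:

    ```python
    # Pairwise communication channels in a team of n programmers:
    # n choose 2 = n * (n - 1) / 2, which grows quadratically with n.
    def channels(team_size: int) -> int:
        return team_size * (team_size - 1) // 2

    for n in (5, 10, 20, 40):
        print(n, channels(n))  # 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780
    ```

    Going from 5 to 40 programmers multiplies hands by 8 but potential channels by 78, which is one way to see why marginal contribution falls as teams grow.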

    As a general case, programmers at small firms are currently able to use essentially the same tools as programmers at a large firm, so there is no inherent scalability advantage there.

    In a small company, someone may not be able to specialize to the extent they could in a large company (separate QA groups, configuration management, production support, etc.), but that is only an advantage up to about 20-30 people; after that the benefits stop increasing.

    “In the wild”, I don’t know of any software development teams which scale very far while working together. Looking at publicly available records, the Linux kernel is primarily the work of about 20 developers. There are lots more (10K?) with little tiny contributions, but each contributes far less than the top 20 do. For Windows 7, Microsoft purportedly had 23 feature development groups of about 40 people each, for fewer than 1,000 total, but I think you’d really count “40” as the number working on the “same” software.

    So you can make more different products by adding development groups, but you rapidly reach the point of diminishing returns when trying to add developers to the same product. This would seem to imply that a small company of 40 developers would compete just fine with just about any scale of larger company, because they’d end up with a group of about 40 also working on the same problem.

    You can consider “complex” software problems, but my experience is that complex software is typically a collection of simpler pieces of software that are then packaged together as an overall solution. Maybe someone out there makes a massive monolithic piece of software, but I don’t know who would want to. There are advantages to buying a collection from one company to ensure interoperability, support, and compatible upgrade cycles, but typically that still leaves room to compete for replacing components if the replacement is good enough, and doesn’t really speak to the question of developer productivity, just sales ability.

    So in short, I’d agree with the others above and say based on experience, “Developers are likely 60% _less_ productive on average working at a big software firm”, but that certain sales and packaging advantages keep them there for some software markets.

    In games, there are some market-failing exceptions with large teams, but flagship (at the time) products like Sega’s Sonic Adventure had a complete team of 30 people. Sonic Heroes had a team of 19 people. There are lots of examples. Valve makes a major game engine (Source), lots of games, Steam, etc… with about 100 employees total.

  • http://gworley3.github.io/ G Gordon Worley III

    I’ve at times generated more output at small companies than large ones. Hard to say exactly by how much, but maybe 200 to 300% more productive at a small firm. Mostly a matter of motivation: in big firms I am not much needed to be more productive, and additional productivity has little marginal impact on the business, so there’s little reason to work harder than necessary. At a small firm my work is often necessary for the firm’s continued existence, and the more the firm’s existence hinges on my work the more productive I am. But then at small firms I’ve been no more or less productive than at big firms when my work didn’t matter much. To be fair, this is all perceived productivity; I have no outside way to measure how much work I did other than how much work it felt like I did.

  • Allan MacGregor

    Attributing a percentage might be difficult, as developer productivity in general is not well defined, nor does it have a commonly agreed-upon definition.

    Developer productivity should be mostly concerned with the outputs – which at the end of the day are still code – the quality of those outputs, and their cost.

    For the last 7 years I have been working for the same company, which grew from the original 5-person team to a 100-person team; and boy, I can tell you that, at least as far as I’m concerned, we got a lot slower.

    Put as a percentage, I would say some tasks now take 200% more time to complete. And while it is true that we now see larger projects and have the ability to tackle much more complex tasks, it is the simple, trivial ones that I’m still shocked to see taking so long to complete.

    A larger organization faces considerable challenges in making sure knowledge transfer, communication, and repeatable processes are still achieved; a smaller team has the advantage of just turning left or right and asking for help or more information.