The Mythical Man Month is a classic book by Frederick Brooks about the problem of large software projects, which tend to move slowly and have low productivity. There are two frustrating aspects to this. The first is that everyone gets slowed down: a programmer who can produce 100 debugged lines per day on a small project might only manage 10 on a big one. The second is that it seems impossible to speed up by adding labor. Brooks observed that "Adding manpower to a late software project makes it later" (Brooks' Law), and famously noted that nine women can't make a baby in one month.
Brooks hypothesizes that this problem stems from a sort of communication overload. If N people are working on a project, there are N(N-1)/2 ways for them to talk to each other, which grows on the order of N^2. That gets to be a lot of work as N increases. He then suggests ways to reduce the communication load through specialization (for example, a dedicated toolsmith) and modularization, breaking the project into functional groups that only communicate inside a small part of the hierarchy.
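To make the arithmetic concrete, here is a quick back-of-the-envelope sketch in Python (my illustration, not anything from the book) of how pairwise channels multiply, and how splitting people into teams keeps most channels internal:

```python
# Pairwise communication channels in a group of n people:
# channels(n) = n * (n - 1) / 2, which grows on the order of n^2.

def channels(n: int) -> int:
    """Number of distinct pairs among n people."""
    return n * (n - 1) // 2

for n in (5, 10, 50, 100):
    print(f"{n:4d} people -> {channels(n):5d} possible conversations")

# Modularization caps the blow-up: split 100 people into 10 teams of 10,
# and most channels live inside a single team.
teams, size = 10, 10
intra = teams * channels(size)   # channels inside the teams
total = channels(teams * size)   # channels in one flat group
print(f"flat group of 100: {total} channels; within 10 teams of 10: {intra}")
```

Ten teams of ten cut the channel count by an order of magnitude, which is exactly the effect Brooks is after with modularization.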
It's clear that big projects are slow, hard to manage, and hard to accelerate. However, I am not persuaded by Brooks' analysis of the problem. There are 6 billion people in the world, but I don't have to send them all a Christmas card or write on their Facebook walls. Nobody forces you to communicate with them.
I propose an alternate hypothesis based on dependencies. One person cannot be slowed down waiting for himself. In a big project, a lot of people are waiting for components from other people. If 100 people are working and 50 of them are waiting for something, then you are already down to 50% of potential productivity. The problem can become much worse if everyone is waiting on a few critical components.
We could model a project mathematically as an "NK network," where each of N people depends on K components in active development. The behavior of this type of model is sensitive to K but not very sensitive to increases in N; in our case, it is sensitive to the number of dependencies rather than the number of people. NK networks don't behave smoothly. They have a tendency to "phase shift," or lock up suddenly, and after a lockup most people are waiting for something. Maybe that is one reason that continuous delivery teams can scale to a large size: the continuous delivery system is tuned to detect and fix lockups.
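Here is a minimal simulation of that idea (my own toy construction; neither Brooks nor the NK-model literature prescribes these exact dynamics, and every rate below is an arbitrary assumption). Each of n people owns one component and depends on k others, and a broken component can only be fixed while its owner is unblocked:

```python
import random

def simulate(n=100, k=3, p_break=0.05, p_fix=0.5, steps=200, seed=1):
    """Toy NK-style dependency network (hypothetical rates, for illustration).

    Each of n people owns one component and depends on k random others.
    A stable component re-enters active development with prob p_break.
    Its owner can only finish it (prob p_fix) while unblocked, i.e. while
    none of the owner's own dependencies are in active development.
    Returns the average fraction of blocked people over the run.
    """
    rng = random.Random(seed)
    deps = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    broken = [False] * n                 # True = in active development
    blocked_total = 0.0
    for _ in range(steps):
        blocked = [any(broken[d] for d in deps[i]) for i in range(n)]
        blocked_total += sum(blocked) / n
        for i in range(n):
            if broken[i]:
                if not blocked[i] and rng.random() < p_fix:
                    broken[i] = False    # unblocked owner ships a fix
            elif rng.random() < p_break:
                broken[i] = True         # component churns again
    return blocked_total / steps

for k in (1, 2, 4, 8):
    print(f"k={k}: average fraction blocked = {simulate(k=k):.2f}")
```

With rates like these, small k settles into a regime where breaks get fixed quickly, while larger k tips the network into a lockup where nearly everyone is blocked and nothing can be fixed - the sudden phase shift described above. And in this toy model, increasing n barely changes the blocked fraction, while increasing k transforms it.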
Both theories agree that you should maximize encapsulation - libraries and services - to keep most communication and dependencies inside one team.
From there, the two theories lead you in different directions. If you believe you have a communication problem, you want to minimize the amount of communication. If you believe you have a dependency problem, more communication is better, because it helps people work around obstacles.
Fortunately, we have broken through the Brooks Barrier, and we can see how: with transparency and sharing. For example, the Linux kernel has grown at a steady 10% per year even as the number of contributors has increased to more than 1,000. In the dependency theory, open source projects have a scaling advantage because all code is shared. If someone is waiting for a component and is frustrated enough and talented enough, they can just fix the problem themselves. The open source answer to the dependency problem is more communication and more potential overlap.
Internet projects tend to communicate in writing, on tickets, blogs, mailing lists, and wikis that are accessible to all team members. This changes communication from a network of conversations that take time from many people into a simple text search that a team member can do alone. It collapses the N^2 network. Perhaps this is why time spent on conference calls, which do fit the N^2 pattern, is a good indicator of management problems.
You can escape Brooks' Law if you are only adding people, and not dependencies between people. This could happen in theory if you had a centrally planned set of services with perfect encapsulation. There are a couple of other cases where you can escape, and you should try these if you are under a lot of time pressure.
Pile on at the beginning. You can add a large number of people at the start of a project, when nobody knows what they are doing yet, so adding more people doesn't drag down the average expertise. In most projects, the number of contributors starts small and increases through the first big post-beta release. If you are under time pressure, you can instead put in a bulge of people at the beginning. Then you weed out the ineffective contributors, and you even take some good people away to trim the team down. Now you have a hidden advantage, because you can bring those people back as the project ramps up, and they will already understand it. I do this, and it works.
Do it twice. You can run two completely separate efforts to solve a problem and then take the solution that arrives first, or the one that gives you the best result. You don't have any scaling problems because the projects are completely separate. This is a common way to handle architecture questions: you ask two people to try two different approaches, and you take the one you like best. It's also what you do when failure is not an option. When the United States was racing Germany to build an atom bomb, the Manhattan Project funded three different uranium enrichment techniques and two different bomb designs.
Engineers hate having two people or teams assigned to the same task; it seems inefficient, and it really bothers them. Business people, however, should love this idea. It's a lot cheaper to pay two people to fix something fast than to pay 100 people to wait around while one person does it more slowly.