Software development doesn't scale
Adding more people to a software project doesn't necessarily improve productivity. This observation comes from The Mythical Man-Month and is known as Brooks's law.
Adding people to a late project makes it later, because of the communication overhead needed to get new people up to speed.
This category is for ideas on making software development scale from one person to a few, and from a few to many.
Reading code is harder than writing code, and starting from a blank slate is easier than modifying an existing code base. This category explores ideas for scaling software development so that large teams can be formed that work on the same software system.
Imagine we could define the behaviour or code for one attribute of a large system in isolation from other concerns, and have those pieces tied together automatically into a whole.
A very important category, actually. Off the top of my head, there are a few approaches:
(A) writing small, highly reusable and tested modules (when they are small, others can easily understand them; see the sketch below)
(B) rewriting with a team (once the complex proof-of-concept system is done, rewrite it from scratch together)
There are other heuristics, like the SOLID principles, that I've heard of from a friend and that can definitely help here, but they don't help with speeding up development on legacy code: what they imply is that the legacy code would have to be refactored into lots of small pieces (libraries) and then combined again, which is no small feat.
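As a rough illustration of approach (A), here is a minimal sketch in Python of a small, reusable module that ships with its test; the retry helper and every name in it are hypothetical examples, not taken from any real project.

```python
# A hypothetical small, reusable, tested module: a retry helper.
# Everything here is illustrative, not code from an existing project.
import time
from typing import Callable, Optional, TypeVar

T = TypeVar("T")


def retry(fn: Callable[[], T], attempts: int = 3, delay: float = 0.1) -> T:
    """Call fn, retrying up to `attempts` times with a fixed delay after each failure."""
    last_error: Optional[Exception] = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as error:  # broad on purpose: this is only a sketch
            last_error = error
            time.sleep(delay)
    assert last_error is not None
    raise last_error


# The test is small enough to read in one sitting, which is the point of approach (A).
def test_retry_succeeds_after_transient_failures() -> None:
    calls = {"count": 0}

    def flaky() -> str:
        calls["count"] += 1
        if calls["count"] < 3:
            raise RuntimeError("transient failure")
        return "ok"

    assert retry(flaky, attempts=5, delay=0) == "ok"
    assert calls["count"] == 3


if __name__ == "__main__":
    test_retry_succeeds_after_transient_failures()
    print("test passed")
```

A module this size can be understood, reviewed and reused by someone who has never seen the rest of the codebase, which is what makes approach (A) scale.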
In most open source projects there is one person who does most of the work, understands the code and contributes the most.
Then there is a group of smaller contributors who contribute little fixes.
I think part of the problem is making code understandable. Ruby on Rails and Django get you part of the way there by providing conventions and mechanisms for accomplishing most common tasks.
The problem I have is that mature codebases are very hard to understand and read. The reference implementation, or happy path, is polluted by all sorts of exceptions and added feature concerns.
The cornerstones of software such as Postgres, Linux and web servers like nginx are all hard to understand and read because they are so feature-packed. You cannot see the forest for the trees.
I avoid using open source libraries that don't have good documentation: I expect examples of each API and example code in the README or in the documentation.
It's about complexity. Any complex system has those problems. The solution is modularity, commonality, composability, reusability. We need common standards, and we need tried and true solutions to become the basis for those standards and commonality. It's an incremental process. But, imo, there's another side of the coin: it's about people and their work culture. The corps are driving us into very narrow roles, and that is driving software to be highly modularized. At the extreme, that's not healthy either. There need to be more architects, more visionaries who can see across divisions. It's a happy balance that matters, but right now there aren't enough jacks of all trades and there are too many follow-the-old-pattern people. It's people that write software, in their own image.
I agree with you: complexity is a huge factor.
Most codebases start simple but then become encumbered with lots of mess and turn into spaghetti.
I want everything to read like pseudocode or a reference implementation. In other words, it is so simple that it is readable.
The problem is that code accretes features and those features are poorly namespaced from the core algorithm.
A btree is actually quite simple! But a database needs lots of features that make the btree complicated, because it has to handle locking, security, etc.
My idea for a layered language is to separate features from each other and layer the code together automatically: like intelligent subclassing, but for automatic code accretion.
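Here is a minimal sketch, in Python, of what that layering could look like: a plain core data structure (standing in for the btree), feature layers for locking and auditing kept separate from it, and a compose step that stitches them together automatically. All of the class and function names are hypothetical illustrations, not an existing language or library.

```python
# A minimal sketch of "intelligent subclassing": the core stays readable,
# each feature lives in its own layer, and the layers are composed automatically.
# All names are hypothetical; this is an illustration, not a real system.
import threading


class CoreStore:
    """The reference implementation: a plain key-value store, readable like pseudocode."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]


class LockingLayer:
    """Feature layer: thread safety, kept out of the core algorithm."""

    def __init__(self):
        super().__init__()
        self._lock = threading.Lock()

    def put(self, key, value):
        with self._lock:
            super().put(key, value)

    def get(self, key):
        with self._lock:
            return super().get(key)


class AuditLayer:
    """Feature layer: log every write, again separate from the core."""

    def put(self, key, value):
        print(f"audit: put {key!r}")
        super().put(key, value)


def compose(core, *layers):
    """Build a class whose method resolution order stacks the feature layers over the core."""
    return type("Composed", (*layers, core), {})


if __name__ == "__main__":
    Store = compose(CoreStore, AuditLayer, LockingLayer)
    store = Store()
    store.put("a", 1)      # audited and locked transparently
    print(store.get("a"))  # -> 1
```

The core algorithm stays as simple as the pseudocode version, and each feature can be read, tested or removed on its own; the accretion happens in the compose step instead of inside the core.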