I've been toying with an idea for a while. It's sort of half-baked, but inspired by some good conversations about it at AAOSConf and GOTO I'm going to go out on a limb and sketch it out below.
Lean software development (LSD) has been around for a while. The Poppendiecks' book took the teachings of Lean Manufacturing and turned them into a set of principles and tools for thought that make sense for software development processes. In contrast to most writing on software development processes, the "LSD" book does not prescribe (or even describe) a particular process. Rather, it describes a number of principles and a number of tools that can be used to analyse your current process and find out where it can be improved. LSD provides a framework for developing your own process, one that is efficient in your particular situation. The point is to continuously optimize the process towards faster delivery and shorter cycle times.
What is a good software architecture? That's a hard question to answer. The following is an answer I've used for a few years now in connection with an architecture training course:
"A good architecture shortens the time between understanding user needs and delivering a solution. It does this by eliminating rework and by providing a de-facto standard framework based on the end user domain model." --James O. Coplien.
It's from Cope's Lean Architecture book, and it's about optimizing for cycle time and fast delivery. But how? How does a software architecture help optimize for fast delivery? I think some of those tools for thought from LSD can help us out, not only when we apply them to the process, but also when we apply them to the software architecture and the code.
Value Stream Mapping in Code
First and foremost, I think we can learn a lot about what makes a good software architecture by applying value stream mapping to our code bases. What I mean by this is: map out what needs to be done in the code for a typical feature/user story/use case - whichever is your unit of work - to be implemented. Then look at which steps take what amount of effort.
Note that when I say code here, I include config files, tests, deployment scripts, migration scripts and so on. Mapping this out gives us insight into which parts of the architecture need to become better: Is most of the time spent dealing with a particularly hairy integration? Maybe that needs to be better isolated. Is a lot of time spent writing and rewriting data migration scripts? Maybe we aren't using the right persistence solution. Is it very time consuming to build out the GUI? Maybe that part of the code isn't testable enough. These are just made-up examples. The point is that I think value stream mapping is a good tool for finding those hot spots within the software architecture.
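To make the idea concrete, here's a minimal sketch of what such a map might look like once written down. The steps and the hour figures are entirely made up; the point is only that once the effort per step is visible, the hot spots stand out:

```python
# A hypothetical value stream map for implementing one typical feature.
# Every step name and hour estimate below is invented for illustration.
feature_value_stream = [
    ("write domain logic", 3),
    ("wire up dependency injection", 1),
    ("write data migration script", 6),
    ("update integration with billing system", 8),
    ("build GUI", 5),
    ("write tests", 4),
    ("update deployment scripts", 2),
]

total = sum(hours for _, hours in feature_value_stream)

# Print the steps sorted by effort, biggest first, so the hot spots
# (here: the billing integration and the migration script) jump out.
for step, hours in sorted(feature_value_stream, key=lambda s: -s[1]):
    print(f"{step:40s} {hours:2d}h  {100 * hours / total:4.1f}%")
```

In this invented example the hairy integration and the migration scripts together eat almost half the effort, which is exactly the kind of signal that points at where the architecture needs work.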
Seeing Waste
Another tool I think we can apply to the code is seeing waste. Basically, this just says that we should be on the lookout for anything that creates waste - in terms of effort or in terms of resources. Applying this to software architecture means asking questions like: Do we have methods, or even whole classes or layers, that just delegate to other parts of the code without any additional logic? Because that's just wasteful. It takes time to write, it takes time to debug through, and it takes (a little bit of) time to execute. All of that is waste. We then need to ask ourselves if that waste is justified by some other goal fulfilled by those delegations. If not, they need to go.
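Here's what that kind of pure-delegation waste might look like in practice. The class names are made up for the sake of the example:

```python
# A hypothetical pass-through layer: OrderService adds no logic of its
# own, it only forwards every call to OrderRepository.
class OrderRepository:
    def __init__(self):
        self._orders = {}

    def save(self, order_id, order):
        self._orders[order_id] = order

    def find(self, order_id):
        return self._orders.get(order_id)


class OrderService:
    """Pure delegation: every method just forwards to the repository.

    Unless this layer earns its keep some other way (say, a transaction
    boundary or a security check), it is waste - extra code to write,
    to debug through, and to execute.
    """

    def __init__(self, repository):
        self._repository = repository

    def save(self, order_id, order):
        return self._repository.save(order_id, order)

    def find(self, order_id):
        return self._repository.find(order_id)
```

If nothing in the system ever needs `OrderService` to do more than forward, callers might as well talk to `OrderRepository` directly.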
Another useful question is whether there are parts of the architecture that are only really warranted for a few features in the system but impact all features. E.g. if 80% of the features in an application are basic CRUD, but we do all data access through repositories because the other 20% of cases need more than CRUD, isn't it wasteful to funnel everything through those repositories? Even simple code takes time to write and test. Any code that isn't clearly justified is waste.
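A small sketch of the contrast, using an in-memory SQLite database and made-up table and class names:

```python
import sqlite3

# Throwaway in-memory database just for this sketch.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO customers VALUES (1, 'Ada')")

# The 80% case: plain CRUD is direct and obvious.
def get_customer(db, customer_id):
    return db.execute(
        "SELECT * FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()

# Funnelling the same read through a repository means an extra class
# and extra wiring - code that must still be written and tested, even
# though it adds nothing for the simple case. It's only justified if
# the 20% of non-CRUD cases genuinely need the abstraction here too.
class CustomerRepository:
    def __init__(self, db):
        self._db = db

    def get(self, customer_id):
        return self._db.execute(
            "SELECT * FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()

assert get_customer(db, 1) == CustomerRepository(db).get(1) == (1, "Ada")
```

Both paths return exactly the same row; only one of them required an extra class to get there.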
Options Thinking
Options thinking - in my mind - applies pretty directly to code. Options thinking is about keeping options open without just stalling: finding ways to postpone decisions until more information is available, without slowing down progress. In code, this is done pretty directly with well-designed abstractions. That's the obvious one. I also think techniques like feature toggles and A/B testing, along with looking for minimal viable implementations, support options thinking. These techniques help with trying out more than one direction in parallel and getting real-world feedback. If this is done quickly - e.g. by focusing on minimal implementations - it allows us to postpone deciding between the different directions a little while and gather more information in the meantime.
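A minimal sketch of what that might look like: two versions of the same rule live side by side, a toggle keeps the option open, and a crude A/B split sends half the users to each so real-world feedback can inform the eventual decision. The pricing rules, the toggle name, and the even/odd split are all invented for illustration:

```python
# Hypothetical feature toggle - in a real system this would typically
# come from configuration rather than a module-level dict.
toggles = {"new_pricing_enabled": True}

def price_old(amount):
    return round(amount * 1.25, 2)  # the current rule

def price_new(amount):
    return round(amount * 1.20 + 1.0, 2)  # the candidate rule

def price(user_id, amount):
    if not toggles["new_pricing_enabled"]:
        return price_old(amount)
    # Crude A/B split: even user ids get the new rule, odd ids the old
    # one - both directions ship, and the decision is postponed until
    # the feedback is in.
    return price_new(amount) if user_id % 2 == 0 else price_old(amount)
```

Flipping the toggle off retires the experiment instantly, without a deploy of new code, which is exactly the kind of cheap option this technique buys.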
Coding pull style - letting a concrete, expressed need pull the implementation into existence rather than writing code speculatively: TDD. That's it.
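In miniature, pull style looks like this: the tests state a need first, and only the code those tests pull into existence gets written. The discount rule and all names are made up for the example:

```python
import unittest

def loyalty_discount(years_as_customer):
    """One percent discount per year as a customer, capped at 10%.

    Written only because the tests below demanded it - nothing
    speculative, nothing the tests didn't pull for.
    """
    return min(years_as_customer, 10) / 100

class LoyaltyDiscountTest(unittest.TestCase):
    def test_one_percent_per_year(self):
        self.assertEqual(loyalty_discount(3), 0.03)

    def test_capped_at_ten_percent(self):
        self.assertEqual(loyalty_discount(25), 0.10)

# Run the tests explicitly so the sketch is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoyaltyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The cap only exists in the code because a test asked for it; that is the pull.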
Summing up: these are not fully formed ideas, but now they are out there. Any feedback will be much appreciated.