Thursday, October 11, 2012

Code and Lean


I've been toying around with an idea for a while. It's sort of half-baked, but inspired by some good conversations about it at AAOSConf and GOTO, I'm going to go out on a limb and sketch it out below.

Lean
Lean software development (LSD) has been around for a while. The Poppendiecks' book took the teachings of Lean Manufacturing and turned them into a set of principles and tools for thought that made sense for software development processes. In contrast to most writings on software development processes, the "LSD" book does not prescribe (or even describe) a particular process. Rather, it describes a number of principles and a number of tools that can be used to analyse your current process in order to find out where it can be improved. LSD provides a framework for developing your own process, one that is efficient in your particular situation. The point is to continuously optimize the process towards faster delivery and shorter cycle times.

Software Architecture
What is a good software architecture? That's a hard question to answer. The following is an answer I've used for a few years now in connection with an architecture training course:

"A good architecture shortens the time between understanding user needs and delivering a solution. It does this by eliminating rework and by providing a de-facto standard framework based on the end user domain model." --James O. Coplien.

It's from Cope's Lean Architecture book, and it's about optimizing for cycle time and for fast delivery. But how? How does a software architecture help optimize for fast delivery? I think some of those tools for thought from LSD can help us out, not only when we apply them to the process, but also when we apply them to the software architecture and to the code.

Value Stream Mapping in Code
First and foremost I think we can learn a lot about what makes a good software architecture by applying value stream mapping to our code bases. What I mean by this is: map out what needs to be done in the code for a typical feature/user story/use case - whichever is your unit of work - to be implemented. Then take a look at which steps take roughly how much effort.
Note that when I say code here I include config files, tests, deployment scripts, migration scripts and so on. Mapping this out gives us insights into which parts of the architecture need to become better: Is most of the time spent dealing with a particularly hairy integration? Maybe that needs to be better isolated. Is a lot of time spent writing and rewriting data migration scripts? Maybe we aren't using the right persistence solution. Is it very time consuming to build out the GUI? Maybe that part of the code isn't testable enough. These are just made-up examples. The point is that I think value stream mapping is a good tool for finding those hot spots within the software architecture.
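To make that a bit more concrete, here is a completely made-up map for a hypothetical "add a field to the customer profile" story: change the entity and the DTO (minutes), extend a couple of screens (an hour or two), write and test the data migration script (half a day), update the deployment scripts and wait for a full regression run (a day). In a map like that, the migration script and the regression run are the obvious places to start improving.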

Seeing Waste
Another tool I think we can apply to the code is seeing waste: Basically this just says that we should be on the lookout for anything that creates waste - in terms of effort or in terms of resources. Applying this to software architecture means asking questions like: Do we have methods or even whole classes or layers that just delegate to other parts of the code without any additional logic? 'coz that's just wasteful. It takes time to write, it takes time to debug through, and it takes (a little bit of) time to execute. All of that is waste. We then need to ask ourselves if that waste is justified by some other goal fulfilled by those delegations. If not, they need to go.
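As a made-up sketch of the kind of pure pass-through layer I mean (all the types and names here are purely illustrative):

class Customer { long id; String name; }

interface CustomerRepository {
    Customer getCustomer(long id);
    void saveCustomer(Customer customer);
}

// A service layer that adds no behaviour of its own - every method
// simply forwards to the repository below it.
class CustomerService {
    private final CustomerRepository repository;

    CustomerService(CustomerRepository repository) {
        this.repository = repository;
    }

    Customer getCustomer(long id) {
        return repository.getCustomer(id); // pure delegation, no added logic
    }

    void saveCustomer(Customer customer) {
        repository.saveCustomer(customer); // pure delegation, no added logic
    }
}

If the only thing a class like this buys us is "we always have a service layer", it's waste; if it earns its keep some other way - say, as a transaction or security boundary - that should be an explicit decision.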
Another useful question is whether there are parts of the architecture that are only really warranted for a few features in the system but impact all features. E.g. if 80% of the features in an application are basic CRUD, but we do all data access through repositories because there are 20% of cases where more than CRUD is needed - then isn't it wasteful to funnel everything through those repositories? Even simple code takes time to write and test. Any code that isn't clearly justified is wasteful.
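Again a made-up sketch, this time with JPA as the (assumed) persistence API: for the 80% CRUD case the repository is a one-to-one wrapper around the ORM, and only the occasional richer query actually earns it its place.

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;

@Entity
class PurchaseOrder { @Id long id; boolean overdue; }

class PurchaseOrderRepository {
    private final EntityManager entityManager;

    PurchaseOrderRepository(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    // The 80% case: wrappers that add nothing over the ORM underneath.
    PurchaseOrder findById(long id) {
        return entityManager.find(PurchaseOrder.class, id);
    }

    void add(PurchaseOrder order) {
        entityManager.persist(order);
    }

    // The 20% case: the kind of method that might actually justify a repository.
    List<PurchaseOrder> findOverdueOrders() {
        return entityManager
                .createQuery("select o from PurchaseOrder o where o.overdue = true", PurchaseOrder.class)
                .getResultList();
    }
}

Maybe the plain CRUD features could just talk to the ORM directly and the repositories be kept for the cases that need them; whether that trade-off is right depends on the system, but the blanket rule clearly has a cost.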

Options Thinking
Options thinking - in my mind - applies pretty directly to code. Options thinking is about keeping options open while not just stalling. It's about finding ways to postpone decisions until more information is available, without slowing down progress. In code this is done pretty directly with well-designed abstractions. That's the obvious one. I also think techniques like feature toggles and A/B testing, along with looking for minimal viable implementations, support options thinking. These techniques can help with trying out more than one direction in parallel and getting real-world feedback. If this is done quickly - e.g. by focusing on only doing minimal implementations - it allows us to postpone deciding between the different directions a little while and gather more information in the meantime.
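A made-up sketch of what that can look like - one abstraction, two minimal implementations, and a toggle routing a slice of users to the experimental one (here the split is hard-coded; in practice it would come from configuration or a toggle framework):

import java.util.Arrays;
import java.util.List;

// The abstraction keeps the decision open: two candidate implementations,
// and a toggle that decides which one a given user gets.
interface RecommendationEngine {
    List<String> recommendationsFor(String userId);
}

class MostPopularEngine implements RecommendationEngine {
    public List<String> recommendationsFor(String userId) {
        return Arrays.asList("most-popular-1", "most-popular-2"); // minimal implementation
    }
}

class PersonalisedEngine implements RecommendationEngine {
    public List<String> recommendationsFor(String userId) {
        return Arrays.asList("personalised-1", "personalised-2"); // minimal implementation
    }
}

class RecommendationToggle {
    private final RecommendationEngine current = new MostPopularEngine();
    private final RecommendationEngine experimental = new PersonalisedEngine();

    // A/B split: 10% of users get the experimental engine; deciding which
    // direction to keep is postponed until real-world data is in.
    RecommendationEngine engineFor(String userId) {
        boolean inExperiment = Math.abs(userId.hashCode()) % 100 < 10;
        return inExperiment ? experimental : current;
    }
}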

Pull Systems
Coding pull style: TDD. That's it.
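That is, the tests act as the downstream process pulling code into existence: write a failing test that states a need, then write just enough production code to satisfy it. A minimal, made-up JUnit example of that rhythm:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// The test comes first and states the need...
public class ShippingCostTest {
    @Test
    public void ordersOverFiveHundredShipForFree() {
        ShippingCalculator calculator = new ShippingCalculator();
        assertEquals(0.0, calculator.costFor(600.0), 0.001);
    }
}

// ...and the production code is pulled into existence to satisfy it - and no more.
class ShippingCalculator {
    double costFor(double orderTotal) {
        return orderTotal >= 500.0 ? 0.0 : 50.0; // just enough to make the test pass
    }
}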

Summing up, these are not fully formed ideas. But now they are out there. Any feedback will be much appreciated.

Monday, October 1, 2012

Rapid Releases - Chatting with Sam Newman


I'm at GOTO this week, and I got the chance to sit down with Sam Newman, who's doing a talk on rapid releases tomorrow - a talk I already expected would be really good. My chat with him confirmed this. When I read the abstract for the talk I got quite interested, because Sam touches on something that I've spent some time thinking about too: How does the software architecture/design of our systems affect our ability to push out software quickly?

I started off asking Sam to give the elevator pitch for why everybody should go to his talk tomorrow. Paraphrasing: we should go because most people - when designing systems - start off thinking about "the traditional" set of qualities (scalability, availability, security, performance and so on), but ignore "ease of change". Ease of change should often be an important quality attribute though, since it is a necessity for speeding up the release cycle. In the talk Sam will go into some techniques, or patterns if you will, that can help you move towards making small incremental changes to your systems easily - which is what you need to do if you want to move to a continuous delivery (or even deployment) model.

I got the impression that Sam's talk supplements most of the other talks about continuous delivery that you might hear, because most of those talks focus a lot on the build pipeline and on automating IT infrastructure, whereas his talk goes into the architecture and design of the system. This angle is important (too). In fact, in Sam's experience working with clients wanting to move towards continuous delivery, some of the first steps most teams need to take are around the software architecture. That is, he often sees situations where there are architectural or design problems in the system that either seriously slow down development or hinder making changes in small increments, which in turn hinders doing small, focused, low-risk releases, which ultimately hinders releasing rapidly.

Part of the cure for this is to go into the systems and cut them up into small, well-factored services. Aka (in my words) doing SOA right. I asked Sam how this related to DDD - my thinking being that these services follow bounded contexts. The response to this was pretty interesting, I think: Sam agrees that these well-factored services will and should follow bounded context boundaries, but points out that the way to get there is not by focusing the services around entities or even aggregates, but rather around business capabilities. I think that is a really helpful way to put it.
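To sketch (in my words, not Sam's) what that difference can look like - the types and names here are purely illustrative:

class Customer { long id; }
class RegistrationRequest { String name; String email; }
class RegistrationResult { boolean approved; }

// Entity-centric: the service is little more than CRUD around an entity.
interface CustomerService {
    Customer get(long id);
    void save(Customer customer);
    void delete(long id);
}

// Capability-centric: the service is named after something the business
// does, and owns the data and rules needed to do it.
interface CustomerOnboardingService {
    RegistrationResult registerNewCustomer(RegistrationRequest request);
    void approvePendingRegistration(long registrationId);
}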

As I said, talking to Sam only made me more keen on going to his talk, and if any of the above resonates I'd encourage you to go too. I'm sure it'll be both fun and educational.