Friday, May 27, 2011

Take-Away Points From my Software Craftsmanship 2011 Session

At the Software Craftsmanship 2011 conference yesterday I had the pleasure of running a workshop on how to deal with "legacy" test suites. Apart from having a lot of fun, I also learned a lot from the discussions during the workshop.

The premise of the workshop was a problem that I think is becoming more and more common: Even though more and more code bases have test suites around them, the quality of those tests is low. This lack of quality can stem from a number of things. First of all, most automated tests, in my experience, still aren't written in a TDD workflow; they are written some time after the production code - sometimes right after, sometimes days or weeks later. Secondly, automated testing efforts sometimes seem to lack a clear purpose. Lastly, test suites are often (as in very often) abandoned as soon as the project comes under any sort of pressure - from deadlines, budgets, changing requirements or whatever else.

These things lead to test suites that are treated as second or third class code; code that is not seen as important. That in turn leads to a range of problems: tests with bland names - like test001 - tests that assert nothing or next to nothing, tests that just mimic the structure of the implementation without showing or exercising the behaviour from a domain/specification perspective, and low coverage of both code and features.
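To make that concrete, here is a hypothetical example - made up for this post, not taken from any real code base - of the kind of test I have in mind, followed by the same check written as a small specification of behaviour. I'm assuming an NUnit-style framework and a Dinner class with an IsValid property along the lines of the NerdDinner domain model:

    // Hypothetical example - a "second class" test: bland name, weak assertion.
    [Test]
    public void test001()
    {
        var dinner = new Dinner { Title = "" };
        Assert.IsNotNull(dinner); // asserts next to nothing
    }

    // The same check expressed as a specification of behaviour.
    [Test]
    public void NewDinner_WithEmptyTitle_IsNotValid()
    {
        var dinner = new Dinner { Title = "" };
        Assert.IsFalse(dinner.IsValid);
    }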

To get concrete we took a hard look at the tests for the NerdDinner sample application. In there we found most of the problems listed above, which raises the question: given that we were taking over maintenance and future development of NerdDinner, how would we proceed with regard to the tests? Clearly the existing tests are not at a quality level that would allow us to have confidence in the changes we make to NerdDinner just based on the tests running. On the other hand they do cover about a third of the code, and most of them do assert stuff. So if one of them were to break, that would likely indicate a problem.

The discussion during the workshop seemed to divide people into two camps. One said that these tests are not worth carrying forward, because they will require maintenance while not providing any real value as regression guards or as specifications. The other camp said: keep them and incrementally improve or delete them, keeping in mind that we don't know how valid the individual tests are.

To sum up, the main points I got from the "throw the tests away" camp were:
  • Since coverage is way too low, we're better off with a small number of broad acceptance tests. We can write those in the initial iteration, where we need to familiarize ourselves with the code base anyway.
  • Tests that don't assert much just give us false confidence; they are a liability that we don't want to keep around.
  • Tests that don't work as specifications mainly confuse us, because we don't have access to the people who wrote them and consequently can't find out what the purpose of those tests was.
  • It's simply quicker to write a bunch of exploratory tests than it is to try to understand and maintain the existing ones.
And the main points I got from the "keep the tests" camp were:
  • Even though there are problems with the tests, each of them was written with some sort of goal in mind. As such we suspect that they can point out something we haven't realized yet.
  • The tests are just another part of the code base, and as such it's natural that we will have to refactor - and improve - them over time.
  • We can put the legacy tests in some kind of quarantine, where we keep running them but don't necessarily trust them. When one breaks we take a look at it and then decide either to delete the test or to fix it and move it from the quarantined suite to our normal test suite (a small sketch of what that could look like follows after this list).
  • The Boy Scout Rule applies to tests as well. Use that as the discipline to incrementally improve the tests.
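To illustrate the quarantine idea, here is a minimal sketch of how it could be done. I'm assuming an NUnit-style framework; the category name, class name and assembly name are my own placeholders, and the same effect can be had with test lists or a separate test project.

    using NUnit.Framework;

    // Sketch: everything we inherit starts out in a "Quarantine" category, so it
    // still runs on every build but is reported separately from the trusted suite.
    [TestFixture]
    [Category("Quarantine")]
    public class LegacyNerdDinnerTests
    {
        [Test]
        public void test001() // kept verbatim until it breaks or gets reviewed
        {
            // ...inherited test body, unchanged...
        }
    }

    // The NUnit console runner can then treat the two suites differently, e.g.:
    //   nunit-console NerdDinner.Tests.dll /exclude:Quarantine   (trusted suite)
    //   nunit-console NerdDinner.Tests.dll /include:Quarantine   (quarantined suite)

When a quarantined test fails we look at it and either promote it - fix it, give it a proper name, drop the Category attribute - or delete it.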
In general I'm in the "keep the tests" camp, but I guess it really comes down to making an informed decision on a case-by-case basis. Which camp are you in? Or which points do you think we missed?

Wednesday, May 4, 2011

Building DCI Context with ASP.NET MVC

In this post I'll address one of the questions that I tend to get when talking about DCI: How is the DCI context built at runtime?

To that end, let's expand a bit on the DCI with ASP.NET MVC sample that I showed in an earlier post. The focus of that post was how to fit ASP.NET MVC and DCI together in a nice cohesive architecture. In this post I will zoom in on how the controller can build up the DCI context object in a conversation with the user.

The example I'm using is still that of a very simplified online bank, where the user can transfer money between accounts. The core DCI parts of this example are explained in this post, and the connection with ASP.NET MVC in this post. Whereas the sample in the last post simply asked the user to input all the information about the transfer in one screen, this version asks for the information one thing at a time in a wizard-like fashion. Whether this is good UX is beside the point of this post. I just want to use an example where the context is built in a series of interactions with the user.
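The context and role code itself lives in the earlier posts; as a reminder, here is a minimal sketch of what the TransferMoneyContext and the two role interfaces might look like, reconstructed from how the controller uses them below. The member names and the role implementation details are my reconstruction, not necessarily the exact code from the sample:

    // Sketch of the roles and context, reconstructed from how the controller
    // uses them; see the earlier DCI posts for the real implementation.
    public interface TransferMoneySource { decimal Balance { get; set; } }
    public interface TransferMoneySink { decimal Balance { get; set; } }

    // Role behaviour attached to the role interface via an extension method.
    public static class TransferMoneySourceTrait
    {
        public static void TransferTo(
            this TransferMoneySource source, TransferMoneySink sink, decimal amount)
        {
            source.Balance -= amount;
            sink.Balance += amount;
        }
    }

    public class TransferMoneyContext
    {
        public TransferMoneySource Source { get; set; }
        public TransferMoneySink Sink { get; set; }
        public decimal Amount { get; set; }

        public void Execute()
        {
            // The interaction: the Source role transfers the amount to the Sink role.
            Source.TransferTo(Sink, Amount);
        }
    }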

Interaction Between User and Code
First off the user is asked to choose a source account on this simple (ugly) page:



Serving up this page is done by this rather straightforward action method:

    public ActionResult SelectSource()
    {
        Session["Context"] = new TransferMoneyContext();
        return View(new SelectAccountVm
                    {
                        SelectedAccountId = string.Empty,
                        Accounts = accountRepo.Accounts
                    });
    }

Worth noticing, though, is that a TransferMoneyContext object is created and stored in the session.
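The view models aren't shown in the listings; for reference, here is a minimal sketch of SelectAccountVm (and the SelectAmountVm used further down), inferred from the properties the actions set - the real view models may well contain more:

    using System.Collections.Generic;

    // Sketches of the view models, inferred from the properties the actions use.
    public class SelectAccountVm
    {
        public string SelectedAccountId { get; set; }
        public IEnumerable<Account> Accounts { get; set; }
    }

    public class SelectAmountVm
    {
        public decimal SelectedAmount { get; set; }
    }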
When the user hits the Next button this action is hit:

    [HttpPost]
    public ActionResult SelectSource(SelectAccountVm model)
    {
        var ctx = Session["Context"] as TransferMoneyContext;
        ctx.Source =
            accountRepo.GetById(int.Parse(model.SelectedAccountId)) as TransferMoneySource;
        return View("SelectDestination",
                    new SelectAccountVm
                    {
                        SelectedAccountId = string.Empty,
                        Accounts = accountRepo.Accounts
                    });
    }

This code updates the context with the chosen source account, and serves the next view in the interaction, where the user is asked for a destination account:



When the user hits this Next button this action is hit:

    [HttpPost]
    public ActionResult SelectDestination(SelectAccountVm model)
    {
        var ctx = Session["Context"] as TransferMoneyContext;
        ctx.Sink =
            accountRepo.GetById(int.Parse(model.SelectedAccountId)) as TransferMoneySink;
        return View("SelectAmount", new SelectAmountVm());
    }

which does almost the same as the previous action, except it puts the chosen account in the Sink property on the context, and then shows this page:



The post from this page does a bit more work. First it updates the context with the amount given by the user. At this point the TransferMoneyContext is complete and ready to be executed, so the action executes it. Finally a result page is shown:

    [HttpPost]
    public ActionResult SelectAmount(SelectAmountVm model)
    {
        var ctx = Session["Context"] as TransferMoneyContext;
        ctx.Amount = model.SelectedAmount;

        ctx.Execute();

        return ResultPage(ctx);
    }

and the result page is:



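Before summing up, one thing worth making explicit about the "as TransferMoneySource" and "as TransferMoneySink" casts in the actions above: they can only succeed because the concrete Account entity implements the role interfaces - otherwise the casts would return null. A minimal sketch, with member names that are my assumption:

    // Sketch: the Account entity can play both roles because it implements both
    // role interfaces; the casts in the controller actions rely on this.
    public class Account : TransferMoneySource, TransferMoneySink
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Balance { get; set; }
    }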
Summing Up

To sum up: To build up a DCI context in ASP.NET MVC, simply store a context object in the session when a use case is started, build it up through the conversation with the user, and execute it when it makes sense from a functionality perspective. This way the controller code is kept very simple, concentrating only on building up the context through interaction with the user, and the details of the use case execution are - from the perspective of the controller - hidden behind the context. Furthermore, because of the way the controller builds the context, the context object only knows the domain objects through the roles they have been assigned by the controller - or in fact the user - allowing the full strength of DCI to play out there.

You can find the complete code from this post on GitHub.