Thursday, December 20, 2012

XAML or JavaFx?

This is a quick, subjective, unauthoritative and very non-scientific comparison of building applications with XAML and with JavaFx. The comparison is based on my personal experience working with each. On the XAML side this means WPF and Win8 store apps. On the JavaFx side this means a Windows 7 application.

JavaFx
JavaFx is touted by Oracle (and Sun back in the day) as a new and modern way to build UIs. It's focused on supporting the flexibility and graphical niceness that modern UIs demand. JavaFx can run in and out of the browser, and on various OSs.

My experience with JavaFx is building a good-looking and quite modern desktop application targeting only Windows 7. We opted for writing the view code in ScalaFx, a declarative JavaFx DSL in Scala. This works fine: The application looks good, performs fine, and the code base is pretty clean. But there are a few things bothering me:

  • The UI code is just not as declarative as I would have liked. In general we've found that the built-in controls do not suit our needs, so most of the UI is built from .pngs that have mouse events attached to them. These events swap the .pngs around for hover, clicked and so on. While this works just fine, it means that there is more logic in the views than I'd have liked.
  • In part as a consequence of the above, it's hard to create something like a XAML datatemplate and then bind the data into it. Again this results in more imperative logic in the views.
The things I really like are:
  • The fact that we can use Scala in the views really cuts down on the amount of noise in the view code, compared to Java counterparts, in my opinion. For instance attaching the mouse event handlers is a lot easier with a language where functions are first class citizens.
  • It's fairly easy to run the view code in headless mode from tests. This enables a decent albeit slightly slow TDD workflow for the view code.
XAML
XAML is used in a range of Microsoft's UI technologies, including WPF, Win8 store apps (formerly Metro) and Silverlight. The XAML UI frameworks are also focused on supporting the needs of modern UIs, and also run both in and out of the browser.

My experience with XAML is from WPF and Win8 store apps, where we've followed the MVVM approach without any particular framework, but with some homegrown conventions inspired by Caliburn.Micro to ease some of the repetition in XAML - particularly around binding.

This has also worked out well, resulting in nice applications, but again there are some things that bother me:
  • First and foremost I detest writing code in XML. So I'm not a fan of the XAML language at all. You might, at this point, say that XAML is meant to be generated by tools, not written by humans. And you may be right. Nonetheless, my experience is that you do write most of your XAML by hand. I find doing that to be much quicker and more maintainable than using Blend or Visual Studio to edit XAML.
  • I don't like the code behinds. We've kept them quite small in the projects I've worked on, but they are still there tempting developers at weak moments to hide untestable code.
The things I really like are:
  • As with JavaFx it's fairly simple to run the views headlessly for testing. Again this enables an acceptable, albeit somewhat slow, TDD workflow.
  • Databinding works very well, especially with some conventions in place to cut down on boilerplate code (see the sketch after this list).
  • Datatemplates in particular enable keeping the views declarative.
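To make the databinding point concrete, here is a minimal sketch of the kind of view model property the bindings target. The class and property names are made up; the INotifyPropertyChanged plumbing is the standard WPF mechanism that databinding (and our conventions) build on:

using System.ComponentModel;

// Hypothetical view model: a binding like {Binding CustomerName} in the XAML
// resolves against this property, and the PropertyChanged event is what keeps
// the view up to date when the property is set from code.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string customerName;

    public string CustomerName
    {
        get { return customerName; }
        set
        {
            if (customerName == value) return;
            customerName = value;
            OnPropertyChanged("CustomerName");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}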

Conclusion?
Hmm, I don't think I have enough data here to conclude much. What I will conclude, though, is that for building Windows applications my experience is that both JavaFx and XAML will get the job done, but it's faster to work with XAML, because the databinding is easier, the datatemplates help a lot, and there seems to be less hand-rolling of controls.
Beware though: The projects I'm comparing are not the same, so there is a certain level of comparing apples and oranges going on. Also, this is very subjective. So YMMV.

Update - 2012-12-21
Based on feedback I've decided the conclusion above is too vague. To clarify: In my experience both technologies work, but XAML is faster (and thus cheaper, and faster to market) to work with.

Sunday, December 16, 2012

Speaking at Warm Crocodile

In about a month from now I'll be at the Warm Crocodile conference in Copenhagen. While there I'll be giving two talks, both on the Thursday. The titles and abstracts are:

  • Introduction to Nancy: Nancy is a lightweight .NET web framework. It provides an easy to use, to-the-point alternative to the most commonly used .NET web frameworks. Nancy does not try to be everything to everyone. But it does try to be the super-duper-happy path to web development on .NET and Mono. Come get introduced to Nancy, and judge for yourself.
  • Layers Considered Harmful: Layers are killing us. All the time. They are killing our communication. They are killing our speed. They are killing the others' speed as well. We need to do better. Which we can. I'll tell you why layers are killing us. I might even tell you something about how to do better.
Warm Crocodile is a new conference organized by, among others, Daniel Frost, who has been one half of making Community Cph. happen. If Warm Crocodile can take the fun, sharing and welcoming atmosphere of Community Day and combine it with the high quality content that the impressive international speaker line-up promises, it'll be an absolutely great couple of days.

Hope to see you there for some geeking out and a good discussion!

Thursday, October 11, 2012

Code and Lean


I've been toying around with an idea for a while. It's sort of half baked, but inspired by some good conversations about it at AAOSConf and GOTO I'm going to go out on a limb and sketch it out below.

Lean
Lean software development (LSD) has been around for a while. The Poppendiecks' book took the teachings of Lean Manufacturing and turned them into a set of principles and tools for thought that make sense for software development processes. In contrast to most writings on software development processes, the "LSD" book does not prescribe (or even describe) a particular process. Rather it describes a number of principles and a number of tools that can be used to analyse your current process in order to find out where it can be improved. LSD provides a framework for developing your own process, one that is efficient in your particular situation. The point is to continuously optimize the process towards faster delivery and shorter cycle times.

Software Architecture
What is a good software architecture? That's a hard question to answer. The following is an answer I've used for a few years now in connection with an architecture training course:

"A good architecture shortens the time between understanding user needs and delivering a solution. It does this by eliminating rework and by providing a de-facto standard framework based on the end user domain model." --James O. Coplien.

It's from Cope's Lean Architecture book, and it's about optimizing for cycle time and for fast delivery. But how? How does a software architecture help optimize for fast delivery? I think some of those tools for thought from LSD can help us out, not only when applied to the process, but also when applied to the software architecture and to the code.

Value Stream Mapping in Code
First and foremost I think we can learn a lot about what makes a good software architecture by applying value stream mapping to our code bases. What I mean by this is: Map out what needs to be done in the code for a typical feature/user story/use case - whichever is your unit of work - to be implemented. Then take a look at which steps take what amount of effort.
Note that when I say code here I include config files, tests, deployment scripts, migration scripts and so on. Mapping this out gives us insights into which parts of the architecture need to become better: Is most of the time spent dealing with a particularly hairy integration? Maybe that needs to be better isolated. Is a lot of time spent writing and rewriting data migration scripts? Maybe we aren't using the right persistence solution. Is it very time consuming to build out the GUI? Maybe that part of the code isn't testable enough. These are just made-up examples. The point is that I think value stream mapping is a good tool for finding those hot spots within the software architecture.

Seeing Waste
Another tool I think we can apply to the code is seeing waste: Basically this just says that we should be on the lookout for anything that creates waste - in terms of effort or in terms of resources. Applying this to software architecture means asking questions like: Do we have methods or even whole classes or layers that just delegate to other parts of the code without any additional logic? 'Coz that's just wasteful. It takes time to write, it takes time to debug through, and it takes (a little bit of) time to execute. All of that is waste. We then need to ask ourselves if that waste is justified by some other goal fulfilled by those delegations. If not, they need to go.
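As a made-up illustration of that kind of delegation-only waste (none of these names come from a real code base):

// A "service" method that adds no logic of its own - it only forwards the
// call to the repository below it. It takes time to write, test and step
// through, which is waste unless the extra layer serves some other explicit purpose.
public class Customer
{
    public long Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    Customer Get(long id);
}

public class CustomerService
{
    private readonly ICustomerRepository repository;

    public CustomerService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public Customer GetCustomer(long id)
    {
        return repository.Get(id); // pure pass-through - no additional logic
    }
}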
Another useful question is whether there are parts of the architecture that are only really warranted for a few features in the system but impact all features. E.g. if 80% of the features in an application are basic CRUD, but we do all data access through repositories because there are 20% of cases where more than CRUD is needed - isn't it wasteful to funnel everything through those repositories? Even simple code takes time to write and test. Any code that isn't clearly justified is wasteful.

Options Thinking
Options thinking - in my mind - applies pretty directly to code. Options thinking is about keeping options open while not just stalling. It's about finding ways to postpone decisions until more information is available, without slowing down progress. In code this is done pretty directly with well designed abstractions. That's the obvious one. I also think techniques like feature toggles and A/B testing, along with looking for minimal viable implementations, support options thinking. These techniques can help with trying out more than one direction in parallel and getting real world feedback. If this is done quickly - e.g. by focusing on only doing minimal implementations - it allows us to postpone deciding between the different directions for a little while and gather more information in the meantime.
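To make the feature toggle idea concrete, here is a minimal sketch. The names are hypothetical, and a real toggle would typically be read from configuration or the user context rather than a static field:

// Minimal, made-up feature toggle: both checkout flows live in the code base,
// and the decision between them is postponed until there is real feedback.
// Flipping the toggle (or reading it per user for an a/b test) does not
// require touching the calling code.
public static class FeatureToggles
{
    // In a real project this would come from configuration, a database
    // or the current user's context rather than a hard-coded field.
    public static bool UseNewCheckoutFlow = false;
}

public class CheckoutController
{
    public string Checkout()
    {
        return FeatureToggles.UseNewCheckoutFlow
            ? NewCheckoutFlow()
            : OldCheckoutFlow();
    }

    private string NewCheckoutFlow() { return "new checkout flow"; }
    private string OldCheckoutFlow() { return "old checkout flow"; }
}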

Pull Systems
Coding pull style: TDD. That's it.
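To unpack that one-liner a bit: in a pull system nothing is built until something downstream asks for it, and in TDD the failing test is what asks. A minimal, made-up illustration (NUnit-style):

using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    // The failing test is the downstream demand...
    [Test]
    public void AppliesTenPercentDiscountFromOneHundredAndUp()
    {
        var calculator = new PriceCalculator();
        Assert.AreEqual(90m, calculator.PriceWithDiscount(100m));
    }
}

// ...and this is the minimal code pulled into existence to satisfy it.
public class PriceCalculator
{
    public decimal PriceWithDiscount(decimal price)
    {
        return price >= 100m ? price * 0.9m : price;
    }
}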

Summing up: these are not fully formed ideas. But now they are out there. Any feedback will be much appreciated.

Monday, October 1, 2012

Rapid Releases - Chatting with Sam Newman


I'm at GOTO this week and I got the chance to sit down with Sam Newman, who's doing a talk on rapid releases tomorrow, which I already expected would be really good. My chat confirmed this. When I read the abstract for the talk I got quite interested, because Sam touches on something that I've spent some time thinking about too: How does the software architecture/design of our systems affect our ability to push out software quickly?

I started off asking Sam to give the elevator pitch for why everybody should go to his talk tomorrow. Paraphrasing: we should go because most people - when designing systems - start off thinking about the traditional set of qualities (scalability, availability, security, performance and so on), but ignore "ease of change". Ease of change should often be an important quality attribute though, since it is a necessity for speeding up the release cycle. In the talk Sam will go into some techniques, or patterns if you will, that can help you move towards making small incremental changes to your systems easily. Which is what you need to do if you want to move to a continuous delivery (or even deployment) model.

I got the impression that Sam's talk supplements most of the other talks about continuous delivery that you might hear, because most of those talks focus a lot on the build pipeline and on automating IT infrastructure, whereas his talk goes into the architecture and design of the system. This angle is important (too). In fact, in Sam's experience working with clients wanting to move towards continuous delivery, some of the first steps most teams need to take are around the software architecture. That is, often he sees situations where there are architectural or design problems in the system that either seriously slow down development or hinder doing changes in small increments, which in turn hinders doing small, focused, low risk releases, which ultimately hinders releasing rapidly.

Part of the cure for this is to go into the systems and cut them up into small, well factored services. Aka (in my words) doing SOA right. I asked Sam how this related to DDD - my thinking being that these services follow bounded contexts. The response to this was pretty interesting, I think: Sam agrees that these well factored services will and should follow bounded context boundaries, but points out that the way to get there is not by focusing the services around entities or even aggregates, but rather around business capabilities. I think that is a really helpful way to put it.

As I said, talking to Sam only made me more keen on going to his talk, and if any of the above resonates I'd encourage you to go too. I'm sure it'll be both fun and educational.

Saturday, September 15, 2012

Agile Architecture Open Space Conf 2012 - Impressions

I ran AAOSConf Thursday and Friday this week. This is the second year. And this is the second year my employer, Mjølner Informatics, has been kind enough to sponsor it all. Based on last year, going into this one I had huge expectations. Happily my expectations were met. Once again, I got to spend two days with a bunch of sharp minds and dedicated software professionals. That alone is great. Combine it with the fact that they are there to share, discuss and push their own and each other's understanding of that surprisingly complex activity called software development, and you get a very unusual opportunity for learning.

This year's conference really confirmed to me that there is a very real and very broad push for simplification going on. This includes trends like

  • CQRS and ES
  • Using document stores
  • Cutting down on layering
  • Chopping systems into thinner simpler slices
  • Moving towards continuous delivery
These were all hot topics that drove some very interesting open space sessions.

On the process side of things there were some very good discussions on the finer points of topics like adopting TDD, the value of co-location, DDD, the value of business alignment vs. the value of efficiency in software development and on managing technical debt.

On the one hand there are the broad trends, and on the other there are the pesky details of some of the really hard stuff. That's what makes the open space format great: it's so thoroughly tied to the actual practice going on in real projects, and people are so honest and sharing about their problems and successes, that you get the inside track both on the direction things are moving and on which details are the problematic ones.

Enough on my experience with the conference. Here are a few of the tweets from other attendees:

Seems we all had fun learning a lot. Thanks for that! Hope to see you next year!

Sunday, September 9, 2012

GOTO 2012: The Minus Side

Continuing my look at this year's GOTO 2012 program; what's missing? Well, on the one hand it's easy to point out topic after topic that isn't on the program. On the other hand the conference can't and shouldn't include everything under the sun. Choices have to be made. I get that, and I'm happy that the GOTO program committee does too. I like that GOTO sometimes is a bit opinionated (in fact that's one of its main strengths IMO). I also assume they get lots of "there's not enough XYZ" and are probably sick and tired of it ... but that's what I'll be giving here anyway. But I will try to qualify.
Getting to the point, there are two things that really strike me as missing from this year's program:

  1. Unusual .NET. What I mean by this is content about the large and thriving, but to a lot of devs hidden, ecosystem of community supported .NET (related) technology. Like Nancy or OpenRasta, or ServiceStack or Dapper or Rebus or MassTransit or ChuckNorris etc. There's enough .NET content on the program for my taste. It's just that it's too mainstream. I think it should be more challenging. The Danish .NET community is very strong and very knowledgeable. We need and expect a challenge from GOTO.
  2. Ruby. GOTO is a cross platform, cross community conference. That's another one of its main strengths. But when I went through the program I saw only one talk with any relation to Ruby: Karl Krukow's Calabash talk - which really isn't Ruby oriented at all, but since Calabash supports Cucumber syntax, there's some sort of link to Ruby. I think this is both a pity and strange: The Danish Ruby and Rails communities are alive and kicking as far as I can tell from the outside, but with the current program Rubyists or Railsists would have a hard time justifying coming to GOTO. Which is a shame, since that makes the conference narrower. I would have loved to see DHH give a keynote. I would have loved to get the "what's new in Rails 3.whatisthelatest" talk or a "state of DSLs in Ruby" talk or ... something even more fun that I don't even know is going on in Ruby land.
But hey, these are just peeves about an otherwise great program.

Tuesday, August 28, 2012

GOTO 2012 - The Plus Side

I'm going to be at GOTO again this year, and - as always - I'm excited about the program. Lots of exciting stuff going on. While I will try not to plan too much ahead, because I find it both more fun, and in the end also more educational, to just go with the flow and see the talks that catch my attention when I'm right there at the conference, I have a few things on my radar that I'm most probably going to see:
  • Monday morning I'll likely kick off the conference with Brian Sletten's Webs of Data talk. I saw Brian give a good technical talk on testing REST APIs last year, and the abstract for this one sounds intriguing.
  • Later Monday I want to see Jonas Bonér talk about Akka - because Akka and Scala are so very, very cool!
  • Tuesday I want to hear Sam Newman talk about how to design for rapid releases, because that's something I've been spending time thinking about too. It's a fascinating question really: How does the design of the software - of the code - affect the speed at which we can deliver? This is not just about how cruft slows us down, but also about how to make software that we can build trust in quickly, that lends itself to automation and so on. I'm curious what Sam has to say.
  • Tuesday I also want to hear a little something about R. Mainly because I know nothing about R except that there seems to be a growing hype around it. And also languages are fun, so of course I need a language talk!
  • On the face of it Wednesday seems a bit slower for me, but I'll probably go see Dan North's talk and Liz Keogh's talk, because both are such passionate, opinionated (in a good sense) and well-spoken people.
You can see my (very) tentative schedule on this neat thing GOTO made.

If you're going, create your schedule on the GOTO site, and post the link in the comments. I'd like to see what caught your eye.

Friday, August 24, 2012

Draupner: Full Stack ASP.NET MVC Scaffolding


A little while back I announced on twitter that Mjølner had open sourced its ASP.NET MVC scaffolding tool, Draupner:



Since then Draupner has been updated to produce ASP.NET MVC 4 apps, and all the dependencies have been updated too. So now seems like a good time for an introductory post about Draupner.


Why Did we Build Draupner?
Scaffolding is not a new concept. It's been used for a while in other web frameworks, and moreover Steven Sanderson's ASP.NET MVC scaffolding has been around for a while too. That begs the question: why another scaffolder? Basically because the existing ASP.NET MVC one does not do what we (Mjølner) want the way we want it: We want a simple command line tool that can

  • Set up a new ASP.NET MVC solution with a "Web" project along with an associated "Core" project and a "Test" project.
  • Add further entities to the solution as needed throughout the solution's lifetime
  • Add CRUD operations and GUIs to existing entities
  • Add tests for all the other stuff it adds
  • Allow us to code away happily on the solution without having to think about making the scaffolding tool happy

Furthermore, and importantly, we wanted everything the tool produced to follow an architecture we've used successfully time and again. This means setting up a certain structure in terms of projects and folders, as well as building on a certain technology stack that we like to work with. In other words the tool, Draupner, is a set of practices we at Mjølner have had success with, put into code. These last bits are what set Draupner apart.

What Technology Stack does Draupner Set Up?
Draupner projects use a bunch of technologies that we've found to work well together. All of them are set up as NuGet dependencies (except Rake). The stack includes:


You can see the full list on the Draupner page on Github.

What Does a Draupner Solution Look Like?
Let's have a quick glance at a solution built with Draupner. This screenshot shows which projects such a solution consists of:



The .Web project is the ASP.NET MVC site: It includes views and some thin controllers that rather quickly call into the .Core project.

The .Core project is where the domain model goes and where the persistence of said domain model is handled. We like intelligent domain models, so this is where the smarts of the application are meant to go.

The .Test project contains xUnit tests for both the .Web and the .Core projects.

This is all set up by Draupner during the initial project creation. Moving on from there, Draupner can add further entities to the domain model and add CRUD operations/GUIs to those entities. Taking a look inside the .Core project we see:



This gives a peek into the technology stack used by Draupner projects: Entities are persisted to a SQL Server database via NHibernate (you can reconfigure NHibernate all you want if you e.g. want to change to MySql), and the NHibernate mappings are set up with Fluent.NHibernate. Draupner also creates repositories for the entities it creates, which the controllers in the .Web project can use.

Also notice the Castle.Windsor dependency: All the code produced by Draupner uses dependency injection and inversion of control. Castle.Windsor is the IoC/DI container of choice used by Draupner projects. E.g. the repositories mentioned above expect an IUnitOfWork into which they can enroll operations. This is injected into them by Windsor. Skipping ahead a bit, let me mention that the .Web project sets up an NHibernate unit of work per web request and registers it with Windsor.
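I won't claim this is exactly what Draupner generates, but the shape of the repositories is roughly like the following sketch - the important bit being that the unit of work is handed in through the constructor, so Windsor (or a test) can supply it:

// Rough sketch only - the names and members are illustrative, not Draupner's
// actual output. The point is the constructor injection: the repository never
// news up its own unit of work, it enrolls its operations in whatever
// IUnitOfWork it is handed (one per web request in the .Web project).
public interface IUnitOfWork
{
    T Get<T>(long id) where T : class;
    void Save<T>(T entity) where T : class;
}

public class ProductRepository
{
    private readonly IUnitOfWork unitOfWork;

    public ProductRepository(IUnitOfWork unitOfWork) // injected by Windsor or a test
    {
        this.unitOfWork = unitOfWork;
    }

    public Product GetById(long id)
    {
        return unitOfWork.Get<Product>(id);
    }

    public void Add(Product product)
    {
        unitOfWork.Save(product);
    }
}

public class Product
{
    public virtual long Id { get; set; }
    public virtual string Name { get; set; }
}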

Lastly we can notice that Draupner sets up Log4Net, so that it's ready to go.

Let's move up the stack and open up the .Web project:



We can see that Draupner has created controllers for the entities it created. These each allow for simple CRUD operations.

Draupner has also created simple but nice Ajaxy CRUD views for the entities. These aren't really expected to be used in production, but act as placeholders until "the real thing" is implemented.

Draupner has also created a few view models, which are used in the CRUD GUIs, and has set up AutoMapper configurations to map between the view models and the entities in .Core.

Worth mentioning is also that the .Web project uses Elmah for error logging in the web layer.

As mentioned, Draupner creates tests for all this as well. They end up in the .Test project:

The tests are xUnit tests and use AutoFixture and Rhino.Mocks.

So What Now?
If this caught your interest, go clone the Draupner sample project on Github and take a harder look at how things are set up, or take Draupner for a spin, following the instructions in the readme. If you like it, but find something missing or not working, let us know. We're not making any promises with regards to support and bug fixes though, so an even better idea is to send a pull request. Those we do welcome.

Monday, July 9, 2012

WPF View Smoke Testing

I enjoy TDD. I enjoy the way it makes me code and I enjoy the rhythm. That is, I really enjoy TDD'ing the logic in my applications: The business logic, the controller logic, the domain logic; wherever there is logic that I can isolate and unit test cleanly, TDD is just all fun and productivity. I enjoy TDD'ing integration points a little less: The database access, the web service calls, and so on are slow to test, so the TDD rhythm is broken. I still prefer TDD'ing these parts too though. I do not enjoy TDD'ing UIs: In my experience the tests are slow, either the rate of false negatives is high or the rate of false positives is high, and UI tests are a pain to write (don't even get me started on the horrors of recording UI tests). Therefore I usually settle for subcut tests and manual UI testing when it comes to views. That is: I want to TDD my presentation logic, but am usually OK with checking manually that the pixels are in the right spot.
Working with WPF this is - at first sight - supported by MVVM: The view models sit right under the views, and are able to cut the .xaml.cs code behinds down to (almost) nothing, so testing the view models is subcut testing, right? Right?? Not quite: There is a little bit of stuff going on in the bindings. I want to test that stuff.

WPF has been around for a while, so I expected this to be a solved problem. Maybe it is, but my googling didn't yield an answer. It did however yield a number of candidate solutions. Below is a quick summary of my evaluation of these candidates. (TL;DR: None of the libraries worked, but Window.Show and FrameworkElement.FindName turned out to be my friends, and maybe, just maybe, ApprovalTests could finally win me over to UI testing).

Notes From My Evaluation

Below there is a section for each of the candidates I quickly evaluated, with some rough notes on how it went. The evaluations are neither terribly thorough nor objective, so YMMV.

IcuTest

IcuTest is a library that works by asking you the first time a test is run to accept or reject the result: It shows you a snapshot of the window to assert on, and you accept or reject. If you accept the image is saved and used as the acceptance criteria in subsequent runs.
I have a few issues with this: The asserts come down to bitmap comparisons, which means (1) that they are imprecise - they assert on the whole window/control instead of just the one thing the current test is about - and (2) they depend on the machine the test is running on. These are exactly the issues with UI testing that have bitten me in the past. So IcuTest is not what I'm after.

White

White seems to be the most grown-up UI testing framework for Windows applications around. Promising. But: I installed the NuGet in my projects and was immediately sent into the Log4Net strong naming circus. Annoying, but not White's fault. I went on to clone the source off Github. It compiled out of the box, but most of the tests failed. After fiddling around for a few hours I got like 60% of the tests running. Not a good sign to be honest. The fiddling included changing things like which properties were used to input text into a textbox, in ways that were pure guessing on my part. All in all this seemed like a route that would lead to unstable tests. So White is not what I'm after.

AvalonTestRunner

The AvalonTestRunner is an old thing from back when WPF was still Avalon. It doesn't claim to do a lot either. But none of these are bad things in themselves. If it works it works. One of the things AvalonTestRunner claims to do is sanity check all the bindings in a view: I.e. throw an exception if there is a binding path that is not found in the data context. That is part of what I'm after and actually a quite nice first line of defense to have. Only problem: I couldn't get it to work, not out of the box and not after debugging through the source for a while. Maybe it's just me, but again I concluded: AvalonTestRunner is not what I'm after.

Guia

Guia seems to try to solve the same thing as White, but only for WPF. It also seems defunct: The current version is 0.1.1 and is 2 years old (at the time of writing). But again: If it works it works. Sadly it didn't work out of the box, and because of the lack of activity I didn't investigate further. Guia is not what I'm after.

Hand Coding

With all the libraries I tried out not really working, I'm left with hand coding the tests myself. As it turns out this is in fact not nearly as bad as it sounds - not for the subcut type of tests I'm after anyway.

MS UI Automation lib

MS UI Automation is a library intended to support building screen readers, remote controls and other types of applications that need to automate the UI of some other - known or unknown - application. It seems to provide everything you'd need to write UI tests for WPF. I haven't tried it out though, because it seems like overkill for what I'm looking for.

.FindName

As it turns out I probably shouldn't have spent so much time wading through testing libraries, because the simple little kinds of tests I set out to write - tests that drive and check the bindings in my XAML - are actually (for the most part at least) easily written by just opening the window under test directly, finding the controls of interest by calling .FindName on the window under test, and then manipulating or asserting against properties on those controls. Like this:

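Something along these lines - a minimal sketch with a hypothetical MainWindow, control name and view model, and note that WPF tests like this must run on an STA thread:

using System.Windows.Controls;
using NUnit.Framework;

[TestFixture, RequiresSTA] // WPF controls must be created on an STA thread
public class MainWindowBindingTests
{
    [Test]
    public void NameTextBoxIsBoundToViewModel()
    {
        // Open the window under test directly, with the view model as data context.
        // MainWindow and MainViewModel are hypothetical stand-ins here.
        var viewModel = new MainViewModel { Name = "Initial" };
        var window = new MainWindow { DataContext = viewModel };
        window.Show();

        // Find the control of interest by its x:Name...
        var nameTextBox = (TextBox)window.FindName("NameTextBox");

        // ...assert that the binding pulled the value out of the view model...
        Assert.AreEqual("Initial", nameTextBox.Text);

        // ...and drive the binding the other way (this assumes the binding
        // uses UpdateSourceTrigger=PropertyChanged).
        nameTextBox.Text = "Changed";
        Assert.AreEqual("Changed", viewModel.Name);

        window.Close();
    }
}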
This is so darn easy that, once I realized that this is actually what I am after, I stopped looking at libraries, although there are still a couple on my list. So there you have it: It's just not complicated enough to need a library.

Promising ones I haven't gotten around to evaluating

In spite of the conclusion above, the following may be worth checking out. I'm pretty sure ApprovalTests at least works, although I haven't tried it. WhiPFlash seems active, but beyond that I know nothing.

ApprovalTests

WhiPFlash

Thursday, June 7, 2012

Book review: Scala in Depth

I read Scala in Depth by Joshua D. Suereth over the last week, which really is not giving it the time and attention it deserves. Furthermore I'm no Scala expert - although I'm not a complete novice either. Anyway, here are my thoughts on the book.

Is it any good? Or TL;DR
Yes, it's good. It delivers on the promise of the title: it is - as far as I can tell - really Scala in depth. It does so with a mixture of some theoretical background for the different language constructs and a lot of practical Scala programming advice coming from the lessons the author has learned through his own use of the language. Where it sometimes lacks a bit is in introducing stuff before using it. This is not a problem if you have a basic working knowledge of Scala already, but I imagine it would be if you didn't. Bottom line: I'd recommend Scala in Depth to anyone who - like me - has dipped their toes in Scala and wants to learn more.

The language
The book starts off with a very quick introduction to the very basics of Scala - it's a statically typed, blended OO/functional language with a very flexible syntax and grammar, running on the JVM (and to some extent .NET if you really want). This introduction includes the REPL and a few tips like "prefer immutability". This should be enough to give you a feel for the language, and to get you started playing in the REPL.
Once the basics are out of the way the book becomes somewhat more hard core - not in an academic language-semantics kind of way, but in a practical way, where the reader is taken through a number of sometimes hairy examples. These give a thorough run-through of the object oriented parts of the language (like classes, objects, traits and polymorphism), the type system (like generics, higher kinded types and existential types) and the functional parts (like functions, higher order functions, and some basic functional patterns).
All in all these parts of the book in themselves provide an in-depth look at Scala. But not a very practical one: Knowing of the language features and even knowing the ins and outs of them is not the same as being able to use them well. These practicalities are addressed by the more style oriented parts of the book.

Style
The other part of the book - which is not pulled out as a separate part, but interleaved with the rest of the book - is the advice on style. To me these are the most interesting parts of the book. This is where the book gets into things like demonstrating how to use actors effectively and safely, how to use implicits sanely, a map of the collections library, and integrating with Java. These are things I suspect you'll need in real life development, and which are painful to learn by yourself.

All in all a very informative book that can take you from a basic knowledge of Scala to a thorough knowledge of the language - given that you take the time to do some serious coding alongside your reading. If you just speed through the book - like I did - you'll learn less, but still a significant amount.

Thursday, May 10, 2012

Slides from my Community Day Copenhagen 2012 Talk

I did a talk on "alternative" .NET web frameworks, because I think it's important to realize that there are serious alternatives to ASP.NET - even in .NET-land. Anyway here are the slides:


Thursday, April 19, 2012

Slides from my MOW2012 Talk

I did a talk at MOW2012 about REST and Nancy today. The code in the talk is taken from my port of RestBucks On .NET to Nancy, and here are the slides:


Friday, February 24, 2012

Repositories and Single Responsibility from the Trenches - Part II

In my last post I wrote about how we swapped a few ever-growing repository classes for a lot of small, focused request classes that each take care of one request to our database. That post showed the gist of how to implement these request classes. This post will focus on how to use them, test them and mock them.

First I'll re-iterate how one of these request classes looks (NB: compared to the code from the last post the base class has been renamed):

public class DeleteDeviceWriteRequest : DataStoreWriteRequest
{
  private readonly Device device;

  public DeleteDeviceWriteRequest(Device device)
  {
    this.device = device;
  }

  protected override void DoExecute()
  {
    // NHibernate trickery cut out for brevity
    Session.Delete(device);
  }
}

And below I show how to use, test and mock this class.

Usage
Usage of these requests is really simple: You just new one up, and ask your local data store object to execute it:

var deleteDeviceRequest = new DeleteDeviceWriteRequest(device);
dataStore.Execute(deleteDeviceRequest);

Say what? I'm newing up an object that uses NHibernate directly. Gasp. That's a hard coupling to the ORM and to the database, isn't it? Well, kind of. But that is where the data store object comes into play: The request can only be executed through that object, because the request's only public member is its constructor and because its base class 'DataStoreWriteRequest' has no public methods. The interface for that data store is:

public interface IDataStore
{
  void Execute(DataStoreWriteRequest req);
  T Execute<T>(DataStoreReadRequest<T> req);
  T Get<T>(long id) where T : class;
  T Get<T>(Expression<Func<T, bool>> where) where T : class;
  void Add<T>(T entity) where T : class;
  void Delete<T>(T entity) where T : class;
  int Count<T>(Expression<Func<T, bool>> where = null) where T : class;
}

That could be implemented towards any database/ORM. In our case it's implemented against NHibernate, and is pretty standard, except maybe for the two Execute methods - but then again they turn out to be really straightforward as well:

public void Execute(DataStoreWriteRequest req)
{
  WithTryCatch(() => req.ExecuteWith(Session));
}

public T Execute<T>(DataStoreReadRequest<T> req)
{
  return WithTryCatch(() => req.ExecuteWith(Session));
}

private void WithTryCatch(Action operation)
{
  WithTryCatch(() => { operation(); return 0; });
}

private TResult WithTryCatch<TResult>(Func<TResult> operation)
{
  try
  {
    return operation();
  }
  catch (Exception)
  {
    Dispose(); // ISession must be disposed
    throw;
  }
}

Notice the calls to ExecuteWith? Those are calls to internal methods on the abstract DataStoreReadRequest and DataStoreWriteRequest classes. In fact those internal methods are the reason that DataStoreReadRequest and DataStoreWriteRequest exist. Using a template method declared internal, they provide inheritors - the concrete database requests - a way to get executed, while hiding everything but the constructors from client code. Only our NHibernate implementation of IDataStore ever calls the ExecuteWith methods. All the code outside our data access assembly cannot even see those methods. As it turns out this is really simple code as well:

public abstract class DataStoreWriteRequest
{
  protected ISession Session { get; private set; }

  internal void ExecuteWith(ISession session)
  {
    Session = session;
    DoExecute();
  }

  protected abstract void DoExecute();
}

To sum up: the client code just news up the requests it needs, and then hands them off to the data store object. Simple. Requests only expose constructors to the client code, nothing else. Simple.

Testing the Requests

Testing the requests individually is as simple as using them. This is no surprise since tests - as we know - are just the first clients. The tests do whatever setup of the database they need, then new up the request and the data store, ask the data store to execute the request, and then assert. Simple. Just like the client production code.

In fact one of the big wins with this design over our old repositories is that the tests become a whole lot simpler: Although you can split up tests classes in many ways, the reality for us (as I suspect it is for many others too) is that we tend to have one test class per production class. Sometimes two, but almost never three or more. Since the repository classes grew and grew so did the corresponding test classes resulting in some quite hairy setup code. With the new design each test class tests just one very specific request leading to much, much more cohesive test code.

To illustrate, here is a first, simple test for the above DeleteDeviceWriteRequest - note that the UnitOfWork objects in this test implement IDataStore:

[Test]
public void ShouldDeleteDeviceWithNoRelations()
{
  var device = new Device();
  using (var arrangeUnitOfWork = CreateUnitOfWork())
  {
    arrangeUnitOfWork.Add(device);
  }

  using (var actUnitOfWork = CreateUnitOfWork())
  {
    var sut = new DeleteDeviceWriteRequest(device);
    actUnitOfWork.Execute(sut);
  }

  using (var assertUnitOfWork = CreateUnitOfWork())
  {
    Assert.That(assertUnitOfWork.Get<Device>(device.Id), Is.Null);
  }
}

Mocking the Request

The other part of testing is testing the code that uses these requests; testing the client code. For those tests we don't want the requests to be executed, since we don't want those tests to get slowed down by hitting the database. No, we want to mock the requests out completely. But there is a catch: The code under test - like the code in the first snippet in the Usage section above - news up the request. That's a hard compile time coupling to the concrete request class. There is no seam allowing us to swap the implementation. What we're doing about this is sidestepping the problem by mocking the data store object instead. That allows us to redefine what executing the request means: Our mock data store never executes any of the requests it's asked to execute, it just records that it was asked to execute a certain request, and in the case of read requests returns whatever object we set it up to return. So the data store is our seam. The data store is never newed up directly in production code, it's always injected through constructors. Either by the IoC/DI container or by tests as here:


[Test]
public void DeleteDeviceRestCallExecutesDeleteOnDevice()
{
  // mock is a Rhino.Mocks MockRepository created in the test fixture's setup
  var dataStore = mock.StrictMock<IDataStore>();
  var sut = new RestApi(dataStore);

  var device = new Device { Id = 123 };

  Expect.Call(dataStore.Get<Device>(device.Id)).Return(device);
  Expect.Call(() =>
    dataStore.Execute(
      Arg<DeleteDeviceWriteRequest>
        .Is.Equal(new DeleteDeviceWriteRequest(device))));

  mock.ReplayAll();

  sut.DeleteDevice(device.Id.ToString());

  mock.VerifyAll();
}


(The above code uses Rhino.Mocks to mock out the data store, but that could have been done quite simply by hand as well, or with any other mocking library.)

That's it. :-)

Monday, January 30, 2012

Repositories and Single Responsibility from the Trenches

In a project I'm involved with we've done what I suspect lots and lots of projects do. We've used the repository pattern to encapsulate database adds, queries and deletes. We have one such repository per entity type. That worked well for me in earlier projects. But in this case several of the repositories have become somewhat unfocused. None of them are big hairy monsters, but the different public methods really don't have much in common except the fact that they mainly act on the same entity type. In other words cohesion is low. Let me exemplify (anonymized just as the rest of the code in this post):


public interface IDeviceRepository
{
  TDeviceType Get<TDeviceType>(long id) where TDeviceType : Device;
  TDeviceType FindByControllerIdentifier<TDeviceType>(string controllerIdentifier)
      where TDeviceType : Device;

  IList<TDeviceType> GetAll<TDeviceType>(int offset, int max,
      DateTimeOffset? createdAfter = null, DateTimeOffset? updatedAfter = null,
      DateTimeOffset? disconnectedAfter = null)
      where TDeviceType : Device;

  int Count<TDeviceType>(DateTimeOffset? createdAfter = null,
      DateTimeOffset? updatedAfter = null, DateTimeOffset? disconnectedAfter = null)
      where TDeviceType : Device;

  void Add<TDeviceType>(TDeviceType device) where TDeviceType : Device;
  IList<TDeviceType> GetAll<TDeviceType>() where TDeviceType : Device;
  IList<TDeviceType> GetAll<TDeviceType>(long[] deviceIds) where TDeviceType : Device;
  IList<TDeviceType> FindByControllerIdentifiers<TDeviceType>(string[] controllerIdentifiers)
      where TDeviceType : Device;
  IList<NodeDevice> FindNodesDevicesForRootDevice(string controllerIdentifier,
      bool active = false);
  ICollection<NodeDevice> FindNodeDevicesForRootDevice(long rootDeviceId, bool active = false);

  IEnumerable<DeviceLink> GetTopology<T>(long id, bool active) where T : Device;
  DeviceLink GetDeviceLink(string controllerIdentifierA, string controllerIdentifierB);
  DeviceLink GetDeviceLink(long id);

  void AddDeviceLink(DeviceLink link);

  IList<Route> FindRoutesByDeviceLink(DeviceLink deviceLink);
  void Delete(Device device);
}



This doesn't look like the interface of a class that has only one reason to change - i.e. it violates the single responsibility principle (SRP). This is in part because it in fact does not act on only one entity type but on a hierarchy of entity types; Device is a base class and there are a handful of concrete types of devices. The obvious solution to that is to move away from a repository for the base class to a number of repositories for the concrete entities - but that's where we came from. That's a route that led to duplicated code, which led to abstracting into a common device repository, so we did not want to go back down that route again.

Taking Single Responsibility Seriously

The solution we've come up with is - at its core - simply to take SRP seriously: We do not want classes that do a bunch of database related operations around a type of entity. We want classes that do just one such operation. Take the Delete method on the interface above. That seems innocent enough, but it turns out that the implementation is a bit complicated: There are a number of relations that need to be untied, and there is some cascading to take care of before the device can be deleted from the database. As a colleague commented: "It looks like a bad stored proc, written in C#". We decided to view that delete operation as "one thing", and decided it justified a class of its own:


public class DeleteDeviceRequest : NHibernateWriteRequest
{
  private readonly Device device;

  public DeleteDeviceRequest(Device device)
  {
    this.device = device;
  }

  protected override void DoExecute()
  {
    var links =
      Session.QueryOver<DeviceLinkReference>()
        .Where(x => x.ControllerIdentifier == device.ControllerIdentifier)
        .List();

    links.ForEach(x => x.Device = null);

    DeleteStatistics();
    new DeleteRoutesWriteRequest(device.Routes).ExecuteWith(DataStore);
    DeleteFromGroup();
    DeleteFromOrder();
    DeleteProfiles();
    DataStore.Delete(device);
  }

  // more private methods...
}


The only thing public here is the constructor. The constructor takes all arguments for the delete, so once the object is constructed it completely encapsulates the delete. And - importantly - it is incapable of doing anything other than that particular delete.
How is the delete executed? Through an execute method on the base class NHibernateWriteRequest.

This swaps the repository pattern for the command pattern plus the template method pattern. But more importantly it swaps a small collection of ever-growing repositories for a large collection of small, never-growing request classes (which, btw, also plays better with the open/closed principle). We've only just started down this path, but I think we'll be happy about it.

There are a few twists I've left out, particularly around making these request objects easy to get at for client code, while maintaining encapsulation and testability. I might revisit those in a future post.