
Saturday, January 13, 2018

Event Sourced Aggregates Part 6: Smarter Events

In this 6th and final post in my series about event sourced aggregates, I will make the events a little bit smarter. So far events have been completely dumb objects that just carry some data around but have no behavior. In this post I give every event a little bit of behavior by moving the 'When' methods from the aggregate to the events. The result is that the aggregate becomes anemic (that is, no behavior, just data) and no longer violates the Open/Closed principle, and that the events become more self-contained.

In the 5th post I made an attempt at moving the 'When' methods out of the aggregate to get to a design where the aggregate does not violate Open/Closed. I did so by introducing a new abstraction - an aggregate projector - but that just ended up with the same Open/Closed violation that the aggregate suffered from originally. Therefore I take another approach in this post.

Smarter events

Let's, once again, see how the aggregate looks after domain logic was moved from the aggregate to the command handlers in post 4:


And the 'UsernameChangedEvent' looks like this:


In order to keep the aggregate from growing and growing as features are added to the system, I will move the 'When(UsernameChangedEvent e)' method to a 'When' method on the event itself, like this:


Now the event holds the data of the event and is responsible for applying the state changes to the aggregate that the event implies. That is: a domain event - like the 'UsernameChangedEvent' - is an indication that something happened to the aggregate, i.e. there was a state change to the aggregate. The 'When' method on the event applies that state change to the aggregate it gets passed as an argument.
When moving the 'When' method from the aggregate to the event, the signature of the method must change a bit, to 'void When(UserAggregate aggregate)'. Notice that this signature is not specific to the 'UsernameChangedEvent'; it will be the same for all events. That turns out to be a quite handy side effect of moving the 'When' methods. More on that in the next section. Since the 'When' signature is the same for all events, I'll go ahead and add it to the event interface:
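The interface, and the 'UsernameChangedEvent' implementing it, could look along these lines (a minimal sketch; the exact property names are assumptions on my part):

public interface Event
{
    void When(UserAggregate aggregate);
}

public class UsernameChangedEvent : Event
{
    public string Username { get; set; }

    // The event applies the state change it represents to the aggregate it is handed.
    public void When(UserAggregate aggregate)
    {
        aggregate.Username = Username;
    }
}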


Before the 'Event' interface was just a marker interface. Now it shows that all events have a 'When' method.

Event replay revisited

Moving the 'When' methods impacts how event replay is done. Remember from the first post that what is stored in the database are the events. When we need an aggregate, all the events emitted on that aggregate are read from the database, and then the 'When' method for each one is called. Since the 'When' methods each apply the state change to the aggregate implied by the event, the end result is an aggregate object in the correct current state. The replay process goes like this:


Where the event replay is done by the 'Replay' method on the abstract class 'Aggregate'. The special sauce in this is in the implementation of the 'Play' method which - as shown in the first post of the series - involves using reflection to find a suitable 'When' method. This becomes a lot simpler now that all events implement an interface with a 'When' method on it:
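Since every event now exposes 'When' through the interface, 'Play' no longer needs reflection. A minimal sketch of what it can be reduced to (assuming the base class knows the concrete aggregate type; the cast is a simplification):

protected void Play(Event @event)
{
    // No reflection needed anymore - the interface guarantees a 'When' method.
    @event.When((UserAggregate)this);
}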


This simplification was not the goal of the refactoring done in this post, but a nice side effect, and as such an indication that this is indeed a good road to follow.

The anemic aggregate

With the `When` methods moved to the events the aggregate has become anemic. And that is a good thing in this case. The UserAggregate is reduced to this:
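Roughly, that amounts to something like this (a sketch; the 'Username' property is an assumption based on the examples in this series):

public class UserAggregate : Aggregate
{
    public string Username { get; internal set; }
}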


which is simply a representation of the current state of a user. That is the essence of what the 'UserAggregate' is. It only changes if we have reason to change how the current state of a user looks, which I consider a good reason to change that class. Moreover the 'UserAggregate' no longer violates the Open/Closed principle, since new domain behavior can be added by adding new commands, command handlers and events, without changing the 'UserAggregate'.
Often an anemic domain model is seen as an anti-pattern in object oriented code. I agree - but only when looking at the domain model as a whole, not when looking at just one class, like the 'UserAggregate'. My point here is that looking at the domain model as a whole includes the commands, command handlers and events. In that perspective the domain model is not anemic - only the user aggregate is.

Wrapping up

In the first post of this series I outlined what I see as a typical C# implementation of event sourced aggregates. I also argued that that implementation leads to the aggregates violating the Open/Closed principle. In the third and fourth posts I solved half of the problem by making the command handlers smarter, and in this post I solved the second half by making the events smarter.

The final code is on Github, as are the intermediate steps (see other branches on that same repository).

Wednesday, November 22, 2017

Event Sourced Aggregates Part 5: Anemic aggregate

In this 5th part in my series on event sourced aggregates I continue moving code out of the aggregate. In the last post I moved domain logic out of the aggregate and into the command handlers, making them smart enough to actually handle commands by themselves. In this post I will continue along the path of moving stuff out of the aggregate: I will move the remaining methods, namely the 'When' methods, out of the aggregate and into a new abstraction - an aggregate projector. While this achieves the goal set out in the first post of stopping the aggregate from growing and growing over time, the new aggregate projector unfortunately suffers from the same problem. In the next post I will take another approach to moving the 'When' methods out and arrive at a better design, but first let's follow what I think is the most obvious path and see why it leads to a bad design.

The aggregate is a projection

Taking a step back, what is the aggregate? At the datastore level it is a list of events. Together the events represent everything that has happened to the aggregate and, as we have seen, the current state of the aggregate can be recreated from the list of events. At the code level the aggregate is an object - it has some state and it has some behavior. At this point the only behavior it has left is the 'When' methods. The important bit is that the aggregate is an object. It's just an object. Likewise, in the code, different read models are just objects that result from projections over events. In that sense the aggregate is not different from a read model: It is an object that is the result of a projection over events.

Introducing the aggregate projector

Before I start refactoring, let's take a look at how the aggregate looks right now:



The aggregate has some state represented by the properties on lines 3 and 4, and then some 'When' methods that make up the logic needed to perform the projection from the aggregate's events to its current state.

Seeing that a new 'When' method will be added to the aggregate every time a new event is introduced - and new features will usually result in new events, in my experience - the aggregate still has the problem of growing big and unwieldy over time. So let's introduce another class that can do the projections:
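A sketch of such a projector might look like this (property and method names are assumptions; the real code is in the branch linked at the end of the post):

public class UserAggregateProjector
{
    // One 'When' per event type, applying the state change to the aggregate.
    public void When(UsernameChangedEvent e, UserAggregate aggregate)
    {
        aggregate.Username = e.Username;
    }
}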



This doesn't just work, though. First off, the new 'UserAggregateProjector' cannot set the properties on the aggregate. That can be fixed by adding internal setters to the aggregate, allowing the projector to access the setters while disallowing access from outside the project containing the 'UserAggregate' - which I expect to mean anything beyond commands, command handlers and events.
Furthermore the event replay done when fetching an aggregate must also change from calling 'When' methods on the aggregate to calling them on the 'UserAggregateProjector'. That means changing the 'Aggregate' base class to this:



The changes are the introduction of the 'GetProjector' method on line 30 and the use of that new method in the 'Play' method, which now does reflection over the projector class to find the 'When' methods instead of doing it over the aggregate. The end result is the same: an aggregate object with the current state of the aggregate recreated by replaying all events.
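A rough sketch of what that might look like (names and signatures are assumptions; the projector's 'When' methods are assumed to take the event and the aggregate):

// In the abstract 'Aggregate' base class:
protected abstract object GetProjector();

private void Play(Event @event)
{
    var projector = GetProjector();
    // Find the 'When' overload on the projector matching the concrete event type.
    var when = projector.GetType().GetMethod("When", new[] { @event.GetType(), GetType() });
    when.Invoke(projector, new object[] { @event, this });
}

// In 'UserAggregate':
protected override object GetProjector()
{
    return new UserAggregateProjector();
}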

Moving the 'When' methods has obviously also changed the aggregate, which now only contains state:



This is what is known as an anemic domain model, because it has no behavior. That's usually considered an anti-pattern, but I don't necessarily agree that it is; as argued above, the aggregate is essentially a projection of the events, so I do not see why that object has to be where the domain behavior goes. As we saw in the 4th post of the series, command handlers are a nice place to put domain behavior.

The projector violates Open/Closed principle


As I stated at the beginning of this post, the design I've arrived at now is not good: the new 'UserAggregateProjector' suffers just as much from perpetual growth as the aggregate did before I moved the 'When' methods out of it. In other words the new projector violates the Open/Closed principle, which is what I am trying to get away from. So I have not solved anything, just moved the problem to a new abstraction :( It seems I need to take another iteration, which I will in the next post.

The code for this post is in this branch.

Tuesday, November 14, 2017

Event Sourced Aggregates Part 4: Smart command handlers

In this fourth post in my series about event sourced aggregates I will continue work on the command handlers. In the 3rd post I did some cleaning up of the command handlers, cutting them down to very little code. In this post I will make the command handlers smart by moving domain logic from the aggregate to the command handlers.

Motivation

In the first and second posts of the series I outlined a typical C# implementation of event sourced aggregates and how that style of implementation leads to ever growing aggregates - every added feature adds (at least) two methods to the aggregate:
  • One for the domain logic. E.g. a 'ChangeUsername' method, that has whatever business logic there is around changing the username. If and when these methods decide a change to the aggregate state is needed they emit a domain event.
  • A 'When' method for any new events. The `When` methods perform all state changes on the aggregate.
The pattern I see in the implementations I come across is that there is a one-to-one correspondence between commands, command handlers and public domain logic methods on the aggregate. For example, for a 'ChangeUsernameCommand' there is a 'ChangeUsernameCommandHandler' class and a 'ChangeUsername' method on the aggregate. I took almost all the plumbing out of the command handler in the last post and essentially left it at this:
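For reference, such a slimmed-down handler is roughly of this shape (a sketch; the 'CommandHandlerHelper' signature and the command's property names are assumptions):

public class ChangeUsernameCommandHandler
{
    private readonly CommandHandlerHelper helper;

    public ChangeUsernameCommandHandler(CommandHandlerHelper helper)
    {
        this.helper = helper;
    }

    public Task Handle(ChangeUsernameCommand command) =>
        helper.Handle(command.UserId, aggregate => aggregate.ChangeUsername(command.Username));
}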


which invokes the 'helper.Handle' method to get all the plumbing done and then calls the 'ChangeUsername' method to get the domain logic done. So in essence the command handler just delegates to the aggregate - but isn't it the responsibility of the command handler to ... handle the command? I think it is. And handling the command means running the domain logic, so let's move that logic from the aggregate to the command handler.

Smart command handlers

In the second post I introduced the 'ChangeUsernameCommand' and the associated command handler and methods on the aggregate. In particular this 'ChangeUsername' method on the aggregate:



which implements the domain logic for changing username. That is the logic I want to move to the command handler.
Moving the domain logic straight over to the command handler changes the 'Handle' method on the command handler to this:



Now the command handler contains the logic for handling the command. Note that the command handler now also emits domain events - on line 7. This makes sense since this is still event sourced, so changes to the aggregate state are still done through events. The rest of the mechanics around events remain unchanged: the 'Emit' method on the base aggregate still calls the 'When' method for the event and stores the event in the list of new events on the aggregate. Saving the aggregate still means appending the list of new events to the event store, and getting the aggregate from the 'AggregateRepository' still means reading all the aggregate's events from the event store and replaying each one.
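As a rough sketch of the idea (not the exact snippet referred to above; the validation rule and property names are my assumptions, and 'Emit' is assumed to be accessible to the handler):

public Task Handle(ChangeUsernameCommand command) =>
    helper.Handle(command.UserId, aggregate =>
    {
        // Domain logic that used to live in UserAggregate.ChangeUsername.
        if (string.IsNullOrWhiteSpace(command.Username))
            throw new ArgumentException("Username must not be empty");

        // The state change still happens through an event.
        aggregate.Emit(new UsernameChangedEvent { Username = command.Username });
    });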

Having moved the domain logic out of the aggregate, I have a slimmer aggregate that only has the state of the aggregate and the 'When' methods. In the next two posts I slim down the aggregate even further by moving the 'When' methods out.

The complete code for this post is in this branch.

Monday, November 6, 2017

Event Sourced Aggregates Part 3: Clean up command handlers

This is the 3rd part of my series about event sourced aggregates. In the first post I outlined a typical implementation of event sourced aggregates, and in the second post I showed how that implementation leads to aggregates that grow bigger and bigger as features are added to the system. In this post I will clean up the command handlers from the previous post by moving some repeated code out of them and into a new abstraction. In the coming posts I will refactor between the command handlers, aggregates and event handlers to arrive at a design where the aggregate does not grow.

Repeated code in command handlers

In the last post we looked briefly at this implementation of a command handler:


Looking at the above, the code at line 20 is - in a sense - where the ChangeUsernameCommand is handled, because that is the only line that is about changing a username. All the other code in the command handler is about infrastructure: loading the aggregate, saving the aggregate and dispatching events. Moreover, the code for loading and saving aggregates, as well as the code for dispatching events, will be repeated in every command handler.

Introducing a helper

To get past that repetitiveness and to cut back on the amount of infrastructure code in the command handler, we introduce this helper, where the loading of the aggregate, the saving of the aggregate and the dispatching of events is done:


The idea behind the CommandHandlerHelper is that the concrete command handler calls the Handle method with a handlerFunc that does the business logic bit of the command handler. The handlerFunc is called at line 18, so the helper makes sure the infrastructure code is done in the right order in relation to the business logic.
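A sketch of what such a helper might look like (the repository and dispatcher abstractions, and the exact signature, are assumptions; this is not the exact code referred to above):

public class CommandHandlerHelper
{
    private readonly IAggregateRepository repository;
    private readonly IEventDispatcher dispatcher;

    public CommandHandlerHelper(IAggregateRepository repository, IEventDispatcher dispatcher)
    {
        this.repository = repository;
        this.dispatcher = dispatcher;
    }

    public async Task Handle(Guid aggregateId, Action<UserAggregate> handlerFunc)
    {
        // Infrastructure: load the aggregate by replaying its events.
        var aggregate = await repository.Get<UserAggregate>(aggregateId);

        // Business logic supplied by the concrete command handler.
        handlerFunc(aggregate);

        // Infrastructure: append the new events to the event store and dispatch them.
        await repository.Save(aggregate);
        await dispatcher.Dispatch(aggregate.NewEvents);
    }
}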

Cleaner command handlers

With the CommandHandlerHelper in place the ChangeUsernameCommandHandler can be rewritten to use it like this:

This is a good deal simpler than the command handler code at the start of the post.


That's it for now. With this clean up in place we are set for the next steps.

Tuesday, October 31, 2017

Event Sourced Aggregates Part 2: Where the mess starts

In the first post in this series I outlined a typical C# implementation of event sourced aggregates. In this post we add a second feature to the example from the first post. In doing so I wish to illustrate how that typical implementation leads to a violation of the Open/Closed principle and ever growing aggregates.

The second feature

Once we have the code from the first post in place - that is, the infrastructure for sending commands to aggregates, raising events from aggregates, saving events and replaying events - and need to add a second feature, the way to do it is (to some extent) as outlined in the first post:
  1. Send a new type of command
  2. Implement a command handler for the new command
  3. Implement a new method on the aggregate with the new domain logic
  4. Emit any new events needed
  5. Implement new When methods for any new event types
Let's say we want to be able to change a user's username.
The command and the sending of that command look like this:



That's pretty straightforward and not too interesting, so let's move on to the command handler:



That's also pretty straightforward, but a little more interesting: most of the command handler code is something that will be repeated in all command handlers. We will deal with this in the next post, where that repetition is pulled out into a helper class.
The next step is changes to the aggregate, where this is added:
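Roughly, the addition amounts to two methods like these (a sketch; the validation rule is an assumption on my part):

public void ChangeUsername(string newUsername)
{
    // Domain logic: decide whether a state change is allowed.
    if (string.IsNullOrWhiteSpace(newUsername))
        throw new ArgumentException("Username must not be empty");

    Emit(new UsernameChangedEvent { Username = newUsername });
}

private void When(UsernameChangedEvent e)
{
    // State change: only 'When' methods mutate the aggregate.
    Username = e.Username;
}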



This is still pretty straightforward. In fact everything needed to add this new feature was straightforward, which is a good thing. The problem lies in the last two methods, the ones added to the aggregate.

Problem: An Open/Closed Principle violation

The aggregate just grew. In order to support changing the username we added 2 methods to the aggregate. That violates the Open/Closed principle, which indicates that it is a potential problem. In my experience, it quickly becomes a real problem because the aggregate grows relatively quickly and eventually becomes big and complicated, just like any other class that grows monotonically.

That's it for now. The next posts will:
  1. Make the command handlers smarter and clean up some repetitiveness
  2. Make the aggregate anemic in a naive way, leaving a stable aggregate, but introducing a new Open/Closed violation
  3. Make the aggregate anemic, events (a little) smart, and solve the Open/Closed violation

Tuesday, October 24, 2017

Event Sourced Aggregates Part 1: Outline of a typical implementation

This is the first post in a series of posts that starts from a design problem I've encountered repeatedly with event sourced aggregates: they grow every time a feature is added. Almost nothing is ever removed from them, so over time they grow very big and gnarly. Why does this happen? Because typical implementations of event sourced aggregates violate the Open/Closed principle.
Through this series of posts, I will show how event sourced aggregates violate Open/Closed and - as a consequence - tend to grow monotonically, and then show how we can address that by refactoring away from the big aggregate towards a small and manageable one. Cliffhanger and spoiler: the aggregate will become anemic, and I think that is a good thing.

The complete code from the series is on GitHub, where there is a branch with the code for the first two posts.

Event sourced aggregates

The context of what I am exploring in this series of posts is systems based on Domain Driven Design, where some or all of the aggregates are stored using event sourcing. Often these systems also use CQRS - very much inspired by Greg Young's talks and writings.
Using event sourcing for storing the aggregates means that the aggregate code does not change the state of the aggregate directly; instead it emits an event. The event is applied to the aggregate - which is where the changes to the state of the aggregate happen - but the event is also stored in a data store. Since aggregate state is only changed when an event is applied, the current aggregate state can be recreated by reading all the events for a given aggregate from the data store and applying each one to the aggregate. The benefits of this approach are many (when applied to a suitable domain) and described elsewhere, so I won't go into them here.

A typical C# implementation

Typical implementations of all this follow a structure where requests coming in from the outside - be it through a client making a request to an HTTP endpoint, a message on a queue from some other service or something else - result in a chain that goes like this:
  1. A command is sent, asking the domain to perform a certain task 
  2. A command handler picks up that command, fetches the appropriate aggregate and triggers the appropriate domain behavior. Fetching the aggregate involves replaying all the aggregate's events (event sourcing!)
  3. A method on an aggregate is called, and that is where the actual domain logic is implemented
  4. Whenever the domain logic needs to change some state it emits an event (event sourcing, again) of a specific type
  5. A 'when' method for the specific event type on the aggregate is called and updates the state of the aggregate
  6. After the aggregate is done, all the events emitted during execution of the domain logic are dispatched to any number of event handlers that cause side effects, like updating view models or sending messages to other services.
To put this whole process into code, let's think of an example: Creating a user in some imaginary system. The first step is to send the create user command:


Next step is the command handler for the create user command. Note that in this example I use the MediatR library to connect the sending of a command to the handler for the command.


Note that most of what is going on here is the same as for other handlers for other commands: Pick the right aggregate and, after executing domain logic, save that aggregate and then dispatch whatever events were emitted.

On line 19 of the handler we call into the aggregate. The code in the aggregate looks like this:


At line 11 we call the Emit method. This is how most implementations I've seen work, and typically that Emit method is part of the Aggregate base class and looks something like this:
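A sketch of the typical shape of that base class (an approximation, not the exact code from the repository):

public abstract class Aggregate
{
    private readonly List<Event> newEvents = new List<Event>();

    public IEnumerable<Event> NewEvents => newEvents;

    protected void Emit(Event @event)
    {
        // Apply the state change right away and remember the event so it can be saved later.
        Play(@event);
        newEvents.Add(@event);
    }

    private void Play(Event @event)
    {
        // Use reflection to find a private 'When' method taking the concrete event type.
        var when = GetType().GetMethod(
            "When",
            BindingFlags.Instance | BindingFlags.NonPublic,
            null,
            new[] { @event.GetType() },
            null);
        when.Invoke(this, new object[] { @event });
    }
}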


Notice how Emit calls Play, which uses reflection to find a When method on the aggregate itself and then calls that When method. The When method is supposed to update the state of the aggregate and is also the method that gets called during event replay. More on that below. For now let's see the When method:


That's pretty much it, though there are a few things I have skipped over a bit quickly: how the aggregate is fetched, how it is saved and how events are dispatched. I will not go into the event dispatching, since it is not relevant to the point I am making in this series, but the code is on GitHub if you want to look. As for the other two bits - fetching and saving aggregates - let's start with how aggregates are saved:


As you can see, saving the aggregate essentially means saving a list of events. The list should contain all the events that have ever been emitted by the aggregate. That is the essence of event sourcing. When it comes to fetching the aggregate, the list of events is read, and each one is replayed on a new, clean aggregate object - that is, the When methods for each event are called in turn. Since only the When methods update the state of the aggregate, the result is an aggregate object in the right state. The Get method on the aggregate repository (which does the fetching) looks like this:


And the Replay method called in line 14 just runs through the list of events and plays each one of them in turn, like this:
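In outline, that is just a loop over the stored events (a sketch):

public void Replay(IEnumerable<Event> events)
{
    foreach (var @event in events)
        Play(@event);
}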


That pretty much outlines the implementations of event sourced aggregates I seem to come across.

That's it for now. The next posts will:

  1. Add a second feature and see how the aggregate starts to violate Open/Closed principle
  2. Make the command handlers smarter and clean up some repetitiveness
  3. Make the aggregate anemic in a naive way, leaving a stable aggregate, but introducing a new Open/Closed violation
  4. Make the aggregate anemic, events (a little) smart, and solve the Open/Closed violation

Friday, November 7, 2014

A First Look at C# 6 - Nameof operator

This is the fourth post about the C# 6 features I used when moving Nancy.Linker over to C# 6. The first part was about primary constructors. The second part was about auto properties. The third was about expression bodied members. This one is about the new nameof operator.

The nameof operator

The new nameof operator is a unary operator that takes, as its argument, a type, an identifier or a method and returns the name of the argument as a string. That is, if you pass in the type Uri, you get back the string "Uri", and if you pass in a variable, you get back the name of the variable as a string.

In Nancy.Linker there was one opportunity for using nameof, which I coincidentally think represents a usage of nameof that will become widespread.
In one of the private methods in Nancy.Linker there is a null check followed by a possible ArgumentException:


Notice how I am repeating the name of the res variable in a string in order to tell the ArgumentException which argument is in error. With the nameof operator I avoid this duplication and potential source of inconsistency as follows:


Note that in case I rename res and forget to update the usage inside nameof I get a compile error.
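To illustrate the before and after (a sketch with a hypothetical private method and a parameter named 'res'):

private static void CheckResult(Uri res)
{
    // Before: the parameter name is repeated in a magic string.
    if (res == null)
        throw new ArgumentException("res must not be null", "res");
}

private static void CheckResultWithNameof(Uri res)
{
    // After: nameof keeps the string in sync if the parameter is ever renamed.
    if (res == null)
        throw new ArgumentException("res must not be null", nameof(res));
}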

Do I Like This?

Yes. It lets me have fewer magic strings.

Wednesday, November 5, 2014

A First Look at Using C# 6 - Expression Bodied Members

This is the third post about the C# 6 features I used when moving Nancy.Linker over to C# 6. The first part was about primary constructors. The second part was about auto properties. This one is about the new expression bodied members feature.

Expression Bodied Methods

I like to break my code down into short methods. Quite short. So I often have one-liner methods. In Nancy.Linker I had this for instance:


With C# 6 I can make this even shorter because methods that consist of just one expression, can be implemented as just that - one expression .... with a fat arrow in front of it. Like so:
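A hypothetical illustration (not the actual Nancy.Linker method):

// A classic one-liner method...
private static string Trimmed(string segment)
{
    return segment.Trim('/');
}

// ...and the equivalent expression bodied method in C# 6.
private static string TrimmedExpressionBodied(string segment) => segment.Trim('/');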



Expression Bodied Properties

I didn't have an opportunity to use any expression bodied properties in Nancy.Linker, but I want to mention them anyway.

Just as single expression methods can be changed to be a fat arrow followed by the expression so can getter only properties that have just one expression in the getter. Note what I said there: Getter only properties can be expression bodied. Properties with setters cannot ... but ... think about it ... what would it mean to set a property consisting of only an expression? There is nothing there to assign to.

Seeing that expression bodied properties are getter only, it stands to reason that you don't have to state the 'get'.
As an example an expression bodied property looks like this:
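A hypothetical example of such a property:

public class Circle
{
    public Circle(double radius)
    {
        Radius = radius;
    }

    public double Radius { get; }

    // Getter-only, expression bodied - no 'get' keyword needed.
    public double Circumference => 2 * Math.PI * Radius;
}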



Do I Like These?

Yes :-)
I like them because they cut down on boilerplate.

Friday, September 26, 2014

A First Look at Using C# 6 - Auto Properties

This is the second post about the C# 6 features I used when moving Nancy.Linker over to C# 6. The first part was about primary constructors. This one is about the new auto-property features.


Auto-Property Initializers and Getter-Only Auto-Properties

Just like fields can be initialized from primary constructor arguments in C# 6, so can auto properties. In Nancy.Linker this is used in the Registration class, where it cuts out a good deal of ceremony.

Furthermore auto properties do not need a setter anymore. This is very different from auto-properties with a private setter, because an auto-property without a setter is immutable, whereas an auto-property with a private setter is only protected against code outside of the class mutating it.
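A small illustration of the difference (hypothetical classes):

public class WithPrivateSetter
{
    // Can still be reassigned from anywhere inside the class.
    public string Name { get; private set; }

    public WithPrivateSetter(string name) { Name = name; }
}

public class GetterOnly
{
    // Can only be assigned in the constructor or an initializer - the object is immutable.
    public string Name { get; }

    public GetterOnly(string name) { Name = name; }
}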

The Registration class in Nancy.Linker implements the IRegistrations interface from Nancy, which looks like this:



Notice that these are 3 getter-only properties. Until now you would have had to implement these either with explicit getters or as auto-properties with setters. In C# 6 their implementation can follow the interface more closely, and as an added bonus the whole class becomes immutable. The code for the Registration class becomes:



I like these 2 features because:
  1. They make creating immutable types in C# a lot easier, which I think we will see a whole lot more of in C# code in the near future.
  2. They cut down on the amount of ceremony needed to implement interfaces like IRegistrations.

Sunday, September 21, 2014

A First Look at Using C# 6 - Primary Constructors

Update 2nd October 2014: Yesterday primary constructors were pulled from the planned features of C# 6, rendering this post somewhat irrelevant. Sorry, but that's the kind of thing that happens in open development processes.



The other night I moved Nancy.Linker to C# 6 - this is the first part of a summary of what I changed, along with quick introductions to the C# 6 features I used. Nancy.Linker is a small library (1 interface, 2 production classes, 2 test classes), so not everything comes into play there - but the 3 C# 6 features I'm most eager to get, namely primary constructors, auto-property initializers and getter-only auto-properties, all did. For the full overview of C# 6 features visit the language features status page on the Roslyn CodePlex site.

Before diving in I will note that this is based on code written using the 'experimental' flag for the compiler that comes with Visual Studio 14 CTP 3. Since this is a pre-release of the compiler things may change, but judging from the language features status page linked above the features described here are fairly stable.


Primary constructors

The first feature I used was primary constructors. In Nancy.Linker only 1 of the production classes and 1 of the test classes have non-default constructors, and they only have 1 constructor each. Classes with 1 explicit constructor are prime candidates for primary constructors, so I gave both a primary constructor.

Primary constructors add new C# syntax for declaring a constructor. While multiple constructors are still allowed only one of them can be the primary one. From the outside a primary constructor is no different than any other constructor; its primary-ness is only visible inside the class. The declaration of a primary constructor is part of the class declaration where an argument list is added:



which is equivalent to the old ResourceLinker constructor:



Once a primary constructor is declared the variables in the argument list are available for initializing fields and properties in the class. In the ResourceLinker class the old constructor assigned the two constructor arguments directly to private fields. With primary constructor syntax this becomes:



I like this for two reasons.
  1. By pulling the constructor argument list all the way up to the line declaring the class, primary constructors place more emphasis on this list. I find that the constructor argument list is quite important, because it shows what is needed to create an instance of the class. Particularly so if you use DI and prefer constructor injection.
  2. It saves a few lines of trivial initialization code.

Primary Constructor Bodies

My previous experience with primary constructors is from Scala, in which everything between the opening curly brace at the start of the class and the closing one at the end is the body of the primary constructor. You can put arbitrary code into the class definition and it will run when instances are created. In my experience this is done fairly often in Scala code, and it is a feature I've been happy with in Scala. In C# 6 adding a body to a primary constructor is - in my opinion - somewhat less elegant, because it requires you to add a scope somewhere in the class where all the primary constructor body code is placed.

The tests for Nancy.Linker use a small NancyModule - a test module - which sets up a few routes that the tests then use. Before, that class looked like this:



Take line 8 as an example. This is not simply an assignment to a field or a property. Therefore it has to be part of the primary constructor body, so moving this class over to having a primary constructor means turning it into:



See that pair of curly braces around the constructor body? That is that extra scope delimiting the body of the primary constructor.

I still like the changes to this class for the same reasons as stated above, but I don't like the necessity of putting all the primary constructor body code into a scope because:
  1. It is less flexible than the Scala counterpart, because all the code has to be together.
  2. Aesthetics: The code is at an extra level of indentation and has to be surrounded by a pair of somewhat dangling braces.


Saturday, August 30, 2014

Using Nancy.Linker with Razor Views

First things first: I recommend that you use Nancy.Linker to generate links in the route handler, not in the view code, as described in my last post. If you insist on generating the links in the view code, here is how to make Nancy.Linker work with Razor views.

First you need to pass an instance of IResourceLinker and the NancyContext to your view. This works just like passing any other object from the handler to the view - in your Nancy module you have your route handler pass the IResourceLinker and NancyContext objects as part of the model to the view you want to render:




The NancyContext must be passed along with the IResourceLinker, since Nancy.Linker needs it to generate links. Once you've done this you are almost ready to use Nancy.Linker in your Razor code, but first you need a little bit of web.config gymnastics. This is because IResourceLinker returns System.Uri objects, which Razor does not know about unless you tell it where to look. To tell Razor, add this to your web.config:



Refer to the Nancy documentation for a proper explanation of this.
Having added the web.config snippet you can go ahead and use Nancy.Linker to generate links in the Razor code:



That's all folks!

Sunday, July 27, 2014

Using Nancy.Linker with Views

TL;DR

You have two options: 
  • The simplest is to use Nancy.Linker in your route handler to generate the links needed in the views, put them on the view model and pass the view model to the view as usual. 
  • The other is to pass IResourceLinker to the view and allow it to generate links as needed. For this to work you may need a little bit of web.config'ing to make Razor play nice. 
This post shows the former.

Nothing New

In the last post I introduced Nancy.Linker, showed how to use it to create links to named routes and place them on a model object returned by a Nancy route handler.

In essence; given this module:


This route handler will return a model with a link to the route in the module above in either XML, JSON or whatever other format you have support for in your application:


(how the format is chosen, and which are supported is another story)
Now, if you want to show that link in a view, you just have to add one line to the handler and - of course - the view code. The handler becomes:


Assuming you are using Razor for your views the bar.cshtml view can simply be:


There you have it. Just use Nancy.Linker the same as when returning data, point to the view from the handler and use the generated link as any other string passed to a view.

This is the approach I'd recommend, but in the next post I will show how to use IResourceLinker in Razor code.

Saturday, July 5, 2014

Nancy.Linker

TL;DR

Nancy.Linker is a small library for creating URIs for named routes in a Nancy application, which I released to NuGet the other day.

Purpose of Nancy.Linker

The problem Nancy.Linker solves is to allow your application code to create URIs pointing to endpoints in your Nancy application without hardcoding the URI. Instead you refer to the endpoint by its route name and provide values for whatever route parameters the route expects. The library then returns you a suitable System.Uri.

Example

Let's consider a Nancy application with this module in it:


The module does nothing interesting, but bear with me. The thing to notice is that FooModule has all sorts of routes - a constant one, one with a simple parameter, one with a constraint on a parameter, one with a regex segment with a captured parameter, one with an optional parameter and one with a default value for an optional parameter. To read up on Nancy routing check out the docs.

Now let's assume another module needs to put links to endpoints in FooModule in its own response. It can do so by taking a dependency on IResourceLinker from Nancy.Linker and asking it to create the URIs:


which will produce a response containing a bunch of links back to the FooModule. For instance the call 


results in this string (assuming your application runs on http://www.nancyisawesome.com):


Getting Started

Just install Nancy.Linker from NuGet:


Nancy.Linker will take care of registering the IResourceLinker with your container, so modules can just go ahead and take a dependency on IResourceLinker.



Saturday, March 8, 2014

In Search of Maybe in C# - Part III

Following up on the last two posts [1, 2], this final post in my little series on Maybe/Option gives a third implementation of Option. This time around I'll drop the idea of letting Option implement IEnumerable and instead focus on implementing the function needed to make my Option a monad (which it actually wasn't in the last post).

First off let's reiterate the basic implementation of Option - a simple class that can be in two states, 'Some' or 'None', where Options in the 'Some' state contain a value and Options in the 'None' state don't.

I also want to maintain the matching functionality that I had  in the other posts, so I'm keeping this:

While even this is somewhat useful as is, it becomes better if we add the remaining part of the monad: the FlatMap function (the unit function is the Some(T value) method, so only FlatMap aka Bind aka SelectMany is missing). So we add this to the OptionHelpers class:
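A sketch of such a FlatMap extension, assuming an 'Option<T>' with a 'Match' method taking a callback per case, along the lines sketched in the other posts in the series:

public static class OptionHelpers
{
    public static Option<TResult> FlatMap<T, TResult>(this Option<T> option, Func<T, Option<TResult>> f)
    {
        // Apply f to the value if there is one; otherwise stay None.
        return option.Match(
            some: value => f(value),
            none: () => Option<TResult>.None());
    }
}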

This is nice because now we are once again able to chain operations without having to worry much about null ref exceptions.
Just for convenience I'll throw in a Filter function in the OptionHelpers as well:


To exemplify how Option can flow through a series of operations, take a look at the following, where there is a value, so the Some case will flow through:

In the case of no value the None case will flow through, as in this example:



Towards the end of the first post in the series I mentioned that it's nice to be able to treat Option as a collection. That led to Option implementing IEnumerable in the second post, thus enabling passing Options into existing code working on IEnumerables, which is quite useful when introducing Option into an existing code base. Especially so if it's a code base that uses LINQ a lot. In such code you're likely to find functions accepting an IEnumerable and returning an IEnumerable. An Option type implementing IEnumerable fits right in. On the other hand, if you don't have existing LINQ-based code to fit into, the Option version in this post is maybe a bit simpler to work with, because it has a smaller API surface and it still provides the benefit of allowing chaining.

Thursday, February 20, 2014

In Search of Maybe in C# - Part II

In my last post I wrote that I'd like to have an implementation of the Maybe monad in C#, and I explored how to use the F# implementation - FSharpOption in C#.

In this post I'll show a quick implementation of Maybe in C#, I'll call it Option - as was noted in some of the comments on the last post this turns out to be quite simple. I left off last time mentioning that it's nice when Option can act somewhat as a collection. That turns out to be easy too. Just watch as I walk through it.

Implementing basic Option
First of all the Option type is a type that can either be Some or None. That is simply:


which allows for creating options like this:


So far so good. This is all there is to a basic Maybe monad in C#.

Simulate pattern matching on Option
As argued in the last post, I'd like to be able to do something similar to pattern matching over my Option type. It's not going to be nearly as strong as F#'s or Scala's pattern matching, but it's pretty easy to support these scenarios:


All it takes is adding two simple methods to the Option type that each check the value of the Option instance and call the right callback. In other words, just add these two methods to the Option class above:


That's matching sorted.

Act like a collection
Now I also want Option to be usable as though it was a collection. I want that in order to be able to write chains of LINQ transformations and then just pass Options through them. To illustrate, I want to be able to do this:


This means that Option must now implement IEnumerable<T>, which is done like so:


And to be able to get back an Option from an IEnumerable with 0 or 1 elements:


which is useful in order to be able to do the matching from above.

All together
Putting it all together the Option type becomes:
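Pulling the pieces described above together, a sketch of the complete type could look like this (member names and the private constructor are my choices, not necessarily the original ones):

using System;
using System.Collections;
using System.Collections.Generic;

public class Option<T> : IEnumerable<T>
{
    private readonly T value;
    private readonly bool isSome;

    private Option(T value, bool isSome)
    {
        this.value = value;
        this.isSome = isSome;
    }

    public static Option<T> Some(T value) { return new Option<T>(value, true); }
    public static Option<T> None() { return new Option<T>(default(T), false); }

    // 'Pattern matching' simulated with a callback per case.
    public TResult Match<TResult>(Func<T, TResult> some, Func<TResult> none)
    {
        return isSome ? some(value) : none();
    }

    public void Match(Action<T> some, Action none)
    {
        if (isSome) some(value); else none();
    }

    // Acting like a collection with zero or one element lets Options flow through LINQ chains.
    public IEnumerator<T> GetEnumerator()
    {
        if (isSome) yield return value;
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

public static class OptionExtensions
{
    // Get back an Option from an IEnumerable with zero or one element.
    public static Option<T> ToOption<T>(this IEnumerable<T> source)
    {
        foreach (var item in source)
            return Option<T>.Some(item);
        return Option<T>.None();
    }
}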


Not too hard.

Wednesday, January 29, 2014

In Search of Maybe in C#

One of the things working in Scala has taught me about C# is that the maybe monad is nicer than null.

In case you're not sure what the maybe monad is: It's a generic type that either contains a value or not. In Scala as well as in F# this type is called Option and can be either Some - in which case it contains a value - or None - in which case it doesn't. This allows you to represent the absence of a value in an explicit way visible to the compiler, since Option<T> is a separate type from T. This turns out to be a whole lot stronger than relying on null to represent the absence of a value. I'll point you to an article about F#'s Option for more explanation on why.

Functional languages will usually let you pattern match over Option, making it very easy to clearly differentiate the case of Some from the case of None. In F# this looks like so:


In C# we don't have this. The closest thing is Nullable<T>, but that doesn't work over reference types. But is Nullable<T> really the closest thing in C# to Option? - NO. As was pointed out to me in a recent email exchange the F# Option type is available in C# too.

Let's see if we can redo the above F# code in C#. First the declaration of the name variable:


The F# Option type is part of the FSharp.Core assembly and is actually called FSharpOption. Hence the line above.
On to the nameOrApology variable:


Not nearly as clear as the F# counterpart - it is harder to see what belongs to the if part and what belongs to the else part than with the pattern matching in F#.
We can do better, using this little helper extension function:
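A sketch of such a helper, assuming FSharp.Core is referenced (the method name 'Match' and its shape are assumptions, not necessarily the original helper):

using System;
using Microsoft.FSharp.Core;

public static class FSharpOptionExtensions
{
    public static TResult Match<T, TResult>(this FSharpOption<T> option, Func<T, TResult> some, Func<TResult> none)
    {
        // get_IsSome is the (not too pretty) static helper FSharpOption exposes for checking an option.
        return FSharpOption<T>.get_IsSome(option) ? some(option.Value) : none();
    }
}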


Which lets us initialize nameOrApology like this:


Now the two outcomes - a value inside the Option or no value - are much more clear to me.

This is OK. Although the name, FSharpOption, is not too nice. Some of the method names - e.g. get_IsSome - aren't nice either. More importantly: Another thing Scala taught me is that it's nice if Option can be used in place of a collection. But that's for another post.

Wednesday, March 13, 2013

More OO Around That Island of Functional

In my last post I showed how C# seamlessly supports having little functional islands inside an otherwise OO code base. I did that using a simple object model, in which one of the objects was implemented in a functional manner. The other objects, though, were just data containers, really. In that post I made the offhand remark that these could be better encapsulated. In this post I'll address this, and also show that the "other" objects could easily have more behavior as well. All in all this brings the example closer to the original intention: to show how the overall design can be OO, but the insides of individual classes can be functional.

The classes involved were CourseTemplate, Course, Registrant and CoursePlan. The example is to create a course plan, where registrants are placed in courses based on which courses they wish to take.

Slightly Better Encapsulation

To get to a more object oriented design the first step is to move away from simple auto properties with public getters and public setters to at least having private setters. Using CourseTemplate as an example this simply means moving to this:


Adding Basic Domain Operations

Next step is to start adding basic domain operations that make the objects have just a little bit of actual behavior and some more encapsulation. To illustrate, let's add a method for wishing for courses to the Registrant class, and let's also add an override of Equals that makes more sense from a domain perspective:
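A sketch of what that might look like (property names and the equality rule are assumptions on my part):

public class Registrant
{
    private readonly List<string> wishedCourseNames = new List<string>();

    public Registrant(string name)
    {
        Name = name;
    }

    public string Name { get; private set; }

    public IEnumerable<string> WishedCourseNames
    {
        get { return wishedCourseNames; }
    }

    // Basic domain operation: register a wish for a course.
    public void Wish(CourseTemplate course)
    {
        wishedCourseNames.Add(course.Name);
    }

    // Domain equality: two Registrant objects with the same name represent the same registrant.
    public override bool Equals(object obj)
    {
        var other = obj as Registrant;
        return other != null && other.Name == Name;
    }

    public override int GetHashCode()
    {
        return Name.GetHashCode();
    }
}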


Adding other Behaviors

We're getting the hang of this. Let's add the same type of basic domain methods along with a few other (made up) methods to the Course class:


By adding Register, IsFull and IsEmpty some responsibility is being moved from the CoursePlan version in the last post to the Course - which makes much more sense. By adding Start and AwardDiplomas I point toward the fact that these classes can have more behavior if we need it.

The Functional Island is Still Functional

Despite these changes the CoursePlan doesn't change all that much. The only places the implementation has changed are where the new methods on Course are used. The implementation is still just as functional:



Conclusion

The functional islands I talked about in the last post can easily exist in a truly object oriented code base.


Thursday, March 7, 2013

On Functional Islands in Object Oriented Code

This post is a comment on how Brian Marick outlines embedding functional code in an object oriented code base in his book "Functional Programming for the Object-Oriented Programmer" - an excellent book for anybody moving into FP from OO, by the way. I definitely recommend it.

TL;DR

I feel that the best way to embed functional islands in an OO code base is to have certain classes be fully OO on the outside and as functional as you can get them on the inside. Doing so in a multi-paradigm language like C#, the bridging between the OO realm and the FP realm can become seamless.

Functional Islands


Brian uses a drawing like this one in his book to explain that functional code can live within an OO structure, and can be used by the OO code via a couple of bridges; one transforming an object graph to a data structure more suitable for the functional code, and one doing the opposite transformation. This bridging is done because functional code tends to work more on data than on objects - typically contained in general data structures like lists or sets. 

This makes sense; the OO world has one way of modeling and the FP world has another way. To move between them some bridging is needed. The book goes on to show an example of this, where some Java code calls into some Clojure code. But before and after the functional Clojure code runs, the bridging occurs. Each bridge is simply a Java method that does the required transformations; simple but tedious code.

I can sort of recognize this pattern from my own C# code, but only sort of. What I usually have when I embed functional code in otherwise OO C# code is that certain classes expose an OO interface to the rest of the code base, but are implemented in a functional style. To demonstrate this I've taken the example from Brian's book and sketched it in C#. The rest of this post walks through that code.

Example

The example is: given a number of courses that run either morning or afternoon, a fixed number of instructors that can teach any course, and a number of registrants that have wishes for which courses they want to attend, find a solution.

First off here is the C# code for the OO structure Brian uses in his book:


just a few stupid classes - these should probably be better encapsulated, but that is not really the point here.

The solution I want to end up with should be an instance of this class:


To get there the CoursePlan class is used. This is the class that is implemented in a functional style - or, to be honest, a somewhat functional style: it represents its state as a couple of lists and does its work by applying functions to these lists. Moreover it uses functions that work lazily. That's functional. On the other hand it mutates the state of objects inside its lists. In other words it has side effects. Not so functional. All in all I think it's fair to say CoursePlan is implemented in a functional style.

To use CoursePlan I do this:


I just create a new CoursePlan and pass in a course list, a registrant list and the number of instructors. Notice the CourseList and RegistrantList? Those are domain specific collections. By which I mean domain objects like any other that happen to implement the domain notions of "a list of courses" and "a list of registrants". Having such domain specific classes instead of List<Course> and List<Registrant> in my experience leads to cleaner OO code, and is something I usually advocate.

So where does the bridging from OO to FP go on? Well, that's the point: it's almost not there. The bridging is taken care of simply because the two domain specific collections - CourseList and RegistrantList - implement IEnumerable<Course> and IEnumerable<Registrant> respectively.

The code for CoursePlan is pretty trivial, but here you go anyway:


Conclusion

In conclusion I feel that the best way to embed functional islands in an OO code base is to have certain classes be fully OO on the outside and as functional as you can get them on the inside. Doing so  in a multi-paradigm language like C#, the bridging between the OO realm and the FP realm can become seamless.

Monday, July 9, 2012

WPF View Smoke Testing

I enjoy TDD. I enjoy the way it makes me code and I enjoy the rhythm. That is, I really enjoy TDD'ing the logic in my applications: the business logic, the controller logic, the domain logic; wherever there is logic that I can isolate and unit test cleanly, TDD is just all fun and productivity. I enjoy TDD'ing integration points a little less: the database access, the web service calls, and so on are slow to test, so the TDD rhythm is broken. I still prefer TDD'ing these parts too, though. I do not enjoy TDD'ing UIs: in my experience the tests are slow, either the rate of false negatives is high or the rate of false positives is high, and UI tests are a pain to write (don't even get me started on the horrors of recording UI tests). Therefore I usually settle for subcut tests and manual UI testing when it comes to views. That is: I want to TDD my presentation logic, but am usually OK with checking that the pixels are in the right spot manually.
Working with WPF this is - at first sight - supported by MVVM: The view models sit right under the views, and are able to cut the .xaml.cs code behinds down to (almost) nothing, so testing the view models is subcut testing right? Right?? Not quite: There is a little bit of stuff going on in the bindings. I want to test that stuff.

WPF has been around for a while, so I expected this to be a solved problem. Maybe it is, but my googling didn't yield an answer. It did however yield a number of candidate solutions. Below is a quick summary of my evaluation of these candidates. (TL;DR: none of the libraries worked, but Window.Show and FrameworkElement.FindName turned out to be my friends, and maybe, just maybe, ApprovalTests could finally win me over to UI testing.)

Notes From My Evaluation

Below there is a section for each of the candidates I quickly evaluated, with some rough notes on how it went. The evaluations are not terribly thorough nor objective, so YMMV.

IcuTest

IcuTest is a library that works by asking you the first time a test is run to accept or reject the result: It shows you a snapshot of the window to assert on, and you accept or reject. If you accept the image is saved and used as the acceptance criteria in subsequent runs.
I have a few issues with this: the asserts come down to bitmap comparisons, which means (1) that they are imprecise - they assert on the whole window/control instead of just the one thing the current test is about - and (2) that they depend on the machine the test is running on. These are exactly the issues with UI testing that have bitten me in the past. So IcuTest is not what I'm after.

White

White seems to be the most grown-up UI testing framework for Windows applications around. Promising. But: I installed the NuGet package in my projects and was immediately sent into the Log4Net strong naming circus. Annoying, but not White's fault. I went on to clone the source off GitHub. It compiled out of the box, but most of the tests failed. After fiddling around for a few hours I got like 60% of the tests running. Not a good sign, to be honest. The fiddling included changing things like which properties were used to input text into a textbox, in ways that were pure guessing on my part. All in all this seemed like a route that would lead to unstable tests. So White is not what I'm after.

AvalonTestRunner

The AvalonTestRunner is an old thing from back when WPF was still Avalon. It doesn't claim to do a lot either. But none of these are bad things in themselves. If it works it works. One of the things AvalonTestRunner claims to do is sanity check all the bindings in a view: I.e. throw an exception if there is a binding path that is not found in the data context. That is part of what I'm after and actually a quite nice first line of defense to have. Only problem: I couldn't get it to work, not out of the box and not after debugging through the source for a while. Maybe it's just me, but again I concluded: AvalonTestRunner is not what I'm after.

Guia

Guia seems to try to solve the same thing as White, but only for WPF. It also seems defunct: the current version is 0.1.1 and is 2 years old (at the time of writing). But again: if it works it works. Sadly it didn't work out of the box, and because of the lack of activity I didn't investigate further. Guia is not what I'm after.

Hand Coding

With all the libraries I tried out not really working, I'm left with hand coding the tests myself. As it turns out this is in fact not nearly as bad as it sounds - not for the subcut type of tests I'm after anyway.

MS UI Automation lib

MS UI Automation is a library intended to support building screen readers, remote controls and other types of applications that need to automate the UI of some other - known or unknown - application. It seems to provide everything you'd need to write UI tests for WPF. I haven't tried it out though, because it seems like overkill for what I'm looking for.

.FindName

As it turns out I probably shouldn't have spent so much time wading through testing libraries, because the simple little kinds of tests I set out to write - tests that drive and check the bindings in my XAML - are actually (for the most part at least) easily written by just opening the window under test directly, finding the controls of interest by calling .FindName on the window under test, and then manipulating or asserting against properties on those controls. Like this:
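A sketch of the approach (hypothetical view and view model types; the assertion style depends on your test framework, and WPF requires the test to run on an STA thread):

public void UserNameTextBox_ShowsTheViewModelValue()
{
    var viewModel = new UserViewModel { UserName = "alice" };
    var window = new UserWindow { DataContext = viewModel };

    // Showing the window makes WPF evaluate the bindings.
    window.Show();
    try
    {
        var textBox = (TextBox)window.FindName("UserNameTextBox");

        // Assert with your test framework of choice; shown here as a plain check.
        if (textBox.Text != "alice")
            throw new Exception("The UserName binding did not pick up the view model value");
    }
    finally
    {
        window.Close();
    }
}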

This is so darn easy that once I realized that this is actually what I am after, I stopped looking at libraries, although there are still a couple on my list. So there you have it: it's just not complicated enough to need a library.

Promising ones I haven't gotten around to evaluating

In spite of the conclusion above, the following may be worth checking out. I'm pretty sure ApprovalTests at least works, although I haven't tried it. WhiPFlash seems active, but beyond that I know nothing.

ApprovalTests

WhiPFlash

Thursday, May 10, 2012

Slides from my Community Day Copenhagen 2012 Talk

I did a talk on "alternative" .NET web frameworks, because I think it's important to realize that there are serious alternatives to ASP.NET - even in .NET-land. Anyway here are the slides: