Tuesday, October 6, 2015

Book Excerpt: What is a Microservice?


In this article, excerpted from my upcoming book Microservices in .NET, I will talk about the characteristics that help you recognize a Microservice when you see one, and that help you scope and implement your services in a way that enables the benefits of Microservices.

A Microservice is a service with one and only one very narrowly focused capability. That capability is exposed to the rest of the system through a remote API. For example, think of a system for managing a warehouse. Some capabilities that might each be provided by a Microservice in such a system are:

  • Receive stock
  • Calculate where new stock should be stored
  • Calculate placement routes inside the warehouse for putting stock into the right storage units
  • Assign placement routes to warehouse employees
  • Receive orders
  • Calculate pick routes in the warehouse given a set of orders
  • Assign pick routes to warehouse employees

Each of these capabilities - and most likely many more - is implemented by an individual Microservice. Each Microservice runs in processes separate from the other Microservices and can be deployed on its own, independently of the other Microservices. Likewise, each Microservice has its own dedicated database. Still, each Microservice collaborates and communicates with other Microservices.

It is entirely possible that different Microservices within a system will be implemented on different platforms - some Microservices may be on .NET, others on Erlang, and still others on Node.js. As long as they can communicate in order to collaborate, this polyglot approach can work out fine. HTTP is a good candidate for the communication: All the mentioned platforms, as well as many others, handle HTTP nicely. Other technologies also fit the bill for Microservice communication - some queues, some service buses and some binary protocols, for instance. Of these, HTTP is probably the most widely supported, is fairly easy to understand and - as illustrated by the World Wide Web - is quite capable, all in all making it a good candidate.

To illustrate, think of the warehouse system again. One Microservice in that system is the Assign Pick Route Microservice. Figure 1 shows the Assign Pick Route Microservice receiving a request from another collaborating Microservice. The request is for the next pick route for a given employee. The Assign Pick Route Microservice has to find a suitable route for the employee. Calculating an optimal route is done in another Microservice. The Assign Pick Route Microservice simply gets notified of the pick routes and only needs to decide how to assign them to employees. When a request for a pick route for a given employee comes in, the Assign Pick Route Microservice looks in its database for a suitable pick route, selects one and returns it to the calling Microservice.

Figure 1 The Assign Pick Route Microservice exposes an API for getting a pick route for an employee. Other Microservices can call that API.
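As a sketch, the request/response pair from figure 1 might look like this on the wire (the path, query parameter and json fields are purely illustrative - they are not from the book's actual implementation):

```
GET /routes/next?employeeId=42 HTTP/1.1
Host: assign-pick-route

HTTP/1.1 200 OK
Content-Type: application/json

{ "routeId": 123, "stops": ["A-12-3", "B-07-1", "C-02-4"] }
```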

What is a Microservices Architecture?

Microservices as an architectural style is a lightweight form of Service Oriented Architecture where the services are very tightly focused on doing one thing and doing it well.

A system that uses Microservices as its main architectural style is a distributed system of a - most likely large - number of collaborating Microservices. Each Microservice runs in its own process. Each Microservice only provides a small piece of the picture and the system as a whole works because the Microservices collaborate closely. To collaborate they communicate over a lightweight medium that is not tied to one specific platform like .NET, Java or Erlang. As mentioned, all communication between Microservices, in this book, is over HTTP, but other options include a queue, a bus or a binary protocol like Thrift.

The Microservices architectural style is quickly gaining popularity for building and maintaining complex server side software systems. Understandably so: Microservices offer a
number of potential benefits over both more traditional service oriented approaches and monolithic architectures. Microservices - when done well - are malleable, scalable and resilient, and allow a short lead time from start of implementation to deployment to production. A combination which often proves elusive for complex software systems.

Microservice Characteristics

So far I have established that a Microservice is a very tightly focused service, but that is still a vague definition. To narrow down the definition of what a Microservice is, let’s take a look at what characterizes a Microservice. In my interpretation of the term, a Microservice is characterized by being:

  1. Responsible for one single capability
  2. Individually deployable
  3. Made up of one or more processes
  4. The owner of its own data store
  5. Maintainable by a small team
  6. Replaceable

This list of characteristics both helps you recognize a Microservice when you see one and helps you scope and implement your services in a way that enables the benefits of Microservices - a malleable, scalable and resilient system. Let's look at each characteristic in turn.

Responsible for One Single Capability

A Microservice is responsible for one and only one capability in the overall system. There are two parts to that statement: First, a Microservice has a single
responsibility. Second, that responsibility is for a capability. The single responsibility principle has been stated in several ways. One traditional way is:

"There should never be more than one reason for a class to change."
-- Robert C. Martin SRP: Single Responsibility Principle

While this way of putting it specifically mentions "a class", the principle turns out to apply at other levels than that of a class in an object-oriented language. With Microservices, we apply the single responsibility principle at the level of services. Another, newer way of stating the single responsibility principle, also from Uncle Bob, is:

"Gather together the things that change for the same reasons. Separate those things that change for different reasons."
-- Robert C. Martin The Single Responsibility Principle

This way of stating the principle applies to Microservices: A Microservice should implement exactly one capability. That way the Microservice will have to change only when there is a change to that capability. Furthermore, we should strive to have the Microservice fully implement the capability, such that only that one Microservice has to change when the capability is changed.

A capability in a Microservice system can mean a couple of things. Primarily, a capability can be a business capability. A business capability is something the system does that contributes to the purpose of the system - like keeping track of users' shopping carts or calculating prices. A very good way to tease apart which separate business capabilities a system has is to use Domain Driven Design. Secondly, a capability can sometimes be a technical capability that several other Microservices need to make use of - integration with some third-party system, for instance. Technical capabilities are not the main drivers for breaking down a system into Microservices; they are only identified when several Microservices implementing business capabilities need the same technical capability.

Individually Deployable

Every Microservice should be individually deployable. That is: When you change a particular Microservice, you should be able to deploy that change to the production environment without deploying or in any other way touching any other part of your system. In fact, the other Microservices in the system should continue running and working during the deployment of the changed Microservice, as well as after the new version is deployed and up and running.

Consider an ecommerce site. Whenever a change is made to the Shopping Cart Microservice, you should be able to deploy just the Shopping Cart Microservice. Meanwhile the Price Calculation Microservice, the Recommendation Microservice, the Product Catalog Microservice etc. should continue working and serving user requests.

Being able to deploy each Microservice individually is important for several reasons. For one, in a Microservice system, there are many Microservices and each one will collaborate with several others. At the same time, development work is done on all or many of the Microservices in parallel. If we have to deploy all of them, or groups of them, in lock step, managing the deployments will quickly become unwieldy, typically resulting in infrequent, big, risky deployments. This is something we very much want to avoid. Instead, we want to be able to deploy small changes to each Microservice often, resulting in frequent, small, low risk deployments.

To be able to deploy a single Microservice while the rest of the system continues to function, the build process must be set up with this in mind: Each Microservice has to be built into separate artifacts or packages. Likewise, the deployment process itself must also be set up to support deploying Microservices individually while other Microservices continue running. For instance, a rolling deployment process where the Microservice is deployed to one server at a time in order to reduce downtime can be used.

The way Microservices interact is also informed by the fact that we want to deploy them individually. Changes to the interface of a Microservice must be backwards-compatible in the majority of cases, so that other existing Microservices can continue to collaborate with the new version the same way they did with the old one. Furthermore, the way Microservices interact must be resilient in the sense that each Microservice must expect the other services to fail once in a while and continue working as best it can anyway. One Microservice failing - for instance because of a short period of downtime during deployment - must not result in other Microservices failing, only in reduced functionality or in slightly longer processing time.

Consisting of One or More Processes

A Microservice is made up of one or more processes. There are two sides to this characteristic. First, each Microservice runs in separate processes from the other Microservices. Second, each Microservice can have more than one process.

That Microservices run in separate processes is a consequence of wanting to keep each Microservice as independent of the other Microservices as possible. Furthermore, in order to
deploy a Microservice individually, that Microservice cannot run in the same process as any other Microservice. Consider the Shopping Cart Microservice again. If it ran inside the same process as the Product Catalog Microservice, the Shopping Cart code might have a side effect on the Product Catalog. That would mean a tight and undesirable coupling between the Shopping Cart Microservice and the Product Catalog Microservice.

Figure 2 Running more than one Microservice within a process leads to high coupling in terms of deployment. If two Microservices share the same process, deploying one will directly affect the other and may cause downtime or bugs in that one.

Now consider deploying a new version of the Shopping Cart Microservice. We either would have to redeploy the Product Catalog Microservice too, or would have some sort of dynamic code loading capable of switching out the Shopping Cart code in the running process. The former option goes directly against Microservices being individually deployable. The second option is complex and at the very least puts the Product Catalog Microservice at risk of going down because of a deployment to the Shopping Cart Microservice.

Each Microservice may consist of more than one process. On the surface this may be surprising. We are, after all, trying to make each Microservice as simple to handle as possible, so why introduce the complexity of having more than one process? Let's consider a Recommendation Microservice in an e-commerce site. It implements the recommendation algorithms that drive recommendations at our e-commerce site. These algorithms run in a process belonging to the Microservice. It also stores the data needed to provide a recommendation. This data could be stored in files on disk, but is more likely stored in a database. That database runs in a second process that also belongs to the Microservice. The need for a Microservice to have two or more processes arises from the Microservice implementing everything needed to provide its capability, including, for example, data storage and possibly background processing.

Owns Its Own Data Store

A Microservice owns the data store where it stores the data it needs. This is another consequence of wanting the scope of a Microservice to be a complete capability. For most business capabilities, some data storage is needed. For a Product Catalog Microservice, for instance, information about each product needs to be stored. To keep the Product Catalog Microservice loosely coupled with other Microservices, the data store containing the product information is completely owned by the Product Catalog Microservice. It is a decision of the Product Catalog Microservice how and when the product information is stored. Other Microservices – the Shopping Cart Microservice for instance – can only access product information through the interface to the Product Catalog Microservice, never directly from the Product Catalog Store.

Figure 3 One Microservice cannot access another’s data store. All communication with a given Microservice happens through its public API; only the Microservice itself is allowed to access its data store directly

Each Microservice owning its own data store opens up the possibility for using different database technologies for different Microservices depending on the needs of each Microservice. The Product Catalog Microservice might use SQL Server to store product information, while the Shopping Cart Microservice might store each user's shopping cart in Redis and the Recommendation Microservice could use an Elasticsearch index to provide recommendations. The database technology chosen for a Microservice is part of the implementation and is hidden from the view of other Microservices. Mixing and matching database technologies has the upside of allowing each Microservice to use exactly the database best suited for the job. This can have benefits in terms of development time, performance and scalability, but also comes with a cost. Databases tend to be complicated pieces of technology and learning to use and run one reliably in production is not easy. When choosing database technology for a Microservice, you should consider this trade-off. But also remember that since the Microservice owns its own data store, swapping it out for another database later is feasible.

Maintained by a Small Team

So far, I have not talked much about the size of a Microservice, even though the "micro" part of the term Microservice indicates that they are small. I do not believe, however, that it makes sense to discuss the number of lines of code that a Microservice should have, or the number of requirements, use cases or function points it should implement. All of that depends on the complexity of the capability provided by the Microservice. What makes sense, however, is to consider the amount of work involved in maintaining the Microservice. A rule of thumb that can guide the size of a Microservice is that a small team - of, say, 5 people - should be able to maintain a handful or more of Microservices. Maintaining a Microservice includes all aspects of keeping it healthy and fit for purpose: Developing new functionality, factoring out new Microservices from ones that have grown too big, running it in production, monitoring it, testing it, fixing bugs and everything else needed. Considering that a small team should be able to do all of this for a handful of Microservices should give you an idea of the size of a typical Microservice.

Replaceable

That a Microservice is replaceable means it can be rewritten from scratch within a reasonable time frame. In other words, the team maintaining the Microservice can decide to replace the current implementation with a completely new implementation and do so within the normal pace of their work. This characteristic is another constraint on the size of a Microservice: If a Microservice grows too large, it will be expensive to replace; only when it is kept small is it realistic to rewrite it.

Why would a team decide to rewrite a Microservice? One reason could be that the code has become a mess. Another is that the Microservice doesn't perform well enough in production. While these are not desirable situations, they can present themselves. Even if we are diligent while building our Microservices, changes in requirements over time can push the current implementation in ways it cannot handle. Over time, the code can become messy because the original design is bent too much. The performance requirements may increase too much for the current design to handle. If the Microservice is small enough to be rewritten within a reasonable time frame, these situations are OK from time to time. The team simply does the rewrite with all the knowledge obtained from writing the existing implementation, as well as the new requirements, in mind.

Excerpted from Microservices in .NET

Friday, September 25, 2015

NCraftsConf Talk Video

This is somewhat late, but still: Here is a recording of the talk I did at NCraftsConf in Paris in May:

LAYERS CONSIDERED HARMFUL - Cristian Horsdal from NCRAFTS Conferences on Vimeo.

Saturday, June 6, 2015

Shipping Error Logs from an Angular App

By popular demand1 I'm writing this post describing how I've gone about implementing log shipping in an Angular app. It's what worked for me. YMMV.


The reasons for logging messages in an SPA are the same as in any other application: To gain insight into what happens in the app at run time.
The problem with client side logging is that it is hard to get at once the app is in production - that is, once it is running on a machine you (the developer) do not have access to. Collecting the log messages on the client side and sending them back to the server lets you put the client side log messages into a log store on the server, where they are available for analysis.

Client Side

In my situation the SPA is written in Angular, which provides a logging service called $log. The $log service can be used anywhere in application code and is also used by Angular itself to log stuff. So the messages written to $log are the messages I would like to ship back to the server. Out of the box $log simply logs to the console. Therefore I wrap $log in a decorator and tell Angular to inject my decorated $log wherever $log would otherwise be injected. My decorator stores all log messages along with the log level - error, warning, information, debug - in an array and then delegates to the default $log implementation. That way log messages are both stored for shipping at a later point in time and logged to the console immediately. The decorator also adds a method called shipLogs that sends the current array of log messages to the server and then clears the array on success. The decorator is set up like this:
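A sketch of the decorator's core (the factory name, the field names and the /api/logs path are placeholders of mine; the Angular wiring is shown in comments at the bottom):

```javascript
// Buffers every log message with its level, then delegates to the real $log.
function createLogDecorator($delegate) {
  var buffer = []; // pending messages, shipped to the server later

  function wrap(level) {
    var original = $delegate[level];
    return function () {
      var args = Array.prototype.slice.call(arguments);
      buffer.push({ level: level, message: args.join(' '), time: new Date().toISOString() });
      original.apply($delegate, args); // still log to the console immediately
    };
  }

  return {
    debug: wrap('debug'),
    info: wrap('info'),
    warn: wrap('warn'),
    error: wrap('error'),
    getBuffer: function () { return buffer; },
    // shipLogs posts the buffered messages and clears the buffer on success.
    shipLogs: function (post) {
      if (buffer.length === 0) return;
      post('/api/logs', buffer).then(function () { buffer.length = 0; });
    }
  };
}

// Angular wiring: replace $log with the decorated version everywhere.
// app.config(['$provide', function ($provide) {
//   $provide.decorator('$log', ['$delegate', createLogDecorator]);
// }]);
```

With $provide.decorator in place, every component that asks for $log - including Angular itself - gets the buffering version.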

The shipping of the logs can be done periodically - in my case every 10 seconds - using Angular's $interval service:
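Roughly like this (scheduleShipping is an illustrative name; in the app the equivalent code lives in a run block with $interval, the decorated $log and $http injected):

```javascript
// Ship the buffered log messages to the server every 10 seconds.
function scheduleShipping($interval, $log, $http) {
  return $interval(function () {
    $log.shipLogs(function (url, logs) {
      return $http.post(url, { logs: logs });
    });
  }, 10000); // every 10 seconds
}
```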

This is not a foolproof process: There is no handling of situations where shipping takes more than 10 seconds, no guarantee that the shipping won't happen in parallel with other requests and thus compete for bandwidth, and no guarantee that all log messages are shipped before the user leaves the app. But it works well enough for my particular case.

Server Side

The server side is exceedingly simple.
The only point of interest is deciding how to log the client side logs on the server. I've decided to start out by degrading all client side log messages to information level on the server, regardless of the level on the client. This is done because certain things - e.g. lost network - will cause error situations on the client - like failed ajax requests - even though they are not really errors seen from the server side: we cannot do anything about clients losing network. This is contextual and might be completely different in your situation. Furthermore, I decided to log each batch of client side log messages in one go, in order to avoid spamming my server side log. Both of these decisions might change at a later stage, once we have more insight into the characteristics of the logs we get back from clients.
As I said, the server side is simple, it boils down to a single simple endpoint:
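In my case that endpoint is a Nancy module along these lines (a sketch - the module name, route and DTO types are illustrative, and Console.WriteLine stands in for a real logger writing at information level):

```csharp
using System;
using Nancy;
using Nancy.ModelBinding;

// Illustrative DTOs matching the client side log entries.
public class ClientLogEntry
{
    public string Level { get; set; }
    public string Message { get; set; }
    public string Time { get; set; }
}

public class ClientLogBatch
{
    public ClientLogEntry[] Logs { get; set; }
}

public class ClientLogsModule : NancyModule
{
    public ClientLogsModule()
    {
        Post["/api/logs"] = _ =>
        {
            var batch = this.Bind<ClientLogBatch>();
            // One information level entry per batch, regardless of client side levels.
            Console.WriteLine("Client side log batch with {0} messages", batch.Logs.Length);
            return HttpStatusCode.OK;
        };
    }
}
```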

That's it. Again YMMV.

1. OK, 2 tweets don't constitute popular demand. But hey.

Tuesday, May 12, 2015

Upcoming Nancy and ASP.NET 5 Training

I have some public courses coming up over the next few months in Copenhagen and in Aarhus, which I am really looking forward to running.


Nancy is such a good framework, and so nice to work with. I'm really looking forward to running this 1-day course on Nancy. You'll get from 0 to ready-to-develop-something real, and we'll have fun along the way!


ASP.NET 5 is a big update to ASP.NET. There is a lot of great stuff going on, like cross platform support, DI, middleware and new tooling. There is also a lot of new stuff to come to terms with, and in this 2-day course we will do just that. Day 1 focuses on ASP.NET and low level tooling, whereas day 2 moves up the stack to MVC.

  • August 31st and September 1st in Copenhagen. Sign up
  • September 3rd and 4th in Aarhus. Sign up

Thursday, April 30, 2015

Short Circuiting Requests in OWIN Middleware

In earlier posts I've looked at writing and packaging OWIN middleware that does something to requests on their way in and to the response on its way out. This time around I will show how middleware can choose to short circuit the pipeline and just return a response immediately.


Do not call next, and fill in the "owin.ResponseBody" environment key.


Usually one middleware calls the next, and that is how the pipeline is executed and how the different pieces of middleware each get to do their thing. At the end of the pipeline is the application, where you do whatever application logic is needed to handle the request. There are situations, though, where it makes sense for requests to not even reach the application logic:
  • Requests from unpaying customers could be throttled to some low rate
  • Requests from certain IPs could be blocked
  • Requests without valid authentication tokens could be turned away with a 401 status
  • Some endpoints are not for application use - like monitoring endpoints

Build a Monitor Middleware

Let's take a look at building a middleware that provides a monitoring endpoint. It will listen on the path "/_monitor" and will respond with a small json object containing the version of the deployed software.

Once again I implement the middleware using the OWIN MidFunc - which we can think of as a curried function taking first the next piece of middleware in the pipeline and then the OWIN environment dictionary.
To implement the monitoring endpoint I look up the request path in the environment dictionary and decide if it is a monitoring path. If it is, I write a simple json string to the response body, set the content type header and return. On the other hand, if it is not a monitoring request, I just pass the environment on to the next middleware in the pipeline.
Here is the code:
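A sketch of it (the version string is hard coded here; in reality it would be read from the deployed assembly):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;

namespace MonitorSketch
{
    using AppFunc = Func<IDictionary<string, object>, Task>;

    public static class MonitorMiddleware
    {
        // Short circuits requests to /_monitor; everything else flows on.
        public static Func<AppFunc, AppFunc> Middleware = next => async env =>
        {
            var path = (string)env["owin.RequestPath"];
            if (path == "/_monitor")
            {
                var headers = (IDictionary<string, string[]>)env["owin.ResponseHeaders"];
                headers["Content-Type"] = new[] { "application/json" };
                var json = Encoding.UTF8.GetBytes("{ \"version\": \"1.0.0\" }");
                var body = (Stream)env["owin.ResponseBody"];
                await body.WriteAsync(json, 0, json.Length);
                // Note: next is never called - the pipeline is short circuited.
            }
            else
            {
                await next(env);
            }
        };
    }
}
```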

Tuesday, April 7, 2015

Packaging OWIN Middleware

Following on from the last post about building OWIN middleware directly as functions this post shows how to package such middleware up in an easy to consume package. Doing so is quite easy, so this post is quite short.

The request logging middleware developed in the last post looks like this:
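Reconstructed as a sketch (Console.WriteLine stands in for whatever logger you use, and AppFunc is the usual Func<IDictionary<string, object>, Task> alias):

```csharp
// MidFunc: given the next step, return a new step wrapping it with logging.
// Needs System, System.Collections.Generic, System.Diagnostics, System.Threading.Tasks.
Func<AppFunc, AppFunc> requestLoggingMiddleware = next => async env =>
{
    var method = (string)env["owin.RequestMethod"];
    var path = (string)env["owin.RequestPath"];
    var stopwatch = Stopwatch.StartNew();

    await next(env);  // let the rest of the pipeline handle the request

    stopwatch.Stop();
    Console.WriteLine("{0} {1} handled in {2} ms",
        method, path, stopwatch.ElapsedMilliseconds);
};
```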

To use this, application developers will have to add that function to their OWIN pipeline. Doing so in pure OWIN, without resorting to APIs specific to a given server implementation, is a matter of calling the build function with the middleware function. The server implementation is responsible for providing the build function.

Using the ASP.NET 5 API for obtaining an OWIN build function, and assuming the requestLoggingMiddleware function is in scope, adding middleware through the build function looks like this:
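As a sketch, assuming the Microsoft.AspNet.Owin bridge package, whose UseOwin extension hands application code the build function:

```csharp
public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.UseOwin(buildFunc =>
        {
            buildFunc(requestLoggingMiddleware); // add the middleware to the pipeline
        });
    }
}
```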

Nice and simple. What we might add on top of this is a simple extension method to the build function:
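A sketch of such an extension method (MidFunc is shorthand for the middleware function type, and the class name is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace RequestLogging
{
    using MidFunc = Func<Func<IDictionary<string, object>, Task>,
                         Func<IDictionary<string, object>, Task>>;

    public static class RequestLoggingMiddlewareExtensions
    {
        // The middleware itself, abbreviated here to keep the sketch self-contained.
        static readonly MidFunc requestLoggingMiddleware = next => async env =>
        {
            Console.WriteLine("{0} {1}", env["owin.RequestMethod"], env["owin.RequestPath"]);
            await next(env);
        };

        // buildFunc is the function the server hands us for adding middleware.
        public static Action<MidFunc> UseRequestLogging(this Action<MidFunc> buildFunc)
        {
            buildFunc(requestLoggingMiddleware);
            return buildFunc; // returning it allows chaining further pipeline setup
        }
    }
}
```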

turning the code for adding the middleware to the pipeline into this:
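Assuming the extension method is named UseRequestLogging, application startup code shrinks to:

```csharp
buildFunc.UseRequestLogging();
```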

The advantage of this is that
  • the requestLoggingMiddleware variable need not be in scope in the application code
  • the extension method provides a convenient place for application startup code to pass in parameters to the middleware - like say logging configuration parameters or whatever else might be relevant
  • the extension method returns the build function allowing application code a convenient way of chaining pipeline setup

Now sharing the middleware across a number of services (and teams) is a matter of sharing the class above through e.g. NuGet.

Thursday, March 26, 2015

Handle Cross Cutting Concerns with an OWIN MidFunc

This post picks up where my post Why Write (OWIN) 'Middleware' in Web Applications? left off, and starts digging into writing OWIN middleware.

OWIN Middleware is a Function

OWIN middleware is a function, which makes perfect sense considering this Pipes and Filters like diagram from the last OWIN post:

The pieces of middleware are functions that request and response data flow through.

Middleware is a Curried Function

The traditional way of presenting the function signature of OWIN middleware is a bit daunting1. I find it easier to think of middleware as a curried function that takes two arguments - one at a time - the first argument being the next piece of middleware in the pipeline and the second a dictionary containing the request and the response data:
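In code, that way of thinking looks roughly like this (using the common AppFunc alias for the inner function):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace MiddlewareSketch
{
    // The inner function: takes the OWIN environment dictionary, returns a Task.
    using AppFunc = Func<IDictionary<string, object>, Task>;

    public static class Sketch
    {
        // Middleware: given the next step in the pipeline, return a new step.
        public static Func<AppFunc, AppFunc> Middleware = next => async env =>
        {
            // ...inspect or modify the request data in env here...
            await next(env); // hand over to the rest of the pipeline
            // ...inspect or modify the response data in env here...
        };
    }
}
```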

The middleware function can do whatever it wants to the request and response data before and after calling the next part of the pipeline - which is the function passed into the 'next' argument.

Handling a Simple Cross Cutting Concern

A simple thing that most people building an SOA system want to do across all services is request logging. A simple piece of OWIN middleware can handle this quite nicely, like this:
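A sketch of such request logging middleware (Console.WriteLine stands in for a real logger):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

namespace MiddlewareSketch
{
    using AppFunc = Func<IDictionary<string, object>, Task>;

    public static class RequestLogging
    {
        // Logs the method, path and execution time of every request.
        public static Func<AppFunc, AppFunc> Middleware = next => async env =>
        {
            var method = (string)env["owin.RequestMethod"];
            var path = (string)env["owin.RequestPath"];
            var stopwatch = Stopwatch.StartNew();

            await next(env); // let the rest of the pipeline handle the request

            stopwatch.Stop();
            Console.WriteLine("{0} {1} handled in {2} ms",
                method, path, stopwatch.ElapsedMilliseconds);
        };
    }
}
```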

Put this middleware at the beginning of the OWIN pipeline and all requests and all request execution times will be logged.

Summing up

OWIN middleware is most easily (I find) understood as a simple curried function taking two parameters: The next step in the pipeline and an environment dictionary containing all request and response data.
Next time we will look at packaging this up nicely in a library that makes it easy to drop this middleware into your pipeline.

1. The middleware function is usually presented as Func<Func<IDictionary<string, object>, Task>, Func<IDictionary<string, object>, Task>> or, with the alias AppFunc = Func<IDictionary<string, object>, Task>, in the shorthand version Func<AppFunc, AppFunc>.
Saturday, February 28, 2015

Nancy 1.0 and the Book

In light of Nancy recently reaching 1.0 (and quickly thereafter 1.1), I thought it was appropriate to revisit my Nancy book, and the running code sample in it on Nancy 1.1. This post looks at what's changed and what I might have coded differently now.

But first,

Is the Book still Relevant?

Yes, I think the book is still relevant. Almost all the code works unchanged with Nancy 1.1, and Nancy still follows the same principles and philosophy. I still think the book is a quick way to get introduced to Nancy's way of doing things, the DSLs it provides and its awesome extensibility.
(But then again, I may be slightly biased)

What Broke Between Then and Now?

After updating the Nancy packages in my copy of the code for the 12th and final chapter of the book, only one thing did not compile. One method on the IBodyDeserializer interface has changed, namely CanDeserialize, which now receives a BindingContext as an argument. I added this argument, compiled, and all tests passed and AFAICT everything worked.
That's it. And that says a lot about Nancy. There may not be a lot of promises of backward compatibility, but still the effort needed to update from 0.17 to 1.1 was minimal.

I have had comments that the MongoDB driver used is out of date and that the code in the book uses methods from it that are now obsolete. Since MongoDB is not the subject of the book and the code still works, I'm going to continue punting on that one, and just say no big deal.

What Would I Have Done Differently?


The book covers ASP.NET hosting and self hosting and how to set up a solution where the site is in a class library which can be re-used across hosts. Today I might still start out with the ASP.NET host because it is familiar to many ASP.NET developers, but self hosting as well as the cross host scenario would be OWIN based. In general I'd recommend OWIN hosting now, so that would have some space in the book, if written today.

The Cloud Chapter

Hmmm. Deploying Nancy to the cloud. That chapter seemed like a good idea up front. But wasn't. It ended up being more about some esoteric xUnit setup stuff than about deployment. Why? Because deploying Nancy to the cloud is no big deal. It's just like deploying any ASP.NET (or OWIN) application to the cloud. In other words nothing special.
I didn't see that while writing, though, but I hope I would have left this chapter out if I'd written the book today. Maybe a chapter on OWIN would have been a better idea.


The book describes adding Twitter login using the WorldDomination NuGet packages. Unfortunately that package changed its name to SimpleAuthentication between my final draft and the publication of the book. Grrrrrrrrrrrrrrrrrrrr.
In any case, if writing the book today I think I would have pushed the Twitter authentication down to the OWIN level. On the other hand that would mean less focus on Nancy and more on OWIN. But I think that is what I'd do, because using OWIN middleware for authentication would be my (general) recommendation today.

Test Defaults

I wrote the book in a test first fashion - showing tests first and then implementation. I'm still happy with that decision, because it emphasizes the awesome Nancy.Testing library and just how testable Nancy is. And for me testability is a major feature of a framework.
There is a feature that has been introduced in Nancy since the book was written that would have allowed me to cut down on a lot of duplication in the test code, namely defaults for the Browser object. I would have used that all over the book if writing it now, and highly recommend using the Browser defaults.


This is minor, but if writing the book today I would not have used NLog but Serilog instead.

Friday, February 6, 2015

Why Write (OWIN) 'Middleware' in Web Applications?


Middleware - in the OWIN sense - can help you modularize your web application code, and maybe even enable you to reuse the plumbing code needed to deal with cross cutting concerns.

What is Middleware?

The context of this post is web applications, and in that context middlewares are pieces of code between the web server and the application code:

When a request comes in it is first handled by the web server, which hands it off to the first piece of middleware, which hands it to the next and so on until the request reaches the application code - i.e. the Nancy Modules or MVC/WebAPI controllers. The response takes the opposite route from the application, through the middlewares and the web server. You can think of that as a pipeline. The pipeline starts and ends with the web server, has the middlewares along the way and the application in the middle:

The trick here is that the middlewares all conform to the same interface, meaning that the pipeline can be composed in many different ways - we may omit middleware 3, insert a 4th between 1 and 2, or swap 2 and 3.
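Since every middleware conforms to the same functional shape, composing the pipeline is, conceptually, just nesting function calls (a sketch; the names are illustrative):

```csharp
// The application sits innermost; each middleware wraps the next step.
var pipeline = middleware1(middleware2(middleware3(application)));

// Re-composing is just nesting differently - e.g. omitting middleware3:
var alternative = middleware1(middleware2(application));
```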

Put Cross Cutting Concerns In Middleware

When we look at the handling of an HTTP request as a pipeline, there are some things we can immediately see that we could stick into that pipeline and thereby have applied to all requests. The obvious examples are request and response logging, performance logging and authentication. Indeed, these are things that OSS OWIN middleware implementations exist for.
Slightly less obvious, maybe, are the more application specific things like request throttling, caching, redirects of obsoleted URLs, or even things like integration with a payment gateway.
The common theme of these candidates for middleware is that they move concerns that would otherwise have to be handled by the application code out into the pipeline. This not only cleans up application code, but also lends you the flexibility of the pipeline to re-configure the cross cutting stuff in one place across all endpoints.

Middleware in a Service Oriented Architecture

I'm usually wary of the promise of reuse, but it does depend on where the reuse happens. There is usually not a whole lot of reuse of business logic between applications. But the frameworks they are built on are hugely reused. Middleware sits (well) in the middle.
In the case of a service oriented architecture (where HTTP is used for inter-service communication) serving one or more applications, there is a potential for reuse across services of the types of plumbing that are specific to this particular architecture, but not to each individual service. Things like:

  • adding monitoring endpoints to each service (middleware can short circuit the pipeline such that requests never reach the application)
  • collecting usage data
  • opening and closing database sessions
  • keeping track of correlation tokens
  • caching rules
  • ...

These are all examples of things that you don't want to write for every single service, but that you might want every service to have. The more services there are and the smaller each one is the more important the reuse of plumbing becomes. Wrapping each of these cross cutting pieces of functionality into middleware which plugs right into the pipeline makes them readily reusable across services.


Both OWIN and ASP.NET 5 support the idea of middleware. In OWIN it's one of the central features. In ASP.NET 5 the same feature is available 1) because ASP.NET 5 supports OWIN and 2) because it also brings its own notion of middleware to the table.

In future posts I'll dig more into both OWIN middleware and the 'proprietary' ASP.NET 5 middleware.