Saturday, June 6, 2015

Shipping Error Logs from an Angular App

By popular demand1 I'm writing this post describing how I've gone about implementing log shipping in an Angular app. It's what worked for me. YMMV.

Why?

The reasons for logging messages in an SPA are the same as in any other application: to gain insight into what happens in the app at run time.
The problem with client side logging is that it is hard to get at once the app is in production - that is, once it is running on a machine you (the developer) do not have access to. Collecting the log messages on the client and shipping them back to the server lets you put them into a log store on the server, where they are available for analysis.

Client Side

In my situation the SPA is written in Angular, which provides a logging service called $log. The $log service can be used anywhere in application code and is also used by Angular itself to log stuff. So the messages written to $log are messages I would like to ship back to the server. Out of the box $log simply logs to the console. Therefore I wrap $log in a decorator and tell Angular to inject my decorated $log wherever $log would otherwise be injected. My decorator stores all log messages along with the log level - error, warning, information, debug - in an array and then delegates to the default $log implementation. That way log messages are both stored for shipping at a later point in time and logged to the console immediately. The decorator also adds a method called shipLogs that sends the current array of log messages to the server and then clears the array on success. The decorator is set up like this:


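Roughly, a decorator along these lines - a sketch only, where the module name 'app', the '/api/logs' endpoint and the exact message format are illustrative assumptions rather than the original code:

    // Sketch of a $log decorator that buffers messages and exposes shipLogs().
    // Assumptions: an Angular module named 'app' and a server endpoint at '/api/logs'.
    angular.module('app').config(['$provide', function ($provide) {
      $provide.decorator('$log', ['$delegate', '$injector', function ($delegate, $injector) {
        var buffer = []; // holds { level, message, time } entries until they are shipped

        // Wrap each log level: remember the message, then delegate to the original $log.
        ['debug', 'info', 'warn', 'error'].forEach(function (level) {
          var original = $delegate[level];
          $delegate[level] = function () {
            buffer.push({
              level: level,
              message: Array.prototype.slice.call(arguments),
              time: new Date().toISOString()
            });
            original.apply($delegate, arguments);
          };
        });

        // Send the buffered messages to the server; clear the buffer only on success.
        $delegate.shipLogs = function () {
          if (buffer.length === 0) { return; }
          var $http = $injector.get('$http'); // resolved lazily to avoid a circular dependency
          return $http.post('/api/logs', buffer).then(function () {
            buffer.length = 0;
          });
        };

        return $delegate;
      }]);
    }]);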

The shipping of the logs can be done periodically - in my case every 10 seconds - using Angular's $interval service:

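A minimal sketch of that wiring, assuming the same 'app' module as above:

    // Ship the buffered logs every 10 seconds.
    angular.module('app').run(['$interval', '$log', function ($interval, $log) {
      $interval(function () {
        $log.shipLogs();
      }, 10000);
    }]);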

This is not a foolproof process: there is no handling of situations where shipping takes more than 10 seconds, no guarantee that the shipping won't happen in parallel with other requests and compete for bandwidth, and no guarantee that all log messages are shipped before the user leaves the app. But it works well enough for my particular case.

Server Side

The server side is exceedingly simple.
The only point of interest is deciding how to log the client side logs on the server. I've decided to start out by degrading all client side log messages to information level on the server regardless of the level on the client. This is done because certain things - e.g. lost network - will cause error situations on the client - like failed ajax requests - even though they are not really errors seen from the server side: we cannot do anything about clients losing network. This is contextual and might be completely different in your situation. Furthermore I decided to log each batch of client side log messages in one go, in order to avoid spamming my server side log. Both of these decisions might change at a later stage once we have more insight into the characteristics of the logs we get back from clients.
As I said, the server side is simple; it boils down to a single endpoint:

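As an illustration only, a minimal Nancy module along those lines - the route, the DTO and the use of Serilog are my assumptions, standing in for whatever framework and logger the real endpoint uses:

    // Sketch of a server side endpoint that accepts a batch of client log messages
    // and logs the whole batch in one go, always at information level.
    using System.Collections.Generic;
    using System.Linq;
    using Nancy;
    using Nancy.ModelBinding;
    using Serilog;

    public class ClientLogMessage
    {
        public string Level { get; set; }
        public string Message { get; set; }
        public string Time { get; set; }
    }

    public class ClientLogModule : NancyModule
    {
        public ClientLogModule()
        {
            Post["/api/logs"] = _ =>
            {
                var batch = this.Bind<List<ClientLogMessage>>();

                // One log entry per batch, at information level regardless of the client side level.
                Log.Information("Received {Count} client log messages: {Messages}",
                    batch.Count,
                    string.Join(" | ", batch.Select(m =>
                        string.Format("[{0}] {1}: {2}", m.Time, m.Level, m.Message))));

                return HttpStatusCode.OK;
            };
        }
    }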


That's it. Again YMMV.


1. OK, 2 tweets don't constitute popular demand. But hey.

Tuesday, May 12, 2015

Upcoming Nancy and ASP.NET 5 Training

I have some public courses coming up over the next few months in Copenhagen and in Aarhus, which I am really looking forward to running.

Nancy

Nancy is such a good framework, and so nice to work with. I'm really looking forward to running this 1-day course on Nancy. You'll go from zero to ready to develop something real, and we'll have fun along the way!



ASP.NET 5

ASP.NET 5 is a big update to ASP.NET. There is a lot of great stuff going on, like cross-platform support, DI, middleware and new tooling. There is also a lot of new stuff to come to terms with, and in this 2-day course we will do just that. Day 1 focuses on ASP.NET and low-level tooling, whereas day 2 moves up the stack to MVC.

  • August 31st and September 1st in Copenhagen. Sign up
  • September 3rd and 4th in Aarhus. Sign up

Thursday, April 30, 2015

Short Circuiting Requests in OWIN Middleware

In earlier posts I've looked at writing and packaging OWIN middleware that does something to requests on their way in and to the response on its way out. This time around I will show how middleware can choose to short circuit the pipeline and return a response immediately.

TL;DR

Do not call next, and fill in the "owin.ResponseBody" environment key.

Why

Usually one middleware calls the next; that is how the pipeline is executed and how the different pieces of middleware each get to do their thing. At the end of the pipeline is the application, where you run whatever application logic handles the request. There are situations, though, where it makes sense for requests not to reach the application logic at all:
  • Requests from non-paying customers could be throttled to some low rate
  • Requests from certain IPs could be blocked
  • Requests without valid authentication tokens could be turned away with a 401 status
  • Some endpoints are not for application use - like monitoring endpoints

Build a Monitor Middleware

Let's take a look at building a middleware that provides a monitoring endpoint. It will listen on the path "/_monitor" and respond with a small JSON object containing the version of the deployed software.

Once again I implement the middleware using the OWIN MidFunc - which we can think of as a curried function taking first the next piece of middleware in the pipeline and then the OWIN environment dictionary.
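Written out as the usual C# delegate aliases (the AppFunc/MidFunc names are community convention rather than part of the OWIN spec itself):

    // An AppFunc handles one request given the OWIN environment dictionary;
    // a MidFunc takes the next AppFunc and returns a new AppFunc that wraps it.
    using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>,
                                System.Threading.Tasks.Task>;
    using MidFunc = System.Func<
        System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>,
        System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>>;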
To implement the monitoring endpoint I look up the request path in the environment dictionary and decide whether it is a monitoring path. If it is, I write a simple JSON string to the response body, set the content type header and return. If it is not, I just pass the environment on to the next middleware in the pipeline.
Here is the code:
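A sketch of such a middleware - reading the version from the executing assembly is an assumption, as is the class name:

    // Sketch of an OWIN middleware that short circuits requests to "/_monitor"
    // and responds with a small JSON object containing the deployed version.
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Reflection;
    using System.Text;

    using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>,
                                System.Threading.Tasks.Task>;

    public static class MonitoringMiddleware
    {
        // Returns a MidFunc: a function that takes the next AppFunc and returns a new AppFunc.
        public static Func<AppFunc, AppFunc> Create()
        {
            return next => async env =>
            {
                var path = (string)env["owin.RequestPath"];

                if (path == "/_monitor")
                {
                    // Short circuit: do not call next, just write the response and return.
                    var version = Assembly.GetExecutingAssembly().GetName().Version.ToString();
                    var json = "{ \"version\": \"" + version + "\" }";
                    var bytes = Encoding.UTF8.GetBytes(json);

                    var headers = (IDictionary<string, string[]>)env["owin.ResponseHeaders"];
                    headers["Content-Type"] = new[] { "application/json" };

                    var body = (Stream)env["owin.ResponseBody"];
                    await body.WriteAsync(bytes, 0, bytes.Length);
                    return;
                }

                // Not a monitoring request: pass the environment on to the rest of the pipeline.
                await next(env);
            };
        }
    }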