Friday, October 11, 2019

Identifying and Scoping Microservices talk from DevConfPL

I recently had the pleasure of delivering my Identifying and Scoping Microservices talk at DevConf in Krakow. Here's the video. Enjoy 😉


Friday, September 20, 2019

A pattern for synchronizing data from a legacy system to microservices

Abstract

A recurring need in the teams I work with is to move and continually synchronize data from a legacy system - typically a monolith - to a new microservice-based replacement. In this post I will outline a data synchronization solution that
  • Allows the microservice side to catch up on historical data
  • Synchronizes new data to the microservices as it is produced in the legacy system
  • Lets us control the speed of synchronization and, as a result, the load on the legacy system
  • Is repeatable, so the same data can easily be synchronized again if need be
In the interest of keeping this post short I will only show a high-level view of the solution.

Microservices need legacy data

When replacing an existing legacy system with a modern microservice-based system, teams (sensibly) tend to do so in steps following the strangler pattern. This means that for a while both systems run in production: some functionality is handled in the legacy system and some in the microservices. At first very little is handled in the microservices, but gradually more and more functionality moves there. To support those small early steps implemented in the microservices, data from the legacy side is often needed. For instance, a team moving a back office system to a microservice architecture might want to implement a new sales dashboard with microservices. To do so it will need order data, and possibly other data too, but let's just focus on the order data for now. Orders are still being taken in on the legacy side, so order data is still being produced there. But the new sales dashboard needs both historical orders and new orders to work correctly.

To make things more interesting, let's say the legacy system and the microservices are in different data centers - maybe the system is moving from on-prem to the cloud as part of the microservice effort.

Solution: A data pump 

A solution to the situation above is to implement a data pump that sends any updates to relevant data in the legacy database over to the microservices. In the example that means new orders as well as changes to orders.

This solution has two components: A data pump which is deployed in the legacy environment and a data sink which is deployed in the microservices environment. The data pump tracks which data has already been sent over to the microservices and sends new data over as it is produced in the legacy system. The data sink simply receives the data from the pump and posts it onto a queue. This enables any and all microservices interested in the data - e.g. new or updated orders - to subscribe to such messages on the queue and to build up their models of that data in their own databases.
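For illustration, here is a minimal C# sketch of such a pump, assuming the legacy database is SQL Server and that the Orders table has a RowVersion column to order changes by; the table and column names, the batch size, and the sink and tracking abstractions are all assumptions for the sketch, not the actual implementation.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading.Tasks;

public record OrderRow(int Id, string Payload, byte[] RowVersion);

// Wraps the call over to the data sink in the microservices environment
public interface IDataSink { Task SendAsync(OrderRow order); }

// Persists how far the pump has gotten - e.g. in a small tracking table
public interface ITrackingStore
{
    Task<byte[]> GetLastSentAsync();
    Task SaveLastSentAsync(byte[] rowVersion);
}

public class OrderDataPump
{
    private readonly string _connectionString;
    private readonly IDataSink _sink;
    private readonly ITrackingStore _tracking;

    public OrderDataPump(string connectionString, IDataSink sink, ITrackingStore tracking) =>
        (_connectionString, _sink, _tracking) = (connectionString, sink, tracking);

    public async Task RunAsync()
    {
        while (true)
        {
            // start from the beginning of history on the first run
            var lastSent = await _tracking.GetLastSentAsync() ?? new byte[8];
            var batch = await ReadBatchAsync(lastSent, batchSize: 100); // small batches limit the load

            foreach (var order in batch)
                await _sink.SendAsync(order);

            if (batch.Count > 0)
                await _tracking.SaveLastSentAsync(batch[batch.Count - 1].RowVersion);

            // the pause between batches is the throttle - tune it to what the legacy database can take
            await Task.Delay(TimeSpan.FromSeconds(batch.Count == 0 ? 30 : 1));
        }
    }

    private async Task<List<OrderRow>> ReadBatchAsync(byte[] lastSent, int batchSize)
    {
        var rows = new List<OrderRow>();
        using var connection = new SqlConnection(_connectionString);
        await connection.OpenAsync();
        using var command = new SqlCommand(
            "SELECT TOP (@batchSize) Id, Payload, RowVersion FROM Orders " +
            "WHERE RowVersion > @lastSent ORDER BY RowVersion",
            connection);
        command.Parameters.AddWithValue("@batchSize", batchSize);
        command.Parameters.AddWithValue("@lastSent", lastSent);
        using var reader = await command.ExecuteReaderAsync();
        while (await reader.ReadAsync())
            rows.Add(new OrderRow(reader.GetInt32(0), reader.GetString(1), (byte[])reader["RowVersion"]));
        return rows;
    }
}

Ordering and tracking by RowVersion means new and updated rows are picked up alike, and the stored high-water mark is the thing to reset when the synchronization needs to start over.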

With the described solution in place any historical data can be sent over to the microservices. That may take a while, especially if the legacy database cannot take too much additional load. In such cases the data pump can be throttled. Once the pump has caught up sending over historical data it will continue to send over new data in near real time.

If we put a bit of up-front design into the data pump we can also support restarting the process: the pump tracks what it has already sent, so resetting that tracking will make it start over. That's sometimes useful if we e.g. don't get the receiving microservice right on the first attempt, or if we introduce new microservices that also need the data.

This is a solution I have seen used with success in several of my clients' systems, and one that I think is applicable in many more systems too.

Monday, June 17, 2019

InMemoryLogger available on NuGet

I recently made a small library, InMemoryLogger, to support recording and inspecting logs made from .NET Core code under test. The library is pretty small and simply implements a .NET Core ILogger that records incoming logs and exposes them for inspection through a few public properties. I've been using InMemoryLogger for a while now and I think it is ready for other people to use too.

If you follow arrange/act/assert in your tests, the idea is to create the in-memory logger in the arrange part, either by newing it up directly or by adding it to the application under test's IServiceCollection. Then exercise the application under test in the act part, and finally inspect the properties that expose the recorded logs on the InMemoryLogger in the assert part.
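As a minimal sketch of that flow using xUnit: OrderHandler and its Handle method are made-up stand-ins for whatever is under test, and the members used to inspect the recorded logs (RecordedLogs, Message) are assumptions on my part - the readme lists the actual properties.

// plus a using for the namespace the InMemoryLogger package puts its class in
using Microsoft.Extensions.Logging;
using Xunit;

public class OrderHandlerTests
{
    [Fact]
    public void Logs_when_an_order_is_handled()
    {
        // arrange: new up the in-memory logger directly, or register it in the
        // application under test's IServiceCollection instead
        var logger = new InMemoryLogger();
        var sut = new OrderHandler(logger); // made-up system under test taking an ILogger

        // act
        sut.Handle("order-42");

        // assert: inspect the recorded log entries; RecordedLogs and Message are
        // assumed names - see the readme for the actual members
        Assert.Contains(logger.RecordedLogs, entry => entry.Message.Contains("order-42"));
    }
}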
For more info check out the readme or grab InMemoryLogger from NuGet.

Wednesday, April 3, 2019

Interview on the .NET Core Show

Last week the .NET Core Show published an interview with me where I talk about various aspects of developing microservices with .NET Core, though most of it is technology-stack agnostic. Enjoy.

Saturday, February 16, 2019

Building a self-contained .NET Core app in a container

Last week I needed to
  1. Build a self-contained .NET Core app.
  2. Not install the .NET Core SDK on the CI server.
  3. Fit this into a build pipeline that was already containerized.
These three requirements led me to build the self-contained app in a container. Doing that is a two-step process:
  1. First, build a container image using docker build.
  2. Then run the container with a local folder called "output" mounted as a volume.
The result is that the self-contained app ends up in the local "output" folder.
This is a Dockerfile that allows this:
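Roughly like so - in this sketch the SDK image tag, the project layout, and the /p:BaseOutputPath flag are assumptions; the idea is to restore at image build time and publish when the container runs, so the output lands in the mounted volume.

# .NET Core 2.2 SDK image (assumed tag)
FROM mcr.microsoft.com/dotnet/core/sdk:2.2
WORKDIR /src

# copy the source and restore for the target runtime at image build time
COPY . .
RUN dotnet restore -r linux-x64

# publish a self-contained linux-x64 app when the container runs, writing the
# output under /output, which is volume-mapped to a local folder at run time
ENTRYPOINT ["dotnet", "publish", "-c", "release", "-r", "linux-x64", "/p:BaseOutputPath=/output/"]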


And the two commands needed to build and run the container are:
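Something along these lines, where the image tag is an assumption and the volume maps a local "output" folder to /output inside the container (${PWD} works in both PowerShell and bash):

docker build -t selfcontained-build .
docker run --rm -v ${PWD}/output:/output selfcontained-build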


That builds the self-contained app into .\output\release\netcoreapp2.2\linux-x64\publish\

For context: In my case I needed to build DbUp as part of the build pipeline for a service that I run in a container. I want DbUp to be self-contained so I can run it during the deployment pipeline without needing to install the .NET Core runtime.