
State and threading: Key concepts in a .NET microservices architecture

There are a number of reasons why implementing a .NET microservices architecture presents a challenge. Tom Nolle explains how to marry .NET and microservices.

Microservices represent a unique balancing act for planners and developers. On one hand, microservices evolved from applying web principles to application design, and these are normally associated with languages like Java. On the other hand, most enterprises have a significant investment in traditional IT languages and tools, including Microsoft's .NET middleware.

For .NET developers to balance these influences successfully, it's important to step back and look at the basic architecture of a microservice, view characteristics like multithreading and statelessness as implementation choices, and never lose sight of the cloud.

Basic microservice concepts

Everyone knows what a microservice is at the architecture level: a unit of network-connected functionality that can be deployed in an agile way and composed into applications as needed. In development terms, though, a microservice is, first and foremost, a message handler. Requests (messages) are posted to a microservice, its internal logic acts on them, and a response (another message) is returned.

That may sound a lot like any distributed application component. Microservices are unique, however, because of two requirements in particular: They must be able to be used asynchronously by multiple applications/components, and they must be able to horizontally scale in order to grow and shrink the number of instances of the microservice as message volumes change. The only way to achieve both these goals in design is to be very aware of something called state and something called threads.

Statelessness and multithreading

A microservice, like any piece of logic, is said to be stateless if each message is processed to a response without regard for what happened before or what will happen after. A stateful microservice is one where messages are processed in a sequence, so the order of messages can impact the outcome. It's easy to create stateless microservices that have the asynchrony and scalability properties we've noted; any copy of a microservice can process any request. Stateful microservices, to be asynchronous and scalable, require some means of communicating state between sequential messages and ensuring that messages aren't mixed into a context where they don't belong.
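The contrast above can be sketched in C#. This is a minimal, hypothetical example (the `Request`, `Response` and handler types are invented for illustration): the stateless handler computes its response entirely from the message, so any instance can serve any request, while the stateful handler keeps a per-conversation running total, so message order and routing matter.

```csharp
using System.Collections.Concurrent;

public record Request(string ConversationId, int Amount);
public record Response(int Total);

public class StatelessHandler
{
    // No fields: the response depends only on the message itself,
    // so any copy of this microservice can process any request.
    public Response Handle(Request req) => new Response(req.Amount * 2);
}

public class StatefulHandler
{
    // Shared state keyed by conversation; the order of messages
    // affects the outcome, so instances can't be freely interchanged.
    private readonly ConcurrentDictionary<string, int> _totals = new();

    public Response Handle(Request req)
    {
        var total = _totals.AddOrUpdate(req.ConversationId, req.Amount,
            (_, prev) => prev + req.Amount);
        return new Response(total);
    }
}
```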

A thread is an independent execution path or context. Single-threaded applications or microservices run one message through to completion before starting the next, whereas multithreaded versions can handle several messages at the same time. To work in a multithreaded environment, code has to be thread-safe, which can be accomplished in a variety of ways, all of which reduce to either tolerating concurrent use by multiple messages or locking the portions of the software that can't be made tolerant.
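Both approaches to thread safety can be illustrated with a simple counter. This sketch (hypothetical class names) shows the two options just described: locking the critical section so only one thread enters at a time, and using an atomic operation that tolerates concurrent use without a lock.

```csharp
using System.Threading;

public class LockedCounter
{
    private readonly object _gate = new object();
    private int _count;

    public int Increment()
    {
        lock (_gate)   // only one thread at a time enters this block
        {
            return ++_count;
        }
    }
}

public class AtomicCounter
{
    private int _count;

    // Interlocked.Increment performs the update atomically,
    // so no lock is needed for this simple case.
    public int Increment() => Interlocked.Increment(ref _count);
}
```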

The big disadvantage .NET developers face with microservices is that these two concepts are not as well understood in the .NET world as they are in the web community, particularly among Java developers. C# and .NET support multithreaded operation and control of state, but many developers don't know how to use those capabilities. A few basic rules can help.

Simple rules for .NET microservices architecture

The first rule is that the simplest approach is the most restrictive. You can build out your .NET microservices architecture using the ASP.NET Web API to create what's called a model-view-controller (MVC) application, which front-ends business logic with a RESTful web interface. This can then pass control back to the business logic in some way. The good news is that this reuses current logic. The bad news is that it doesn't address the issues of state or threads at all; you'll have to deal with them in the reused logic or forgo the benefits they'd offer.
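A minimal controller in this style might look like the following sketch. The `IOrderLogic` interface and the order types are hypothetical stand-ins for existing business logic; the point is that the controller only translates REST requests into calls on that logic, leaving state and threading concerns inside it.

```csharp
using Microsoft.AspNetCore.Mvc;

public record OrderRequest(string Item, int Quantity);
public record OrderResult(string Id);

// Assumed existing business logic being reused behind the REST front end.
public interface IOrderLogic
{
    OrderResult Process(OrderRequest request);
}

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly IOrderLogic _logic;

    public OrdersController(IOrderLogic logic) => _logic = logic;

    [HttpPost]
    public IActionResult Create([FromBody] OrderRequest request)
    {
        // Pass control back to the reused logic; the controller itself
        // does not address statefulness or thread safety.
        var result = _logic.Process(request);
        return Ok(result);
    }
}
```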

The second rule is that to get beyond the basics, you'll need to use a message queue or bus of some sort. Both the microservice goal of concurrent use and the goal of scalability under load will require keeping your message sources decoupled from processing. A queue or bus will allow anything to post messages and anything to process them, which doesn't automatically solve state and thread issues, but at least presents an easier path to solution.

Queues or buses let you post messages when needed and then associate them with a process element for handling. The beauty of the approach in .NET applications is that each process element can then be treated as a kind of subordinate microservice, and you can address things like multithreaded operation and statefulness versus statelessness where needed. Where a process is simple, you can use a stateless, multithreaded implementation for maximum performance and agility, and where things are complex, you can make the process stateful and blocking, without impacting the other elements of the microservice.
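The decoupling described above can be sketched with .NET's in-process `BlockingCollection` standing in for a real queue or bus (the `MessagePump` class is hypothetical). Producers post messages without knowing who handles them, and the number of consumer workers can grow or shrink independently, which is exactly the separation of posting from processing that a queue or bus provides.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

public class MessagePump
{
    private readonly BlockingCollection<string> _queue = new();

    // Anything can post a message; posters never see the processors.
    public void Post(string message) => _queue.Add(message);

    // Signal that no further messages will arrive.
    public void CompletePosting() => _queue.CompleteAdding();

    // Start N concurrent workers; each pulls messages until the queue drains.
    // Scaling up or down means only changing this count.
    public Task StartWorkers(int count, Action<string> handler) =>
        Task.WhenAll(Enumerable.Range(0, count).Select(_ =>
            Task.Run(() =>
            {
                foreach (var msg in _queue.GetConsumingEnumerable())
                    handler(msg);
            })));
}
```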

The third rule is to use design patterns where you can for maximum consistency. There are nearly a hundred microservice-related design patterns available for .NET, far too many to summarize here, but they tend to fall into the areas of gateway control, publish/subscribe and message routing, and hosting and database scaling control.

The final rule of building your .NET microservices architecture is to implement the microservices with all of their concurrency and scaling features, but in a simple single- or two-host environment, to wring out issues of thread and state control. Trying to get everything working in a virtualized or cloud deployment from the start is much more difficult. The proven approach is to build for the future, test in the simplest case and then scale to the two-host model. From there, you can go as far as your application needs demand.

The ultimate support for microservices

The most significant advance Microsoft has made in support of a .NET microservices architecture is Azure Service Fabric, also available on Windows Server 2016. This is a microservice-friendly queue/bus architecture designed (as the name suggests) for Microsoft Azure. However, it can host components on any public or private cloud, or on bare metal, as long as a compatible version of Windows Server is available.

What makes Service Fabric so useful for a .NET microservices architecture is that it's platform-universal and fully distributed. Message management and distribution in a virtual or cloud deployment is complicated enough without facing the issues of multiple hosting locations, and of scaling and failover that have to cross data center and cloud provider boundaries. Service Fabric lets you treat a cluster of resources as a single hosting point, and manage your processes and distribute your messages within the cluster regardless of where the resources are. That's the ultimate support for microservices anywhere.

Next Steps

Learn why it's time for .NET developers to look to Azure for help

Microservices: More akin to SOA or MVC?

Discover everything you need to know about securing your microservices
