Messaging with Redis and C# using ServiceStack

In my previous post I wrote about subscribing and publishing to Redis message queues in node.js using the node_redis library.

The proposed architecture of the Kraken Office system (see selecting technologies for Kraken Office for details) uses Redis as a message broker to link the various parts of the system together. The previous post shows how to do this from the node.js web socket server, but now we need to access the message queues in Redis from C#, as the application and data access layers of the system will be written using the .NET framework.

We will be using the ServiceStack.Redis client library to access Redis from .NET.

Using ServiceStack.Redis

ServiceStack.Redis is a .NET client library that enables you to access the Redis NoSQL data store. It provides functionality both for using Redis as a data store and for accessing its publisher / subscriber messaging features.

To use ServiceStack.Redis simply download the latest code base and compile it using Visual Studio. Once compiled you will have access to the binaries used to interface with Redis.

Firstly you will need to start a Redis service instance (see subscribing and publishing to Redis message queues in node.js for details).

Open or create a .NET project using C# and add references to the following binaries generated during compilation of the ServiceStack.Redis project:

  1. ServiceStack.Common.dll
  2. ServiceStack.Interfaces.dll
  3. ServiceStack.Redis.dll
  4. ServiceStack.Text.dll

Once you have added the references you can then create a Redis client object in C# as follows:

var client = new RedisClient("localhost", 6379);

Publishing a message to Redis is now as simple as:

var client = new RedisClient("localhost", 6379);
client.PublishMessage("node_layer", "message_body");

This publishes the message body to the specified message channel in Redis, in this case 'node_layer'. The hostname and port specified when constructing the client should point to your Redis server instance.

Subscribing to receive messages from a particular channel also uses the same Redis client object. To subscribe to a channel and start receiving messages the following code can be used:

var client = new RedisClient("localhost", 6379);

using (IRedisSubscription subscription = client.CreateSubscription())
{
     subscription.OnMessage = (channel, message) =>
     {
          // handle message as appropriate
     };

     subscription.SubscribeToChannels("application_layer");
}

You can easily subscribe to multiple channels by passing additional channel names to the SubscribeToChannels method.

Note that the SubscribeToChannels method blocks the calling thread until the subscription ends, with messages delivered via the OnMessage handler, so subscriptions are normally run on a dedicated background thread.


Accessing Redis’ messaging facilities in .NET can be achieved easily by using the ServiceStack.Redis client library.

The message broker and client library have a major limitation, however, in that they work on a broadcast basis. As a result, multiple clients listening to a particular channel will all receive every message.

This raises issues when trying to use the Redis message broker for creating scalable systems where extra clients can be added to provide additional processing resource. For this type of scalability a round-robin system of message distribution is needed, where messages are balanced across subscribers.

Currently ServiceStack.Redis does not provide a round-robin message distribution option. As I like the Redis product and have found message queue integration easy from both .NET and node.js I have decided to write my own message proxy service to provide round-robin messaging facilities. This will be the topic of my next post.
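To sketch the round-robin behaviour such a proxy would need (a hypothetical illustration in plain JavaScript, not part of ServiceStack or Redis), the core is simply rotating through the current subscriber list so each message goes to exactly one subscriber:

```javascript
// Minimal round-robin dispatcher sketch (hypothetical helper, not a Redis API).
// Unlike Redis pub/sub broadcast, each message is handed to exactly one
// subscriber, in rotation.
function createRoundRobinDispatcher() {
  var subscribers = [];
  var next = 0;

  return {
    subscribe: function (handler) {
      subscribers.push(handler);
    },
    dispatch: function (message) {
      if (subscribers.length === 0) return;
      var handler = subscribers[next % subscribers.length];
      next++;
      handler(message);
    }
  };
}
```

A real proxy would subscribe to the Redis channel itself and forward each received message through a dispatcher like this, but the balancing logic is the essential difference from broadcast.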

Until then – happy coding!


Messaging with Redis and Node.js Using Node_Redis Client

Now that I have connectivity between client HTML 5 pages and a node.js web socket server (see creating an HTML web socket client for details), the next step in the process is to integrate a message broker to distribute messages to the scalable middleware.

In my architecture post I covered the proposed architecture of the Kraken Office system. In order to create a scalable, distributed system I am using messaging to break dependencies between components and to enable scalability of the middleware components. I have chosen the Redis data store technology as the message broker for the system, partly because Redis will also be used for data storage and partly because it is extremely fast.

I had initially selected the Redback library to assist in integrating my Node.js server with Redis. On closer inspection, however, it turns out that this library is suitable for Redis persistence requirements but does not provide integration with the Redis publisher / subscriber messaging feature. I therefore looked for an alternative integration library that did provide access to the messaging features and eventually selected the node_redis project, which includes all of the features necessary for sending and receiving messages with Redis in a node.js context.

Installing And Starting Redis On Windows

As I am working with Windows the first step is to download, install and start the Redis server. Redis is written for the Unix platform although several Windows ports of the Redis project exist. After looking around I eventually used the Windows downloads available here.

Redis doesn’t actually require any installation. In order to start the service you simply need to navigate to the download location and then double-click the redis-server.exe file. This starts the Redis service in a console window. Projects exist on the web for hosting Redis within a Windows Service – a quick search on the web will find them – although initially at least I am content to run Redis within a console window as it allows me to quickly and easily view the state of the server and any errors.

Redis can be configured by changing the settings in the redis.conf file. I won't go through the various settings here other than to say that this is where the port setting can be found, enabling you to run multiple instances of Redis on different ports.
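For example, to run a second instance you could take a copy of redis.conf and change its port directive (6380 here is just an illustrative choice), then start redis-server pointing at that file:

```
# redis.conf for a second instance
port 6380
```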

Installing The Node_Redis Library

Installing the Node_Redis library, enabling connectivity between Node.js and Redis, is straightforward. The library can be installed using the NPM package manager. Simply open a command window and use the CD command to navigate to the root of your node.js installation. Once you have done this you then use the following NPM command to install the library:

npm install redis

If you execute your node.js javascript driver files from the installation directory (usually Program Files) then the Node_Redis library is ready for use. If you execute your node.js scripts from a different location you will need to copy the node_modules folder from the installation directory to the directory in which your javascript node.js driver files reside. If you do not do this node.js will not be able to locate the correct library files.

Sending Messages To Redis From Node.js

Before using the Node_Redis library in your own node.js javascript driver files you will need to add a require statement to the top of your javascript file. This should be done as follows:

var redis = require("redis");

This loads the Node_Redis module and makes its API available through the redis variable.

The next thing is to create an instance of the Redis client as follows:

var REDIS_URL = 'localhost';
var REDIS_PORT = 6379;

var redisClient = redis.createClient( REDIS_PORT, REDIS_URL );

Obviously the URL and port values will need to be set to those of your own Redis server. You can find the port on which your Redis server is running by referring to the start-up trace information displayed in the Redis console window.

To send a message to the broker you can now simply call:

redisClient.publish( 'channel_name','message_body' );

The ability to specify a channel enables multiple messaging channels to be open at any one time. A subscriber can then decide from which channel they are interested in receiving messages.

A real-world example would be:

redisClient.publish( "application_layer", "{client logon request message}" );

This shows how easy it is to publish messages to a Redis server from node.js.
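In practice the message body will usually be structured data rather than free text. A common approach (an assumption on my part, not something node_redis requires) is to serialise a small JSON envelope so that subscribers can route on a type field:

```javascript
// Hypothetical message envelope - node_redis just sends whatever string
// you give it, so the structure here is purely a convention.
function buildMessage(type, payload) {
  return JSON.stringify({ type: type, payload: payload });
}

var logonMessage = buildMessage('client_logon', { user: 'alice' }); // illustrative values
// redisClient.publish('application_layer', logonMessage);
```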

Building on our echo web socket server described in setting up a web socket service with node.js on windows, below is code for an amended server which doesn’t echo messages back to the client but instead sends the message to a channel in Redis.

In order to run this web socket server in node.js you should:

  1.  Save the code to a file with a .js extension in the root of your node.js installation
  2. Open a command window and use the CD command to navigate to the node.js installation directory
  3. Use the node <filename> command to start the node.js server
#!/usr/bin/env node
var WebSocketServer = require('websocket').server;
var http = require('http');
var redis = require("redis");

var REDIS_URL = 'localhost';
var REDIS_PORT = 6379;

// create a redis connection
var redisClient;
try {
    redisClient = redis.createClient( REDIS_PORT, REDIS_URL );
}
catch (err) {
    console.log( "ERROR => Cannot connect to Redis message broker: URL => " + REDIS_URL + "; Port => " + REDIS_PORT );
}

var server = http.createServer(function(request, response) {
    console.log((new Date()) + ' Received request for ' + request.url);
    response.writeHead(404);
    response.end();
});

server.listen(8080, function() {
    console.log((new Date()) + ' Server is listening on port 8080');
});

var wsServer = new WebSocketServer({
    httpServer: server,
    // You should not use autoAcceptConnections for production
    // applications, as it defeats all standard cross-origin protection
    // facilities built into the protocol and the browser. You should
    // *always* verify the connection's origin and decide whether or not
    // to accept it.
    autoAcceptConnections: false
});

function originIsAllowed(origin) {
    // put logic here to detect whether the specified origin is allowed.
    return true;
}

wsServer.on('request', function(request) {
    if (!originIsAllowed(request.origin)) {
        // Make sure we only accept requests from an allowed origin
        request.reject();
        console.log((new Date()) + ' Connection from origin ' + request.origin + ' rejected.');
        return;
    }

    var connection = request.accept('kraken-protocol', request.origin);
    console.log((new Date()) + ' Connection accepted.');

    connection.on('message', function(message) {
        if (message.type === 'utf8') {
            console.log('Received Message: ' + message.utf8Data);

            // post the message to the redis message broker
            var channelName = 'application';
            redisClient.publish(channelName, message.utf8Data);
        }
    });

    connection.on('close', function(reasonCode, description) {
        console.log((new Date()) + ' Peer ' + connection.remoteAddress + ' disconnected.');
    });
});

Subscribing To Messages From Redis In Node.js

Receiving messages from Redis in node.js is also straightforward.

Firstly you need to subscribe to a particular channel and then handle the message received event. The code below shows a simple example of how this works:

var redis = require("redis");

// create client on required port
var REDIS_URL = 'localhost';
var REDIS_PORT = 6379;
var redisClient = redis.createClient( REDIS_PORT, REDIS_URL );

redisClient.on("message", function (channel, message) {
    // message received - output to console window
    console.log("client channel => " + channel + "; message => " + message + ";");
});

// subscribe to receive messages from a particular channel
redisClient.subscribe("application_layer");

If you subscribe to multiple channels and need to handle messages differently depending on their channel, you will need to check the channel argument inside the message event handler.
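One simple way to structure that check (a hypothetical helper, not part of node_redis) is a handler map keyed by channel name:

```javascript
// Hypothetical channel router: looks up a handler by channel name and
// ignores messages on channels it does not know about.
function createChannelRouter(handlers) {
  return function (channel, message) {
    var handler = handlers[channel];
    if (handler) {
      handler(message);
    }
  };
}

// Wire it up as the "message" event handler, e.g.:
// redisClient.on("message", createChannelRouter({
//   application_layer: function (msg) { /* handle application messages */ },
//   data_layer: function (msg) { /* handle data messages */ }
// }));
```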


Using Redis as a message broker can help remove tight coupling in systems by removing direct communications between components. Using node.js enables a client to pass a message directly to a web socket server and then distribute it via Redis to any number of subscribed middleware components, allowing scalable systems to be created.

The next step is to look into receiving messages from Redis in my .NET middleware components so this will be the focus of my next blog.

In the meantime – happy coding!

Selecting Technologies For Kraken Office


My last, long, post was concerned with the overall architecture of the Kraken Office project. I am now at a point where I am settled enough about the initial architecture of a duplex, message based web application that I can start selecting technologies for the project.

Most of my career has involved developing software for the Microsoft platform (ASP, VB6, VBScript, ASP.NET, C#, SQL, Silverlight etc.) so I am naturally going to approach the problem with .NET as the technology for the majority of the middleware. That is not to say that I won't consider alternative technologies, but this is my starting point.

Web Application Technologies

The client facing user interface will be a web application. This is to maximise the audience for Kraken Office.

Having recent experience with Silverlight and the MVVM methodology, this would be a logical choice. I am becoming increasingly concerned, however, about Silverlight's use as an internet-facing technology due to its reliance on a plug-in. The fact that there is no plug-in support for most mobile devices is especially disconcerting. Another issue is Microsoft's lack of vocal support for the technology, which is currently worrying the entire Microsoft development community (see Microsoft developers horrified).

I want Kraken Office to appeal to as large an audience as possible and I don’t think this goal would be advanced by setting system pre-requisites or by excluding the mobile market.

The obvious alternative for creating a web facing interface with rich LOB features is HTML 5. While this is a relatively new technology and not yet universally supported across all browsers, the development community has generally welcomed it and take up has been good. Microsoft appear to have invested heavily in its usage in Windows 8, although the full details are yet to emerge.

I have therefore decided upon HTML 5 as the main web technology for the application, as it currently appears to be the only viable alternative to Silverlight (Flash also appears to be going the way of Silverlight) for Kraken Office, which I am planning to develop into a fairly complex LOB application. In many ways this is a shame as the toolset for Silverlight and MVVM is impressive, easy to use and easy to debug. Moving back to HTML with Javascript, albeit with comprehensive Javascript libraries these days, does seem a bit daunting.

It is important to remember, however, that the end product of the development process is a useful product with the biggest audience possible. If HTML 5 is the best way of achieving this goal then I feel this is the approach to take.

While HTML 5 may be the technology I also need to decide on a development environment. For me this is a pretty easy decision – I will use the Microsoft MVC 3 platform for the website framework and will work within Visual Studio, with which I have a lot of experience.

I’m going to need to get more familiar with the available Javascript libraries, the starting point being the ubiquitous JQuery. It has a small footprint and a large plug-in community.

The last main issue is communication with the application server. I have used duplex communication via Duplex WCF services, but these utilise a long-poll mechanism rather than a true push mechanism. Web sockets are the new HTML 5 technology for duplex communication and offer true duplex communication between server and client. Web sockets currently have limited browser support but are being implemented in all major new browser releases. For further information refer to this interesting comparison of the pros and cons of long polling vs web sockets.

Unfortunately web sockets are not yet supported in Microsoft’s WCF platform, although support is on the way at some stage (WCF WebSocket preview).

In view of the lack of WCF support my first thought was to write a web socket server in C#. I did some initial prototyping and developed a working web sockets server but found the solution verbose and complex. I also have reservations about network based coding in general as I have always found it one of the most difficult types of component to get stable and reliable. In addition to this I want to re-use as many existing frameworks and toolkits as possible, especially when it comes to project infrastructure, so that I can concentrate on developing business functionality.

I came across a new project called XSockets which has only just been released at the time of writing and which looks like it could offer WebSockets hosted in .NET. It was a brand new release, however, with very little documentation so has been discounted for now.

The other framework that caught my eye was the node.js project. After reading this great node.js websocket introduction I was immediately intrigued. Node.js is a javascript based framework for creating scalable, distributable network servers. Although it does not directly support web sockets, a plugin, WebSocket-Node, is available which appears to render the creation of web socket servers a relatively trivial task. Allied with excellent scalability and a dedicated community I believe node.js is an ideal framework for hosting my web socket servers and will relieve me of the task of creating my own server application.

Client-side web sockets will be handled using Javascript. JQuery does not currently support easy scripting of web sockets but there is a JQuery plugin, jquery-websocket, which is designed specifically for client-side interaction with web socket servers.

In conclusion the following sums up my initial web platform technology choices:

  1. HTML 5 developed in Visual Studio with MVC 3
  2. Javascript scripting with JQuery
  3. Node.js hosted web socket server using WebSocket-Node plugin
  4. Client-side javascript web socket interaction using the jquery-websocket JQuery extension
Middleware / Data Storage Technologies

My initial system designs for Kraken Office, which can be found at Designing A Scalable, Resilient System, assumed that I would be creating a .NET based web sockets server. With my decision to implement the web socket server with node.js, this will now not be the case. What is clear, however, is that the node.js server will be the bridge between the client web application and the middleware. In order to ensure that the system remains scalable any communication between node.js and the middleware must use a message broker of some description.

While investigating message brokers, Enterprise Service Buses (ESBs) and in-memory data stores I came across a range of options. ESBs seem too heavy-weight for the system in mind, although at a later date such power may be required. Currently, however, I don’t want to spend all of my precious time resources on trying to configure a system that is not adding value to the project.

One of the most recommended, and fast, in-memory data caches is Redis. A Windows port of the project is available here. In addition to being immensely fast, it provides support for publisher / subscriber queues. Redis is open source and would enable me to kill two birds with one stone; to provide both in-memory data storage and message broker requirements in one system. This would minimise the amount of new technology I need to learn and also the amount of software installation and configuration.

Redis clients exist for both node.js (Redback) and C# (ServiceStack).

Using Redis and node.js will necessitate a change to the architecture of the system. The diagram below documents v2 of the architecture.

The system now uses a scalable node.js layer as the initial application layer. This layer will manage web socket instances for the client web applications. The node.js layer will relay messages to the rest of the system via the Redis messaging layer. A .NET application layer will receive messages from the node.js layer and will decode and act upon the messages. If a message requires data interaction then the application layer will again submit this to the Redis messaging layer, in a different message channel. The data layer (another .NET based layer) will subscribe to data channel messages in Redis and will action any relevant messages. Data will be returned from the data layer via the Redis messaging layer back to the application layer and then onto the node.js layer.

This should prove to be a scalable and resilient system.

In conclusion the following sums up my initial middleware technology choices:

  1. .NET middleware layers (application and data) written in Visual Studio using C#
  2. Messaging provided by Redis and accessed by node.js and .NET clients using Redback and ServiceStack respectively
  3. Fast in-memory data caching using Redis
Data Persistence

While data cached in Redis can be persisted to disk using a variety of configurable persistence strategies, I do not consider it suitable for a permanent data storage solution for two main reasons:

  1. Persisting data on a regular basis to disk, which would be necessary for data integrity, will degrade the performance of Redis
  2. The Kraken Office system will need to archive certain types of data but retain the ability to retrieve the data on demand, for which Redis would be unsuitable

As a result I am going to stick with my decision to persist data to a RDBMS for permanent storage, in addition to retaining data in Redis for real-time application usage. As I am most familiar with Microsoft SQL Server I will stick with this as an initial database, although I will investigate MySQL in the near future due to its Open Source status.

Designing A Scalable, Resilient System

At the outset of the Kraken project I believe it is fundamental to have a good understanding of the required architecture even if the exact business requirements are still forming in my mind.

A minimum requirement of the system, regardless of the specific requirements, will be a duplex messaging environment that allows a web application to talk to a service layer via a messaging protocol and then receive pushed messages back from the service layer. The service layer will need to be scalable to meet demand and must be able to persist data in some format. Hardware failures should not lead to service outages, and data retrieval and persistence must be fast and scalable so that data bottlenecks do not occur.

My main aims of a system architecture can be simply stated as SCALABILITY, AVAILABILITY / RESILIENCE and PERFORMANCE.


Scalability

I need to be confident that the hardware supporting Kraken Office can be scaled to cope with increased demand for the service. Scalability in this sense relates to horizontal scalability, i.e. the addition of extra hardware units to add capacity to an existing system without needing changes to underlying code.

In my experience scalability is best achieved by creating stateless, disconnected software components that talk to each other via a message broker or enterprise service bus (ESB). This removes the requirement for systems to have direct communication with each other. It is this direct communication between system components which often limits the ability to simply add more hardware as required. Message based systems (either simple brokers or ESBs) take care of communication delivery and enable additional publishers and subscribers to be quickly and easily added to a system. This means that multiple servers running the same software can easily be configured and injected into a system to provide the ability to cater for more demand.
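The publish / subscribe shape described above can be sketched in a few lines (a toy in-memory broker purely for illustration – a real system would use Redis or an ESB). The key point is that publishers and subscribers only know the broker and a channel name, never each other:

```javascript
// Toy in-memory message broker illustrating broadcast pub/sub decoupling.
// Adding another subscriber requires no change to any publisher.
function createBroker() {
  var channels = {};
  return {
    subscribe: function (channel, handler) {
      (channels[channel] = channels[channel] || []).push(handler);
    },
    publish: function (channel, message) {
      (channels[channel] || []).forEach(function (handler) {
        handler(message);
      });
    }
  };
}
```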

An example of an Enterprise Service Bus

Enterprise service buses are higher-level entities than pure message brokers and generally offer features such as routing, security, reporting, queue interfaces and data transformations. ESBs are commonly used in service-oriented (SOA) systems. Message brokers are lower level and enable raw messages to be broadcast from publishers to subscribers.

I have yet to make a decision on the exact messaging implementation to be used by Kraken Office (this will be covered in a later post) but I will be using some messaging implementation to disconnect the different components of the system. The chosen communication system will need to be scalable across multiple nodes to provide support for flexible levels of demand and therefore also increase resilience.

In effect, the ability to easily drop in more nodes to an existing system with simple configuration is the key to scalability. If designed correctly a system under strain can be improved by simply installing the necessary software components onto a new hardware server which is then added to the system in the correct location via configuration.

Availability / Resilience

Distributed, disconnected systems also offer the benefit of resilience as a by-product of scalability. The ability to run multiple nodes hosting the same services means that the failure of one node will not result in a total loss of service. If a system is designed to be scalable then a service layer will generally consist of two or more hardware servers. If one of these fails then the ability of the system to cope with excessive demand may be reduced, but the system will not automatically fail and lead to a total loss of service. If enough servers are used to ensure an adequate service level from a particular software component even under the heaviest of usage, then the failure of one server should not lead to an obvious degradation in system performance.

The goals of scalability ultimately lead to the removal of any single point of failure in a system. This ensures that all elements of a system can be scaled across multiple nodes, thus removing the possibility of one server failure bringing a system to its knees.

High availability is becoming increasingly important in software systems as more business processes and elements of our personal lives rely on computer systems. As a result, users no longer tolerate downtime, and availability and resilience should be key aims in any system architecture.

As long as I successfully design Kraken Office to be scalable with no single points of failure in the system I believe availability will follow closely behind.

Performance Bottlenecks

We all want lightning-fast websites these days. Users have more and more choice when selecting which web applications to use and are unwilling to accept any waiting time when using websites. Website performance is therefore a very important consideration and I am going to design Kraken Office for optimal performance from the outset.

In my experience performance bottlenecks generally occur in the following main areas:

  1. Large, inefficient web pages
  2. Excessive server communication with no client-side caching
  3. Long running, inefficient server actions
  4. Excessive data-store access
  5. Inappropriate use of relational databases
  6. Slow, inefficient database access code (scripts, stored procedures etc.)

All of the above are obvious candidates for performance issues and I’m sure pretty much all developers will have experienced these at some stage during their careers.

Some of these problems are associated with inefficient coding practices and over-use of large resources (images, videos etc.) in web applications. Modern broadband speeds are solving, or indeed masking, some of these file size download issues but size optimization should always be an important consideration when developing web projects.

Excessive server communication is an issue I have encountered numerous times, especially with LOB (line of business) applications which often manage and display large sets of data. Client side caching can be invaluable when focusing on these issues.

These issues relate more to implementation than design and therefore I will not consider them here. When I come to implement elements of the Kraken Office system these issues will be discussed in detail.

Designing For Performance

Designing for performance should address the issues raised above. This is a separate issue from implementing for performance.

I believe that systems designed for performance should consider the following:

  1. Minimise the number of calls between client and server – generally this is achieved by the use of client-side caching technologies.
  2. Minimise the server processes and interaction between server components, especially where these are synchronous communications.
  3. Favour asynchronous communications where possible – this stops components and user interface elements from becoming unresponsive while waiting for a process to complete.
  4. Utilise in memory data stores (NoSQL concepts) on the server side for real time data access rather than relational databases. NoSQL systems generally scale horizontally much better than RDBMS systems and as they generally run in memory have faster access times.
  5. Relational database interaction should use asynchronous operations based on message queues to persist and request data. If possible RDBMS should not be relied upon for real time usage.

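Points 4 and 5 above can be sketched together as a write-behind cache: reads are served from memory while writes are queued for asynchronous persistence (an illustrative toy standing in for Redis plus a message queue, not a production design):

```javascript
// Toy write-behind store: reads hit the in-memory cache, writes are
// queued so the (slow) persistent store can be updated asynchronously.
function createWriteBehindStore(persist) {
  var cache = {};
  var queue = [];
  return {
    get: function (key) { return cache[key]; },       // real-time read path
    set: function (key, value) {
      cache[key] = value;                              // immediate in-memory update
      queue.push({ key: key, value: value });          // persistence deferred
    },
    flush: function () {                               // would run on a background worker
      queue.forEach(function (op) { persist(op.key, op.value); });
      queue = [];
    }
  };
}
```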
Obviously many of the points above are open to debate and I do not prescribe an approach that fits all but merely what has worked for me when designing previous projects.

Basic Overall Design

The diagram below gives an overall picture of how I intend to architect the system. This should be considered as only one take on an asynchronous, duplex communication based web application with offline data storage. I imagine significant elements of the system will change and grow as the project progresses but this is a starting point from which I will begin.

The design of the system enables horizontal scalability of all elements of the system, given that distributable ESB and data caching systems are used.

The following points describe the salient elements of the system:

  1. A persisted duplex communication socket will exist between the web applications running in client web browsers and one of the application servers in the application layer. If one of the application servers fails then any clients connected to it will lose their connection. A simple refresh of the browser would reconnect the user to a different application server via the load balancer.
  2. As the application layer sits behind a load balancer it can be easily scaled by simply adding new hardware with the correct components installed behind the load balancer.
  3. The application layer does not talk directly to any other part of the system, but all communication is asynchronous via an enterprise service bus or message broker. The messaging system then talks to any relevant systems and relays messages back to the application layer. The ESB or message broker will need to be distributable across multiple nodes to ensure scalability and resilience as described above.
  4. The data layer is scalable as it has no direct communication with the application layer and a correctly configured server can simply be injected into the data layer, picking up messages from the ESB or message broker.
  5. The data layer has direct communication with the cached data store as this will present one interface to the outside world, even though the system should be distributed across multiple nodes in implementation. The data layer will have the responsibility of checking whether data is available in the cache, loading it from the persistent RDBMS if not, and ensuring that updates are correctly persisted in both the cached data layer and the RDBMS. Reads from the RDBMS will be synchronous but all writes will be made offline via the ESB or message broker.
  6. The RDBMS will be scalable vertically and to some extent horizontally, but the system will not rely on it for real-time data availability if the data has been pre-loaded into the cache. Exactly how data is to be cached is yet to be decided. The options include a full data cache on system start, data caching on demand or an intelligent pre-caching of commonly used data.

With the exception of synchronous data reads from the RDBMS and data cache the application is strictly asynchronous. Even the UI will not update until a data update message is received from the application layer. A typical process flow would be:

  1. A user makes a change to data.
  2. An asynchronous message is sent to the application layer  informing the system of the update.
  3. A message is sent to the ESB or message broker that data has been updated.
  4. A node within the data layer picks up the message and makes the relevant changes to the data cache, sending a message to the ESB or message broker informing it that persistent data needs to be updated.
  5. A message is sent from the data layer to the ESB informing other systems that a data update has been made.
  6. All servers within the application layer pick up the message from the data layer and distribute it to the relevant web applications via their duplex connection.
  7. The user interface on the web applications is updated on receipt of the data update message.

This methodology enables truly asynchronous operations and also enables data changes to be broadcast to a range of web applications in one go, keeping data synchronised across multiple users.
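The seven steps above can be simulated end-to-end in a few lines (a toy walk-through with all layers in one process and an in-memory stand-in for the broker, purely for illustration; the channel names and message text are invented):

```javascript
// Toy end-to-end walk-through of the update flow: application layer ->
// broker -> data layer -> broker -> broadcast back to connected clients.
function simulateDataUpdate() {
  var log = [];
  var subscribers = {};
  function subscribe(channel, handler) {
    (subscribers[channel] = subscribers[channel] || []).push(handler);
  }
  function publish(channel, message) {
    (subscribers[channel] || []).forEach(function (h) { h(message); });
  }

  // data layer: applies the change, then announces the update
  subscribe('data_channel', function (msg) {
    log.push('data layer persisted: ' + msg);
    publish('update_channel', msg);
  });

  // application layer servers: relay updates to their web clients
  subscribe('update_channel', function (msg) {
    log.push('client UI refreshed with: ' + msg);
  });

  // a user edits data; the application layer publishes the change
  publish('data_channel', 'order 42 updated');
  return log;
}
```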


At this stage of the process I think I have the basics of an architecture to enable me to begin further design.

The next round of design will focus on the available technologies which can be used to develop such a system, and a decision on which of them to employ.

Goodbye till then!