NATS Messaging - Part 9

Yesterday we made some quite minor changes to our app and got it to use JetStream on both the Producer and Consumer side. These changes solved several problems for us, like being able to restart Consumers without losing any messages.

The last remaining issue was around handling messages that fail to be delivered. Imagine the case where the disk on the Consumer is full; wouldn’t it be great if we could somehow communicate our inability to handle messages to the network and have it retry later?

That’s the role of Acknowledgements, and JetStream supports several modes. Today we’ll look at those.
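
The post goes into the detail, but as a rough sketch of where we’re heading - using today’s nats.go JetStream API rather than necessarily what this series used at the time, with an illustrative logs.> subject and assuming a Stream already covers it - explicit acknowledgements look something like this:

package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Manual acknowledgements: we decide per message whether it was handled.
	// This assumes a Stream exists that captures subjects under logs.>.
	_, err = js.Subscribe("logs.>", func(m *nats.Msg) {
		if err := handle(m.Data); err != nil {
			m.Nak() // negative acknowledgement: ask the server to redeliver later
			return
		}
		m.Ack() // done with this message
	}, nats.ManualAck())
	if err != nil {
		log.Fatal(err)
	}

	select {}
}

// handle stands in for writing the line to disk; a full disk would surface here.
func handle(data []byte) error {
	log.Printf("received: %s", data)
	return nil
}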

[Read More]

NATS Messaging - Part 8

In our previous post, we dived a bit into the JetStream API and how to interact with it; many people would not need to know all of this to get going. The CLI or Terraform management approaches would be perfectly fine, and today we’ll use the CLI rather than the API.

In this post, we’re back on our codebase, and we’ll see how we might need to change the tools to support JetStream well. To be honest, I could have made some better decisions early on about the shipper design, but that gives us more opportunity to see how some apps might need to adapt.

[Read More]

NATS Messaging - Part 7

Yesterday we did a quick intro to JetStream; before we jump in and write some code, we have to talk a bit about how to configure it via its API and how it relates to core NATS.

NATS Streaming Server, while built using the NATS broker for communication, is, in fact, a different protocol altogether. Its relation to NATS is more like that of HTTP to TCP: it uses NATS as its transport, but its protocol is entirely custom and uses Protobuf messages. This design presented several challenges to users - authentication and authorization, specifically, were quite challenging to integrate with NATS.

NATS 2.0 brought a significant rework of Authentication and Authorization in NATS, and integrating the new world with NATS Streaming Server would have been too disruptive. Further, NATS 2.0 is Multi-Tenant, which NATS Streaming Server couldn’t be without a massive rework.

So JetStream was started to be a much more natural fit in the NATS ecosystem; indeed, as you saw yesterday, the log shipper Producer did not need a single line of code change to gain persistence via JetStream. Additionally, it is a comfortable fit in the Multi-Tenant land of NATS 2.0. All the communication uses the plain NATS protocol, with some JSON payloads in its management API.

[Read More]

NATS Messaging - Part 6

Last week we built a tool that supported shipping logs over NATS and consuming them on a central host.

We were able to build a horizontally scalable tool that can consume logs from thousands of hosts. What we could not solve was doing so reliably: if we stopped the Consumer, we would drop messages.

I mentioned that NATS does not have a persistence store today, and that one called JetStream is in the works. Today I’ll start a few posts looking into JetStream and show how we can leverage it to solve our problems.

[Read More]

NATS Messaging - Part 5

Yesterday we wrote our first end-to-end file tail tool and a consumer for it. It was fatally flawed, though, in that we could not scale it horizontally and had to run one consumer per log file. This design won’t work for long.

The difficulty lies in the ordering of messages. We saw that NATS supports creating consumer groups and that it load-shares message handling randomly between the available workers. The problem is that if we have more than one worker, there are no longer any ordering guarantees, as many workers are concurrently handling messages, each processing them at different rates. We’d end up with one file having many writers, and soon it would be shuffled.

We’re fortunate in that ordering isn’t required across all of the messages; we only really need to order the messages from a single host. Host #1 messages can be mixed with host #2 messages, as long as all of the host #1 messages are in the right order. This behaviour is a big plus and lets us solve the problem: we can have 100 workers receiving logs, as long as one worker always handles the logs from one Producer.
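
To make that concrete, here’s a minimal sketch of the queue group behaviour described above, using the Go client with purely illustrative subject and group names (it is not the solution the post arrives at): every worker joins the same group, the server picks one member at random per message, and that randomness is exactly where global ordering is lost.

package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Every worker running this code joins the "workers" queue group;
	// NATS delivers each message on logs.> to one randomly chosen member.
	_, err = nc.QueueSubscribe("logs.>", "workers", func(m *nats.Msg) {
		// The subject (e.g. logs.host1) still tells us which Producer sent the line.
		log.Printf("%s: %s", m.Subject, m.Data)
	})
	if err != nil {
		log.Fatal(err)
	}

	select {}
}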

Read on for the solution!

[Read More]

NATS Messaging - Part 4

Previously we used the nats utility to explore the various patterns of messaging in NATS; today we’ll write a bit of code, and in the following few posts we’ll expand this code to show how to scale it up and make it resilient.

We’ll write a tool that tails a log file and publishes it over NATS to a receiver that writes it to a file. The intent is that several nodes use this log file Publisher while a central node consumes the data and saves it to a file per Publisher, with log rotation on the central node.

I should be clear that there are already solutions to this problem; I am not saying you should solve it by writing your own. It’s a good learning experience, though, because it’s quite challenging to do right in a reliable and scalable manner.

  • Producers can be all over the world in many locations
  • It’s a challenging problem to scale, as you do not always control the rate of production and there is an inherent choke point at the central receiver
  • You do not want to lose any logs, so we probably need persistence
  • Ordering matters; there’s no point in getting your logs in random order
  • Scaling consumers horizontally while keeping ordering guarantees is difficult; additionally, you need to control who writes to the log files
  • Running a distributed network with all the firewall implications in enterprises is very hard

So we’ll see if we can build a log forwarder and receiver that meet these criteria; in the process, we’ll explore the previous sections in depth.

We’ll use Go for this, but the NATS ecosystem supports almost 40 languages today, so you’re spoiled for choice.
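
As a taste of the Publisher side, here’s a stripped-down sketch only - it reads stdin instead of tailing a file, and the per-host subject is just illustrative, not the final design from the post:

package main

import (
	"bufio"
	"log"
	"os"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	host, err := os.Hostname()
	if err != nil {
		log.Fatal(err)
	}

	// Read lines from stdin (standing in for tailing a log file) and publish
	// each one on a subject named after this host.
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		if err := nc.Publish("logs."+host, []byte(scanner.Text())); err != nil {
			log.Fatal(err)
		}
	}

	// Ensure buffered messages reach the server before we exit.
	if err := nc.Flush(); err != nil {
		log.Fatal(err)
	}
}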

[Read More]

NATS Messaging - Part 3

In our previous posts, we did a thorough introduction to messaging patterns and why you might want to use them; today, let’s get our hands dirty by setting up a NATS Server and using it to demonstrate these patterns.

Setting up

Grab one of the zip archives from the NATS Server Releases page; inside you’ll find nats-server, and it’s easy to run:

$ ./nats-server -D
[26002] 2020/03/17 15:30:25.104115 [INF] Starting nats-server version 2.2.0-beta
[26002] 2020/03/17 15:30:25.104238 [DBG] Go build version go1.13.6
[26002] 2020/03/17 15:30:25.104247 [INF] Git commit [not set]
[26002] 2020/03/17 15:30:25.104631 [INF] Listening for client connections on 0.0.0.0:4222
[26002] 2020/03/17 15:30:25.104644 [INF] Server id is NA3J5WPQW4ELF6AJZW5G74KAFFUPQWRM5HMQJ5TBGRBH5RWL6ED4WAEL
[26002] 2020/03/17 15:30:25.104651 [INF] Server is ready
[26002] 2020/03/17 15:30:25.104671 [DBG] Get non local IPs for "0.0.0.0"
[26002] 2020/03/17 15:30:25.105421 [DBG]   ip=192.168.88.41

The -D flag tells it to log verbosely; at this point, you’ll see that you have port 4222 open on 0.0.0.0.

Also grab a copy of the nats CLI and place it in your path; it can be found on the JetStream Releases page. You can quickly test the connection:

$ nats rtt
nats://localhost:4222:

   nats://127.0.0.1:4222: 251.289µs
       nats://[::1]:4222: 415.944µs

The above shows all you need to have a NATS Server running in development, and production use is not much more complex, to be honest. Once it’s running, this can happily serve over 50,000 Choria nodes depending on your hardware (a $40 Linode would do).
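
If you’d rather verify the connection from code than from the CLI, a tiny Go program can do a similar round-trip check - this is not from the post, just a quick sketch using the nats.go client:

package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to the local server started above.
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Round-trip a PING/PONG to the server, much like nats rtt does.
	rtt, err := nc.RTT()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("round trip to %s: %v\n", nc.ConnectedUrl(), rtt)
}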

[Read More]

NATS Messaging - Part 2

Yesterday, in our previous post, we did a light comparison between HTTP and Middleware based architectures for Microservices. Today we’ll start focusing on Middleware based architectures and show some detail about the whys and the patterns available to developers.

Why Middleware Based

The Middleware is a form of Transport: rather than addresses, you think in named channels, and the endpoints within a channel decide how they consume the messages - 1:1, 1:n or randomly-selected 1:n.
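
As a rough illustration of those three consumption styles - this is just a sketch with the Go client and made-up subject names, the real code comes later in the series:

package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// 1:n - every plain subscriber on a channel gets its own copy of a message.
	nc.Subscribe("orders.new", func(m *nats.Msg) {
		log.Printf("audit copy: %s", m.Data)
	})

	// randomly-selected 1:n - members of the same queue group share the channel,
	// and each message goes to only one randomly chosen member.
	nc.QueueSubscribe("orders.new", "billing", func(m *nats.Msg) {
		log.Printf("billing worker: %s", m.Data)
	})

	// 1:1 - request/reply: a single responder answers a single requester.
	nc.Subscribe("orders.lookup", func(m *nats.Msg) {
		m.Respond([]byte("order 42: shipped"))
	})

	nc.Publish("orders.new", []byte("order 42"))

	resp, err := nc.Request("orders.lookup", []byte("42"), time.Second)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("lookup reply: %s", resp.Data)

	time.Sleep(100 * time.Millisecond) // give the async handlers a moment before exiting
}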

The goal is to promote an architecture that is scalable, easy to manage and easy to operate. These goals are achieved by:

  • Promoting application design that breaks complex applications into simple single-function building blocks that are easy to develop, test and scale
  • Application building blocks are not tightly coupled and can scale independently of other building blocks
  • The middleware layer implementation is transparent to the application – network topologies, routing, ACLs etc. can change without application code change
  • The brokers provide a lot of the patterns you need for scaling – load balancing, queuing, persistence, eventual consistency, failover, etc.
  • Mature brokers are designed to be scalable and highly available – very complex problems that you do not want to attempt to solve on your own
  • Fewer moving parts, less coupled to infrastructure layers and scalable across multiple clusters

There are many other reasons, but for me, these are the big-ticket items – especially the 2nd one.

[Read More]

NATS Messaging - Part 1

Back in 2011, I wrote a series of posts on Common Messaging Patterns Using Stomp; these posts were very popular, so I figured it’s time for a bit of a refresh focusing on NATS - the Middleware Choria uses as its messaging transport.

Today there are two prevailing architectures in Microservices based infrastructure - HTTP based and Middleware based Microservices. I’ll do a quick overview of the two patterns, highlighting some of the pros and cons here; first we look at the more familiar HTTP based approach and then move to Middleware based. In follow-up posts, we’ll explore the Middleware communication patterns in detail and show some code.

Note, though, that the intent here is not to say one is better than the other or to convince you to pick a particular style; I am also not exhaustively comparing the systems - that would be impossible to do well.

Today the prevailing architecture of choice is HTTP based, and it’s demonstrably a very good and very scalable choice. I want to focus on using Middleware to achieve similar outcomes, what other problems it can solve and how - the aim is to share information, not to start a product/architecture comparison debate.

Series Index

[Read More]