NATS Messaging - Part 7

Yesterday we did a quick intro to JetStream. Before we jump in and write some code, we have to talk a bit about how to configure it via its API and how it relates to core NATS.

NATS Streaming Server, while built using the NATS broker for communication, is in fact a different protocol altogether. Its relation to NATS is more like that of HTTP to TCP: it uses NATS as its transport, but its protocol is entirely custom and uses Protobuf messages. This design presented several challenges to users - authentication and authorization in particular were quite difficult to integrate with NATS.

NATS 2.0 brought a significant rework of Authentication and Authorization in NATS, and integrating that new world with NATS Streaming Server would have been too disruptive. Further, NATS 2.0 is multi-tenant, which NATS Streaming Server could not be without a massive rework.

So JetStream was started to be a much more natural fit in the NATS ecosystem; indeed, as you saw yesterday, the log shipper Producer did not need a single line of code change to gain persistence via JetStream. Additionally, it is a comfortable fit in the multi-tenant world of NATS 2.0. All the communication uses the plain NATS protocol, with some JSON payloads in its management API.
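
To give a sense of what that means in practice, here is a minimal sketch of talking to the management API from Go. The stream name ORDERS and the $JS.API.STREAM.INFO subject are assumptions based on recent JetStream builds; the point is that it is just an ordinary NATS request with a JSON reply:

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/nats-io/nats.go"
)

func main() {
    nc, err := nats.Connect("localhost:4222")
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Close()

    // the management API is plain request/reply over NATS with JSON bodies;
    // the subject below is an assumption based on recent JetStream builds
    resp, err := nc.Request("$JS.API.STREAM.INFO.ORDERS", nil, 2*time.Second)
    if err != nil {
        log.Fatal(err)
    }

    // raw JSON describing the stream configuration and state
    fmt.Println(string(resp.Data))
}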

[Read More]

NATS Messaging - Part 6

Last week we built a tool that supported shipping logs over NATS and consuming them on a central host.

We were able to build a horizontally scalable tool that can consume logs from thousands of hosts. What we could not solve was doing so reliably: if we stopped the Consumer, we would drop messages.

I mentioned that NATS does not have a persistence store today and that one, called JetStream, was in the works. Today I'll start a few posts looking into JetStream and show how we can leverage it to solve our problems.

[Read More]

NATS Messaging - Part 5

Yesterday we wrote our first end-to-end file tail tool and a consumer for it. It was fatally flawed, though, in that we could not scale it horizontally and had to run one consumer per log file. This design won't work for long.

The difficulty lies in the ordering of messages. We saw that NATS supports creating consumer groups and that it load-shares message handling randomly between available workers. The problem is that with more than one worker there are no longer any ordering guarantees, as many workers handle messages concurrently, each processing them at a different rate. We'd end up with one file having many writers, and soon it would be shuffled.

We're fortunate in that the ordering requirements don't apply to all of the messages: we only really need to order messages from a single host. Host #1 messages can be mixed with host #2 messages, as long as all of the host #1 messages are in the right order. This behaviour is a big plus and lets us solve the problem. We can have 100 workers receiving logs, as long as 1 worker always handles logs from one Producer.
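
One way to picture that constraint is to partition by host: hash the hostname into one of a fixed number of subjects so that a given host's logs always land on the same subject, and therefore on the same worker. The sketch below just illustrates the idea; the subject layout and partition count are assumptions, not necessarily what we'll settle on:

package main

import (
    "fmt"
    "hash/fnv"
)

const partitions = 10

// subjectFor deterministically maps a hostname to one of a fixed number of
// partitioned subjects, so one worker per partition sees all of a host's logs
func subjectFor(host string) string {
    h := fnv.New32a()
    h.Write([]byte(host))

    return fmt.Sprintf("logs.%d", h.Sum32()%partitions)
}

func main() {
    fmt.Println(subjectFor("web1.example.net")) // always the same subject for this host
    fmt.Println(subjectFor("web2.example.net")) // possibly a different partition
}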

Read on for the solution!

[Read More]

NATS Messaging - Part 4

Previously we used the nats utility to explore the various messaging patterns in NATS. Today we'll write a bit of code, and in the following few posts we'll expand this code to show how to scale it up and make it resilient.

We'll write a tool that tails a log file and publishes it over NATS to a receiver that writes it to a file. The intent is that several nodes run this log file Publisher and a central node consumes the data, saving it to a file per Publisher with log rotation on the central node.

I should be clear that there are already solutions to this problem; I am not saying you should solve it by writing your own. It's a good learning experience, though, because it's quite challenging to do right in a reliable and scalable manner.

  • Producers can be all over the world in many locations
  • It's a challenging problem to scale, as you do not always control the rate of production and have an inherent choke point at the central receiver
  • You do not want to lose any logs, so we probably need persistence
  • Ordering matters; there's no point in getting your logs in random order
  • Scaling consumers horizontally while maintaining ordering guarantees is difficult, and additionally you need to control who writes to the log files
  • Running a distributed network with all the firewall implications in enterprises is very hard

So we'll see if we can build a log forwarder and receiver that meet these criteria; in the process, we'll explore the previous sections in depth.

We'll use Go for this, but the NATS ecosystem supports almost 40 languages today, so you're spoiled for choice.
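
To give a flavour of what's coming, a minimal sketch of the publishing side could look like the following. Reading from stdin rather than properly tailing a file, and the subject name logs.web1, are simplifying assumptions for illustration:

package main

import (
    "bufio"
    "log"
    "os"

    "github.com/nats-io/nats.go"
)

func main() {
    nc, err := nats.Connect("localhost:4222")
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Close()

    // publish one NATS message per log line read from stdin
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        if err := nc.Publish("logs.web1", scanner.Bytes()); err != nil {
            log.Fatal(err)
        }
    }

    // make sure buffered messages are sent before exiting
    nc.Flush()
}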

[Read More]

NATS Messaging - Part 3

In our previous posts, we did a thorough introduction to messaging patterns and why you might want to use them. Today, let's get our hands dirty by setting up a NATS Server and using it to demonstrate these patterns.

Setting up

Grab one of the zip archives from the NATS Server Releases page; inside you'll find nats-server, which is easy to run:

$ ./nats-server -D
[26002] 2020/03/17 15:30:25.104115 [INF] Starting nats-server version 2.2.0-beta
[26002] 2020/03/17 15:30:25.104238 [DBG] Go build version go1.13.6
[26002] 2020/03/17 15:30:25.104247 [INF] Git commit [not set]
[26002] 2020/03/17 15:30:25.104631 [INF] Listening for client connections on 0.0.0.0:4222
[26002] 2020/03/17 15:30:25.104644 [INF] Server id is NA3J5WPQW4ELF6AJZW5G74KAFFUPQWRM5HMQJ5TBGRBH5RWL6ED4WAEL
[26002] 2020/03/17 15:30:25.104651 [INF] Server is ready
[26002] 2020/03/17 15:30:25.104671 [DBG] Get non local IPs for "0.0.0.0"
[26002] 2020/03/17 15:30:25.105421 [DBG]   ip=192.168.88.41

The -D flag tells it to log verbosely; at this point you'll see that you have port 4222 open on 0.0.0.0.

Also grab a copy of the nats CLI, found on the JetStream Releases page, and place it in your path; you can then quickly test the connection:

$ nats rtt
nats://localhost:4222:

   nats://127.0.0.1:4222: 251.289µs
       nats://[::1]:4222: 415.944µs

The above shows all you need to have a NATS Server running in development, and production use is not much more complex, to be honest. Once it's running, this can happily serve over 50,000 Choria nodes depending on your hardware (a $40 Linode would do).

[Read More]

NATS Messaging - Part 2

Yesterday, in our previous post, we did a light comparison between HTTP based and Middleware based architectures for Microservices. Today we'll start focusing on Middleware based architectures and go into some detail about the whys and the patterns available to developers.

Why Middleware Based

The Middleware is a form of transport: rather than thinking in addresses, you think in named channels, and the endpoints within a channel decide how they consume the messages - 1:1, 1:n or randomly-selected 1:n.
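
As a rough illustration of those consumption modes, the Go sketch below subscribes to an assumed subject called orders.new both as a plain subscriber, which sees every message, and as a member of a queue group, where each message goes to one randomly selected member:

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/nats-io/nats.go"
)

func main() {
    nc, err := nats.Connect("localhost:4222")
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Close()

    // a plain subscriber: every subscriber like this receives every message (1:n)
    nc.Subscribe("orders.new", func(m *nats.Msg) {
        fmt.Printf("broadcast: %s\n", string(m.Data))
    })

    // a queue group subscriber: each message is handled by only one
    // randomly selected member of the "workers" group
    nc.QueueSubscribe("orders.new", "workers", func(m *nats.Msg) {
        fmt.Printf("worker: %s\n", string(m.Data))
    })

    nc.Publish("orders.new", []byte("hello"))
    nc.Flush()

    // crude wait so the asynchronous handlers get a chance to run in this demo
    time.Sleep(100 * time.Millisecond)
}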

The goal is to promote an architecture that is scalable, easy to manage and easy to operate. These goals are achieved by:

  • Promoting application design that breaks complex applications into simple single-function building blocks that are easy to develop, test and scale
  • Application building blocks are not tightly coupled and can scale independently of other building blocks
  • The middleware layer implementation is transparent to the application – network topologies, routing, ACLs etc. can change without application code change
  • The brokers provide a lot of the patterns you need for scaling – load balancing, queuing, persistence, eventual consistency, failover, etc.
  • Mature brokers are designed to be scalable and highly available – very complex problems that you do not want to attempt to solve on your own
  • Fewer moving parts, less coupled to infrastructure layers and scalable across multiple clusters

There are many other reasons, but for me, these are the big-ticket items – especially the 2nd one.

[Read More]

NATS Messaging - Part 1

Back in 2011, I wrote a series of posts on Common Messaging Patterns Using Stomp; these posts were very popular, so I figured it's time for a bit of a refresh focusing on NATS - the Middleware Choria uses as its messaging transport.

Today there are two prevailing architectures in Microservices based infrastructure - HTTP based and Middleware based Microservices. I'll do a quick overview of the two patterns, highlighting some of the pros and cons here; first we look at the more familiar HTTP based approach and then move to Middleware based. In follow-up posts, we'll explore the Middleware communication patterns in detail and show some code.

Note, though, that the intent here is not to say one is better than the other or to convince you to pick a particular style; I am also not exhaustively comparing the systems - that would be impossible to do well.

Today the prevailing architecture of choice is HTTP based, and it's demonstrably a very good and very scalable choice. I want to focus on using Middleware to achieve similar outcomes, on what other problems it can solve and how - the aim is to share information and not to start a product/architecture comparison debate.

Series Index

[Read More]

Rego policies for Choria Server

Open Policy Agent is a CNCF incubating project that allows you to define Policy as code. It's widely used in projects like Istio, Kubernetes and more.

It allows you to express authorization policies - like our Action Policy - in a much more flexible way.

Building on the work that was done for aaasvc, I’ve added a rego engine to the choria server, which will allow us to do most of what actionpolicy allows, as well as:

  • Assertions based on the arguments sent to actions
  • Assertions based on other request fields like TTL and Collective
  • Assertions based on whether the server is set to provisioning mode or not

Read below the fold for our initial foray into OPA policies and what might come next.

[Read More]

Choria Configuration

Choria configuration, despite our efforts with the Puppet module and the like, is still very challenging. Just knowing what settings are available has been a problem.

We tried to hide much of the complexity behind Puppet models, but for people who don't conform to the norm it's been a challenge.

I eventually want to move to a new configuration format - perhaps HCL? - but this is a massive undertaking both for me and for users. For now, we've made some effort to give insight into all the known configuration settings on the CLI and in our documentation.

First, we'll publish a generated configuration reference in CONFIGURATION.md - for now it's in the Git repository; we'll move it to the doc site eventually.

As of the upcoming version of Choria Server you’ll be able to query the CLI for any setting using regular expressions. The list will show descriptions, data types, validation rules, default values, deprecation hints and URLs to additional information.

choria tool config

And get a list:

$ choria tool config puppet -l
plugin.choria.puppetca_host
plugin.choria.puppetca_port
plugin.choria.puppetdb_host
plugin.choria.puppetdb_port
plugin.choria.puppetserver_host
plugin.choria.puppetserver_port

These references are extracted from the Go code - something I never imagined was possible - read on for details on how that is done.

[Read More]

Upcoming Server Provisioning Changes

I've previously blogged about a niche system that enables Mass Provisioning of Choria Servers; it is quite scary and tends to be specific to large customers, so it's off by default and requires customer specific builds to enable.

I've had some time with this system and it's proven very reliable and flexible. I've had many hundreds of thousands of nodes under management by this provisioner; it can kick tens of thousands of nodes into provisioning state and will happily do thousands a minute.

The concept is sound, and the next obvious step is to make it available to FOSS users in our FOSS builds. Getting there is a long road, one that I think will take us toward Kubernetes deployed Choria Broker, CA and Choria Provisioners, and eventually a Puppet free deployment scenario.

It's a long road, and step one is looking at how we can safely enable provisioning in the open source builds. Read on for the details of what was enabled in Choria Server 0.13.0, released on the 12th of December.

[Read More]