ActiveMQ Config Reference for MCollective Users
Apache ActiveMQ is the primary middleware we recommend with MCollective. It’s good software, but its XML config file is large and unwieldy, and you may need to edit many sections of it in a complex MCollective deployment. This reference guide attempts to describe every major ActiveMQ setting that matters to MCollective.
How to Use This Page
- This page doesn’t describe the complete format of the activemq.xml config file, and will sometimes use incomplete shorthand to describe elements of it.
- You should definitely refer to an example config file while reading, so you can see each element’s full syntax in context.
- You don’t need to read this entire page when setting up a new deployment. We recommend that you:
- Start with an example config file (see directly below).
- Make heavy use of the table of contents above.
- Skim the sections of this page you currently care about, and edit your config as needed.
- Refer back to this page later when you need to expand your broker infrastructure.
Example Config Files
We have several.
Note: Some config data needs to be set in both MCollective and ActiveMQ; your configuration of one will affect the other. In this page, we call out that information with headers labeled “Shared Configuration.”
This document is about the “new” MCollective/ActiveMQ interface, which means it requires the following:
- MCollective 2.0.0 or newer
- ActiveMQ 5.5.0 or newer
- Stomp gem 1.2.2 or newer
- The activemq connector plugin (included with MCollective 2.0.0 and newer)
How MCollective Uses ActiveMQ
MCollective connects to ActiveMQ over the Stomp protocol, and presents certain credentials:
- It provides a username and password, with which ActiveMQ can do what it pleases.
- If TLS is in use, it will also present a certificate (and verify the ActiveMQ server’s certificate).
Once allowed to connect, MCollective will use the Stomp protocol to create subscriptions. It will then produce and consume a lot of traffic on queues and topics whose names begin with mcollective. (See "Topic and Queue Names" directly below.)
Absolute Minimum Requirements
ActiveMQ defaults to believing that it is routing traffic between processes in a single JVM instance: it doesn’t assume it is connected to the network, and it uses a loose-to-nonexistent security model.
This means that if you do nothing but enable Stomp traffic, MCollective will work fine. (Albeit with terrible security and eventual out-of-control memory usage.)
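As a minimal sketch of that bare-minimum change (the broker name and data directory here are just placeholder defaults; see "Transport Connectors" below for the details):

```
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
  <transportConnectors>
    <!-- Listen for Stomp connections from MCollective on all interfaces -->
    <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
  </transportConnectors>
</broker>
```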
Topic and Queue Names
MCollective uses the following destination names. This list uses standard ActiveMQ destination wildcards. "COLLECTIVE" is the name of the collective being used; by default, this is mcollective, but if you are using subcollectives, each one is implemented as an equal peer of the default collective.
- ActiveMQ.Advisory.> (built-in topics that all ActiveMQ producers and consumers need all permissions on)
- COLLECTIVE.*.agent (for each agent plugin, where the * is the name of the plugin)
- COLLECTIVE.nodes (used for direct addressing; this is a single destination that uses JMS selectors, rather than a group of destinations)
- COLLECTIVE.reply.> (where the continued portion is a request ID)
Shared Configuration
Subcollectives must also be configured in the MCollective client and server config files. ActiveMQ must allow traffic on any subcollective that MCollective servers and clients expect to use.
Config File Location and Format
ActiveMQ's config is usually called activemq.xml, and is kept in ActiveMQ's configuration directory (/etc/activemq with Puppet Labs's Red Hat-like packages, or a subdirectory of /etc/activemq/instances-enabled with the standard Debian or Ubuntu packages). Any other files referenced in activemq.xml will be looked for in the same directory.
The config file is in Java's Beans XML format. Note that all of the settings relevant to MCollective are located inside activemq.xml's <broker> element (the Jetty settings mentioned at the end of this page are the one exception).
This document won’t describe the complete format of the activemq.xml config file, and will sometimes use incomplete shorthand to describe elements of it. You should definitely refer to an example config file while reading, so you can see each element’s full syntax in context.
You can also read external documentation for a more complete understanding.
Bug Warning: In ActiveMQ 5.5, the first-level children of the <broker> element must be arranged in alphabetical order. There is no good reason for this behavior, and it was fixed in ActiveMQ 5.6.
You'll see a lot of ActiveMQ destination wildcards below. In short:
- Segments in a destination name are separated with dots (.)
- * represents one segment (i.e. any sequence of non-dot characters)
- > represents the whole rest of the name after a prefix
One way or another, you must set all of the following.
Transport Connectors
It's generally best to only enable the transport connectors you need. For example, if you're using Stomp over TLS, don't leave a bare Stomp transport open. If you're not using a network of brokers, close the OpenWire transport. The name attribute of a transport connector doesn't seem to matter, as long as it's locally unique.
ActiveMQ must listen over the network for Stomp connections; otherwise, MCollective can't reach it. Enable this with a <transportConnector> element inside the <transportConnectors> element. We generally recommend using TLS.
- Note that the protocol, port, and arguments of the Stomp URI differ depending on whether you're using CA-verified TLS, anonymous TLS, or no TLS at all, and on your ActiveMQ version (5.9 and newer take slightly different URI arguments).
- You can choose to restrict the interface or hostname to listen on, instead of listening on all interfaces (0.0.0.0).
If you are using TLS, note that you must also configure ActiveMQ's TLS credentials (see "TLS Credentials" below) and configure MCollective's servers and clients to use TLS.
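As a hedged sketch (the exact URI arguments vary by ActiveMQ version, and you should keep only the connector you actually use), a plain Stomp connector alongside one using CA-verified TLS might look like this:

```
<transportConnectors>
  <!-- Plain Stomp; only keep this if you are not using TLS. The nio transport scales better with many connections. -->
  <transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61613"/>
  <!-- Stomp over CA-verified TLS; needClientAuth makes ActiveMQ demand a client certificate -->
  <transportConnector name="stomp+ssl" uri="stomp+ssl://0.0.0.0:61614?needClientAuth=true"/>
</transportConnectors>
```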
If you are using a network of brokers instead of just one ActiveMQ server, they talk to each other over OpenWire, and will all need a transport connector for that protocol too:
- The protocol, port, and arguments of the OpenWire URI differ in the same way, depending on whether you're using CA-verified TLS, anonymous TLS, or no TLS.
- You can choose to restrict the interface or hostname to listen on, instead of listening on all interfaces (0.0.0.0).
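Again as a sketch, assuming the consensus ports described below; these connectors live in the same <transportConnectors> element as the Stomp ones, and you should keep only the variant you use:

```
<transportConnectors>
  <!-- Plain OpenWire for broker-to-broker links -->
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  <!-- OpenWire over CA-verified TLS -->
  <transportConnector name="openwire+ssl" uri="ssl://0.0.0.0:61617?needClientAuth=true"/>
</transportConnectors>
```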
Standard Ports for Stomp and OpenWire
Alas, there aren’t any; just a rough consensus.
- 61613 for unencrypted Stomp
- 61614 for Stomp with TLS
- 61616 for unencrypted OpenWire
- 61617 for OpenWire with TLS
All of our documentation assumes these ports.
Shared Configuration
MCollective needs to know the following:
- The port to use for Stomp traffic
- The hostname or IP address to reach ActiveMQ at
- Whether to use TLS
In a network of brokers, the other ActiveMQ servers need to know the following:
- The port to use for OpenWire traffic
- The hostname or IP address to reach peer ActiveMQ servers at
- Whether to use TLS
Reply Queue Pruning
MCollective sends replies on uniquely-named, single-use queues, with names like mcollective.reply.<UNIQUE ID>. These have to be deleted after about five minutes, lest they clog up ActiveMQ's available memory. By default, queues live forever, so you have to configure this.

Do this by adding a <policyEntry> element for *.reply.> queues, with gcInactiveDestinations set to true and inactiveTimoutBeforeGC set to 300000 ms (five minutes). (The missing "e" in inactiveTimoutBeforeGC is not a typo in this document; it is ActiveMQ's actual attribute name.)
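A sketch of the destination policy this describes; the schedulePeriodForDestinationPurge value (one minute) is an assumption, and controls how often ActiveMQ looks for inactive destinations. The topic entry is explained in the next section.

```
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost"
        schedulePeriodForDestinationPurge="60000" dataDirectory="${activemq.data}">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- Expire single-use reply queues after five minutes of inactivity -->
        <policyEntry queue="*.reply.>" gcInactiveDestinations="true" inactiveTimoutBeforeGC="300000"/>
        <!-- Don't let slow consumers block producers on topics (see next section) -->
        <policyEntry topic=">" producerFlowControl="false"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>
```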
Disable Producer Flow Control on Topics
In the example above, you can also see that producerFlowControl is set to false for all topics. This is highly recommended; setting it to true can cause MCollective servers to appear blocked when there's heavy traffic.
TLS Credentials
If you are using TLS in either your Stomp or OpenWire transport connectors, ActiveMQ needs a keystore file, a truststore file, and a password for each:
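A sketch with placeholder file names and passwords:

```
<sslContext>
  <sslContext
    keyStore="keystore.jks" keyStorePassword="secret"
    trustStore="truststore.jks" trustStorePassword="secret"/>
</sslContext>
```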
Note: This example is for CA-verified TLS. If you are using anonymous TLS, you may optionally skip the truststore attributes.
The redundant nested <sslContext> elements are not a typo; for some reason ActiveMQ actually needs this.
ActiveMQ will expect to find these files in the same directory as activemq.xml.
Creating a Keystore and Truststore
Authentication (Users and Groups)
When they connect, MCollective clients and servers provide a username, password, and optionally an SSL certificate. ActiveMQ can use any of these to authenticate them.
By default, ActiveMQ ignores all of these and has no particular concept of “users.” Enabling authentication means ActiveMQ will only allow users with proper credentials to connect. It also gives you the option of setting up per-destination authorization (see below).
You set up authentication by adding the appropriate element to the <plugins> element. See also the ActiveMQ docs about authentication. In summary:
- simpleAuthenticationPlugin defines users directly in activemq.xml. It's well-tested and easy. It also requires you to edit activemq.xml and restart the broker every time you add a new user. The activemq.xml file will contain sensitive passwords and must be protected.
- jaasAuthenticationPlugin lets you use external text files (or even an LDAP database) to define users and groups. You need to make a login.config file in the ActiveMQ config directory, and possibly two more files. You can add users and groups without restarting the broker. The external users file will contain sensitive passwords and must be protected.
- jaasCertificateAuthenticationPlugin ignores the username and password that MCollective presents; instead, it reads the distinguished name of the certificate and maps that to a username. It requires TLS, a login.config file, and two other external files. It is also impractical unless your servers are all using the same SSL certificate to connect to ActiveMQ; the currently recommended approach of re-using Puppet certificates makes this problematic, but you can probably ship credentials around and figure out a way to make it work. This is not currently well-tested with MCollective.
The example below uses simpleAuthenticationPlugin. This creates two users: mcollective and admin. Note that unless you set up authorization (see below), these users have the exact same capabilities.
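A sketch of such a plugin block; the usernames, passwords, and group memberships are placeholders (chosen to line up with the authorization examples below), and the block goes inside the <plugins> element:

```
<plugins>
  <simpleAuthenticationPlugin>
    <users>
      <!-- Used by MCollective servers and clients; change the password -->
      <authenticationUser username="mcollective" password="marionette" groups="mcollective,everyone"/>
      <!-- Used by admin users who issue commands; change the password -->
      <authenticationUser username="admin" password="secret" groups="mcollective,admins,everyone"/>
    </users>
  </simpleAuthenticationPlugin>
</plugins>
```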
Shared Configuration
MCollective servers and clients both need a username and password to use when connecting. That user must have appropriate permissions (see "Authorization," directly below) for that server or client's role.
Authorization (Group Permissions)
By default, ActiveMQ allows everyone to read from any topic or queue, write to any topic or queue, and create (admin) any topic or queue.
By setting rules in an <authorizationPlugin> element, you can regulate things a bit. Some notes:
- Authorization is done by group.
- The exact behavior of authorization doesn't seem to be documented anywhere. Going by observation, it appears that ActiveMQ first tries the most specific rule available, then falls back to less specific rules. This means that if a given group isn't allowed an action by a more specific rule but is allowed it by a more general rule, it still gets authorized to take that action. If you have any solid information about how this works, please let us know.
- MCollective creates subscriptions before it knows whether there will be any content coming. That means any role able to read from or write to a destination must also be able to admin that destination. Think of “admin” as a superset of both read and write.
The following example grants all permissions on destinations beginning with mcollective to everyone in the mcollective group:
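A sketch of that authorization map (it goes inside the <plugins> element); the group names follow the authentication example above:

```
<authorizationPlugin>
  <map>
    <authorizationMap>
      <authorizationEntries>
        <!-- Everyone in the mcollective group can do anything on mcollective destinations -->
        <authorizationEntry queue="mcollective.>" write="mcollective" read="mcollective" admin="mcollective"/>
        <authorizationEntry topic="mcollective.>" write="mcollective" read="mcollective" admin="mcollective"/>
        <!-- All users need full access to the ActiveMQ advisory topics -->
        <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
      </authorizationEntries>
    </authorizationMap>
  </map>
</authorizationPlugin>
```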
This means admins can issue commands and MCollective servers can read those commands and reply. However, it also means that servers can issue commands. If you’re using MCollective’s SSL security plugin (or another security provider with strong client authentication), this is generally fine: servers could potentially receive bogus commands, but will reject any commands that don’t come from an authorized client user.
Note that the everyone group (as seen in the ActiveMQ.Advisory.> topics) isn't special. You need to manually make sure all users are members of it. ActiveMQ does not appear to have any kind of real wildcard "everyone" group, unfortunately.
The following example splits permissions along a simple user/server model:
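Roughly, and again as a sketch rather than a canonical ruleset, the entries might look like this (only the <authorizationEntries> element is shown; the admins and servers groups must be assigned in your authentication setup):

```
<authorizationEntries>
  <!-- Admins get full rights on all mcollective destinations -->
  <authorizationEntry topic="mcollective.>" write="admins" read="admins" admin="admins"/>
  <authorizationEntry queue="mcollective.>" write="admins" read="admins" admin="admins"/>
  <!-- Servers may only read requests from the agent topics and the direct-addressing queue... -->
  <authorizationEntry topic="mcollective.*.agent" write="admins" read="servers,admins" admin="servers,admins"/>
  <authorizationEntry queue="mcollective.nodes" write="admins" read="servers,admins" admin="servers,admins"/>
  <!-- ...and write replies -->
  <authorizationEntry queue="mcollective.reply.>" write="servers,admins" read="admins" admin="servers,admins"/>
  <!-- If registration is enabled, also grant servers write and admin on mcollective.registration.agent -->
  <!-- All users need full access to the ActiveMQ advisory topics -->
  <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
</authorizationEntries>
```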
This means admins can issue commands and MCollective servers can read those commands and reply. This time, though, servers can't issue commands. The exception is the mcollective.registration.agent destination, which servers DO need the ability to write to if you've turned on registration.
Admins, of course, can also read commands and reply as though they were servers, since they have power over the entire mcollective.> destination set. This isn't considered much of an additional security risk, considering that admins can already control your entire infrastructure.
Detailed Restrictions with Multiple Subcollectives
Both of the above examples assume only a single mcollective collective. If you are using additional subcollectives (e.g. us_collective, etc.), their destinations will start with their name instead of mcollective. If you need to separately control authorization for each collective, it's best to use a template to do so, so you can avoid repeating yourself (see the sketch after this list).
This example divides your users into several groups:
adminsis the “super-admins” group, who can command all servers.
serversis the “super-servers” group, who can receive and respond to commands on any collective they believe themselves to be members of.
COLLECTIVE-adminscan only command servers on their specific collective. (Since all servers are probably members of the default
mcollective-adminsgroup are sort of the “almost-super-admins.”)
COLLECTIVE-serverscan only receive and respond to commands on their specific collective.
Thus, when you define your users in the authentication setup, you could allow a certain user to command both the EU and UK collectives (but not the US collective) with groups="eu_collective-admins,uk_collective-admins". You would probably want most servers to be "super-servers," since each server already gets to choose which collectives to ignore.
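As an illustration of the template approach (ERB is an assumption here; any templating tool works, and the collective names are placeholders), something along these lines could generate the per-collective authorization entries from a single list:

```
<% collectives = ["mcollective", "eu_collective", "uk_collective"] %>
<% collectives.each do |c| %>
  <!-- Admin groups write requests; server groups read them; both need admin -->
  <authorizationEntry topic="<%= c %>.*.agent"
                      write="admins,<%= c %>-admins" read="servers,<%= c %>-servers"
                      admin="admins,<%= c %>-admins,servers,<%= c %>-servers"/>
  <authorizationEntry queue="<%= c %>.nodes"
                      write="admins,<%= c %>-admins" read="servers,<%= c %>-servers"
                      admin="admins,<%= c %>-admins,servers,<%= c %>-servers"/>
  <!-- Server groups write replies; admin groups read them -->
  <authorizationEntry queue="<%= c %>.reply.>"
                      write="servers,<%= c %>-servers" read="admins,<%= c %>-admins"
                      admin="admins,<%= c %>-admins,servers,<%= c %>-servers"/>
<% end %>
```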
MCollective’s Exact Authorization Requirements
As described above, any user able to read OR write on a destination must also be able to admin that destination.
- ActiveMQ.Advisory.> — Everyone must be able to read and write.
- COLLECTIVE.*.agent — Admin users must be able to write. Servers must be able to read.
- COLLECTIVE.registration.agent — If you're using registration, servers must be able to read and write. Otherwise, it can be ignored.
- COLLECTIVE.nodes — Admin users must be able to write. Servers must be able to read.
- COLLECTIVE.reply.> — Servers must be able to write. Admin users must be able to read.
Shared Configuration
Subcollectives must also be configured in the MCollective client and server config files. If you're setting up authorization per-subcollective, ActiveMQ must allow traffic on any subcollective that MCollective servers and clients expect to use.
Settings for Networks of Brokers
You can group multiple ActiveMQ servers into networks of brokers, and they can route local MCollective traffic amongst themselves. There are a lot of reasons to do this:
- Scale — we recommend a maximum of about 800 MCollective servers per ActiveMQ broker, and multiple brokers let you expand past this.
- High availability — MCollective servers and clients can attempt to connect to multiple brokers in a failover pool.
- Partition resilience — if an inter-datacenter link goes down, each half’s local MCollective system will still work fine.
- Network isolation and traffic limiting — if your clients default to only sending messages to local machines, you can get better performance in the most common case while still being able to command a global collective when you need to.
- Security — destination filtering can prevent certain users from sending requests to certain datacenters.
This is naturally more complicated than configuring a single broker.
Designing your broker network’s topology is beyond the scope of this reference. The ActiveMQ Clusters guide has a brief description of an example network; see the ActiveMQ docs for more detailed info. For our purposes, we assume you have already decided:
- Which ActiveMQ brokers can communicate with each other.
- What kinds of traffic should be excluded from other brokers.
The <broker> element has a brokerName attribute. In single-broker deployments, this can be anything and defaults to localhost. In a network of brokers, each broker's name must be globally unique across the deployment; duplicates can cause message loops.
All participants in a network of brokers need OpenWire network transports enabled. See “Transport Connectors” above for details.
If you are using a network of brokers, you need to configure which brokers can talk to each other.
This configuration can get complex! Here’s why:
- This is where you define your brokers' network topology: in the sum total of all of the <networkConnector> elements across all of your brokers. (This is massively shared configuration — you can't determine the network topology by looking at the connectors on any single broker.)
- Each network connector connects two brokers. Think of a network connector as a run of cable between two routers.
- For reasons that are hard to summarize, every link between two brokers must consist of two network connectors:
- One reserved for topics, with conduitSubscriptions set to true (its default value) and queue traffic excluded.
- One reserved for queues, with conduitSubscriptions set to false and topic traffic excluded.
It’s basically as though you had one cable that could only carry UDP and another cable that could only carry TCP. We don’t know why it’s built like this, but it’s easy to get tripped up on, so please watch out.
- Network connectors can be bidirectional (duplex="true") or unidirectional (duplex="false"). Bidirectional is easier to deal with, since you only have to configure the connectors on one participant in each link, and the other participant will automatically configure itself. (If you used unidirectional connectors, you'd end up using FOUR network connectors per link, with two on each participant, due to the topic/queue split mentioned above.)
As you can see, there are a lot of ways to do this, especially since this configuration is a direct reflection of your network topology. As an example, a simple star topology could be configured in the following ways:
- The hub broker has two bidirectional connectors for each leaf broker. None of the leaf brokers need any connectors configured. This centralizes the configuration, but the configuration is relatively complex — you’d need to maintain a list of all leaf nodes.
- Every leaf broker has two bidirectional connectors pointed toward the hub. This spreads configuration across many nodes, but the configuration is relatively simple, since each leaf broker just needs its own name and the URI of the hub.
- Every leaf broker has two unidirectional connectors pointed toward the hub, and the hub has two unidirectional connectors pointed at each leaf node. Worst of both worlds; we don’t know of a good reason to do this.
Network connectors are configured with <networkConnector> elements inside the <networkConnectors> element. Note that the queues connector excludes topics and vice-versa.
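A hedged sketch of one bidirectional link between two brokers; the hostname, credentials, and networkTTL value are placeholders, and this pair of connectors would live on only one of the two participants:

```
<networkConnectors>
  <!-- Topics-only link: conduitSubscriptions defaults to true; exclude all queues -->
  <networkConnector name="broker1-broker2-topics"
                    uri="static:(tcp://broker2.example.com:61616)"
                    userName="amq" password="secret"
                    duplex="true" decreaseNetworkConsumerPriority="true" networkTTL="2">
    <excludedDestinations>
      <queue physicalName=">"/>
    </excludedDestinations>
  </networkConnector>
  <!-- Queues-only link: conduitSubscriptions must be false; exclude all topics -->
  <networkConnector name="broker1-broker2-queues"
                    uri="static:(tcp://broker2.example.com:61616)"
                    userName="amq" password="secret"
                    duplex="true" decreaseNetworkConsumerPriority="true" networkTTL="2"
                    conduitSubscriptions="false">
    <excludedDestinations>
      <topic physicalName=">"/>
    </excludedDestinations>
  </networkConnector>
</networkConnectors>
```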
- The name attribute on each connector must be globally unique. The easiest way to do that is to combine the pair of hostnames involved with the word "queues" or "topics."
- If you're using TLS for OpenWire, you'll need to change the URIs to something like static:(ssl://stomp2.example.com:61617) — note the change of both protocol and port.
- The network TTL is the number of network hops that messages and subscriptions are allowed to pass through. You will need to adjust the TTL to match your network’s topology. In a ring, it would be the number of brokers minus one; in a star, it would be two.
- A username and password are required. The broker with the <networkConnector> connects to the other broker as this user. This user should have full rights on all queues and topics, unless you really know what you're doing. (See authentication and authorization above.)
- Alternately, you can set up two uni-directional connectors on both brokers; see the ActiveMQ documentation linked above for more details.
If you want to prevent certain traffic from leaving a given datacenter, you can do so with <excludedDestinations> and <dynamicallyIncludedDestinations> elements inside each <networkConnector> element. This is mostly useful for reducing noise (which can potentially save you money, depending on network topology and datacenter locations), but it can also serve security purposes. Generally, you'll be filtering on subcollectives, which, as described above, begin their destination names with the name of the collective.
Both types of filter element can contain <topic> and <queue> elements, with their physicalName attributes defining a destination name with the normal wildcards. Since MCollective uses both queues and topics, you'll usually write a pair of rules for any exclusion or inclusion.
Remember to retain the all-queues/all-topics exclusions as shown above.
Assume a star network topology.
This topology can be achieved by either having each edge broker connect to the central broker, or having the central broker connect to each edge broker. You can achieve the same filtering in both situations, but with slightly different configuration. The two examples below have similar but not identical effects; the ramifications are subtle, and we really recommend reading the external ActiveMQ documentation if you’ve come this far in your deployment scale.
If your central broker is connecting to the UK broker, and you want it to only pass on traffic for the global mcollective collective and the UK-specific uk_collective collective:
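A sketch of what that filter might look like on the central broker's connectors to the UK broker; remember that the topics connector and the queues connector each need the matching element type:

```
<!-- On the topics-only connector to the UK broker -->
<dynamicallyIncludedDestinations>
  <topic physicalName="mcollective.>"/>
  <topic physicalName="uk_collective.>"/>
</dynamicallyIncludedDestinations>

<!-- On the queues-only connector to the UK broker -->
<dynamicallyIncludedDestinations>
  <queue physicalName="mcollective.>"/>
  <queue physicalName="uk_collective.>"/>
</dynamicallyIncludedDestinations>
```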
In this case, admin users connected to the central broker can command nodes on the uk_collective, but admin users connected to the UK broker can't command nodes on the other sites' regional collectives.
Alternately, if your UK broker is connecting to your central broker and you want it to refrain from passing on UK-specific traffic that no one outside that datacenter cares about:
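Hypothetically, the UK broker's connectors toward the central broker might then exclude the UK-only collective, in addition to the all-queues or all-topics exclusion each connector already carries:

```
<!-- On the topics-only connector from the UK broker to the hub -->
<excludedDestinations>
  <queue physicalName=">"/>
  <topic physicalName="uk_collective.>"/>
</excludedDestinations>

<!-- On the queues-only connector from the UK broker to the hub -->
<excludedDestinations>
  <topic physicalName=">"/>
  <queue physicalName="uk_collective.>"/>
</excludedDestinations>
```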
In this case, admin users connected to the central broker cannot command nodes on the uk_collective; it's expected that they'll be issuing commands to the main mcollective collective if they need to (and are authorized to) cross outside their borders.
The minor adjustments listed below (turn off dedicated task runner, increase heap, and increase memory and temp usage in activemq.xml) will generally let you reach about 800 MCollective nodes connected to a single ActiveMQ server, depending on traffic and usage patterns, number of topics and queues, etc. Any more detailed tuning is beyond the scope of this reference, and is likely to be unnecessary for your deployment.
Don’t Use Dedicated Task Runner
Set -Dorg.apache.activemq.UseDedicatedTaskRunner=false when starting ActiveMQ. MCollective creates a lot of queues and topics, so not using a thread per destination will save you a lot of memory.
This setting is not configured in activemq.xml; it’s an extra argument to the JVM, which should be provided when ActiveMQ starts up. The place to put this varies, depending on the package you installed ActiveMQ with; it usually goes in the wrapper config file. Check your init script for clues about this file’s location. With the common TanukiWrapper scripts, it would look something like this:
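For example (the file path and the additional-property index are placeholders; use the next unused wrapper.java.additional number in your wrapper config):

```
# e.g. in /etc/activemq/wrapper.conf
wrapper.java.additional.4=-Dorg.apache.activemq.UseDedicatedTaskRunner=false
```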
Increase JVM Heap if Necessary
Likewise, the max heap is usually configured in the wrapper config file (wrapper.java.maxmemory=512) or on the command line (e.g. with the -Xmx512m JVM argument).
Memory and Temp Usage for Messages (systemUsage)
Since ActiveMQ expects to be embedded in another JVM application, it won’t automatically fill up the heap with messages; it has extra limitations on how much space to take up with message contents.
As your deployment gets bigger, you may need to increase the <memoryUsage> and <tempUsage> elements in the <systemUsage> element. Unfortunately, we lack a lot of solid data for what to actually set these to. Most users leave the defaults for memory and temp until they have problems, then double the defaults and see whether their problems go away. This isn't perfectly efficient, but anecdotally it appears to work.
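A sketch with placeholder limits (the defaults shipped with your package will differ; tune to your deployment):

```
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <!-- How much of the JVM heap may be used for message contents -->
      <memoryUsage limit="20 mb"/>
    </memoryUsage>
    <storeUsage>
      <!-- Disk space available for persistent messages -->
      <storeUsage limit="1 gb"/>
    </storeUsage>
    <tempUsage>
      <!-- Disk space for spooling non-persistent messages -->
      <tempUsage limit="100 mb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```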
The many redundant nested elements are not a typo; for some reason, ActiveMQ seems to require this.
There’s little reason to care about these settings in most conditions, but they’re in the example config files anyway.
MCollective rarely uses this. It's only necessary in networks of brokers, where it is used to prevent routing loops. Leave it enabled; it has no notable performance penalty and its disk usage is limited by the <storeUsage> element described above.
This is for monitoring. MCollective doesn’t use this and the examples have it turned off, but you may want it for your own purposes.
MCollective doesn’t use this.
Jetty (Web Consoles, APIs, etc.)
The activemq.xml file will often either contain Jetty settings or import them from another file. MCollective doesn't use this. If you're not using it to manage ActiveMQ, leaving it enabled may be a security risk. Note that this configuration is outside the <broker> element.