- Getting Started
NOTE: We are currently improving the Getting Started documentation. Red Hat and CentOS users can refer to our new guide, others will be added in time.
Below is a rough guide to get you going. It assumes the client and server are on the same node, but servers do not need the client code installed.
For an even quicker intro to how it all works you can try our EC2-based demo.
Look at the Screencasts page; there are screencasts covering basic architecture, terminology and so forth that you might find helpful before getting started.
We try to keep the dependencies on external Gems to a minimum; you only need:
RPMs for these are available here.
NOTE: You need version 1.1 of the Stomp Gem for mcollective up to 0.4.5; mcollective 0.4.6 onward supports 1.1 as well as 1.1.6 and newer.
I’ve developed this against ActiveMQ. It should work against other Stomp servers, but I suspect that if you choose one without username and password support you might have problems; please let me know if that’s the case and I’ll refactor the code around it.
Full details on setting up and configuring ActiveMQ are out of scope for this document, but you can follow these simple setup instructions for initial testing (make sure the JDK is installed; see below for a Debian-specific issue regarding the JDK):
Download and Install
- Download the ActiveMQ “binary” package (for Unix) from ActiveMQ
- Extract the contents of the archive:
- cd into the activemq directory
- Execute the activemq binary
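The steps above can be sketched as the following commands; the version number and download URL are illustrative, so substitute a current release:

```shell
# Download a binary ActiveMQ release (version shown is illustrative)
wget http://archive.apache.org/dist/activemq/apache-activemq/5.4.2/apache-activemq-5.4.2-bin.tar.gz

# Extract the archive and change into the resulting directory
tar -xzf apache-activemq-5.4.2-bin.tar.gz
cd apache-activemq-5.4.2

# Run the broker in the foreground for initial testing
bin/activemq console
```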
A spec file can be found in the ext directory on GitHub that can be used to build RPMs for RedHat/CentOS/Fedora. You will need tanukiwrapper, which you can get from jpackage; it runs fine under the OpenJDK that comes with recent versions of these Operating Systems. I’ve uploaded some RPMs and SRPMs here.
For Debian systems you’d be better off using OpenJDK rather than the Sun JDK; there’s a known issue #562954.
First, configure ActiveMQ to listen for the Stomp protocol.
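A sketch of the relevant fragment, assuming the stock activemq.xml layout; the Stomp port 6163 matches the examples used on this wiki, adjust it to taste:

```xml
<!-- Inside the <broker> element of activemq.xml -->
<transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:6166"/>
    <transportConnector name="stomp"    uri="stomp://0.0.0.0:6163"/>
</transportConnectors>
```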
Then add a user or two. To keep it simple we’ll just add one user; the template file will hopefully make it obvious where this goes, as it belongs in the broker block:
Note: This config is for ActiveMQ 5.4
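A sketch along the lines of the stock ActiveMQ 5.4 example config; the group names and the admin entries are illustrative, the important parts are the mcollective user and its rights on the mcollective topics:

```xml
<plugins>
  <simpleAuthenticationPlugin>
    <users>
      <authenticationUser username="mcollective" password="marionette"
                          groups="mcollective,everyone"/>
    </users>
  </simpleAuthenticationPlugin>
  <authorizationPlugin>
    <map>
      <authorizationMap>
        <authorizationEntries>
          <authorizationEntry topic="mcollective.>" write="mcollective"
                              read="mcollective" admin="mcollective"/>
          <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone"
                              write="everyone" admin="everyone"/>
        </authorizationEntries>
      </authorizationMap>
    </map>
  </authorizationPlugin>
</plugins>
```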
This creates a user mcollective with the password marionette and gives it read/write/admin access to /topic/mcollective.
Save the above code as activemq.xml and run activemq with it; if you installed from a package, the file is probably /etc/activemq/activemq.xml:
Otherwise your package will have provided an RC script:
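The exact path of the RC script depends on your distribution; on RedHat-style systems it would typically be:

```shell
# Start the broker via the packaged init script (path is distribution-dependent)
/etc/init.d/activemq start
```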
There are also a few known-to-work, tested configs in git.
Download and Extract
Grab a copy of mcollective. Ideally you’d use a package for your distribution; otherwise there’s a tarball that you can use. You can extract it wherever you want; the RPMs or debs will put files in Operating System compatible locations. If you use the tarball you’ll need to double-check all the paths in the config files below.
You’ll need to tweak some configs in /etc/mcollective/client.cfg, a full reference of config settings can be found here:
Mostly what you’ll need to change is the identity, the plugin.stomp.* settings and the plugin.psk:
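The lines to tweak would look something like the following; the hostname, identity and key are illustrative placeholders:

```ini
# Settings to adjust in /etc/mcollective/client.cfg
identity = devel.example.com

plugin.stomp.host = stomp.example.com
plugin.stomp.port = 6163
plugin.stomp.user = mcollective
plugin.stomp.password = marionette

plugin.psk = a_very_secret_pre_shared_key
```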
You should also create /etc/mcollective/server.cfg; here’s a sample. A full reference of config settings can be found in the ConfigurationReference:
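A minimal sketch of a server.cfg, with paths per the RPM layout and illustrative hostnames and keys; consult the ConfigurationReference for the full set of options:

```ini
# /etc/mcollective/server.cfg
topicprefix = /topic/mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log
loglevel = info
daemonize = 1

securityprovider = psk
plugin.psk = a_very_secret_pre_shared_key

connector = stomp
plugin.stomp.host = stomp.example.com
plugin.stomp.port = 6163
plugin.stomp.user = mcollective
plugin.stomp.password = marionette

factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
```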
Replace the plugin.stomp.host with your server running ActiveMQ and replace the plugin.psk with a Pre-Shared Key of your own.
The STOMP connector supports other options like failover pools, see ConnectorStomp for full details.
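For illustration, a two-broker failover pool would be configured along these lines (hostnames are placeholders; see ConnectorStomp for the authoritative syntax):

```ini
# Replace the single plugin.stomp.host with a pool of brokers
plugin.stomp.pool.size = 2
plugin.stomp.pool.host1 = stomp1.example.com
plugin.stomp.pool.port1 = 6163
plugin.stomp.pool.user1 = mcollective
plugin.stomp.pool.password1 = marionette
plugin.stomp.pool.host2 = stomp2.example.com
plugin.stomp.pool.port2 = 6163
plugin.stomp.pool.user2 = mcollective
plugin.stomp.pool.password2 = marionette
```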
NOTE: If you are testing the development versions - 1.1.3 and newer - you should make a small adjustment to both config files above:
By default - and for this setup - we’ll use a simple YAML file for a fact source, later on you can use Reductive Labs Facter or something else.
Create /etc/mcollective/facts.yaml along these lines:
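The file is plain YAML holding flat key/value pairs; the country fact below matches the discovery example further down, the rest are illustrative:

```yaml
---
location: dc1
country: uk
operatingsystem: CentOS
```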
Start the Server
If you installed from a package, start it with the RC script; otherwise look in the source, where you’ll find an LSB-compatible RC script to start it.
Test from a client
If all is setup you can test with the client code:
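Depending on your version the simplest test is mco ping (older releases shipped a standalone mc-ping script); the output below is illustrative:

```shell
% mco ping
# host1.example.com    time=100.27 ms   (sample output, your hosts will differ)
# host2.example.com    time=104.38 ms
```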
This sent a simple ‘hello’ packet out to the network; if you started the mcollectived.rb process on several machines you would have seen several replies. Be sure to give each a unique identity in the config.
At this point you can start exploring the discovery features for example:
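For example, a fact filter (shown here with the mco command; older releases used the standalone mc-find-hosts script):

```shell
mco find --with-fact country=uk
```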
This searches all currently active systems for ones with the fact country=uk; the data came from the YAML file you made earlier.
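Under the hood the YAML fact source is just a flat hash loaded from that file, and a fact filter is a comparison against it. A minimal sketch in Ruby (this is not MCollective’s actual implementation, just the idea):

```ruby
require "yaml"

# Facts as they would be loaded from /etc/mcollective/facts.yaml
facts = YAML.load("---\nlocation: dc1\ncountry: uk\n")

# A simplified fact match: compare the stored value as a string
def match_fact?(facts, fact, value)
  facts[fact].to_s == value
end

puts match_fact?(facts, "country", "uk")   # => true
puts match_fact?(facts, "country", "de")   # => false
```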
If you use configuration management tools like Puppet and the nodes are set up with classes, recorded in classes.txt in /var/lib/puppet, then you can search for nodes with a specific class on them - the location will be configurable soon:
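A class filter would look like this (the class name apache is an illustrative placeholder):

```shell
mco find --with-class apache
```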
Chef does not yet support such a list natively but we have some details on the wiki to achieve the same with Chef.
The filter commands are important: they will be the main tool you use to target only parts of your infrastructure with calls to agents.
See the --help option to the various mco commands for available options. You can now look at some of the available plugins and play around; you might need to run the server process as root if you want to play with services etc.
We provide limited default plugins; have a look at our sister project MCollective Plugins, where you will find various plugins to manage packages, services and more.
From here you should look at the rest of the wiki pages; some key pages are:
- Screencasts - Get a hands-on look at what is possible
- Introduction to Simple RPC - a simple-to-use framework for writing clients and agents
- ControllingTheDaemon - Controlling a running daemon
- AESSecurityPlugin - Using AES+RSA for secure message encryption and authentication of clients
- SSLSecurityPlugin - Using SSL for secure message signing and authentication of clients
- ConnectorStomp - Full details on the Stomp adapter including failover pools