Getting Started
Getting started with a Red Hat based distribution like Red Hat Enterprise Linux or CentOS is easy, as RPMs are available for all the required components. This guide walks you through the process.
If you just want to experiment with the system, please try our EC2-based demo. You should be familiar with the architecture and terminology of Marionette Collective; please review the basic architecture, terminology and message flow screencast first.
We try to keep the requirements on external Gems to a minimum; you only need:
We strongly recommend you set up a local Yum repository to host all the packages on your LAN. You can get the prerequisite packages here:
- ActiveMQ - activemq-5.4.0-2.el5.noarch.rpm, activemq-info-provider-5.4.0-2.el5.noarch.rpm, tanukiwrapper-3.2.3-1jpp.
- Java - OpenJDK that is included with your distribution
- Ruby - included with your distribution
- RubyGems - EPEL
- Stomp Ruby Gem - EPEL
- MCollective - mcollective-1.x.x-1.el5.noarch.rpm, mcollective-common-1.x.x-1.el5.noarch.rpm, mcollective-client-1.x.x-1.el5.noarch.rpm
The rest of this guide will assume you set up a Yum repository.
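As a rough sketch, a repository definition on each node might look like the following (the host `yum.example.net` is a placeholder for your own repository server; adjust the URL and GPG settings to match your setup):

```
[mcollective]
name=Local MCollective and dependencies
baseurl=http://yum.example.net/mcollective/
enabled=1
gpgcheck=0
```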
ActiveMQ is currently the most used and tested middleware for use with MCollective.
You need at least one ActiveMQ server on your network; all the nodes you wish to manage will connect to the central ActiveMQ server. Later on you can cluster the ActiveMQ servers for availability and scale.
On the server that you choose to configure as the ActiveMQ server:
Initially you’ll just keep it simple with a single ActiveMQ broker and a basic user setup; further security information for ActiveMQ can be found here.
Place the following in /etc/activemq/activemq.xml; you can download this file from GitHub.
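If you cannot fetch the file right away, a minimal sketch of the relevant parts might look like this; it assumes the mcollective user, password and Stomp port described below, and the downloaded file from GitHub should be treated as authoritative:

```xml
<beans xmlns="http://www.springframework.org/schema/beans">
  <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" useJmx="true">
    <plugins>
      <!-- define the mcollective user; change this password! -->
      <simpleAuthenticationPlugin>
        <users>
          <authenticationUser username="mcollective" password="marionette"
                              groups="mcollective,everyone"/>
        </users>
      </simpleAuthenticationPlugin>
      <!-- give the mcollective group read/write/admin on its topics -->
      <authorizationPlugin>
        <map>
          <authorizationMap>
            <authorizationEntries>
              <authorizationEntry topic="mcollective.>" write="mcollective"
                                  read="mcollective" admin="mcollective"/>
              <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone"
                                  write="everyone" admin="everyone"/>
            </authorizationEntries>
          </authorizationMap>
        </map>
      </authorizationPlugin>
    </plugins>
    <!-- Stomp listener on port 6163 -->
    <transportConnectors>
      <transportConnector name="stomp" uri="stomp://0.0.0.0:6163"/>
    </transportConnectors>
  </broker>
</beans>
```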
Other examples are also available from GitHub
This creates a user mcollective with the password marionette and gives it access to read/write/admin /topic/mcollective.
You should change this password.
Start the ActiveMQ service:
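On RHEL/CentOS this is typically done via the init script installed by the RPM (the service name here is assumed to match the activemq package):

```shell
service activemq start
chkconfig activemq on   # also start it on boot
```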
You should see it running in the process list:
You should also see it listening on port 6163 in your network stack
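For example (the exact output will differ on your machine; java is the process to look for since ActiveMQ runs on the JVM):

```shell
ps auxw | grep java
netstat -nlt | grep 6163
```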
You should open port 6163 for all your nodes to connect to.
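If you use iptables, a rule along these lines would open the port; this is only a sketch, adjust it to fit your existing firewall policy:

```shell
iptables -I INPUT -p tcp --dport 6163 -j ACCEPT
service iptables save
```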
There are a few packages supplied, and you will potentially have two types of server:
- Nodes that you wish to manage using mcollective need the mcollective and mcollective-common packages
- Nodes that you wish to use to initiate requests from, also known as clients, need the mcollective-client and mcollective-common packages
A machine can be both at once, in which case you need to install all 3 packages. We’ll work on the assumption here that you wish to both manage your machine and use it as a client, so install all 3 packages on your initial node.
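Assuming your Yum repository is configured, installing everything on this first node is then just:

```shell
yum install mcollective mcollective-client mcollective-common
```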
You’ll need to tweak some configs in /etc/mcollective/client.cfg; a full reference of config settings can be found here:
We’re assuming you called the machine running ActiveMQ stomp.example.net; please change as appropriate.
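A client.cfg along these lines should work; it matches the ActiveMQ user, password and port set up earlier, and the plugin.psk value is a placeholder you will replace in a moment:

```
topicprefix = /topic/mcollective
libdir = /usr/libexec/mcollective
logfile = /dev/null
loglevel = info

securityprovider = psk
plugin.psk = unset

connector = stomp
plugin.stomp.host = stomp.example.net
plugin.stomp.port = 6163
plugin.stomp.user = mcollective
plugin.stomp.password = marionette
```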
You should also create /etc/mcollective/server.cfg. Here’s a sample; a full reference of config settings can be found in the ConfigurationReference:
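A sample server.cfg, again assuming the ActiveMQ user, password and port used earlier:

```
topicprefix = /topic/mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log
loglevel = info
daemonize = 1

securityprovider = psk
plugin.psk = unset

connector = stomp
plugin.stomp.host = stomp.example.net
plugin.stomp.port = 6163
plugin.stomp.user = mcollective
plugin.stomp.password = marionette

factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
```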
Replace the plugin.psk in both these files with a Pre-Shared Key of your own.
NOTE: If you are testing the development versions - 1.1.3 and newer - you should make a small adjustment to both config files above:
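To the best of our knowledge the adjustment is to shorten the topicprefix and declare the collective explicitly, along these lines; verify against the release notes for your version:

```
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
```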
By default - and for this setup - we’ll use a simple YAML file as the fact source; later on you can use Puppet Labs’ Facter or something else.
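Facts from this source are just flat key/value pairs read from a YAML file. As a quick illustration using Ruby’s standard YAML library (the fact names and values here are hypothetical examples):

```ruby
require 'yaml'

# A facts file is nothing more than a flat YAML hash of fact => value
facts_yaml = "---\nlocation: devel\ncountry: uk\n"

facts = YAML.load(facts_yaml)
puts facts["country"]   # prints: uk
```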
Create /etc/mcollective/facts.yaml along these lines:
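For example (the fact names and values are placeholders; the country fact is used by the filtering examples later in this guide):

```
---
location: devel
country: uk
```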
Start the Server
The packages include a standard init script; just start the server:
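For example (the service name here is assumed to match the mcollective RPM):

```shell
service mcollective start
chkconfig mcollective on   # also start it on boot
```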
You should see something like this in the log file:
If all is fine and you see this log message you can test with the client code:
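The simplest test is a ping across the collective:

```shell
mco ping
```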
This sends out a simple ‘hello’ packet to all the machines; as we only installed one, you should get just one reply.
If you install the mcollective and mcollective-common packages, along with the facts file and server.cfg, on other machines, you should see more nodes show up here.
You can explore other aspects of your machines:
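For example, using the standard fact filter option:

```shell
mco find --with-fact country=uk
```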
This searches all currently active systems for ones with the fact country=uk; it gets the data from the YAML file you made earlier.
If you use configuration management tools like Puppet and the nodes are set up with classes, with classes.txt in /var/lib/puppet, then you can search for nodes with a specific class on them - the location will be configurable soon:
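For example, using the class filter option (dev_server is a placeholder for one of your own Puppet classes):

```shell
mco find --with-class dev_server
```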
The filter commands are important; they will be the main tool you use to target only parts of your infrastructure with calls to agents.
See the --help option to the various mco * commands for available options. You can now look at some of the available plugins and play around; you might need to run the server process as root if you want to play with services etc.
We provide limited default plugins; have a look at our sister project, MCollective Plugins, where you will find various plugins to manage packages, services, etc.
From here you should look at the rest of the wiki pages; some key pages are:
- Screencasts - Get a hands-on look at what is possible
- Introduction to Simple RPC - a simple-to-use framework for writing clients and agents
- ControllingTheDaemon - Controlling a running daemon
- AESSecurityPlugin - Using AES+RSA for secure message encryption and authentication of clients
- SSLSecurityPlugin - Using SSL for secure message signing and authentication of clients
- ConnectorStomp - Full details on the Stomp adapter including failover pools