Assessing Module Quality
The Puppet Forge (Forge) has both codified and crowd-sourced ways of gauging the quality of any module.
A module’s quality score is based on a variety of lint, compatibility, and metadata tests. Individual validations are combined to create the total score. If you are comparing modules, a module’s quality score will give you some indication of its overall soundness and completeness.
For more information about a specific module’s quality score, click details. Scrolling down the page, you will see the results of the lint, compatibility, and metadata tests, presented as Code Quality, Puppet Compatibility, and Metadata Quality.
You can click View full results… for even more detailed information on the scores for each section. A module with a perfect Code Quality score may have no additional results to view; otherwise, you will see some combination of the following flags:
An error flag indicates a severe problem with the module. The flag will be appended to the line causing the issue, which could be anything from a critical bug to a failure to follow a high-priority best practice. If you are the module’s author, an error flag has the heaviest negative impact on your score.
A warning flag notes a general problem with the module. The flag will be appended to the line in the module causing the issue, which could be nonconformance with best practices or another smaller issue in the module’s structure or code. If you are the module’s author, a warning flag will negatively impact your score, but is weighted less heavily than an error.
A notice flag indicates something in the module that warrants attention. The notice flag is used for both positive and negative things of note, and as such does not impact the module’s score.
A success flag highlights information the module covers completely. This flag only applies to Puppet Compatibility and Metadata Quality. It can be used to assess whether the module covers things like listing operating system compatibility and having a verified source URL. If you are the module’s author, a success flag will positively impact your score.
When a module has a new release, the quality scoring tests are rerun and a new score is displayed. You will know this happened because you will see an indication of the percentage change since the last release, or a note that the score has not changed.
Validating Your Module’s Score
If you have written a module and would like to know what its quality score will be before you upload it to the Forge, we designed the rating evaluations to be reproducible.
To reproduce the Code Quality score, install puppet-lint and then run it from the module’s root:

```shell
gem install puppet-lint
puppet-lint `find ./manifests -name '*.pp'`
```
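If you want to quiet specific checks while working toward a cleaner score, puppet-lint reads options from a `.puppet-lint.rc` file in the directory where it runs. The file below is an illustrative sketch, not a recommended configuration; the flags shown correspond to real puppet-lint check toggles, but which checks you disable is up to you.

```
# .puppet-lint.rc — one option per line
--no-80chars-check
--no-documentation-check
--with-filename
```

Note that any check you disable locally will still count against the score computed by the Forge.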
To reproduce the Puppet Compatibility score, run puppet parser from the module’s root against the latest release for a specific version of Puppet.
If you are using Puppet 2.7+:
```shell
puppet parser validate `find ./manifests -name '*.pp'`
```
If you are using the future parser:
```shell
puppet parser validate --parser future `find ./manifests -name '*.pp'`
```
If you are using Puppet 2.6:
```shell
puppet --parseonly --ignoreimports `find ./manifests -name '*.pp'`
```
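All of the commands above rely on the same backtick-quoted find invocation to gather every manifest in the module. The sketch below (using a throwaway directory name invented for this example) shows what that expression actually matches:

```shell
# Build a disposable module skeleton to illustrate the glob
mkdir -p /tmp/find_demo/manifests/config
touch /tmp/find_demo/manifests/init.pp \
      /tmp/find_demo/manifests/config/server.pp \
      /tmp/find_demo/manifests/notes.txt
cd /tmp/find_demo
# Recursively lists every .pp file under manifests/; other files are ignored
find ./manifests -name '*.pp'
```

Quoting the pattern (`'*.pp'`) keeps the shell from expanding it against the current directory before find ever sees it.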
To reproduce the Metadata Quality score, install and run the metadata linter from the module’s root:

```shell
gem install metadata-json-lint
metadata-json-lint metadata.json
```
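The fields the metadata linter exercises live in the module’s metadata.json. The file below is a hypothetical example showing the kinds of entries mentioned earlier, such as operating system support and a source URL; the module name, versions, and URL are invented for illustration.

```json
{
  "name": "examplecorp-ntp",
  "version": "0.1.0",
  "author": "examplecorp",
  "summary": "Manages the NTP service",
  "license": "Apache-2.0",
  "source": "https://github.com/examplecorp/examplecorp-ntp",
  "dependencies": [
    { "name": "puppetlabs/stdlib", "version_requirement": ">= 4.0.0 < 5.0.0" }
  ],
  "operatingsystem_support": [
    { "operatingsystem": "Debian", "operatingsystemrelease": ["7", "8"] }
  ]
}
```

Incomplete or malformed entries here are what surface as warning and error flags in the Metadata Quality section of the score.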
A module’s community rating is based on the average of user responses to the rating questions found on every module page on the Forge.
And just like any good community rating system, you can see how many questions have been answered overall. For instance, in the module pictured below, 74 questions have been answered.
For more details about the answers to the questions, you can click “details”. If you scroll down the page, you will find bar graphs showing the average of the answers to the questions on a scale.