Autograding 2.0.0
Minimum Jenkins requirement: 2.204.4
ID: autograding

Author: Ullrich Hafner



Jenkins plugin that autogrades projects based on a configurable set of metrics. Currently, you can select from the following metrics:

  - Test statistics (e.g., number of failed tests), recorded by the JUnit plugin
  - Code coverage (e.g., line and branch coverage percentage)
  - PIT mutation coverage (e.g., detected and undetected mutations)
  - Static analysis warnings (e.g., number of SpotBugs warnings), recorded by the Warnings Next Generation plugin

For each metric you can define the impact on the overall score and the individual scoring criteria. After each build the autograding plugin shows a summary in several progress charts and details in a table for each metric.

Figure 1. Overall score and individual score of participating metrics

Required workflow

In order to autograde a project, you first need to build it using your favorite build tool. Make sure your build invokes all tools that produce the artifacts required for the autograding later on. Then run all post-build steps that record the desired results using the plugins from the list above. Autograding is based on the persisted Jenkins model of these plugins (i.e., Jenkins build actions), so make sure the results of these plugins show up correctly in the Jenkins build view.

The autograding has to be started as the last step: you configure the impact of the individual results using a simple JSON string. Currently, there is no UI support for creating this configuration. The autograding step reads all requested build results and calculates a score based on the properties defined in the JSON configuration.

Scores Summary
Figure 2. Summary of the scoring
Scores Details
Figure 3. Details for all metrics

Job Configuration

Please have a look at the example pipeline that shows how to use this plugin in practice. It consists of the following stages:

  1. Checkout from SCM

  2. Build and test the project and run the static analysis with Maven

  3. Run the test cases and compute the line and branch coverage

  4. Run PIT to compute the mutation coverage

  5. Record all Maven warnings

  6. Autograde the results from steps 2-5
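The stages above can be sketched as a declarative pipeline along the following lines. This is a minimal sketch, not the plugin's example pipeline itself: the Maven invocation and the recorded tools are placeholders for your own setup, and it assumes the plugin exposes an `autoGrade` step that accepts the JSON configuration string.

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Build, test, and run the static analysis with Maven
                sh 'mvn -V -e clean verify -Dmaven.test.failure.ignore'
            }
        }
    }
    post {
        always {
            // Record the results using the respective plugins first,
            // so their build actions are available for the autograding
            junit testResults: '**/target/*-reports/TEST-*.xml'
            recordIssues tools: [spotBugs(), pmdParser(), checkStyle()]
            // Autograding must run last; the JSON string defines the impacts
            autoGrade('''{"analysis": {"maxScore": 100, "errorImpact": -10}}''')
        }
    }
}
```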

The example pipeline uses the following configuration that shows all possible parameters:

  "analysis": {
    "maxScore": 100,
    "errorImpact": -10,
    "highImpact": -5,
    "normalImpact": -2,
    "lowImpact": -1
  "tests": {
    "maxScore": 100,
    "passedImpact": 1,
    "failureImpact": -5,
    "skippedImpact": -1
  "coverage": {
    "maxScore": 100,
    "coveredImpact": 1,
    "missedImpact": -1
  "pit": {
    "maxScore": 100,
    "detectedImpact": 1,
    "undetectedImpact": -1,
    "ratioImpact": 0

If you want to skip one of the tools, just remove the corresponding JSON node from the configuration. Additionally, you need to select the individual configuration options based on your current assignment. Sometimes it makes sense to start with a given number of points and subtract points for each violation (e.g., minus points for each SpotBugs warning). For other metrics it makes more sense to add points for each achieved percentage (e.g., for line coverage).
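These two strategies can be illustrated with a small calculation. This is a hedged sketch of the general idea (impact times count, clamped to the range 0 to maxScore), not the plugin's exact aggregation logic; the function and category names are made up for the example.

```python
def metric_score(max_score, counts, impacts, start_at_max=False):
    """Sketch of an impact-based score: sum impact * count per category,
    optionally starting from max_score, then clamp to [0, max_score]."""
    delta = sum(impacts[name] * counts.get(name, 0) for name in impacts)
    raw = (max_score if start_at_max else 0) + delta
    return max(0, min(max_score, raw))

# Subtractive style: start at 100, lose points per analysis warning.
analysis = metric_score(
    100,
    {"error": 2, "high": 3, "normal": 5, "low": 4},
    {"error": -10, "high": -5, "normal": -2, "low": -1},
    start_at_max=True,
)  # 100 - 20 - 15 - 10 - 4 = 51

# Additive style: start at 0, gain a point per covered percentage point.
coverage = metric_score(
    100,
    {"covered": 85, "missed": 15},
    {"covered": 1, "missed": -1},
)  # 85 - 15 = 70
```

Note the clamping: a metric can never drop below 0 or exceed its `maxScore`, no matter how many violations pile up.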
