This plugin provides caching for dependencies and build artefacts to reduce build execution times. This is especially useful for Jenkins setups with ephemeral executors that always start from a clean state, such as container-based ones.
- Store caches on the Jenkins controller, AWS S3, and S3-compatible services (see https://github.com/jenkinsci/jobcacher-plugin/blob/main/src/main/java/jenkins/plugins/itemstorage/s3/S3ItemStorage.java#L110 for additional plugin requirements)
- Use caching in pipeline and freestyle jobs
- Define maximum cache sizes so that the cache won't grow indefinitely
- View job-specific caches on the job page
The plugin offers the following extension points:
- jenkins.plugins.itemstorage.ItemStorage for adding custom cache storages
- jenkins.plugins.jobcacher.Cache for adding custom caches
By default, the plugin is configured to use on-controller storage for the cache. In addition, a storage implementation for Amazon S3 and S3 compatible services is also available.
The storage type can be configured in the global configuration section of Jenkins.
The following cache configuration options apply to all supported job types.
|===
|Option |Description

|maxCacheSize
|The maximum size in megabytes of all configured caches that Jenkins will allow before it deletes them all and starts the next build from an empty cache. This prevents caches from growing indefinitely, at the cost of periodic fresh builds without a cache. Set to zero or leave empty to skip checking the cache size.

|skipSave
|If set to true, skip saving the cache. Default: false.

|skipRestore
|If set to true, skip restoring the cache. Default: false.

|defaultBranch
|If the current branch has no cache, it will seed its cache from the specified branch. Leave empty to generate a fresh cache for each branch.

|caches
|Defines the caches to use in the job (see below).
|===
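In pipeline jobs, these options are passed directly to the cache step. A minimal sketch, assuming the parameter names from the table above (the size and branch values are arbitrary placeholders):

cache(maxCacheSize: 500, skipSave: false, skipRestore: false, defaultBranch: 'main', caches: [
    // cache definitions go here, see the arbitraryFileCache options below
    arbitraryFileCache(path: '.gradle')
]) {
    sh './gradlew build'
}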
The following configuration options are available for each arbitraryFileCache entry:

|===
|Option |Description

|path
|The path to cache. It can be absolute or relative to the workspace.

|cacheName
|The name of the cache. Useful if caching the same path multiple times in a pipeline.

|includes
|The pattern to match files that should be included in caching.

|excludes
|The pattern to match files that should be excluded from caching.

|useDefaultExcludes
|Whether to use default excludes (see DirectoryScanner.java#L170 for more details).

|cacheValidityDecidingFile
|The workspace-relative path to one or multiple files which should be used to determine whether the cache is up-to-date or not. Only up-to-date caches will be restored, and only outdated caches will be created.

|compressionMethod
|The compression method to use (TARGZ, TARGZ_BEST_SPEED, TAR_ZSTD, TAR, or ZIP; some use no compression, see below). Note that the method NONE is not supported anymore and is now treated as TAR.
|===
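To illustrate how these options combine, here is a sketch of a single cache entry (the path, name, and patterns are illustrative, not prescriptive):

arbitraryFileCache(
    path: 'target',                // workspace-relative path to cache
    cacheName: 'maven-target',     // distinct name, in case the same path is cached twice
    includes: '**/*.jar',          // only include jar files in the cache
    excludes: '**/*-SNAPSHOT.jar', // but leave out snapshot artefacts
    useDefaultExcludes: true       // additionally apply the default excludes
)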
The cacheValidityDecidingFile option can be used to fine-tune the cache validity. At its simplest, you specify a file, and the cache will be considered outdated if the file changes. You can also specify a folder, in which case all the files in the folder (recursively found) will be used to determine the cache validity. This can be too coarse if you have generated files lumped in with source files. To fine-tune this, you can specify an arbitrary list of patterns to include and exclude paths from the cache validity check. The patterns are relative to the workspace root. These patterns are paths or glob patterns, separated by commas. Exclude patterns start with the ! character. The order of the patterns does not matter; you can mix include and exclude patterns freely.
For example, to take everything in a folder src into account except log files, you could use a pattern list such as src/**/*,!src/**/*.log.
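Expressed as a cache definition, that example might look like this sketch (the path and file patterns are illustrative):

arbitraryFileCache(
    path: 'node_modules',
    // the cache is considered outdated when any file under src changes, except .log files
    cacheValidityDecidingFile: 'src/**/*,!src/**/*.log'
)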
Different situations might require different packaging and compression methods, controlled by the compressionMethod option:
- TARGZ uses gzip with a compression level that is a "sweet spot" between compression speed and size (Deflate compression level 6). If you cache lots of text files, for instance source code, this option might be a good choice.
- TARGZ_BEST_SPEED uses gzip with the lowest compression level, for best throughput. If high speed at cache creation is important, and you cache directories with a mix of both text and binary files, this option might be a good choice.
- TAR uses no compression. If you cache directories with lots of binary files, this option might be best.
- TAR_ZSTD uses a JNI binding to machine-architecture-dependent Zstandard binaries, with pre-built binaries available for many architectures. It offers better compression speed and ratio than gzip.
- ZIP packages the cache in a zip archive.
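The method is selected per cache entry. As a sketch, assuming the method name is passed as a string parameter:

arbitraryFileCache(
    path: 'node_modules',
    cacheValidityDecidingFile: 'package-lock.json',
    compressionMethod: 'TAR_ZSTD' // trade-off chosen per the notes above
)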
The plugin provides a "Job Cacher" build environment. The cache(s) will be restored at the start of the build and updated at the end of the build.
The plugin provides a cache build step that can be used within the pipeline definition. The cache(s) will be restored before calling the closure and updated after executing it.
cache(maxCacheSize: 250, defaultBranch: 'develop', caches: [
    arbitraryFileCache(path: 'node_modules', cacheValidityDecidingFile: 'package-lock.json')
]) {
    // commands that benefit from the cache, e.g. sh 'npm ci'
}
If you use the plugin within a Docker container through the Docker Pipeline plugin, the path to cache must be located within the workspace. Everything outside the workspace is not visible to the plugin and therefore cannot be cached.
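As a sketch of this case, using the Docker Pipeline plugin's docker.image(...).inside step (the image and commands are placeholders), the cached path resolves inside the workspace:

docker.image('node:20').inside {
    cache(maxCacheSize: 250, caches: [
        arbitraryFileCache(path: 'node_modules', cacheValidityDecidingFile: 'package-lock.json')
    ]) {
        sh 'npm ci' // the cached path is visible to the plugin because it lives in the workspace
    }
}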