Optimizing build performance is a process, not an event. Develocity has everything you need to keep your builds running as fast as possible as your code and environment constantly change.

You can complete this tutorial in:

  • 1 minute (read the Introduction)

  • 5-10 minutes (read the Introduction and Tour)

  • 15-20 minutes (read the Introduction and Tour and perform the Hands-on Lab)

Introduction

Everyone wants fast builds. Faster developer builds let developers iterate more quickly and increase the agility and velocity of feature delivery. Faster CI builds shorten the feedback loop, which makes continuous integration more effective.

Develocity’s remote Build Cache is a powerful tool that teams can deploy to make all builds faster. You can learn more by taking our Build Cache tutorial.

This tutorial considers two different areas of build performance optimization. The first is finding and fixing the factors that contribute to build time outside of the Build Cache(s) themselves - things like binary dependency download speeds, build script configuration time, and memory consumption.

The second is increasing the number of tasks that can take advantage of caching. Each task type opts in to caching only after careful analysis and testing of both the correctness and the effectiveness of caching - in some cases, fetching from the cache is slower than rebuilding, so caching is deliberately left off. Custom plugins, annotation processors, source code generators, and other very specific factors can also make some tasks not effectively cacheable.

Develocity’s build scans are the key tool for both areas of performance management: they allow you not only to get your builds fast, but also to keep them fast.

Establishing an ongoing practice of detecting build time regressions and acting on the data captured in build scans is how every team reaches the goal it shares: the fastest builds possible, all of the time.

Tour: Keeping builds fast

Optimizing build performance (beyond task caching)

Understand the difference in build duration for a given project built locally vs. on CI

It is sometimes informative to understand whether, on average, CI builds are faster than developer builds or vice versa. A significant difference can point a team toward increasing (or decreasing) the hardware resources for those classes of builds, and can help plan and justify the expenditure.

Here is a quick way to determine that for a sample project. Open the build scan list here.

Search for the tag CI and the project gradle, then run the search and sort by duration, descending. We see 5 builds and can roughly estimate that the average CI build time is around 10 seconds.

Repeat this for the tag local and the project gradle, again sorting by duration, descending. We can roughly estimate that developer builds are taking 23 seconds.

Based on this we see that developer builds are roughly 2.3x slower than CI builds, and we have data that could justify funding a developer laptop hardware refresh.

Make any build faster

Large, complex builds often have an embarrassment of riches in terms of opportunities for increasing build speed. The task list view can help teams identify the top candidates to look at out of a potentially very large set. This helps teams focus attention quickly on those tasks that will return the largest improvements for the time invested in refactoring the build.

Open this build scan.

Using the left-hand side navigation, open the Timeline section.

The diagram at the top immediately tells us that we have 2 long-running test tasks that, in this case, are running in parallel, so at least we are not serializing their execution.

Below the bars, in the header of the task list, you will see an Order dropdown selector on the far-right side; by default it orders by execution order. Change it to the other choice, Longest.

We can now see that the biggest return will come from optimizing the two 10-second test tasks.
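One common lever for long test tasks - assuming the tests are independent enough to split - is letting Gradle fork several test JVMs per Test task. A sketch for a Groovy build.gradle; the fork count is an example value, not a recommendation:

tasks.withType(Test).configureEach {
    // Example only: use half of the available processors, but at least one fork.
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
}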

Investigate what has the biggest impact on your configuration time

Significant time can be spent before any tasks are executed, while the Gradle Build Tool is configuring the build. You might be using dozens or hundreds of plugins from various sources (Gradle, internally developed, and third-party), and new plugins or new plugin versions can enter the build at any time.

Develocity shows you how much time is spent in configuration and where that time goes, so you can focus your efforts to reduce it.

Open this build scan.

Using the left-hand navigation, open the Performance section of this build scan. Then click the Configuration tab across the top of the main page area.

You now see all of the build scripts, plugins, and lifecycle callbacks applied at configuration time, sorted by descending application time. This helps you decide whether the top entries are necessary, or whether they can be refactored to take less time to apply.
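What to do with the top entries depends on the plugin or script in question, but a general pattern that reduces configuration time is keeping expensive work out of the configuration phase, for example via Gradle's lazy task registration. A minimal sketch with a hypothetical task name:

// Lazy registration: this configuration block only runs if the task is needed.
tasks.register('slowReport') {
    doLast {
        // Expensive work belongs in a task action, not at configuration time.
        println 'Generating report...'
    }
}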

Determine if your build needs more memory

It might be surprising, but an inappropriate amount of JVM memory for the Gradle Build Tool processes is a common source of poor build performance.

This is tracked in the build scans and here is an example.

Open the left-hand navigation category Performance.

Consider the information on Peak heap memory usage and Total garbage collection time. You will quickly conclude that we don’t have enough JVM memory and we are spending almost half of the entire build time in JVM garbage collection.

Now select the Infrastructure section in the left-hand navigation and note that the Max JVM memory heap size is set to 60 MiB. You can now iteratively adjust the heap size to find the optimal value, which will have a significant effect on this build’s performance.
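The Gradle build JVM's heap is usually set with the org.gradle.jvmargs property in gradle.properties. A minimal sketch - the values are illustrative only, to be tuned iteratively against the Performance section:

# gradle.properties (project root, or ~/.gradle/gradle.properties)
# Example sizes only - adjust and re-check Peak heap memory usage and GC time.
org.gradle.jvmargs=-Xmx2g -XX:MaxMetaspaceSize=512m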

Determine how much time was spent resolving dependencies

Gradle automatically maintains a local cache of downloaded binary dependencies, but network latency, the availability of repository caching nodes, and other networking issues across the various network segments where your builds run can all degrade build performance during dependency resolution.

To see how Develocity can show you this, open this build scan.

Select the Performance section of the scan from the left-hand side navigation, then select Network activity from the choices across the top.

We can see that we downloaded 23 files, which took 5 seconds of our build time, and that the effective download speed was 706 KiB/s. We might need to invest in a better connection or investigate placing a repository cache closer to this type of build.
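If a closer repository cache turns out to be the remedy, pointing the build at it is a small change to the repositories block. The mirror URL below is a placeholder for whatever repository manager your organization runs:

repositories {
    maven {
        // Hypothetical in-house repository cache close to the build machines.
        url = 'https://repo-cache.mycompany.com/maven-remote'
    }
    mavenCentral()
}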

Optimizing task caching

This section demonstrates how Develocity build scans can be used to optimize the effectiveness of task caching.

Analyze Build Cache hit rate

Develocity shows overall statistics for Build Cache usage in the build scan. This information is very helpful for assessing how effective caching currently is and for pointing toward the "next task" to examine in more detail.

Consider this build scan.

Using the left-hand navigation, select the Performance section of this build scan, then select the Build Cache section from the choices presented across the top.

This gives us an overview of both local and remote cache statistics for this build: hits and misses on each cache, cache sizes and storage locations, and the amount of data and time spent serving artifacts from the caches.
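For reference, the local and remote caches summarized here are configured in settings.gradle. A minimal sketch using Gradle's generic HttpBuildCache - the URL is a placeholder, and a Develocity-specific cache connector can be used instead:

buildCache {
    local {
        enabled = true
    }
    remote(HttpBuildCache) {
        url = 'https://develocity.mycompany.com/cache/'
        push = true                   // pushing is typically enabled only on CI
        allowUntrustedServer = true   // only for self-signed certificates
    }
}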

Now click on the Task execution section from the choices across the top of the main display area.

Find the Not cacheable link in the view and click it. This shows that only the clean task is not cacheable for this build, and that the reason is that task output caching is not enabled for it - which makes sense, since cleaning should not be cached.

Determine what tasks to make cacheable next

The process of finding and addressing task cacheability is iterative and goes on "forever". Use these techniques to quickly identify the next set of candidates to examine in more detail - the ones that will provide the greatest return on time invested.

View this build scan.

To find the longest-running tasks that are not (yet) cacheable, select the Timeline section of the build scan from the left-hand side navigation. Then, click on the magnifying glass icon at the top of the main display area.

Find the filter box Task output cacheability and click to display the filter choices. Select Not cacheable: Any reason.

Then find the Order control on the far-right of the main area, open the dropdown, and select Longest.

We can now see that of the tasks that currently cannot be cached, task1 and task3 have the longest execution times and are the ones to look at first.

We can further click on task1 to see more information: in this case, output caching is not enabled at all for this task type. Caching is enabled via an annotation on the task type, added in the build script or plugin that defines it. task3 either does not declare its outputs, or they are incorrectly marked as @Internal, so it cannot be cached either.
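To make that concrete, here is a hedged sketch of an effectively cacheable custom task type (GenerateDocs is a hypothetical name). The key elements are the @CacheableTask annotation and properly declared inputs and outputs; in a build script, individual tasks can also opt in with outputs.cacheIf { true }:

import org.gradle.api.DefaultTask
import org.gradle.api.file.RegularFileProperty
import org.gradle.api.tasks.*

// Hypothetical task type, shown only to illustrate the annotations involved.
@CacheableTask                               // opts this task type into output caching
abstract class GenerateDocs extends DefaultTask {

    @InputFile
    @PathSensitive(PathSensitivity.RELATIVE) // relative paths keep cache keys portable across machines
    abstract RegularFileProperty getSource()

    @OutputFile                              // declared outputs are what the cache stores and restores
    abstract RegularFileProperty getReport()

    @TaskAction
    void generate() {
        report.get().asFile.text = source.get().asFile.text.toUpperCase()
    }
}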

Investigate why you are getting an unexpected Build Cache miss

Probably the most common use case for debugging cache misses is a build that we think should be getting cache hits, but it is getting cache misses instead.

Consider this build scan and click on the Timeline left-hand side navigation section.

We can see that some of the tasks are labeled FROM-CACHE, but many are not. In particular, we want to know why the task compileJava is a miss, because we think it should be a hit.

Develocity has a very powerful feature called "Adjacent scans" that lets you navigate to and compare the earlier and later scans for the same build.

At the top of the GUI, you will see an icon that looks like a clock with a circular counter-clockwise arrow around it. Press that icon.

You will see an "Adjacent scan" from 3 seconds earlier than the one we are viewing. Select that scan.

This is the one we think should have produced the cache object that we would be consuming, so let’s investigate.

We can see a task called generateSomeSource, and we can infer that this task is generating code that is different every time the build runs. We will focus on understanding why this is so, and whether it is necessary - especially for developer builds, which could otherwise benefit from caching.
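To illustrate the kind of root cause we are likely to find - a hypothetical sketch, not the actual task from this build - a generator that embeds something volatile, such as a timestamp, looks like this:

// The embedded timestamp makes the generated source different on every run,
// so this task and everything downstream of it (such as compileJava) can
// never get a cache hit.
tasks.register('generateSomeSource') {
    def versionFile = layout.buildDirectory.file('generated/Version.java')
    outputs.file(versionFile)
    doLast {
        def file = versionFile.get().asFile
        file.parentFile.mkdirs()
        file.text = "class Version { static final String BUILT_AT = \"${new Date()}\"; }"
    }
}

A cache-friendly variant drops the timestamp (or injects it only for release builds) so that the generated output is stable between runs.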

Hands-on Lab: Keeping builds fast

Read on to go one level deeper.

Prerequisites

To follow these steps, you will need:

  1. A zip file that includes the source code to recreate the build scans we discussed previously.
    You can get this from here.
    Unzip this anywhere; we will refer to that location as LAB_HOME in what follows.

  2. A local JVM installed.
    JVM requirements are specified at https://gradle.org/install. Note that a local Gradle Build Tool installation is not required; all hands-on labs execute the Gradle Build Tool using the Gradle Wrapper.

  3. An instance of a Develocity server.
    You can request a Develocity trial here.

For the rest of the document, we will assume you have a Develocity instance, available at https://develocity.mycompany.com.

Lab: Find potential performance killers

Open a terminal window in LAB_HOME/05-find-potential-performance-killers.

Using the text editor of your choice, modify the build scan configuration in settings.gradle to point to your Develocity server instance:

Replace
buildScan {
  server = '<<your Develocity instance>>'
  publishAlways()
  // Remove below if you don't use a self-signed or untrusted certificate
  allowUntrustedServer = true
}
with
buildScan {
  // Your personal Develocity instance
  server = 'https://develocity.mycompany.com'
  publishAlways()
  // Remove below if you don't use a self-signed or untrusted certificate
  allowUntrustedServer = true
}

In this lab, you will gain hands-on experience with creating a build and applying some of the techniques for general (non-cache) performance optimizations.

To complete this lab, please now follow the instructions as written in the file README.md.

Lab: Decide how you could increase the cache-ability of the given build

Open a terminal window in LAB_HOME/06-increase-the-cacheability-of-the-build.

Using the text editor of your choice, modify the build scan configuration in settings.gradle to point to your Develocity server instance:

Replace
buildScan {
  server = '<<your Develocity instance>>'
  publishAlways()
  // Remove below if you don't use a self-signed or untrusted certificate
  allowUntrustedServer = true
}
with
buildScan {
  // Your personal Develocity instance
  server = 'https://develocity.mycompany.com'
  publishAlways()
  // Remove below if you don't use a self-signed or untrusted certificate
  allowUntrustedServer = true
}

In this lab, you will gain hands-on experience with creating a build and applying some of the techniques for cache optimization.

To complete this lab, please now follow the instructions as written in the file README.md.

Conclusion

Performance optimization is an ongoing process, not an event. Getting the build as fast as possible will not keep it fast forever. Constant changes in environment and code, left untended, will result in performance regressions. Entropy always wins.

Develocity gives you a complete set of tools to find and fix the root causes of both kinds of performance regressions, and forms a solid basis for establishing a performance optimization practice.