| This documentation is for an outdated version of the Reporting Kit. If you are installing the Reporting Kit for the first time or looking for up-to-date information, please use the latest documentation. Otherwise, consider upgrading to the latest 2.x version of the Reporting Kit by following the Upgrade Guide. If you are running Develocity Reporting Kit 1.1.x or 1.0.x, you should first upgrade to the latest 1.4.x version, and then to the latest 2.x version. |
Introduction
This manual provides instructions for visualizing Develocity build data by installing the Develocity Reporting Kit into a Kubernetes cluster or onto a standalone host.
The Develocity Reporting Kit (aka the Reporting Kit) is a Kubernetes-based application, distributed as a Helm chart. Helm is a package manager for Kubernetes applications. Helm manages all Develocity Reporting Kit components.
The Reporting Kit is a companion application to a Develocity installation, and is only useful for existing Develocity customers or those trialling Develocity.
Build data
A Build Scan® is a persisted record of a single build’s captured data. Each Build Scan consists of thousands to millions of very fine-grained events. Build models aggregate these events into higher-level structures to expose easily consumable, summarized information about the build.
Build models can be consumed via the Develocity API. This guide focuses on installing the Reporting Kit and connecting it to the API of an existing Develocity installation.
| Build models are currently available only for Gradle and Maven builds. |
The builds endpoint of the Develocity API serves the build models that are available to the Reporting Kit.
Prerequisites
1. Develocity Installation
The Reporting Kit pulls build model data from an existing Develocity installation. To install the Reporting Kit you will need:
- The URL of the Develocity installation. See Appendix D if deploying into the same Kubernetes cluster as Develocity.
- Your Develocity license file.
- An access key for an account on that instance with the necessary permissions.
| The Reporting Kit can visualize Build Scan data pulled from a Develocity instance of version 2023.4 or later. Some of the built-in dashboards rely on data that is available starting with Develocity version 2024.1. Detailed compatibility information is available in the appendix. |
2. Build environment
Most of the built-in dashboards rely on the presence of specific tags and custom values on Build Scans to render the desired visualizations.
Tags used within the dashboards:
- CI - Used throughout the dashboards to classify builds as either CI (tag is present) or local (tag is absent) builds
Custom values used within the dashboards:
- Git repository - The URI of the Git repository that the build was run in, e.g., git@github.com:gradle/gradle.git
- CI provider - The name of the CI provider that ran the build, e.g., GitHub Actions
| If you are using the Common Custom User Data Gradle plugin or Common Custom User Data Maven extension, this information will be automatically added to every Build Scan. |
Host Requirements
This section outlines the requirements for installing the Develocity Reporting Kit.
1. Kubernetes Cluster
The Reporting Kit must be installed into a running Kubernetes cluster. The cluster you install the Reporting Kit into needs to have access to sufficient resources. Your cluster should be running a recent version of Kubernetes that is still receiving patch support.
It is also possible to install the Reporting Kit on a single node using the K3s lightweight Kubernetes distribution. Installation instructions are provided below.
We do not currently recommend installing the Reporting Kit into the same standalone K3s cluster as Develocity, due to competition for resources. Please see Appendix D if you are considering this.
2. Kubernetes Platforms
The Develocity Reporting Kit does not use any platform specific features and is expected to work on all platforms, but we have not verified every available platform.
We have verified that the Reporting Kit works on K3s and Amazon EKS.
3. Helm Requirements
Please check the Helm Version Support Policy to ensure compatibility with your Kubernetes version.
4. Resources
Node Group Specification
If you are planning to provision a dedicated cluster for your Develocity Reporting Kit installation, our recommended node group specification for that cluster is 2 nodes, each with 4 CPU units and 40 GiB memory.
Resource Requests and Limits
If you are planning to install the Reporting Kit in an existing cluster, we recommend ensuring access to at least 6 CPU units and 64 GiB memory.
The Develocity Reporting Kit Helm Chart’s total resource requests and limits are:
- Resource requests (the minimum resources required by the application to start): 3.25 CPU units, 42 GiB memory.
- Resource limits (the maximum resources that might be used by the application if available): 28.5 CPU units, 66.5 GiB memory.
Single-node Installation
If you are planning to install the Develocity Reporting Kit on a single node, then that machine should meet these minimum requirements:
- 8-core 2GHz or better CPU (amd64 architecture)
- 64 GB free RAM
5. Storage
The Develocity Reporting Kit uses persistent volume claims for storing data. You can optionally provide the name of the desired storage class to be used for provisioning persistent volumes.
Some pods are associated with persistent volumes. For Kubernetes platforms with multiple availability zones, each such pod and its persistent volumes must be located in the same zone. In this case it is recommended to use a storage class with a volumeBindingMode of WaitForFirstConsumer, to ensure that all persistent volumes are provisioned in the same zone that the pod was scheduled in.
It is strongly recommended to use storage classes that allow persistent volume claim expansion if available. This makes it straightforward to expand the storage used by the Reporting Kit.
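As an illustration, a storage class with both of these properties might look like the following sketch. The class name and provisioner are placeholders and depend on your platform; the provisioner shown here is the one used by the EBS CSI driver on Amazon EKS.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: reporting-kit-storage            # placeholder name
provisioner: ebs.csi.aws.com             # platform-specific; example for Amazon EKS
volumeBindingMode: WaitForFirstConsumer  # provision volumes in the zone where the pod is scheduled
allowVolumeExpansion: true               # allow persistent volume claims to be expanded later
The class name can then be passed to the Reporting Kit via the global.storage.data.class Helm value described under Helm Options.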
Capacity recommendations
The recommended minimum capacities for the persistent volumes are:
| Description | Size in GB |
|---|---|
| MinIO | 100 |
| Hive metastore | 10 |
Exact MinIO storage requirements vary greatly depending on the number and size of Build Scans stored in your Develocity instance. We recommend monitoring the available space in your MinIO volume to ensure that your system doesn’t run out of space.
If your storage class does not allow expanding volumes, you should also consider preparing for future data growth by adding additional disk capacity upfront.
Helm Configuration
The Develocity Reporting Kit is a Kubernetes-based application, distributed as a Helm chart. Helm is a package manager for Kubernetes applications, and it manages all Develocity Reporting Kit components. A Helm chart is a Kubernetes manifest template, with variables that can be provided at installation time.
Providing Configuration to Helm
Helm uses a values.yaml file to populate these variables and generate the Kubernetes manifests.
The variables in values.yaml configure the Develocity Reporting Kit installation with settings such as the Develocity connection, storage, and ingress.
Here is a minimal sample values.yaml file for installing the Develocity Reporting Kit:
develocity:
  address: https://develocity.example.com
  accessKey: "aecitwnpfw7h2sp3bl5uhrk5yedk47756obrsmneevvfe6jo2ssa"
Helm configuration can be provided in several ways:
- Passing values directly to the helm command using --set or --set-file.
- Creating a Helm values file and passing it to helm using --values.
- Editing the default Helm values file in the chart prior to running helm.
Once your values.yaml file is complete, you will install the Reporting Kit using a command similar to the one below:
helm install --values ./values.yaml
| Unless otherwise indicated, most values are optional and have usable defaults. |
User-managed secrets
The Develocity Reporting Kit allows you to configure a number of secret values for various purposes described below. The Reporting Kit’s Helm chart allows you to set secret values directly in Helm configuration. In this case, Helm will create the Kubernetes Secret which contains the secret value. As an alternative, it is usually possible in the Reporting Kit’s Helm chart to set the secret value in a Kubernetes Secret that you create and manage independently of Helm. If you use such a user-managed secret, you need to provide Helm with the name of the secret you have created for that purpose.
For example, if you would like to configure the Grafana editor account password using a user-managed secret, you would create a secret (for example, using kubectl), take note of its name, and then add it to your Helm values file using the prescribed Helm value. Each secret value that the Reporting Kit's Helm chart allows to be configured through a user-managed secret has its own specific Helm value, typically of the form some.thing.secretName; for the Grafana editor account, the Helm value to use is grafana.editorAccount.secretName. Wherever a user-managed secret can be used to configure a secret value, the data items that the secret needs to contain are listed where the configuration of that value is documented.
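As a sketch of this workflow for the Grafana editor account (the secret name grafana-editor-account and the credentials are placeholders; the required data items are documented in the Access control section):
$ kubectl -n develocity-reporting-kit create secret generic grafana-editor-account \
    --from-literal=user=editor \
    --from-literal=password='showmethedata1234'
The corresponding Helm values file entry would then be:
grafana:
  editorAccount:
    secretName: grafana-editor-account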
Recommended values
Here is a sample values.yaml file for installing the Develocity Reporting Kit:
# Target Develocity server
develocity:
  address: https://develocity.example.com
  accessKey: "aecitwnpfw7h2sp3bl5uhrk5yedk47756obrsmneevvfe6jo2ssa"

# Optionally create an editor account capable of creating custom dashboards
grafana:
  editorAccount:
    username: "editor"
    password: "showmethedata1234"

# Set the MinIO storage capacity a bit higher than the default
minio:
  storage:
    capacity: 200Gi
Helm Options
Each section below contains an overview of Develocity Reporting Kit installation options and their corresponding values.yaml variables:
- Global options (license, images, annotations, storage class, security context)
- Develocity configuration
- Pod resources
- Access control
- Ingress configuration
- Scaling
1. Global options
Configuration for pulling images into the installation cluster
In order to pull images from the Gradle registry at https://registry.gradle.com/, you need to provide your Develocity license file to Helm when installing the Reporting Kit. This is the same license file as the one used for your Develocity installation. The easiest way to do this is to pass --set-file global.license.file=/path/to/develocity.license as an argument when running helm install.
You can also provide the license file inline in the values file. Only the "data" portion of the license is needed in the Helm value file, but it is acceptable to include the entire license file contents:
global:
  license:
    file: R0VMRgF4nBWOSZKCMAAAX+QUu3BUIJAIwUQiwsViEwMIDKOyvH701n3p6nJBYxoRHnAUnwHwKLjb...
It is also possible to specify the imagePullSecret using a user-managed secret.
Using a user-managed secret for pulling images
To manually create a secret containing the Develocity license within the same namespace as the Develocity Reporting Kit, follow these steps:
Create a namespace for the Develocity Reporting Kit, if it does not already exist:
$ kubectl create namespace develocity-reporting-kit (1)
| 1 | This example uses develocity-reporting-kit as the namespace, but it can be a custom name. If you use a custom name, update the following commands accordingly. |
Create a docker-registry secret with the Develocity license:
$ kubectl -n develocity-reporting-kit create secret docker-registry my-develocity-license-image-pull-secret \(1)
--docker-username=develocity \
"--docker-password=$(cat path/to/develocity.license)" \
--docker-server=registry.gradle.com
| 1 | my-develocity-license-image-pull-secret is the name of the secret. It can be any name you choose. |
In your values.yaml file, be sure to include the name of the specific secret you want to use within the imagePullSecret key:
global:
  image:
    imagePullSecret: my-develocity-license-image-pull-secret
Airgap installation image pull policy
In a K3s-based airgap installation onto a standalone host, Helm should be configured so that no attempt is made to pull images from the outside world with the values below:
global:
  image:
    imagePullPolicy: Never
This is because in a K3s-based standalone airgap installation, you import images onto the K3s node directly, rather than pulling them from a registry.
Storage class
The same storage class is used for all persistent volume claims. It can be configured by setting the Helm value global.storage.data.class.
Pod annotations
Pod annotations for all pods can be configured by setting the Helm value global.podAnnotations.
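For example, the two global values above could be set as follows (the class name and annotation are placeholders):
global:
  storage:
    data:
      class: reporting-kit-storage       # name of an existing storage class
  podAnnotations:
    example.com/owner: "build-platform"  # applied to all Reporting Kit pods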
Security context
By default, all pods are created with a security context defining a non-root user for the containers to run as. This is our most tested configuration, and it is secure, so you should leave this enabled if you can. In some environments, most notably in OpenShift clusters, these may need to be disabled for the application to run correctly. This can be done by setting the Helm value global.securityContext.enabled to false.
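For example, to disable the default security contexts (as is typically needed on OpenShift):
global:
  securityContext:
    enabled: false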
2. Develocity configuration
The Develocity Reporting Kit requires an authorized connection to a Develocity instance in order to pull builds. This connection can be configured by setting the values develocity.accessKey and develocity.address.
If you’d prefer to manage the access key secret yourself, you can do so by setting develocity.accessKey.secretName instead of develocity.accessKey. The secret named by this value should contain a single item of data, accessKey, which should be set to a base64 encoded Develocity access key that has the correct Develocity permissions.
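As a sketch, assuming a user-managed secret named develocity-access-key (note that kubectl base64-encodes literal values automatically):
$ kubectl -n develocity-reporting-kit create secret generic develocity-access-key \
    --from-literal=accessKey=aecitwnpfw7h2sp3bl5uhrk5yedk47756obrsmneevvfe6jo2ssa
The values file would then reference the secret by name instead of embedding the key:
develocity:
  address: https://develocity.example.com
  accessKey:
    secretName: develocity-access-key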
Develocity access key
The Reporting Kit must be configured with an access key associated with an account that has the Access build data via the API permission. This permission is exportData in a Develocity unattended config file.
See the Admin Manual for more details on access control and permissions, and the API User Manual for more on access key provision and verifying user granted permissions.
Build data
By default, the Reporting Kit will pull the previous 30 days of data from your Develocity instance. This can be configured by setting the Helm value buildData.days to a positive integer number of days.
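For example, to pull 90 days of historical data instead (the number is purely illustrative):
buildData:
  days: 90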
3. Pod resources
Memory and CPU
The resource requests and limits for each container of each pod can be controlled using Helm values (see the example after the list below). Init containers, where present, use the same resource requests and limits as an appropriate non-init container. The pattern for configuring these resources is to use the Helm values <prefix>.resources.limits.cpu, <prefix>.resources.limits.memory, <prefix>.resources.requests.cpu, and <prefix>.resources.requests.memory, where <prefix> is any of:
- dataSynchronizer
- grafana.main
- grafana.accountInitializer
- hiveMetastore.main
- hiveMetastore.postgres
- minio.main
- trino.coordinator
- trino.worker
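As an illustration of this pattern for the data synchronizer (the figures are placeholders, not recommendations):
dataSynchronizer:
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi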
Storage
- minio.storage.capacity configures the amount of storage available to MinIO for storing Build Scan data.
- hiveMetastore.storage.capacity configures the amount of storage available for storing partition metadata by the Hive metastore. We don't expect this to be very large ordinarily, so you should use the default value unless directed to change it by a member of the Gradle support team.
- grafana.storage.capacity configures the amount of storage available to Grafana's embedded database. We don't expect this to be very large ordinarily, so you should use the default value unless directed to change it by a member of the Gradle support team.
These values need to be provided in the format of a Kubernetes storage size.
Ephemeral storage
Ephemeral container storage is used by Trino workers for their file cache when processing SQL queries.
| We recommend that you change these values only if instructed to do so by a member of the Gradle support team. |
- trino.worker.resources.limits.cacheSize configures the size of the file cache used by Trino query workers for processing SQL queries. When increasing this value, you must also correspondingly increase the values trino.worker.resources.requests.ephemeralStorage and trino.worker.resources.limits.ephemeralStorage, to prevent pod scheduling issues.
- trino.worker.resources.requests.ephemeralStorage configures the minimum amount of ephemeral container storage required in order for Trino worker containers to be scheduled.
- trino.worker.resources.limits.ephemeralStorage configures the maximum amount of ephemeral container storage Trino worker containers can consume.
4. Access control
A few components in the Develocity Reporting Kit use credentials, and these can be configured using Helm values.
Grafana
The frontend of the Reporting Kit is an embedded Grafana service, which displays the visualizations of the build data. By default, this can be viewed anonymously and without authentication.
In this version of the Reporting Kit, there is support for creating an optional single editor account. In addition to viewing the bundled dashboards, this account can also create new dashboards for custom visualizations. To create an editor account, set grafana.editorAccount.username and grafana.editorAccount.password.
You can also instead configure the editor user’s username and password using a user-managed secret. Set grafana.editorAccount.secretName to point to the secret, which must have two data items, user and password.
If you want to disable anonymous access to view the dashboards, set grafana.anonymousAccess to false. This will mean that all access must use the editor account.
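For example, to create an editor account and require it for all access:
grafana:
  anonymousAccess: false
  editorAccount:
    username: "editor"
    password: "showmethedata1234"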
5. Ingress configuration
By default, the Develocity Reporting Kit Helm chart will create a Kubernetes Ingress to serve inbound traffic to the app. If you want to disable the default Ingress in order to use some other routing solution provided by your cluster, set ingress.enabled to false. The ingress created by the Helm chart is referred to in this section as the Helm-managed ingress.
Hostname
By default, the Helm-managed ingress is not restricted to a single hostname, and will attempt to route traffic for all hosts to the Reporting Kit. If deploying in a cluster with other applications installed, you can set ingress.hostname to only route traffic for a single host through to the Reporting Kit.
Customizations
You can provide additional Kubernetes resource annotations to be set on the created ingress. This is done by setting key-value pairs under the Helm value ingress.annotations, in the same way as is described for ingress annotations in the Develocity Helm Chart Configuration Guide.
By default, the created ingress will use the default Kubernetes ingress class provided by the Kubernetes cluster. If you would like to override this to use a specific, fixed ingress class, then set the Helm value ingress.ingressClassName to the name of the ingress class you’d like to use.
The default pathType for the created Kubernetes ingress is Prefix. If you need to instead use the path matching implementation provided by the ingress class, then set the Helm value ingress.pathType to ImplementationSpecific.
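Putting these options together, a values snippet might look like the following (the hostname, ingress class, and annotation are placeholders for your environment):
ingress:
  hostname: reporting-kit.example.com
  ingressClassName: nginx
  pathType: ImplementationSpecific
  annotations:
    example.com/some-annotation: "some-value"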
TLS
By default, the Helm-managed ingress will use plain HTTP only.
To enable HTTPS support, set ingress.ssl.enabled to true and provide a hostname for your installation by setting the Helm value ingress.hostname. By default, if HTTPS support is enabled, the Helm chart will generate and use self-signed SSL certificates. If instead you want to provide custom SSL certificates (trusted or untrusted), for example ones signed by your own internal Certificate Authority, then you can do so by providing the SSL certificate and SSL key as Helm values. This can be done by passing --set-file ingress.ssl.key=/path/to/key/file --set-file ingress.ssl.cert=/path/to/cert/file to Helm when running the installation command, or by providing them inline in the values file as shown below:
ingress:
  ssl:
    enabled: true
    cert: |
      -----BEGIN CERTIFICATE-----
      MIIDKjCCAhKgAwIBAgIRAPNTIHf6/oUuzMKm3ffGNOgwDQYJKoZIhvcNAQELBQAw
      HDEaMBgGA1UEAxMRYXV0by1nZW5lcmF0ZWQtY2EwHhcNMjExMTMwMTU1NDU5WhcN
      ...
      Cn/3yUirFVTslrSYKAemLw8btLO6FDF9dc/lq1o7tKsYVuhEcjqnTah7puJjEN9h
      z+P5RmRxU/kaaFB+Vuw1pRezbaAtZNorVgXnBwrdseY4zLGyhAcGcR9v+VtCiQ==
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpQIBAAKCAQEA4qV8JlqDMi7y85Ykq8dn7uIsi609D6KuFtlc+UvNYjatz0+u
      QzIr3iw//qf7sM8nx8fhGwuWvUWeCE6zbgKjuxDH82J9NQ0ctf70n0qVTeyW1CKR
      ...
      XlOfXr/xvkXA66PROgvVxfwpN/GNrLXFi1HvMg7MVZJUZQpNzpAzw5JTk2MbawOl
      G7tI0qQ6F20e5R4tPpEDKCFZykyvgGMhfLzsvVlrgaVW8QbVK4YWNtQ=
      -----END RSA PRIVATE KEY-----
You can also use a user-managed secret for the SSL key and certificate. This secret should be a Kubernetes TLS Secret, and the name of the secret needs to be provided in the Helm value ingress.ssl.secretName.
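For example, assuming a Kubernetes TLS Secret named reporting-kit-ingress-tls created from existing certificate and key files:
$ kubectl -n develocity-reporting-kit create secret tls reporting-kit-ingress-tls \
    --cert=/path/to/cert/file --key=/path/to/key/file
with the corresponding values:
ingress:
  hostname: reporting-kit.example.com
  ssl:
    enabled: true
    secretName: reporting-kit-ingress-tls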
6. Scaling
The number of Trino workers used to process SQL queries can be configured using the Helm value trino.worker.replicas. For most installations we recommend using the default value, 1.
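For example, to run two Trino workers (only increase this from the default if your installation needs it):
trino:
  worker:
    replicas: 2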
Installation
In this section you will install the Develocity Reporting Kit on your host or cluster.
To install the Reporting Kit, commands will need to be executed on the installation host or on a host with connectivity to your Kubernetes cluster and to the internet.
1. Install K3s (if applicable)
If installing on a host rather than an existing cluster, first install K3s and make it available to the current user:
$ curl -sfL https://get.k3s.io | sh -
$ sudo chown $UID /etc/rancher/k3s/k3s.yaml
$ mkdir -p "${HOME}/.kube"
$ ln -sf /etc/rancher/k3s/k3s.yaml "${HOME}/.kube/config"
Verify that you can interact with the K3s cluster:
$ kubectl get namespace
The expected output should be similar to this:
NAME              STATUS   AGE
default           Active   1h
kube-system       Active   1h
kube-public       Active   1h
kube-node-lease   Active   1h
| For more information on K3s installation, see the K3s Quick-Start Guide and K3s Installation. |
2. Install Helm
The Develocity Reporting Kit requires Helm version 3.5.x (or later) to install.
It is recommended to use the latest available version, as this will have all known security vulnerabilities addressed. The Helm documentation describes the maximum version skew supported between Helm and Kubernetes.
| For more information on installing Helm (including alternate installation approaches), see Installing Helm. |
Install Helm with the following command:
$ curl -qs https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
3. Tell Helm about the Gradle Helm repository
The Develocity Reporting Kit is distributed from the Gradle Helm repository.
Add the Gradle Helm repository to your Helm installation and fetch its contents into the local cache:
$ helm repo add gradle https://helm.gradle.com/
$ helm repo update gradle
Verify that the Develocity Reporting Kit chart is accessible:
$ helm search repo develocity-reporting-kit
This will report the latest version available for the Develocity Reporting Kit chart:
NAME                              CHART VERSION   APP VERSION   DESCRIPTION
gradle/develocity-reporting-kit   1.0.4           1.0.4         Official Develocity Reporting Kit
4. Install Develocity Reporting Kit
Install the Develocity Reporting Kit:
$ helm install \
--create-namespace --namespace develocity-reporting-kit \(1)
develocity-reporting-kit \(2)
gradle/develocity-reporting-kit \(3)
--values values.yaml \(4)
--set-file global.license.file=./develocity.license (5)
| 1 | This example uses develocity-reporting-kit as the namespace, but it can be a custom name. If you use a custom name, update all other example commands accordingly. Running with --create-namespace will create the namespace if it does not already exist.
The namespace option may not be required for OpenShift if oc login has been run and the active project for the current context is set. |
| 2 | This is the Helm release name. It is used by Helm to identify the Develocity Reporting Kit installation. |
| 3 | The Helm chart to install. To install a specific version, use e.g. --version 1.0.4. |
| 4 | The Helm values file containing configuration values as described above. |
| 5 | The Develocity license file (if not already included in your values file). |
5. Confirm that Develocity Reporting Kit is running
At this point, it should be possible to see the Helm release installed:
$ helm --namespace develocity-reporting-kit list
NAME                       NAMESPACE                  REVISION   UPDATED                                STATUS     CHART                            APP VERSION
develocity-reporting-kit   develocity-reporting-kit   1          2024-04-05 16:20:07.618596 +0800 +08   deployed   develocity-reporting-kit-1.0.4   1.0.4
You can inspect the status of the Develocity Pods:
$ kubectl --namespace develocity-reporting-kit get pods
NAME                                 READY   STATUS              RESTARTS   AGE
grafana-5bd84f6bff-jhl8v             0/2     ContainerCreating   0          12s
trino-worker-79f8ddfdd8-k6fc6        0/1     Init:0/1            0          12s
trino-coordinator-7d8794689-jrqn6    0/1     Init:0/1            0          12s
data-synchronizer-8489cd7749-gxv8h   0/1     Init:0/2            0          12s
minio-7b9d87c4c7-tsqss               0/1     ContainerCreating   0          12s
hive-metastore-7bfc5977b8-vwzkf      0/2     ContainerCreating   0          12s
| Sometimes containers will initially fail when they start up because other Pods they depend on are not ready yet. This is normal and expected. In Kubernetes applications, it is idiomatic for containers to repeatedly attempt to start, crash, and be restarted by the Kubelet until their dependencies are ready, because this prevents wasted allocation of requested resources while dependencies are starting up. |
Within several minutes all the Pods should have the status Running:
NAME                                 READY   STATUS    RESTARTS        AGE
minio-7b9d87c4c7-tsqss               1/1     Running   0               8m10s
grafana-5bd84f6bff-jhl8v             2/2     Running   0               8m10s
hive-metastore-7bfc5977b8-vwzkf      2/2     Running   2 (6m23s ago)   8m10s
trino-worker-79f8ddfdd8-k6fc6        1/1     Running   0               8m10s
trino-coordinator-7d8794689-jrqn6    1/1     Running   0               8m10s
data-synchronizer-8489cd7749-gxv8h   1/1     Running   0               8m10s
The Develocity Reporting Kit has a /ping endpoint, which you can use to verify that the application is accessible from your computer.
Connectivity to the Develocity Reporting Kit installation can be tested by running the following command on machines which need to connect to it:
$ curl -sw \\n --fail-with-body --show-error «develocity-reporting-kit-origin»/ping
This should show the text SUCCESS.
Once all Pods have a status of Running and the system is up and connected, you can interact with the Develocity Reporting Kit instance by visiting its address in a web browser. The root path of the Reporting Kit application is currently a generic Grafana landing page. If you navigate to the /dashboards path, you should be able to see a list of dashboard folders containing different dashboards for you to explore.
Within 10 minutes of the instance being up and running, data from your Develocity instance should have started syncing to your Reporting Kit instance, which you will be able to see in the dashboards.
At this point, the Develocity Reporting Kit instance is installed and running.
Airgap Installation
In an airgap installation, the container images can either be pulled from an image registry available on an internal network that is accessible from the cluster, or, if you are installing the Reporting Kit on K3s, they can be loaded directly into the Kubernetes node, meaning that they do not need to be pulled from a registry.
Airgap installations require a specific entitlement on your license. Please contact Gradle if you need an Airgap-enabled license.
Airgap installation involves downloading files, transferring them, installing supporting software, and running helm install.
We recommend you save all the files into a single transfer directory, so that it is easy to transfer to the host where you are installing the Develocity Reporting Kit. For example:
$ mkdir develocity-reporting-kit-files && cd develocity-reporting-kit-files
1. Download required files
a. Download K3s (if applicable)
If installing onto a standalone host using K3s, you also need to download the K3s files for your airgapped installation host. Otherwise, this step can be skipped.
Please make note of the checksums provided on the K3s releases page. The checksums can be found under Assets/sha256sum-amd64.txt for the release that you are planning to use.
To get the name of the release, please execute the following command:
$ curl -s https://api.github.com/repos/k3s-io/k3s/releases/latest | jq -r '.tag_name'
Then you can download and verify the K3s images, binary, and install script:
$ curl -LO https://github.com/k3s-io/k3s/releases/latest/download/k3s
$ curl -LO https://github.com/k3s-io/k3s/releases/latest/download/k3s-airgap-images-amd64.tar.gz
$ curl -L -o install_k3s.sh https://get.k3s.io
$ echo "<sha 256 checksum of 'k3s'> k3s" | sha256sum -c
$ echo "<sha 256 checksum of 'k3s-airgap-images-amd64.tar.gz'> k3s-airgap-images-amd64.tar.gz" | sha256sum -c
If you are running Red Hat Enterprise Linux with SELinux enabled, you will also need to download and verify the K3s SELinux Policy package:
$ curl -L -o k3s-selinux.el8.noarch.rpm https://github.com/k3s-io/k3s-selinux/releases/download/v1.2.stable.2/k3s-selinux-1.2-2.el8.noarch.rpm
$ echo "e949fde3e0255c6b5ce3f52db4277897882ed1664e87bfcf5122df5e96559340 k3s-selinux.el8.noarch.rpm" | sha256sum -c
b. Download Helm
Download and verify the Helm binary:
$ curl -L -o helm-linux-amd64.tar.gz https://get.helm.sh/helm-v3.15.4-linux-amd64.tar.gz
$ echo "bbb6e7c6201458b235f335280f35493950dcd856825ddcfd1d3b40ae757d5c7d helm-linux-amd64.tar.gz" | sha256sum -c
c. Download installation bundle
Save your Develocity license to the transfer directory as develocity.license.
Download and verify the airgap bundle:
$ curl -LOJd @develocity.license https://registry.gradle.com/airgap/develocity-reporting-kit-1.0.4-bundle.tar.gz
$ curl -LOJd @develocity.license https://registry.gradle.com/airgap/develocity-reporting-kit-1.0.4-bundle.tar.gz.sha256
$ sha256sum -c develocity-reporting-kit-1.0.4-bundle.tar.gz.sha256
If checksum verification fails, check the contents of the downloaded files for error messages. If the error message indicates that your license is invalid/expired/not airgap enabled, you will need to request an updated license file by contacting your customer success representative.
| Instead of running the above curl commands, you can download the airgap bundle by navigating to https://registry.gradle.com/airgap/develocity-reporting-kit in your browser and following the instructions on the page. |
2. Prepare a Helm values file
Follow the instructions at Helm Configuration and return to this point with a complete values.yaml file.
If you are installing onto a standalone host using K3s, ensure your values.yaml file includes appropriate configuration for the image pull policy.
Before transferring files to the host where you will install the Develocity Reporting Kit, move your Helm values file into the transfer directory.
3. Transfer files
Check that the transfer directory contains all of the following files (additional files are fine):
- helm-linux-amd64.tar.gz
- develocity.license
- values.yaml
- develocity-reporting-kit-1.0.4-bundle.tar.gz
- Optional: SSL certificates
If you are installing on a standalone host, check that the following files are also there:
- k3s-airgap-images-amd64.tar.gz
- k3s
- install_k3s.sh
- k3s-selinux.el8.noarch.rpm (only if your installation host is using SELinux)
Once you’ve verified that you have the required files, transfer them to the host where you are installing the Reporting Kit.
4. Install K3s (if applicable)
Only do this step if you are installing the Develocity Reporting Kit onto a standalone host using K3s.
Follow these instructions on the host where you are installing the Reporting Kit with your transferred files present in the current directory.
- If you are running Red Hat Enterprise Linux with SELinux enabled:
  - Install the container-selinux package. This package can be found in Red Hat Enterprise Linux's default repository; install it on the airgapped server just like you would install any other package. If your organization has an internal mirror of the Red Hat package repositories, you can run:

    $ sudo yum install -y container-selinux

  - Install the K3s SELinux Policy package:

    $ sudo yum install -y k3s-selinux.el8.noarch.rpm

- Install K3s and make it available to the current user:

  $ sudo mkdir -p /var/lib/rancher/k3s/agent/images/
  $ sudo cp k3s-airgap-images-amd64.tar.gz /var/lib/rancher/k3s/agent/images/
  $ (cd /var/lib/rancher/k3s/agent/images/ && sudo gunzip -f k3s-airgap-images-amd64.tar.gz)
  $ sudo cp k3s /usr/local/bin
  $ sudo chmod a+rx /usr/local/bin/k3s
  $ sudo chmod a+rx ./install_k3s.sh
  $ INSTALL_K3S_SKIP_DOWNLOAD=true ./install_k3s.sh
  $ sudo chown $UID /etc/rancher/k3s/k3s.yaml
  $ mkdir -p "${HOME}/.kube"
  $ ln -sf /etc/rancher/k3s/k3s.yaml "${HOME}/.kube/config"

- Verify that you can interact with the K3s cluster:

  $ kubectl get namespace

  The output should be similar to this:

  NAME              STATUS   AGE
  default           Active   1h
  kube-system       Active   1h
  kube-public       Active   1h
  kube-node-lease   Active   1h
5. Install Helm
Follow these instructions on the host where you are installing the Reporting Kit with your transferred files present in the current directory.
To install Helm:
$ tar -zxvf helm-linux-amd64.tar.gz
$ sudo mv linux-amd64/helm /usr/local/bin/helm
6. Install Develocity Reporting Kit
Follow these instructions on the host where you are installing the Reporting Kit with your transferred files present in the current directory.
Expand the bundle:
$ tar zxvf develocity-reporting-kit-1.0.4-bundle.tar.gz
Import Develocity Reporting Kit images
If you are installing the Reporting Kit onto a standalone host with K3s, import the images into K3s using the following command:
$ sudo k3s ctr images import develocity-reporting-kit-1.0.4-images.tar
Otherwise, upload the images to your internal image registry:
| You must be logged in to the registry prior to running these commands. |
$ ./upload-images.sh --registry=registry.example.com/develocity-reporting-kit
Install the Develocity Reporting Kit Helm chart in airgap mode
Install the Develocity Reporting Kit using Helm:
$ helm install \
--create-namespace --namespace develocity-reporting-kit \(1)
develocity-reporting-kit \(2)
develocity-reporting-kit-1.0.4.tgz \
--values values.yaml \(3)
--set-file global.license.file=./develocity.license(4)
| 1 | This example uses develocity-reporting-kit as the namespace, but it can be a custom name. If you use a custom name, update all other example commands accordingly. |
| 2 | This is the Helm release name. It is used by Helm to identify the Reporting Kit installation. |
| 3 | The Helm values file with configuration values, including items such as the Develocity address. |
| 4 | The Develocity license file (if not already included in the values file). |
7. Confirm that Develocity Reporting Kit is running
At this point, it should be possible to see the Helm release installed:
$ helm --namespace develocity-reporting-kit list
NAME                       NAMESPACE                  REVISION   UPDATED                                STATUS     CHART                            APP VERSION
develocity-reporting-kit   develocity-reporting-kit   1          2024-04-05 16:20:07.618596 +0800 +08   deployed   develocity-reporting-kit-1.0.4   1.0.4
You can inspect the status of the Develocity Pods:
$ kubectl --namespace develocity-reporting-kit get pods
NAME                                 READY   STATUS              RESTARTS   AGE
grafana-5bd84f6bff-jhl8v             0/2     ContainerCreating   0          12s
trino-worker-79f8ddfdd8-k6fc6        0/1     Init:0/1            0          12s
trino-coordinator-7d8794689-jrqn6    0/1     Init:0/1            0          12s
data-synchronizer-8489cd7749-gxv8h   0/1     Init:0/2            0          12s
minio-7b9d87c4c7-tsqss               0/1     ContainerCreating   0          12s
hive-metastore-7bfc5977b8-vwzkf      0/2     ContainerCreating   0          12s
| Sometimes containers will initially fail when they start up because other Pods they depend on are not ready yet. This is normal and expected. In Kubernetes applications, it is idiomatic for containers to repeatedly attempt to start, crash, and be restarted by the Kubelet until their dependencies are ready, because this prevents wasted allocation of requested resources while dependencies are starting up. |
Within several minutes all the Pods should have the status Running:
NAME                                 READY   STATUS    RESTARTS        AGE
minio-7b9d87c4c7-tsqss               1/1     Running   0               8m10s
grafana-5bd84f6bff-jhl8v             2/2     Running   0               8m10s
hive-metastore-7bfc5977b8-vwzkf      2/2     Running   2 (6m23s ago)   8m10s
trino-worker-79f8ddfdd8-k6fc6        1/1     Running   0               8m10s
trino-coordinator-7d8794689-jrqn6    1/1     Running   0               8m10s
data-synchronizer-8489cd7749-gxv8h   1/1     Running   0               8m10s
The Develocity Reporting Kit has a /ping endpoint, which you can use to verify that the application is accessible from your computer.
Connectivity to the Develocity Reporting Kit installation can be tested by running the following command on machines which need to connect to it:
$ curl -sw \\n --fail-with-body --show-error «develocity-reporting-kit-origin»/ping
This should show the text SUCCESS.
Once all Pods have a status of Running and the system is up and connected, you can interact with the Develocity Reporting Kit instance by visiting its address in a web browser. The root path of the Reporting Kit application is currently a generic Grafana landing page. If you navigate to the /dashboards path, you should be able to see a list of dashboard folders containing different dashboards for you to explore.
Within 10 minutes of the instance being up and running, data from your Develocity instance should have started syncing to your Reporting Kit instance, which you will be able to see in the dashboards.
At this point, the Develocity Reporting Kit instance is installed and running.
Post-Installation
Once the Develocity Reporting Kit has been installed, files used during installation are not required at runtime and can be removed if desired. However, the following files may be useful to preserve, as they may aid in future upgrades or maintenance:
- Helm values files
- SSL certificates
- Develocity license
| These files contain sensitive information and should be handled with care. |
Appendix A: Release history
1.0.4
- [NEW] Upgrade components to their latest versions.
- [NEW] The Project Volume dashboard now has a trends panel, matching the equivalent panel in the Global Volume dashboard.
- [FIX] Airgap installations onto a standalone host should not require providing a license file.

1.0.3
- [NEW] Support airgap installations.
- [FIX] Hive Metastore repeatedly evicted due to disk pressure caused by excessive logging.

1.0.2
- [FIX] Add the ability to remove pod security contexts, to support OpenShift installations.
- [FIX] Store Trino logs in mounted ephemeral container storage.

1.0.1
- [NEW] Users can now create custom Grafana dashboards which persist across restarts of the application.
- [NEW] A configurable, limited number of days of historical data are pulled from Develocity by default.
- [FIX] Making the most recent data available for querying is now always prioritised over making historical data available for querying.
- [FIX] Increase default MinIO volume size.
- [FIX] The oldest partition of stored build data was not being made available for querying.
- [FIX] The project selection dropdown in the project-volume dashboard is now populated with a default value.
- [FIX] A calculation within one of the Build Cache Errors dashboard panels was incorrect.
- [FIX] Clean up intermediate processing data more frequently by default to avoid storage bloat.

1.0
- Initial release.
Appendix B: Develocity compatibility
| Develocity version | Compatibility |
|---|---|
| ≥ 2024.1 | fully supported |
| 2023.4 | supported, except for built-in dashboards that rely on data introduced in Develocity 2024.1 |
| < 2023.4 | not supported |
Appendix C: Randomly generated secrets
By default, the Helm chart generates a number of random secrets that allow the different components to talk to each other securely. In some installations, particularly those with infrastructure that re-runs Helm and compares chart output, the randomly generated values can be a problem by causing the infrastructure to falsely decide that an installation is out-of-date.
For these values, it’s possible instead to store random values as user-managed secrets, then configure the Helm chart with the secret names. To fully eliminate random values from the generated Kubernetes manifest, the following values must be set:
- grafana.adminAccount.secretName, secret must have a single password property.
- minio.rootAccount.secretName, needs to have two data properties, user and password.
- minio.dataSynchronizerAccount.secretName, needs to have two data properties, user and password.
- minio.trinoAccount.secretName, needs to have two data properties, user and password.
- minio.trinoCacheAccount.secretName, must have a single password property.
- ingress.ssl.secretName (or set both ingress.ssl.cert and ingress.ssl.key directly in the chart), only necessary if TLS is enabled.
Example Helm configuration with all potentially random values coming from user-managed secrets:
develocity:
  address: "..."
  accessKey: "..."
grafana:
  adminAccount:
    secretName: example-grafana-admin-account-secret
minio:
  rootAccount:
    secretName: example-minio-root-account-secret
  dataSynchronizerAccount:
    secretName: example-minio-data-sync-account-secret
  trinoAccount:
    secretName: example-minio-trino-account-secret
ingress:
  hostname: "..."
  ssl:
    enabled: true
    secretName: example-ingress-ssl-secret
Appendix D: Installing into the same Kubernetes cluster as Develocity
It is possible to install the Reporting Kit into the same Kubernetes cluster as Develocity.
If Develocity is installed using the standalone Helm chart into a K3s cluster on a single host, please see below for resource considerations.
Installing into the same Kubernetes namespace as Develocity
When configuring the Develocity Reporting Kit to pull data from a Develocity instance that runs in the same namespace as the Develocity Reporting Kit, set the develocity.address Helm value to the Develocity installation’s gradle-proxy Kubernetes service address. The URL should be http://gradle-proxy:80.
develocity:
  address: "http://gradle-proxy:80"
  accessKey: "..."
Installing into a different Kubernetes namespace than Develocity
When configuring the Develocity Reporting Kit to pull data from a Develocity instance that runs in a different namespace of the same cluster as the Develocity Reporting Kit, set the develocity.address Helm value to a cluster-local address based on the Develocity namespace. The URL should be http://gradle-proxy.«develocity-namespace».svc.cluster.local.
So for a Develocity instance installed in the develocity namespace, this would look like:
develocity:
  address: "http://gradle-proxy.develocity.svc.cluster.local"
  accessKey: "..."
Standalone Develocity resources
We do not recommend installing the Reporting Kit into the same standalone instance as Develocity. If there are no other options, it can be done by putting some additional resource limits in place for the Reporting Kit.
Please consider the Reporting Kit resource requirements and Develocity's standalone resource requirements. At the time of writing, the most significant requirements were 64 GB of RAM for the Reporting Kit and 16 GB of RAM for Develocity. If your Develocity instance has been tuned to use more memory, use that figure in the calculations below.
If your instance has enough memory for both applications, proceed without further configuration. It is possible to install both Develocity and the Reporting Kit onto a single host with, e.g., only 64 GB of memory by tuning down the Reporting Kit's memory consumption. To do so, make the following adjustment:
trino:
  worker:
    resources:
      requests:
        memory: 24Gi
      limits:
        memory: 32Gi
If your host has less memory, you will need to adjust these values lower.
Please note that when running the Reporting Kit with constrained memory, it is possible that some data sets will not be able to populate the dashboards. If you see errors in the dashboards relating to memory, try requesting a shorter window, e.g. just 2 or 7 days instead of the default. This will require less memory to process.
Appendix E: Storing data in a specified directory for a standalone installation using K3s
If you intend to install the Reporting Kit onto a single node using K3s, you may want to specify a particular directory on the installation machine in which the application should store its data.
Doing this requires the following steps:
| If you already have data stored somewhere that isn’t your mounted data directory, these steps will not move the data into the right place. To get help with that problem, please contact Gradle support. |
1. Patch the K3s local-path-provisioner config
This instructs the local-path-provisioner to store Kubernetes application data within the directory of your choosing.
cat > local-path-config-patch.yaml << EOF
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/your/storage/directory"] (1)
        }
      ]
    }
EOF
kubectl -n kube-system patch configmap/local-path-config --patch-file local-path-config-patch.yaml (2)
| 1 | The directory in which you want the application to store data. |
| 2 | This applies the config patch. |
2. Recreate the K3s local-path-provisioner pod
This is done by deleting the pod, which will cause it to be immediately recreated and pick up the configuration change.
kubectl -n kube-system delete "$(kubectl -n kube-system get pods --selector "app=local-path-provisioner" --output "name")"
The pod should come back up within a few seconds. From then on, all newly provisioned storage in the K3s cluster using the local-path storage class (which is the default) will be stored in the directory you configured in the first step.
Appendix F: Installing the Develocity Reporting Kit on OpenShift
If you are planning to install the Reporting Kit in an OpenShift cluster, you need to be running at least version 1.0.2, and you should configure your installation to disable pod security contexts.