By default, Develocity will store data in a database that runs in a container as part of the Develocity installation, and which stores its data in a local file system or persistent volume. This is referred to as the “embedded database”. Starting with Develocity 2021.3, it is possible to instead store data in a database that is outside of the Develocity application. This is referred to as a “user-managed database”.
This document describes how to switch a Develocity instance from its embedded database to a user-managed database, and how to migrate the data to the new database.
It is strongly recommended to read the entire guide before starting any migration steps.
Data to be migrated
This guide presents a number of different migration approaches, all addressing the same problem: migrating a potentially large set of data with minimal service downtime.
Typically, the vast majority of data in a Develocity database is Build Scan data, which can be orders of magnitude larger than the rest. For example, some Develocity installations have terabytes of Build Scan data, with the rest in the order of megabytes. However, the rest of the data in the database is more important for continuous operation. This is data such as app configuration, Build Cache node registration and topology, users, permissions and access keys, which are important to migrate and have available immediately when switching between databases.
Many installations will have a small enough database that they can follow the first and simplest strategy of taking down the service and restoring a database backup to a new database. Other strategies involve either skipping migrating Build Scan data completely, or copying it gradually after switching to using the new database.
Data not migrated
Built-in Build Cache entries are not migrated with any of the strategies in this document. It is assumed that these will be repopulated by simply running the tasks or goals again in the relevant builds, and that this is an acceptable cost when doing this sort of system change.
Contact Develocity Support if this is of concern for your installation.
Prerequisites
Develocity version
Support for a user-managed database started with Develocity 2021.3. The system should be running at least that version. We recommend that you upgrade to the latest version of Develocity before performing a user-managed database migration.
Some migration strategies involve setting up a second Develocity instance. In these cases, the two instances should be running the same Develocity version.
Database scripts
All migration strategies in this guide require running some scripts to set up the new database and copy over some data and configuration settings. Please download them from the appendix prior to starting the migration.
User-managed database instance
This guide assumes that you have a compatible PostgreSQL database instance ready to migrate data to. A specific database does not need to have been created (using e.g. CREATE DATABASE) - this will happen as part of the steps below. For more details on database setup, please see the relevant section in the standalone installation manual or the Kubernetes installation manual.
Using PgBouncer with a user-managed database is not supported. Develocity uses and depends on PostgreSQL connection parameters that are not supported by PgBouncer.
PostgreSQL version compatibility
Develocity versions 2021.3 to 2022.1.1 are compatible with PostgreSQL version 12.
Develocity versions 2022.2 to 2024.2.6 are compatible with PostgreSQL versions 12, 13 and 14.
Develocity 2024.3 and later are compatible with PostgreSQL versions 14, 15, 16 and 17. In Develocity 2024.3, PostgreSQL versions 12 and 13 are deprecated, but still expected to work. Note that PostgreSQL 12 is now EOL and will no longer receive security and bug fixes. If your Develocity database is running PostgreSQL 12 or 13, please upgrade your database instance to at least version 14.
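As a quick pre-migration sanity check, you can compare the server's reported major version against the minimum required by recent Develocity versions. A minimal sketch, where the hard-coded version string stands in for the output of a query such as SHOW server_version; run against your instance:

```shell
# Sketch: fail fast if the PostgreSQL major version is below 14 (the minimum
# for Develocity 2024.3 and later). The version string below is a stand-in
# for the output of something like:
#   psql -h db.myhost.com -U postgres -tAc "SHOW server_version;"
version="16.4"
major=${version%%.*}
if [ "$major" -lt 14 ]; then
  echo "PostgreSQL $version is below the minimum supported major version (14)"
else
  echo "PostgreSQL $version is supported"
fi
```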
Do not use the specific PostgreSQL minor versions that were released on Nov 14th, 2024. These versions of PostgreSQL have a bug that breaks login roles, causing database objects to be created with unexpected ownership. This bug was immediately fixed in an out-of-cycle release.
Additional Develocity instance
Some migration strategies described below involve copying Build Scan data after the initial migration of everything else. This requires setting up a new Develocity instance. Please check the feasibility, time and cost of setting up a new instance when deciding on a migration strategy.
Develocity instance promotion
Some migration strategies described below require promoting a new Develocity instance to become your main instance that builds connect to. For example, Build Scans will be published to this new instance once the promotion happens. This switch can occur in several ways, depending on your infrastructure setup:
- If you have a reverse proxy of some sort that sits in front of Develocity, this can likely be easily configured to point to a different Develocity instance.
- Many setups use a DNS entry to point to their primary Develocity instance that is unrelated to the specific host that Develocity is installed on. Promoting a new instance then involves updating the DNS entry.
- Alternatively, build configuration can be updated to point to a different instance. All build plugins and extensions, Build Cache nodes and Test Distribution agents must be updated to point to the new server. This is often more onerous than updating network infrastructure, but may be a reasonable strategy, depending on how centralized or duplicated your organization’s build configuration is.
If there is no easy promotion path, there are migration strategies that allow use of the existing instance, with certain tradeoffs. Before proceeding with a migration, you should understand the possible promotion strategies for your infrastructure to allow evaluation of the tradeoffs described below.
Service downtime during a backup
Some migration strategies below involve taking a backup of the embedded database to allow setting up a second Develocity instance. Knowledge of your current backup duration is useful here, to help in evaluating likely downtime during the migration. To get an idea of your current backup duration, you can inspect the timestamps in the backup history logs, which are written by the database-tasks container.
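For example, given a start and end timestamp taken from the backup history logs, the duration can be computed directly. A minimal sketch, using hypothetical placeholder timestamps and GNU date:

```shell
# Sketch: compute a backup's duration from two timestamps, as found in the
# backup history logs. These example timestamps are placeholders.
start="2024-05-01T02:00:00Z"
end="2024-05-01T02:42:30Z"
# GNU date (-d) is assumed; on macOS/BSD, use gdate or adjust accordingly
duration=$(( $(date -d "$end" +%s) - $(date -d "$start" +%s) ))
echo "backup took ${duration}s"
```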
Build Scan copying
Some migration strategies below involve copying Build Scans into the new database. This requires a host on which to run Develocityctl. This may be a host with Develocity installed, or may need to be a different host, depending on your network settings and Develocity instances. See the appendix for more details.
Disk space cleanup
In all strategies below, the data directory or persistent volume on which the embedded database is stored is left in place until a final cleanup step. This means that if there is any unexpected issue during the migration, the original data is still where it was, so it should be possible to revert any changes and resume using the embedded database. Once all other steps have been completed it should be safe to clean this up as described below.
Migration strategies
Different strategies are presented here, each with different trade-offs. Please read through them and choose the best for your build and network infrastructure.
Recommended options:
- Use the Migrate all data up-front strategy if your database is small enough. See here to measure how long the migration would take.
- Use the Migrate all non-Build-Scan data strategy if you can tolerate losing historical Build Scan data.
- Use the Restore backup to a secondary Develocity instance, connect primary to new database strategy if you can tolerate historical Build Scan data being only available in a secondary instance, can set up such an instance, and can tolerate downtime while a backup is run.
- If you need historical and recent Build Scan data together on the same Develocity instance:
  - Use the Set up and promote new Develocity instance, copy Build Scan data to it strategy if you can easily promote a new instance.
  - Use the Copy Build Scan data from second, temporary instance strategy if you cannot easily promote a new instance.
- Please contact Develocity support for assistance in picking the correct migration strategy for your installation.
Strategy 1: Migrate all data up-front, offline
This approach migrates all data in one go, while the application is unavailable.
Tradeoffs
Pros:
- All data copied in a single step
Cons:
- Requires downtime while data is dumped and restored. This will be significant for installations with a lot of data.
Steps
- Consider doing a test run, to measure the likely downtime and ensure that it is acceptable for your organization.
- Perform initial database setup.
- Stop the enterprise app while leaving the database running. This ensures that no data is lost when switching to the new database. Your Develocity instance will be unavailable at this point.
- Run the data migration script, making sure to pass the --include-build-scans parameter. This copies all data to the new database.
- Configure your Develocity instance to point to your new database.
- Wait for Develocity to complete restarting.
- Smoke test your Develocity instance to ensure that it’s operational.
- Test that your normal builds are functioning, including Build Scan publishing, Build Cache usage and Test Distribution, depending on which features are used in your builds.
- Once you are satisfied that the instance attached to the new database is fully operational and has the correct data, it is safe to clean up the old data.
Strategy 2: Migrate all non-Build-Scan data
This approach migrates all data except for Build Scan data, which is not retained.
Steps
- Perform initial database setup.
- Stop the enterprise app while leaving the database running. This ensures that no data is lost when switching to the new database. Your Develocity instance will be unavailable at this point.
- Run the data migration script. This copies all non-Build-Scan data to the new database.
- Configure your Develocity instance to point to your new database.
- Wait for Develocity to complete restarting.
- Smoke test your Develocity instance to ensure that it’s operational.
- Test that your normal builds are functioning, including Build Scan publishing, Build Cache usage and Test Distribution, depending on which features are used in your builds.
- Once you are satisfied that the instance attached to the new database is fully operational and has the correct data, it is safe to clean up the old data.
Strategy 3: Restore backup to a secondary Develocity instance, connect primary to new database
This approach takes a backup of the data, starts a new, temporary instance of Develocity with that data, and configures the existing instance to point to the user-managed database. All new data will be on the primary instance, and historical data can be viewed on the secondary instance. After a period of time, the secondary instance can be decommissioned.
Tradeoffs
Pros:
- All data is available immediately
- No need to promote an instance
Cons:
- Historical data is only available on the second instance
- Resources are required for the second instance as long as the historical data on it is needed
- Service will be down while the database backup is taken
- A potentially large backup must be copied to a new instance
Steps
- Perform initial database setup.
- Create and install a new Develocity instance. This instance will be a secondary Develocity instance that provides access to historical data. It will need enough storage to hold both a copy of the database and a backup of it. Configure the instance to connect to its embedded database.
- On the primary instance, stop the enterprise app while leaving the database running. This ensures that no data is lost when switching to the new database. Your Develocity instance will be unavailable at this point.
- Run the data migration script.
- Take a backup of the database on your primary Develocity instance. See the administration manual for Helm-based installations.
- Configure the primary Develocity instance to point to the new database.
- Smoke test the primary instance to ensure that it’s operational.
- Test that your normal builds are functioning, including Build Scan publishing, Build Cache usage and Test Distribution, depending on which features are used in your builds.
- At this point, your primary instance is now connected to your new database and is storing its data there. New Build Scan data will be published to the database, but Build Scan history will not be present.
- Copy the backup to your secondary Develocity instance.
- Stop Develocity on the secondary instance.
- Restore the backup on your secondary instance. See the administration manual for Helm-based installations.
- Start the secondary instance up again.
- At this point, you can access historical Build Scans and trends data up to the backup point on the secondary instance.
- Keep the secondary instance running as long as the historical data is useful. It can be decommissioned on your preferred schedule.
Strategy 4: Set up and promote new Develocity instance, copy Build Scan data to it
This approach migrates all non-Build-Scan data and configuration to a new user-managed database, starts a new instance of Develocity connected to that database, promotes the new instance, and then copies Build Scan data to the new instance.
Tradeoffs
Pros:
- All data is copied eventually
- A second volume of storage capable of holding the database is not required
- Short downtime
Cons:
- Historical data is only available on the original instance initially
- Resources for a second Develocity instance are required while Build Scan data is copied
- Requires the ability to promote a new instance
Steps
- Perform initial database setup.
- Create and install a new Develocity instance. This instance will be the new Develocity instance at the end of this procedure, so it needs to be provisioned (CPU, memory) at least as well as the current instance, with the exception that it does not need a large disk to handle Build Scan data in the database. At this stage, install it configured to use the embedded database.
- Ensure that any pre-promotion steps have been completed.
- Ensure that you have run through the Build Scan pre-copy verification steps.
- On the original instance, stop the enterprise app while leaving the database running. This ensures that no data is lost when switching to the new database. Your Develocity instance will be unavailable at this point.
- Run the data migration script.
- Configure the new Develocity instance to point to the new database.
- Smoke test the new instance to ensure that it’s operational.
- Promote the new Develocity instance, using whatever promotion strategy is suitable for your environment, as discussed above.
- Test that your normal builds are functioning using the new instance, including Build Scan publishing, Build Cache usage and Test Distribution, depending on which features are used in your builds.
- At this point, your new instance is now the main Develocity instance for your organization. New Build Scan data will be published to it, but Build Scan history will not be present. If your original instance’s web interface is still available after the new instance is promoted, historical Build Scan data up to the switchover point will be available at that URL.
- Run Develocityctl. It will copy Build Scans from your original instance to the new one, starting with the most recent, so that trends data for the most recent Build Scans is restored first.
- At any point, if you decide that enough historical data has been copied to the new instance, you can terminate Develocityctl.
- Once you are satisfied that the new instance is fully operational and has the correct data, the original instance can be decommissioned on your preferred schedule.
Strategy 5: Copy Build Scan data from second, temporary instance
This approach takes a backup of the data, starts a new, temporary instance of Develocity with that data, configures the existing instance to point to the user-managed database and then copies Build Scan data from the temporary instance.
Tradeoffs
Pros:
- All data is copied eventually
- No need to promote a new instance
Cons:
- Historical data is only available on the second instance initially
- Resources for a second Develocity instance are required while Build Scan data is copied
- Service will be down while the database backup is taken
- A potentially large backup must be copied to the new instance
Steps
- Perform initial database setup.
- Create and install a new Develocity instance. This instance will be a temporary Develocity instance that is used by Develocityctl to copy Build Scan data into the new database. It will need enough storage to hold both a copy of the database and a backup of it. Configure the instance to connect to its embedded database.
- Ensure that you have run through the Build Scan pre-copy verification steps.
- On your original instance, stop the enterprise app while leaving the database running. This ensures that no data is lost when switching to the new database. Your Develocity instance will be unavailable at this point.
- Run the data migration script.
- Take a backup of the database on your original Develocity instance. See the administration manual for Helm-based installations.
- Configure the original Develocity instance to point to the new database.
- Smoke test the original instance to ensure that it’s operational.
- Test that your normal builds are functioning, including Build Scan publishing, Build Cache usage and Test Distribution, depending on which features are used in your builds.
- At this point, your original instance is now connected to your database and is storing its data there. New Build Scan data will be published to it, but Build Scan history will not be present.
- Copy the backup to your new Develocity instance.
- Stop Develocity on the new instance.
- Restore the backup on your new instance. See the administration manual for Helm-based installations.
- Start the new instance up again.
- At this point, you can access historical Build Scans and trends data up to the backup point on the new instance if necessary.
- Start Develocityctl. It will copy Build Scan data from your new, temporary instance back to the original instance, which will store it in the new database, starting with the most recent, so that trends data for the most recent Build Scans is restored first.
- At any point, if you decide that enough historical data has been copied to the new database, you can terminate Develocityctl.
- Once you are satisfied that the new database is fully operational and has the correct data, the temporary Develocity instance can be decommissioned on your preferred schedule.
- It is also safe to clean up the old data on the original instance at this point.
Appendix A: Testing a full migration to determine likely downtime
This method tests how long the system would likely be unavailable during a migration by performing one as a test run.
- Perform initial database setup. If you are not providing superuser credentials for the database to Develocity, set the passwords on the new accounts.
- Run the data migration script, ensuring that Build Scan data is copied, logging the starting and ending time. For example:
date ; ./migrate-from-embedded.sh --include-build-scans [other args] ; date
The time utility can also be used to measure duration, if it’s available in your environment.
- At any point, if the process has taken longer than would be acceptable, it may be terminated.
When the script completes, the difference between the initial and final logged timestamps is an estimate of how long the system would be unavailable if data is migrated in this fashion. This can then be used to determine if a full up-front migration is a viable strategy for your organization.
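The measurement above can also be taken programmatically by capturing epoch timestamps around the migration command. A minimal sketch, where sleep 1 stands in for the actual migration script invocation:

```shell
# Sketch: measure the migration duration directly using epoch timestamps.
# 'sleep 1' stands in here for:
#   ./migrate-from-embedded.sh --include-build-scans [other args]
start=$(date +%s)
sleep 1
end=$(date +%s)
echo "migration took $(( end - start ))s"
```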
After completing the test, the user-managed database should be dropped and recreated to have a clean database for the final migration.
Appendix B: Initial database setup
Normally, when connecting to a user-managed database, Develocity is configured with database superuser credentials. These are used to set up database schemas and less privileged accounts that it then uses to access the database day-to-day.
It is possible to configure Develocity without providing superuser credentials. In that configuration, once per major release, a database setup script must be run manually. This sets up the schemas and other accounts, and Develocity is then configured with credentials for the other accounts.
When performing any of the migration strategies in this document, this script must also be run.
Steps
- Download the database script bundle from the appendix.
- Create your database using CREATE DATABASE or an equivalent in your database provider interface.
- Run the setup script on a host that can connect to your database:
PGPASSWORD="xxx-postgres-user-password" ./setup.sh --host=db.myhost.com
Run the script without arguments to see full options (e.g. different username, non-default port etc.).
This will create two accounts: ge_app and ge_migrator.
- If your Develocity installation will not be configured with superuser credentials, set the passwords for the new accounts, using psql or an equivalent query tool:
psql -h db.myhost.com -U postgres
ALTER USER ge_app PASSWORD 'the_app_password'
psql -h db.myhost.com -U postgres
ALTER USER ge_migrator PASSWORD 'the_migrator_password'
psql -h db.myhost.com -U postgres
ALTER USER ge_monitor PASSWORD 'the_monitor_password'
These passwords will be required when configuring Develocity to connect to the database.
Appendix C: Stopping the Develocity application while leaving its embedded database running
This is done to ensure that data such as user access keys and manually defined roles, build cache node registrations and topology, and Test Distribution agent pools do not have changes which are lost when switching databases.
Helm-based installations
- Run the following commands:
kubectl --namespace=«your-namespace» scale --replicas=0 deployment/gradle-enterprise-app
kubectl --namespace=«your-namespace» scale --replicas=0 deployment/gradle-keycloak
Or, for an OpenShift install:
oc scale --replicas=0 deployment/gradle-enterprise-app
oc scale --replicas=0 deployment/gradle-keycloak
(Legacy) Before version 2023.3
- Run the following commands:
kubectl --namespace=«your-namespace» scale --replicas=0 statefulset/gradle-enterprise-app
kubectl --namespace=«your-namespace» scale --replicas=0 statefulset/gradle-keycloak
Or, for an OpenShift install:
oc scale --replicas=0 statefulset/gradle-enterprise-app
oc scale --replicas=0 statefulset/gradle-keycloak
Appendix D: Running the database migration script
Steps
- Download the database script bundle from the appendix.
- Run the migration script:
DESTINATION_PASSWORD="xxx" ./migrate-from-embedded.sh --installation-type=<kubernetes|openshift> --destination-host=my-host.db.com
Run the script without arguments to see full options (e.g. different username, non-default port etc.).
- Verify that the script terminates with a "Database migration complete" message.
If any errors occur, it is recommended to drop and recreate your database, then restart from the first steps of your chosen strategy.
Appendix E: Configuring a Develocity instance to connect to your new database
While following one of the above strategies, you will at some stage need to configure a Develocity instance to connect to your new database. This may be your original instance, or another instance, depending on which migration strategy you take.
For Helm-based installations
To configure an existing Helm-based Develocity installation (either standalone or Kubernetes) to connect to your new database, you need to run helm upgrade for your installation, using updated Helm values. In particular, the new values need to specify that Develocity should connect to your new database, as described in the standalone and Kubernetes installation manuals.
When you’ve prepared the values you’re going to use, you should run helm to apply your updated configuration as described in the standalone and Kubernetes Helm chart configuration guides. This will result in Develocity being restarted.
Appendix F: Smoke testing a Develocity instance
This involves running a normal build, pointed at your Develocity instance, and verifying that:
- A Build Scan is published and appears as expected when its reported URL is visited.
- The Build Cache was used if expected.
- Test Distribution was used if expected.
There are a few ways to do this:
- Typically, the best smoke test is just to alter one of your normal builds, changing the configured Develocity server URL; see the Develocity Gradle Plugin User Manual or the Develocity Maven Extension User Manual.
- Alternatively, there are also the Gradle Build Scan quickstart and Maven Build Scan quickstart projects that could be used. Configure these to point to the instance to be smoke tested using the instructions at the above user manual links.
Appendix G: Pre-promotion steps
The aim here is to ensure that builds can be converted to use a new instance as fast as possible. The specific steps will vary, depending on your infrastructure and promotion strategy. Some suggested steps for known setups:
- If your promotion strategy involves updating DNS records, you should ensure that these have a small time-to-live (TTL) ahead of time. For example, if your DNS entries usually have a TTL of 24h, you should lower this to a short TTL a day ahead of the migration, to allow time for the entries with the longer TTL to expire from DNS caches in your infrastructure.
- If your promotion strategy requires other staff in your organization to perform changes, e.g. to update DNS records or change reverse proxy or other network settings, it is recommended to schedule a time to perform the critical parts of the migration to ensure that they are available to perform a switchover at the appropriate moment.
- If your promotion strategy involves updating build configuration, it is recommended to have a checklist of builds that need to be updated. If your code infrastructure supports it, having pull requests or equivalent for each case ready to go can reduce the effective downtime that builds will experience.
Appendix H: Build Scan pre-copy verification steps
Develocityctl is the CLI tool for interacting with Develocity installations. It has its own user manual. Develocityctl version 1.9 and later can copy Build Scans from one Develocity server to another. To do this, it needs network access to both servers, and access keys if necessary.
Steps:
- Choose a host to run Develocityctl on that will be able to access the normal URL of each server involved. This may be the host of one of the Develocity instances - if both would work, it is recommended to use the server of the source installation (i.e. the installation that Build Scans will be copied from), to minimise the additional load on the production server.
- Verify that both instances are accessible from the host that you will run Develocityctl on. This can be done in various ways, but typically a tool like curl is available to do it:
curl https://source-gradle-enterprise-server.company.com/info/version
curl https://destination-gradle-enterprise-server.company.com/info/version
If you cannot connect to either server, either discuss your network setup with an appropriate administrator or try another host.
- To copy Build Scans, Develocityctl requires Develocity access keys if the equivalent Build Scan operations would require them. This means that it needs an access key for a user with Build Scan publishing permissions if anonymous access to publish Build Scans is disabled on the destination server. See the administration manual’s "Authenticated build access" section. It will also need an access key with “Build data export” permissions if this permission is not granted to anonymous users on the source server; see the Export API manual.
- Develocityctl can be run as a Java executable JAR file or as a docker container.
  - If running as a JAR, ensure that you have a JDK 17-compatible Java runtime installed, download the latest version from the downloads section of the user manual and verify that it works by running it using:
java -jar develocityctl-«version».jar
  - If running as a docker container, verify that it can be pulled and run using:
docker run --rm gradle/develocityctl:latest
Appendix I: Running Develocityctl to copy Build Scans
Assuming that you have run through the pre-copy verification steps, using Develocityctl to copy Build Scans can be done by following the instructions below.
Please ensure that your system has at least 2GB of memory available.
Access keys
If your source or destination servers require access keys, export the following environment variables before running Develocityctl:
- SOURCE_ACCESS_KEY for an access key with export permissions from the source server
- DESTINATION_ACCESS_KEY for an access key with Build Scan publish permissions from the destination server
Docker
Run as a docker container, optionally providing the access key environmental variables if appropriate:
docker run \
  --detach \
  --name=develocityctl \
  -e SOURCE_ACCESS_KEY \
  -e DESTINATION_ACCESS_KEY \
  gradle/develocityctl:latest build-scan copy \
  --sourceUrl=https://source-gradle-enterprise-server.company.com/ \
  --destinationUrl=https://destination-gradle-enterprise-server.company.com/
To follow progress, check the output in the docker logs:
docker logs --follow develocityctl
Develocityctl will write the IDs of any Build Scans that it was unable to copy to a file in the current directory. This file can be retrieved from the container’s /home directory. The default filename is failures.txt, though this can be changed via a command line argument.
docker cp develocityctl:/home/failures.txt ./failures.txt
To instead have this file written to your local filesystem automatically, mount a local directory to the /home directory inside the container. For example:
mkdir ~/copy-failures
docker run \
  --detach \
  -v ~/copy-failures:/home \
  --name=develocityctl \
  gradle/develocityctl:latest build-scan copy \
  --sourceUrl=https://source-gradle-enterprise-server.company.com/ \
  --destinationUrl=https://destination-gradle-enterprise-server.company.com/
Executable JAR
Develocityctl requires a Java 17+ runtime to be installed, as described in the user manual.
Run it with the following command:
java -Xms1024m -Xmx1024m -XX:MaxDirectMemorySize=512m \
  -jar develocityctl-«version».jar \
  build-scan copy \
  --sourceUrl=https://source-gradle-enterprise-server.company.com/ \
  --destinationUrl=https://destination-gradle-enterprise-server.company.com/
As the tool is expected to run for a significant amount of time, it is recommended to detach it from the terminal and examine progress via logs:
nohup \
  java -Xms1024m -Xmx1024m -XX:MaxDirectMemorySize=512m \
  -jar develocityctl-«version».jar \
  build-scan copy \
  --sourceUrl=https://source-gradle-enterprise-server.company.com/ \
  --destinationUrl=https://destination-gradle-enterprise-server.company.com/ \
  >> develocityctl.log 2>&1 &
disown
tail -f develocityctl.log
Additional arguments
There are a few other command line arguments that the tool accepts - run it with java -jar develocityctl-«version».jar build-scan copy --help to see them all.
The most commonly used is --copyScanDataSince, which allows specifying a date before which only Build Scan IDs should be copied, not the data itself. This effectively simulates the normal time-based retention window of Develocity’s disk space management. Build Scans dated after the given date will be copied in full; older Build Scans will be copied such that if they are visited in the Develocity UI, the user will be informed that the Build Scan has been deleted.
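For example, to copy full Build Scan data only for roughly the last 90 days, a cutoff date could be computed as in this sketch (GNU date assumed; the exact date format the flag expects should be checked against the tool's --help output):

```shell
# Sketch: compute a cutoff date 90 days in the past, e.g. for use with
# --copyScanDataSince. GNU date (-d) is assumed; verify the expected date
# format against the tool's --help output before relying on it.
cutoff=$(date -d "90 days ago" +%Y-%m-%d)
echo "copy full scan data since: $cutoff"
```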
Restarting Develocityctl
Develocityctl may need to be restarted if interrupted.
Typically, the process is:
- Examine the logs from the last run and identify the last successfully copied Build Scan.
- If running as a docker container, remove the old container using docker stop and docker rm.
- Rerun with an additional argument, --startWithBuildId=«build-scan-id» - the tool will resume copying Build Scans, starting with the next oldest after the one specified and working backwards chronologically from there.
Note that it’s safe to rerun Develocityctl with a range of Build Scans to copy which overlaps with that of a previous run. Build Scans that have been previously copied will be detected and reported, and Develocityctl will continue to copy other Build Scans.
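Putting the steps above together for a Docker-based run, a restart might look like the following sketch. The container name and image match the earlier docker examples, and «build-scan-id» must be replaced with the id you identified in the logs.

```shell
# 1. Identify the last successfully copied Build Scan in the logs
docker logs develocityctl | tail -n 20

# 2. Remove the old container
docker stop develocityctl
docker rm develocityctl

# 3. Rerun, resuming after the identified Build Scan
docker run \
  --detach \
  --name=develocityctl \
  gradle/develocityctl:latest build-scan copy \
  --sourceUrl=https://source-gradle-enterprise-server.company.com/ \
  --destinationUrl=https://destination-gradle-enterprise-server.company.com/ \
  --startWithBuildId=«build-scan-id»
```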
Appendix J: Cleaning up the data directory or persistent volume
Once you are happy that the new database has all the data that you wanted migrated and the system is operational, you may wish to remove the old data, and potentially remove attached storage.
For standalone installations, the data is typically in a data/postgresql subdirectory of the installation directory. For example, if using the default /opt/gradle installation directory, the database directory is /opt/gradle/data/postgresql. This directory can be safely deleted once the new setup is fully operational.
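As a minimal sketch, assuming the default /opt/gradle installation directory — verify the path and that the new database is fully operational before running this:

```shell
# Permanently delete the embedded database's data directory.
# Substitute your own installation directory if it is not /opt/gradle.
rm -rf /opt/gradle/data/postgresql
```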
For Kubernetes installations, the associated PersistentVolumeClaim is called gradle-database-volume and can be deleted using the following command:
kubectl --namespace=«your-namespace» delete --ignore-not-found=true pvc/gradle-database-volume
Or, for an OpenShift install:
oc delete --ignore-not-found=true pvc/gradle-database-volume
Depending on your reclaim policy, you may need to perform other manual steps to reclaim the space used. See the Kubernetes documentation for more details.
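To check what will happen to the underlying storage, you can inspect the reclaim policy of the PersistentVolume bound to the claim before deleting it. This is a sketch using standard kubectl jsonpath queries; use oc in place of kubectl on OpenShift.

```shell
# Find the PersistentVolume bound to the claim, then print its reclaim policy.
# Run this BEFORE deleting the PVC, while the binding still exists.
PV=$(kubectl --namespace=«your-namespace» get pvc gradle-database-volume \
  -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV" -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
# "Delete": the storage is removed automatically with the volume.
# "Retain": the volume and its data must be cleaned up manually.
```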
Appendix K: Reading the backup history logs
Warning: the following procedure applies only to versions prior to 2023.3.
In order to get an idea of your potential downtime during a backup and restore, it’s useful to be able to read the backup history for your existing embedded database.
A history of backups for your instance is recorded in a log file, accessible from the database pod’s database-tasks container, under the path /opt/gradle/data/logs/backups-history.log.
To access it, you first need to use kubectl (or oc for OpenShift installations) to get the name of the database pod:
$ kubectl --namespace <your namespace> get pods \
    --selector=app.kubernetes.io/part-of=gradle-enterprise \
    --selector=app.kubernetes.io/component=database \
    -o jsonpath='{.items[*].metadata.name}'
gradle-database-5b75f7f9d9-t6knf
With the name of the database pod, you can print out the contents of backups-history.log to see how long your backups usually take:
$ kubectl --namespace <your namespace> exec gradle-database-5b75f7f9d9-t6knf \
    -c database-tasks -- tail /opt/gradle/data/logs/backups-history.log
2022-07-14 04:07:10 - Backup started
2022-07-14 04:09:23 - Backup completed
In this example, the logs indicate that the backup took a little over 2 minutes.
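Rather than eyeballing the two timestamps, the duration can be computed directly from the log lines (GNU date syntax assumed):

```shell
# Convert the "Backup started" / "Backup completed" timestamps from the
# example log above to epoch seconds and subtract.
start=$(date -d "2022-07-14 04:07:10" +%s)
end=$(date -d "2022-07-14 04:09:23" +%s)
echo "$(( end - start )) seconds"   # prints "133 seconds" (about 2 minutes)
```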
Appendix L: Upgrading your user-managed PostgreSQL database for Develocity
Develocity supports using a user-managed database up to PostgreSQL 17. If your Develocity instance is using a database instance of an earlier version, you may want to upgrade to a newer version to benefit from improvements in PostgreSQL.
Steps
- Download the database script bundle from the appendix.
- Stop Develocity.
- Run the prepare-upgrade.sh script from the database script bundle, which removes database objects that are incompatible with a PostgreSQL upgrade. These objects will be recreated later by the database setup scripts.
DESTINATION_PASSWORD="xxx" ./prepare-upgrade.sh --installation-type=<kubernetes|openshift> --destination-host=my-host.db.com
- Upgrade the PostgreSQL database. This may be done using tools provided by a vendor if you are using a managed PostgreSQL service, or it may be something you do yourself by other means if you manage your user-managed database directly.
- (Do this step only if you run the database setup scripts manually, i.e. you don’t configure Develocity to be able to connect to the database as a superuser or pseudo-superuser.) Run the database setup scripts on the upgraded database.
- Start Develocity back up and confirm that it is working.
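For the upgrade step itself, managed PostgreSQL services provide their own upgrade mechanisms; if you run PostgreSQL yourself, one straightforward (if slower) path is a dump and restore into a new server of the target version. A sketch, with hypothetical host names:

```shell
# Dump all databases and roles from the old server...
pg_dumpall --host=old-postgres.company.com --username=postgres > develocity-dump.sql

# ...and restore into the new, upgraded server.
psql --host=new-postgres.company.com --username=postgres -f develocity-dump.sql
```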
If you need help with upgrading your user-managed database, contact Develocity Support.
Appendix M: Database setup scripts
- gradle-enterprise-database-setup-zip-2024.3.zip (SHA-256 checksum)
- gradle-enterprise-database-setup-zip-2024.2.zip (SHA-256 checksum)
- gradle-enterprise-database-setup-zip-2024.1.zip (SHA-256 checksum)
- gradle-enterprise-database-setup-zip-2023.4.zip (SHA-256 checksum)
- gradle-enterprise-database-setup-zip-2023.3.zip (SHA-256 checksum)
- gradle-enterprise-database-setup-zip-2023.2.zip (SHA-256 checksum)
- gradle-enterprise-database-setup-zip-2023.1.zip (SHA-256 checksum)
- gradle-enterprise-database-setup-zip-2022.4.zip (SHA-256 checksum)
- gradle-enterprise-database-setup-zip-2022.3.zip (SHA-256 checksum)