This manual covers the installation of Gradle Enterprise on Amazon’s Elastic Kubernetes Service.
Time to completion: 45 minutes
Before Installation
Gradle Enterprise is a Kubernetes-based application, distributed as a Helm chart. Helm is a package manager for Kubernetes applications.
Gradle Enterprise can generally be installed on Kubernetes clusters running modern Kubernetes versions.
Gradle Enterprise has been tested as compatible with Kubernetes API versions 1.11.x to 1.27.x. Later versions may be compatible but have not been verified to work.
Helm manages all Gradle Enterprise components.
Prerequisites
1. An AWS Account
An AWS paid account is required. Note that a free tier account is not sufficient.
This tutorial will not work on GovCloud accounts (us-gov regions). |
2. A Gradle Enterprise License
If you have purchased Gradle Enterprise or started a trial, you should already have a license file called gradle-enterprise.license. Otherwise, you may request a Gradle Enterprise trial license.
3. An AWS IAM User
Grant the user that will manage the instance the AmazonEC2FullAccess
AWS managed policy.
To check the current user, run the following command:
aws sts get-caller-identity
If you are using AWS’s Cloud Shell (see section 1. AWS CLI), grant the user Cloud Shell permissions using the AWSCloudShellFullAccess
AWS managed policy.
If you choose to use Amazon RDS as your database or S3 to store your build scans, you will need the additional permissions described in the appendices. |
The IAM user must have permissions to work with Amazon EKS IAM roles, service linked roles, AWS CloudFormation, a VPC, and related resources.
You will need the permissions described by eksctl’s minimum IAM policies.
Host Requirements
This section outlines cluster and host requirements for the installation.
1. Database
Gradle Enterprise installations have two database options:
- An embedded database that is highly dependent on disk performance.
- A user-managed database that is compatible with PostgreSQL 12, 13, or 14, including Amazon RDS and Aurora.
By default, Gradle Enterprise stores its data in a PostgreSQL database that is run as part of the application itself, with data being stored in a directory mounted on its host machine.
RDS Database
There are instructions for using Amazon RDS as a user-managed database in the RDS appendix. This can have a number of benefits, including easier resource scaling, backup management, and failover support.
2. Storage
In addition to the database, Gradle Enterprise needs some storage capacity for configuration files, logs, and build cache artifacts. These storage requirements apply regardless of which type of database you use, although the necessary size varies based on the database type.
The majority of data is stored in the "installation directory", which defaults to /opt/gradle (unless otherwise specified in your Helm values file below).
Capacity
The minimum capacity required for the installation directory for the embedded database case is 250 GB.
The minimum capacity required for the installation directory for the user-managed database case is 30 GB.
It is recommended to use a dedicated volume for the installation directory, so that other processes cannot consume the space Gradle Enterprise requires, and to ensure at least 10% of the volume’s space is free at all times.
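As a quick check of the 10% headroom recommendation, you can report the free-space percentage of the volume backing the installation directory with df. This is a sketch; /opt/gradle is the default installation directory, and "/" is used as a fallback so the check works before the dedicated volume exists:

```shell
# Report free space on the volume backing the installation directory.
dir=/opt/gradle
[ -d "$dir" ] || dir=/
used_pct=$(df --output=pcent "$dir" | tail -1 | tr -dc '0-9')
free_pct=$((100 - used_pct))
echo "free space on ${dir}: ${free_pct}%"
if [ "$free_pct" -lt 10 ]; then
  echo "WARNING: less than 10% free"
fi
```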
The following are additional disk capacity requirements:
Location | Storage Size
---|---
 | 1 GB
 | 30 GB
These are not particularly performance-sensitive.
Performance
For production workloads, storage volumes should exhibit SSD-class disk performance of at least 3000 IOPS (input/output operations per second). Most NFS based storage or desktop-class, non-SSD disk drives do not provide this level of performance.
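There is no substitute for a real benchmark (for example with fio), but a rough synchronous-write probe with dd can quickly flag clearly inadequate storage. The path and sizes below are illustrative only:

```shell
# Rough synchronous-write probe: 1,000 x 4 KiB writes with a sync per write.
# This approximates small-block write latency; use fio for a real IOPS test.
dd if=/dev/zero of=/tmp/ge-disk-probe bs=4k count=1000 oflag=dsync 2>&1 | tail -n 1
rm -f /tmp/ge-disk-probe
```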
S3 Storage
Gradle Enterprise can also be configured to store Build Scans in a S3 bucket. This can help performance in high-traffic installations by taking load off the database. See the S3 appendix for details.
3. Network Connectivity
Gradle Enterprise requires network connectivity for periodic license validation.
An installation of Gradle Enterprise will not start if it cannot connect to both registry.gradle.com and harbor.gradle.com . |
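You can check connectivity to both endpoints from the machine you are installing from. A sketch (the curl exit status is used only as a reachability signal here, not as an API check):

```shell
# Check outbound HTTPS connectivity to the hosts used for license validation.
for host in registry.gradle.com harbor.gradle.com; do
  if curl --silent --connect-timeout 5 --output /dev/null "https://${host}"; then
    echo "${host}: reachable"
  else
    echo "${host}: NOT reachable"
  fi
done
```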
It is strongly recommended that production installations of Gradle Enterprise are configured to use HTTPS with a trusted certificate.
When installing Gradle Enterprise, you will need to provide a hostname, such as ge.example.com
.
Pre-Installation
If you decide to use Cloud Shell, complete sections 3. Eksctl, 4. Helm, 5. Hostname and then skip to Cluster Configuration.
1. AWS CLI
You will be using the aws
command line to provision and configure your server. To install it on your local machine, follow the instructions in the AWS documentation. Use version 2.11.26 or later or 1.27.150 or later.
The aws
CLI must be configured with an access key to be able to access your AWS account. If you do not have an access key, follow the AWS CLI prerequisites guide, and then the quick setup guide.
If you have an access key already, but have not configured the aws
CLI, you can follow the AWS CLI quick setup guide.
Choose the region you wish to install Gradle Enterprise in. You should generally pick the region closest to you geographically to ensure the best performance. AWS provides a list of all available EKS regions.
2. Kubectl
kubectl
is a command line tool for working with Kubernetes clusters. Use version 1.27 or later.
AWS hosts their own binaries of kubectl
, which you can install by following their guide.
You can also install kubectl
by following the steps in the Kubernetes documentation.
3. Eksctl
eksctl
is a CLI tool for creating and managing EKS clusters. While you can use the AWS CLI, eksctl
is much easier to use.
To install it, follow AWS’s eksctl
installation guide.
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
If you are using Cloud Shell, replace the command above with:
$ mkdir -p ~/.local/bin && mv /tmp/eksctl ~/.local/bin
4. Helm
Helm is a package manager for Kubernetes applications.
If you are using Cloud Shell, first run:
$ export HELM_INSTALL_DIR=~/.local/bin
If for some reason you skipped the previous step, you will have to create the ~/.local/bin directory. |
Openssl is a requirement for Helm. Install it by running:
$ sudo yum install openssl -y
Alternatively, if you don’t use yum
, you can use sudo apt install openssl -y
, brew install openssl
, or your package manager of choice.
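You can confirm openssl is on your PATH before running the Helm installer:

```shell
# Verify openssl is available; the Helm install script uses it to verify
# the download checksum.
if command -v openssl >/dev/null 2>&1; then
  openssl version
else
  echo "openssl not found"
fi
```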
If you don’t want to install openssl , you can disable Helm’s installer checksum verification using export VERIFY_CHECKSUM=false . |
To install Helm, run:
$ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
See Helm’s installation documentation for more details and non-Linux instructions. |
5. Hostname
AWS will automatically assign a hostname like a6d2f554d845844a69a0aac243289712-4696594a57a75795.elb.us-west-2.amazonaws.com
to the load balancer you create for Gradle Enterprise. You can use this hostname to access Gradle Enterprise.
If you want to access Gradle Enterprise by a host name of your choosing (e.g. ge.example.com
), you will need the ability to create the necessary DNS record to route this name to the AWS-created hostname.
You can start with AWS’s hostname, and later reconfigure to use a custom hostname if desired using AWS’s Route 53. |
Helm Configuration
Installation options for Gradle Enterprise are specified in a Helm values file.
Follow the instructions in the Kubernetes Helm Chart Configuration Guide and return to this document with a complete values.yaml
file.
Cluster Configuration
In this section you will create an EKS cluster to run Gradle Enterprise.
1. Create a Cluster
Create your Amazon EKS cluster called gradle-enterprise
.
To create it, run:
$ eksctl create cluster --name gradle-enterprise --region us-west-1 (1)
1 | Replace us-west-1 with your AWS Region of choice. |
[ℹ] creating EKS cluster "gradle-enterprise" in "us-west-1" region with managed nodes [ℹ] building cluster stack "eksctl-gradle-enterprise-cluster" [ℹ] deploying stack "eksctl-gradle-enterprise-cluster" [✔] EKS cluster "gradle-enterprise" in "us-west-1" region is ready
This will take several minutes, and will add the cluster context to kubectl
when it is done (note that this will persist across Cloud Shell sessions).
eksctl
creates a CloudFormation stack, which you can see in the CloudFormation web UI.
For more details, consult the eksctl getting started guide. |
2. Create Nodes
You will need three m5.large
nodes. These are managed nodes that run Amazon Linux on Amazon EC2 instances.
Gradle Enterprise does not support Fargate nodes out of the box, because Fargate nodes do not support any storage classes by default. |
Once your cluster is up and running, you will be able to see the nodes:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION ip-192-168-45-72.us-west-2.compute.internal Ready <none> 7m1s v1.22.9-eks-810597c ip-192-168-72-77.us-west-2.compute.internal Ready <none> 6m58s v1.22.9-eks-810597c
You can also see the workloads running on your cluster.
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE kube-system aws-node-12345 1/1 Running 0 7m43s 192.0.2.1 ip-192-0-2-1.region-code.compute.internal kube-system aws-node-67890 1/1 Running 0 7m46s 192.0.2.0 ip-192-0-2-0.region-code.compute.internal kube-system coredns-1234567890-abcde 1/1 Running 0 14m 192.0.2.3 ip-192-0-2-3.region-code.compute.internal kube-system coredns-1234567890-12345 1/1 Running 0 14m 192.0.2.4 ip-192-0-2-4.region-code.compute.internal kube-system kube-proxy-12345 1/1 Running 0 7m46s 192.0.2.0 ip-192-0-2-0.region-code.compute.internal kube-system kube-proxy-67890 1/1 Running 0 7m43s 192.0.2.1 ip-192-0-2-1.region-code.compute.internal
Gradle Enterprise needs three of these nodes to run. To scale up the node group run:
$ eksctl scale nodegroup \
--cluster gradle-enterprise \
--nodes 3 \
--nodes-max 3 \
--name $(aws eks list-nodegroups --cluster-name gradle-enterprise --query 'nodegroups[0]' --output text)
If you scale to something other than 3 , you’ll want to update both --nodes and --nodes-max . The maximum (and minimum) are only used if you have autoscaling configured, but the maximum still must be increased above your target number of nodes. |
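To confirm the scaling worked, count the nodes reporting Ready. The snippet below runs the filter against a captured sample of kubectl get nodes output so it is self-contained; on your cluster, pipe the live command through the same awk filter as shown in the comment:

```shell
# Count Ready nodes. On a live cluster use:
#   kubectl get nodes --no-headers | awk '$2 == "Ready" { n++ } END { print n }'
# Here the same filter runs over sample output for illustration.
sample='ip-192-168-45-72.us-west-2.compute.internal   Ready      <none>   7m1s    v1.22.9-eks-810597c
ip-192-168-72-77.us-west-2.compute.internal   Ready      <none>   6m58s   v1.22.9-eks-810597c
ip-192-168-12-34.us-west-2.compute.internal   NotReady   <none>   10s     v1.22.9-eks-810597c'
printf '%s\n' "$sample" | awk '$2 == "Ready" { n++ } END { print "Ready nodes:", n }'
```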
3. Install Nginx Ingress Controller
For ingress, we will use the Nginx ingress controller, behind an AWS Network Load Balancer (NLB).
To install the Nginx ingress controller, run:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/aws/deploy.yaml
For more details on installing the Nginx ingress controller, see the installation guide. |
This manifest will also create the AWS NLB.
You can find the hostname of your load balancer by running:
$ kubectl get \
-n ingress-nginx service ingress-nginx-controller \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
You can use AWS’s certificate management service to manage a trusted SSL certificate and provide it to your ingress. To use it, configure the Nginx ingress controller service using AWS’s annotations, and configure Gradle Enterprise for external SSL termination. Note that this requires using a hostname you own. |
You can validate the NGINX Ingress Controller by running:
$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
$ kubectl get services ingress-nginx-controller --namespace=ingress-nginx
$ kubectl describe svc ingress-nginx-controller --namespace=ingress-nginx
4. Create a Storage Class
This guide uses the embedded database. You may have a different setup depending on your Helm values file.
Instructions for using Amazon RDS as a user-managed database are in the RDS appendix. |
To use EBS volumes, you need to install the EBS CSI driver.
First, enable OIDC for your cluster and create a service account for the driver to use:
$ eksctl utils associate-iam-oidc-provider --cluster gradle-enterprise --approve
$ eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace kube-system \
--cluster gradle-enterprise \
--attach-policy-arn "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy" \
--approve \
--role-only \
--role-name eksctl-managed-AmazonEKS_EBS_CSI_DriverRole
Then install the driver:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ eksctl create addon \
--name aws-ebs-csi-driver \
--cluster gradle-enterprise \
--force \
--service-account-role-arn "arn:aws:iam::${ACCOUNT_ID}:role/eksctl-managed-AmazonEKS_EBS_CSI_DriverRole"
For more details on installing and managing the EBS CSI driver, see AWS’s documentation. |
To use gp3
volumes, first add a gp3
storage class to the cluster. Create a manifest file named gp3.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
type: gp3
Then run the command to apply it:
$ kubectl apply -f gp3.yaml
For more details on the options available for EBS volumes using the CSI driver, see the driver’s GitHub project, specifically the StorageClass parameters documentation. |
The Build Scan® service of Gradle Enterprise can be configured to store its data in an Amazon S3 bucket. This can help performance in high-traffic installations by taking load off the database. See the S3 appendix for details.
5. Configure the Hostname
If you intend to use a custom hostname to access your Gradle Enterprise instance, you now need to add the appropriate DNS records.
Add a CNAME
record for your hostname that points to the public hostname of your NLB. You can find this hostname by describing your NLB, as described in section 3. Install Nginx Ingress Controller.
ge.example.com CNAME abcdefg123456789-123456789.elb.us-west-2.amazonaws.com
You should verify that your DNS record works correctly before installing Gradle Enterprise, such as by using dig ge.example.com
.
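A quick resolver check, using the hypothetical hostname ge.example.com (substitute your own):

```shell
# Check whether the hostname resolves using the system resolver, which is
# what build machines will actually use. ge.example.com is a placeholder.
host=ge.example.com
getent hosts "$host" || echo "${host} does not resolve yet"
```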
Alternatively, you can use the hostname generated by AWS. You can find the generated hostname by describing the NLB as shown in section 3. Install Nginx Ingress Controller.
You can use AWS’s DNS Service, Route 53, to easily route traffic to your NLB by following this guide. |
Installation
In this section you will install Gradle Enterprise on your newly created instance.
1. Install the Helm chart
First, add the https://helm.gradle.com/
helm repository and update it:
$ helm repo add gradle https://helm.gradle.com/
$ helm repo update gradle
2. Install Gradle Enterprise
Then run helm install
with the following command:
$ helm install \
--create-namespace --namespace gradle-enterprise \
ge \
gradle/gradle-enterprise \
--values path/to/values.yaml \(1)
--set-file global.license.file=path/to/gradle-enterprise.license (2)
1 | The path to the values.yaml file you created in the Helm Configuration section. |
2 | The path to your gradle-enterprise.license file. |
You should see output similar to this:
NAME: ge LAST DEPLOYED: Wed Jul 13 04:08:35 2022 NAMESPACE: gradle-enterprise STATUS: deployed REVISION: 1 TEST SUITE: None
3. Start Gradle Enterprise
You can see the status of Gradle Enterprise starting up by examining its pods.
$ kubectl --namespace gradle-enterprise get pods
NAME READY STATUS RESTARTS AGE gradle-enterprise-operator-76694c949d-md5dh 1/1 Running 0 39s gradle-monitoring-5545d7d5d8-lpm9x 1/1 Running 0 39s gradle-database-65d975cf8-dk7kw 0/2 Init:0/2 0 39s gradle-build-cache-node-57b9bdd46d-2txf5 0/1 Init:0/1 0 39s gradle-proxy-0 0/1 ContainerCreating 0 39s gradle-metrics-cfcd8f7f7-zqds9 0/1 Running 0 39s gradle-test-distribution-broker-6fd84c6988-x6jvw 0/1 Init:0/1 0 39s gradle-keycloak-0 0/1 Pending 0 39s gradle-enterprise-app-0 0/1 Pending 0 39s
Eventually the pods should all report as Running
:
$ kubectl --namespace gradle-enterprise get pods
NAME READY STATUS RESTARTS AGE gradle-enterprise-operator-76694c949d-md5dh 1/1 Running 0 4m gradle-monitoring-5545d7d5d8-lpm9x 1/1 Running 0 4m gradle-proxy-0 1/1 Running 0 3m gradle-database-65d975cf8-dk7kw 2/2 Running 0 3m gradle-enterprise-app-0 1/1 Running 0 3m gradle-metrics-cfcd8f7f7-zqds9 1/1 Running 0 3m gradle-test-distribution-broker-6fd84c6988-x6jvw 1/1 Running 0 3m gradle-build-cache-node-57b9bdd46d-2txf5 1/1 Running 0 4m gradle-keycloak-0 1/1 Running 0 3m
Gradle Enterprise has a /ping
endpoint, which can be used to verify network connectivity with Gradle Enterprise.
Connectivity to a Gradle Enterprise installation can be tested by running the following command on machines that need to connect to Gradle Enterprise:
$ curl -sw \\n --fail-with-body --show-error https://«gradle-enterprise-host»/ping
It should return SUCCESS
.
Once all pods have a status of Running
and the system is up and connected, you can interact with it by visiting its URL in a web browser (i.e. the hostname).

Congratulations! Gradle Enterprise is installed and running.
Post-Installation
Many features of Gradle Enterprise, including access control, database backups, and Build Scan retention can be configured in Gradle Enterprise, once it is running. Consult the Gradle Enterprise Administration guide to learn more.
For instructions on how to start using Gradle Enterprise in your builds, consult the Getting Started with Gradle Enterprise guide.
Appendix
Appendix A: Using Amazon RDS
This appendix will walk you through using an Amazon RDS PostgreSQL instance as your database.
1. Obtain the Required Permissions
You need permission to create and manage Amazon RDS instances and security groups.
The necessary permissions are granted using the AmazonRDSFullAccess
AWS managed policy.
2. Set up an RDS Instance
Gradle Enterprise is compatible with PostgreSQL 12, 13, or 14. The minimum storage space required is 250 GB with 3,000 or more IOPS.
A. Create a root username and password
Create a root username and password for the database instance, referred to below as «db-root-username»
and «db-root-password»
, respectively. These are the credentials you will use for your database connection; save them somewhere secure.
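If you need to generate a strong password, one option (assuming openssl is installed) is:

```shell
# Generate a random 32-character hex password for «db-root-password».
openssl rand -hex 16
```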
B. Create a security group and enable ingress
Before creating the database, you have to create a security group in the VPC you want to use.
In this tutorial you will use the eksctl
created VPC used by your cluster.
You can use a different VPC, but you will need to make the RDS instance accessible from your cluster (e.g. by peering the VPCs).
To create the security group, run:
$ CLUSTER_VPC_ID=$(
aws ec2 describe-vpcs \
--filters Name=tag:aws:cloudformation:stack-name,Values=eksctl-gradle-enterprise-cluster \
--query Vpcs[0].VpcId \
--output text
)
$ aws ec2 create-security-group --group-name ge-db-sg \
--description "Gradle Enterprise DB security group" \
--vpc-id ${CLUSTER_VPC_ID}
Then enable ingress to the RDS instance from your cluster for port 5432 by running:
$ CLUSTER_SECURITY_GROUP_ID=$(
aws eks describe-cluster --name gradle-enterprise \
--query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text
)
$ RDS_SECURITY_GROUP_ID=$(
aws ec2 describe-security-groups \
--filters Name=group-name,Values=ge-db-sg \
--query 'SecurityGroups[0].GroupId' --output text
)
$ aws ec2 authorize-security-group-ingress \
--protocol tcp --port 5432 \
--source-group ${CLUSTER_SECURITY_GROUP_ID} \
--group-id ${RDS_SECURITY_GROUP_ID}
C. Create a subnet group
Before creating the database, you need to create a subnet group to specify how the RDS instance will be networked.
This subnet group must have subnets in two availability zones, and typically should use private subnets.
eksctl
has already created private subnets you can use.
Create a subnet group containing them by running:
$ CLUSTER_VPC_ID=$(
aws ec2 describe-vpcs \
--filters Name=tag:aws:cloudformation:stack-name,Values=eksctl-gradle-enterprise-cluster \
--query Vpcs[0].VpcId \
--output text
)
$ SUBNET_IDS=$(
aws ec2 describe-subnets \
--query 'Subnets[?!MapPublicIpOnLaunch].SubnetId' \
--filters Name=vpc-id,Values=${CLUSTER_VPC_ID} --output text
)
$ aws rds create-db-subnet-group --db-subnet-group-name ge-db-subnet-group \
--db-subnet-group-description "Gradle Enterprise DB subnet group" \
--subnet-ids ${SUBNET_IDS}
Consult RDS’s subnet group documentation for more details on subnet groups and their requirements. |
D. Create the RDS instance
Create the RDS instance:
$ RDS_SECURITY_GROUP_ID=$(
aws ec2 describe-security-groups \
--filters Name=group-name,Values=ge-db-sg \
--query 'SecurityGroups[0].GroupId' --output text
)
$ aws rds create-db-instance \
--engine postgres \
--engine-version 14.3 \
--db-instance-identifier gradle-enterprise-database \
--db-name gradle_enterprise \
--allocated-storage 250 \(1)
--iops 3000 \(2)
--db-instance-class db.m5.large \
--db-subnet-group-name ge-db-subnet-group \
--backup-retention-period 3 \(3)
--no-publicly-accessible \
--vpc-security-group-ids ${RDS_SECURITY_GROUP_ID} \
--master-username «db-root-username» \
--master-user-password «db-root-password»
1 | Gradle Enterprise should be installed with 250GB of database storage to start with. |
2 | Gradle Enterprise’s data volumes and database should support at least 3,000 IOPS. |
3 | The backup retention period, in days. |
While you don’t configure it here, RDS supports storage autoscaling.
Consult AWS’s database creation guide and the CLI command reference for more details on RDS instance creation. |
You can view the status of your instance with:
$ aws rds describe-db-instances --db-instance-identifier gradle-enterprise-database
Wait until the DBInstanceStatus
is available
.
You should then see the hostname of the instance under Endpoint
. This is the address you will use to connect to the instance, hereafter referred to as «database-address»
.
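Before configuring Gradle Enterprise, you can sanity-check that port 5432 on the endpoint is reachable from inside the cluster (for example, from a temporary debug pod). A minimal TCP probe using bash, with «database-address» left as a placeholder for your actual endpoint:

```shell
# TCP reachability probe for the RDS endpoint. Run this from a pod inside
# the cluster; «database-address» is the RDS endpoint hostname.
host=«database-address»
port=5432
if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
  echo "port ${port} on ${host} is reachable"
else
  echo "port ${port} on ${host} is NOT reachable"
fi
```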
3. Configure Gradle Enterprise with RDS
Add the following configuration snippet to your Helm values file:
database:
location: user-managed
connection:
host: «database-address»
databaseName: gradle_enterprise
credentials:
superuser:
username: «db-root-username»
password: «db-root-password»
You can substitute «database-address»
in the Helm values file by running (verbatim):
$ DATABASE_ADDRESS=$(
aws rds describe-db-instances \
--db-instance-identifier gradle-enterprise-database \
--query 'DBInstances[0].Endpoint.Address' \
--output text
)
$ sed -i "s/«database-address»/${DATABASE_ADDRESS}/g" path/to/values.yaml
The superuser is only used to set up the database and create migrator and application users. You can avoid using the superuser by setting up the database yourself, as described in the database configuration section of Gradle Enterprise’s installation manual. Please contact Gradle support for help with this. |
This action embeds your database superuser credentials in your Helm values file. It must be kept secure. If you prefer to provide the credentials as a Kubernetes secret, consult Gradle Enterprise’s database configuration instructions.
Appendix B: Storing Build Scans in S3
This appendix will walk you through using an Amazon S3 bucket to store Build Scans®.
1. Obtain the required permissions
You will need permission to create and manage Amazon S3 buckets. You also need to create IAM policies, roles, and instance profiles, but you already have permission to do that from the eksctl
policies.
The necessary permissions can be easily granted by using the AmazonS3FullAccess
AWS managed policy.
2. Set up an S3 Bucket and Allow Access
Create an S3 bucket and an IAM policy that allows access to it. Then, associate that policy with an IAM role that Gradle Enterprise’s service account can use.
A. Create an S3 bucket
To create the S3 bucket, run:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ aws s3 mb s3://gradle-enterprise-build-scans-${ACCOUNT_ID} (1)
1 | S3 bucket names must be unique across all AWS accounts in all AWS Regions within a partition. We recommend using your account ID as a suffix. |
If you have multiple installations of Gradle Enterprise you want to use S3 storage with, either add a suffix or use the same bucket with a different scans object prefix. |
B. Create a policy allowing bucket access
To create a role allowing access to your bucket, first create a policy.json
file with the following content:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::gradle-enterprise-build-scans-«account-id»"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:AbortMultipartUpload"
],
"Resource": [
"arn:aws:s3:::gradle-enterprise-build-scans-«account-id»/*"
]
}
]
}
Then run the following commands:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ sed -i "s/«account-id»/${ACCOUNT_ID}/g" ./policy.json
$ aws iam create-policy \
--policy-name "eksctl-gradle-enterprise-build-scan-access" \ (1)
--policy-document file://policy.json (2)
1 | Even though we aren’t using eksctl to create this policy, using the eksctl- prefix avoids the need for additional permissions. |
2 | The policy.json file you created. |
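Before creating the policy, you can check that policy.json is syntactically valid. A sketch using python3’s built-in json.tool (the heredoc recreates the same policy, with the «account-id» placeholder still in place, so the snippet is self-contained):

```shell
# Validate the policy document's JSON syntax before calling the AWS API.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::gradle-enterprise-build-scans-«account-id»"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:AbortMultipartUpload"],
      "Resource": ["arn:aws:s3:::gradle-enterprise-build-scans-«account-id»/*"]
    }
  ]
}
EOF
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid JSON"
```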
C. Create a role for EKS
To associate the service account with an AWS IAM role, we need to use an AWS OIDC provider.
We already installed one when setting up the EBS CSI driver, so we can use it here.
To create a role that can be used from EKS and uses the policy you just created, run:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ POLICY_ARN="arn:aws:iam::${ACCOUNT_ID}:policy/eksctl-gradle-enterprise-build-scan-access"
$ eksctl create iamserviceaccount \
--name gradle-enterprise-app \
--namespace gradle-enterprise \
--cluster gradle-enterprise \
--approve \
--role-only \
--role-name eksctl-managed-GradleEnterprise_BuildScans_S3_Role \
--attach-policy-arn ${POLICY_ARN}
3. Update your Helm Values File
You need to configure Gradle Enterprise to use the role you created. You also need to increase Gradle Enterprise’s memory request and limit.
These are both done by adding the following to your Helm values file:
enterprise:
resources:
requests:
memory: 6Gi (1)
limits:
memory: 6Gi (1)
serviceAccount:
annotations:
"eks.amazonaws.com/role-arn": "arn:aws:iam::«account-id»:role/eksctl-managed-GradleEnterprise_BuildScans_S3_Role" (2)
1 | If you have already set a custom value here, instead increase it by 2Gi . |
2 | «account-id» is the ID of your AWS account, which you will substitute in a moment. |
When adding items to your Helm values file, merge any duplicate blocks. |
Then substitute «account-id»
in the Helm values file by running (verbatim):
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ sed -i "s/«account-id»/${ACCOUNT_ID}/g" path/to/values.yaml
You may need to scale up your cluster or use nodes with more memory to be able to satisfy the increased memory requirements. See section 1. Create a Cluster for scaling instructions. |
4. Configure Build Scans with S3
To allow Gradle Enterprise to access S3, use a role associated with a service account.
It is necessary to annotate the service account with the ARN of the role being used. This is done in the Helm values file:
enterprise:
serviceAccount:
annotations:
"eks.amazonaws.com/role-arn": "arn:aws:iam::111122223333:role/your-role-name"
While you can use the same role for both pods, it is recommended to create separate roles with different attached IAM policies so that each pod only has the permissions it needs. Please refer to the Administration Manual for details on which permissions are required. |
Gradle Enterprise must now be configured to use S3. There are two ways to do this:
- Using the unattended configuration mechanism.
- Using the Gradle Enterprise web UI.
The unattended configuration mechanism lets you configure it as part of your Helm values file.
S3 cannot be configured via the web UI until Gradle Enterprise is installed and running.
Instructions for both methods are provided:
A. Using unattended configuration
Before using the unattended configuration mechanism, you should read the relevant section of the administration manual.
First, you need to choose a system password and hash it. To do this, install the Gradle Enterprise Admin CLI.
Then run the following command to hash a password read from stdin and write it to secret.txt:
$ gradle-enterprise-admin config-file hash -o secret.txt -s -
We will refer to the hashed password as «hashed-system-password».
To use your S3 bucket, add the following to your Helm values file:
global:
unattended:
configuration:
version: 5
systemPassword: "«hashed-system-password»" (1)
buildScans:
storage:
incomingStorageType: s3
s3:
bucket: gradle-enterprise-build-scans-«account-id» (2)
region: «region» (3)
credentials:
source: environment
advanced:
app:
heapMemory: 5632 (4)
1 | Your hashed system password. |
2 | Your account ID, which we will substitute in below. |
3 | The region of your S3 bucket, which should be your current region. Viewable by running aws configure list | grep region . |
4 | If you have already set a custom value here, instead increase it by 2048 . |
Then substitute «account-id»
in the Helm values file by running (verbatim):
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ sed -i "s/«account-id»/${ACCOUNT_ID}/g" path/to/values.yaml
B. Using the web UI
To configure Gradle Enterprise to use your bucket using the web UI, follow the instructions in the administration manual with the following configuration:
- Bucket: gradle-enterprise-build-scans-«account-id». «account-id» can be found by running aws sts get-caller-identity --query Account --output text.
- Region: your current region. Viewable by running aws configure list | grep region.
- S3 credentials: Obtain from environment
To print the actual name of your bucket, run:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ echo "gradle-enterprise-build-scans-${ACCOUNT_ID}"
5. Verify S3 Storage is Used
Gradle Enterprise will start even if your S3 configuration is incorrect.
Once Gradle Enterprise has started, you can verify that the S3 configuration is correct by using the Test S3 Connection button on the /admin/build-scans
page. You can also upload a build scan and then check for its presence in your S3 bucket.
To view the build scans stored in your S3 bucket, run:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ aws s3 ls s3://gradle-enterprise-build-scans-${ACCOUNT_ID}/build-scans/ \(1)
--recursive --human-readable --summarize
1 | If you used a custom prefix, use it here instead of build-scans . |
2022-09-27 19:11:06 6.6 KiB build-scans/2022/09/27/aprvi3bnnxyzm Total Objects: 1 Total Size: 6.6 KiB