This tutorial shows how to run Gradle Enterprise on Amazon’s Elastic Kubernetes Service (EKS).
This tutorial requires Gradle Enterprise 2022.3.1 or later. |
You can complete this tutorial in:
- 10 minutes (read the tutorial)
- 30 minutes (read the tutorial and perform the installation)
Gradle Enterprise can be installed into an existing Kubernetes cluster. It can also be installed on a standalone virtual machine, as shown in our Amazon EC2 installation tutorial. This tutorial shows how to set up a cluster installation on an Amazon EKS cluster.
Gradle Enterprise can generally be installed on Kubernetes clusters running modern Kubernetes versions. The exact supported versions are listed in Gradle Enterprise’s installation guide. Later versions may be compatible but have not been verified to work.
The majority of this tutorial is a quick start guide to creating and minimally configuring a cluster in EKS for a Gradle Enterprise installation. If you already have AWS and EKS expertise and are able to provision a cluster with an ingress controller and high-performance storage classes, you may wish to skip straight to the Gradle Enterprise installation instructions.
Other installation tutorials
Amazon Web Services
- Kubernetes on EKS (you are here)
Prerequisites
1. An AWS Account
You can create a free account if you do not already have one. However, you will not be able to complete this tutorial on the free tier.
2. A Gradle Enterprise license
You can request a Gradle Enterprise trial here. If you have purchased Gradle Enterprise, you will already have a license file.
3. An AWS IAM user with access to EKS and related resources
You will need the permissions described by eksctl’s minimum IAM policies. The <account_id> referenced in their policies can be found by running (after completing Setting up your shell environment):
$ aws sts get-caller-identity --query Account --output text
And it can easily be substituted into the policy files by running:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ sed -i "s/<account_id>/${ACCOUNT_ID}/g" \
path/to/policy.json
On macOS, you may need to replace -i with -i '' when using sed . |
If you want to use AWS’s Cloud Shell (see Setting up your shell environment), you will also need Cloud Shell permissions, which can be easily granted with the AWSCloudShellFullAccess AWS managed policy.
4. Hostname (optional)
AWS will automatically assign a hostname like a6d2f554d845844a69a0aac243289712-4696594a57a75795.elb.us-west-2.amazonaws.com to the load balancer you create for Gradle Enterprise. You can use this hostname to access Gradle Enterprise.
If you want to access Gradle Enterprise by a host name of your choosing (e.g. ge.example.com), you will need the ability to create the necessary DNS record to route this name to the AWS-created hostname.
You can start with AWS’s hostname and reconfigure to use a custom hostname later if desired.
You can use AWS’s Route 53 to register your domain name, and easily route traffic to your Gradle Enterprise instance. |
Setting up your shell environment
You need to use a number of tools to create AWS resources and install Gradle Enterprise. You can either install them locally, or use AWS’s Cloud Shell, which comes with most of the tools you will need preinstalled and preconfigured.
We assume you are using bash as your shell, although any shell that is fairly compatible (e.g. zsh ) should work without much trouble. |
Keep in mind the version requirements from the Gradle Enterprise installation manual. The rest of the requirements in the manual are either not applicable or fulfilled later in this tutorial, but the section is worth a read, especially if you want to customize your infrastructure.
When files are referred to in this tutorial, they are assumed to be on the machine you are running the command on. If you are using Cloud Shell, you will need to upload them to the shell machine. Use the "Upload File" action to do this, as explained in AWS’s guide. |
The only persistent storage in Cloud Shell is $HOME . We provide instructions in the installation steps to move the tools there. |
If you decide to use Cloud Shell, complete 4. Install eksctl and 5. Install helm, then skip to Creating an EKS Cluster.
1. Install the AWS CLI
You will be using the aws command line tool to provision and configure your AWS resources. To install it on your local machine, follow the instructions in the AWS documentation.
2. Configure the AWS CLI
The aws CLI must be configured with an access key to be able to access your AWS account. If you have already configured the CLI, you can skip this step.
If you have an access key already, but have not configured the aws CLI, you can follow the AWS CLI quick setup guide. Choose the region you wish to install Gradle Enterprise in. You should generally pick the region closest to you geographically to ensure the best performance. AWS provides a list of all available EKS regions.
If you do not have an access key, follow the AWS CLI Prerequisites guide, and then the quick setup guide.
3. Install kubectl
AWS hosts their own binaries of kubectl, which you can install by following their guide.
You can also install kubectl through any other means. The Kubernetes documentation lists some of the most popular ways (note that you only need to install kubectl, not any of the other tools listed there).
4. Install eksctl
eksctl is the official CLI tool for creating and managing EKS clusters. While you can use the AWS CLI, eksctl is much easier to use.
To install it, follow AWS’s eksctl installation guide.
In most cases, Linux installation is as simple as downloading the binary and moving it to /usr/local/bin:
$ curl --silent --location \
"https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
| tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
If you are using Cloud Shell, replace the second command above with:
$ mkdir -p ~/.local/bin && mv /tmp/eksctl ~/.local/bin
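Cloud Shell normally includes ~/.local/bin on the PATH. If your shell does not pick up the tool after moving it there, a small guard (a sketch, not specific to eksctl) adds the directory only when it is missing:

```shell
# Add ~/.local/bin to PATH only if it is not already there
# (defensive; Cloud Shell usually includes it by default).
mkdir -p "$HOME/.local/bin"
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;                   # already present, nothing to do
  *) export PATH="$HOME/.local/bin:$PATH" ;;   # prepend it for this session
esac
```

You can add the same snippet to ~/.bashrc so it survives new sessions.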
5. Install helm
You need to install Helm. If you are using Cloud Shell, before installing, run:
$ export HELM_INSTALL_DIR=~/.local/bin
$ sudo yum install openssl -y
If for some reason you skipped the previous step, you will have to create the ~/.local/bin directory. |
If you don’t want to install openssl, you can disable Helm’s installer checksum verification using export VERIFY_CHECKSUM=false . |
To install Helm, run:
$ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
See Helm’s installation documentation for more details, and non-Linux instructions. |
Creating an EKS Cluster
In this section you will create an EKS cluster to run Gradle Enterprise.
If you’re using Cloud Shell, remember to run these commands there. |
1. Create a cluster
You will use a regular cluster for this tutorial, with three m5.large nodes.
Gradle Enterprise does not support Fargate nodes out of the box, because Fargate nodes do not support any storage classes by default. |
Name this cluster gradle-enterprise. To create it, run:
$ eksctl create cluster --name gradle-enterprise
This will take several minutes, and will add the cluster context to kubectl when it is done (note that this will persist across Cloud Shell sessions).
eksctl creates a CloudFormation stack, which you can see in the CloudFormation web UI.
For more details, consult the eksctl getting started guide. |
Once your cluster is up and running, you will be able to see the nodes:
$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-192-168-45-72.us-west-2.compute.internal   Ready    <none>   7m1s    v1.22.9-eks-810597c
ip-192-168-72-77.us-west-2.compute.internal   Ready    <none>   6m58s   v1.22.9-eks-810597c
Gradle Enterprise needs three of these nodes to run well, so we will scale up the node group:
$ eksctl scale nodegroup \
--cluster gradle-enterprise \
--nodes 3 \
--nodes-max 3 \
--name $(aws eks list-nodegroups --cluster-name gradle-enterprise --query 'nodegroups[0]' --output text)
If you scale to something other than 3 , you’ll want to update both --nodes and --nodes-max . The maximum (and minimum) are only used if you have autoscaling configured, but the maximum still must be increased above your target number of nodes. |
2. Install the Nginx ingress controller
For ingress, we will use the Nginx ingress controller, behind an AWS Network Load Balancer (NLB). To install the Nginx ingress controller, run:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/aws/deploy.yaml
For more details on installing the Nginx ingress controller, see the installation guide. |
This manifest will also create the AWS NLB. You can find the hostname of your load balancer by running:
$ kubectl get \
-n ingress-nginx service ingress-nginx-controller \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
You can use AWS’s certificate management service to manage a trusted SSL certificate and provide it to your ingress. To use it, configure the Nginx ingress controller service using AWS’s annotations, and configure Gradle Enterprise for external SSL termination. Note that this requires using a hostname you own or can add DNS records to. |
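As a sketch of what that AWS-side configuration can look like, the ingress-nginx-controller Service would carry annotations along these lines («account-id» and «certificate-id» are placeholders for your own values; consult AWS’s load balancer annotation reference for the authoritative list):

```yaml
# Illustrative fragment for the ingress-nginx-controller Service
# («account-id» and «certificate-id» are placeholders).
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:«account-id»:certificate/«certificate-id»"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
```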
3. Create a high performance storage class
Gradle Enterprise installations that use the embedded database are highly dependent on disk performance. The recommended minimum disk performance for Gradle Enterprise’s data volume is 3,000 IOPS.
Alternatively, Gradle Enterprise can use a user-managed database that is compatible with PostgreSQL 12, 13, or 14, including Amazon RDS and Aurora. For simplicity, this tutorial will stick with the embedded database. However, there are instructions for using Amazon RDS as a user-managed database in an appendix.
Gradle Enterprise can also be configured to store Build Scans in an S3 bucket. This can help performance in high-traffic installations by taking load off the database. See the S3 appendix for details.
Regardless of the above options, you need to choose the disk type for Gradle Enterprise’s data volumes. Consult the Gradle Enterprise AWS storage requirements and AWS’s volume type guide. gp3 is a good starting point, and defaults to 3,000 IOPS, which is our recommended minimum.
To use EBS volumes, we need to install the EBS CSI driver. First, enable OIDC for your cluster and create a service account for the driver to use:
$ eksctl utils associate-iam-oidc-provider --cluster gradle-enterprise --approve
$ eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace kube-system \
--cluster gradle-enterprise \
--attach-policy-arn "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy" \
--approve \
--role-only \
--role-name eksctl-managed-AmazonEKS_EBS_CSI_DriverRole
Then install the driver:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ eksctl create addon \
--name aws-ebs-csi-driver \
--cluster gradle-enterprise \
--force \
--service-account-role-arn "arn:aws:iam::${ACCOUNT_ID}:role/eksctl-managed-AmazonEKS_EBS_CSI_DriverRole"
For more details on installing and managing the EBS CSI driver, see AWS’s documentation. |
To use gp3 volumes, we first need to add a gp3 storage class to our cluster. To do this, apply the following manifest (e.g. by running kubectl apply -f - and pasting it):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
type: gp3
When writing or pasting to a shell’s stdin , use EOF (usually ctrl+d ) to end the input. |
For more details on the options available for EBS volumes using the CSI driver, see the driver’s GitHub project and especially the StorageClass parameters documentation. |
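If you later need more than gp3’s defaults, the EBS CSI driver’s StorageClass parameters also accept explicit IOPS and throughput values. A hedged sketch (the class name and numbers are illustrative, not a recommendation):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3-high-iops        # illustrative name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  iops: "4000"               # above the 3,000 IOPS gp3 default
  throughput: "250"          # MiB/s; the gp3 default is 125 MiB/s
```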
4. Configure the hostname
If you intend to use a custom hostname to access your Gradle Enterprise instance, you now need to add the appropriate DNS records.
Add a CNAME record for your hostname that points to the public hostname of your NLB. You can find this hostname by describing your NLB, as described in 2. Install the Nginx ingress controller.
ge.example.com CNAME a6d2f554d845844a69a0aac243289712-4696594a57a75795.elb.us-west-2.amazonaws.com
You should verify that your DNS record works correctly before installing Gradle Enterprise, such as by using dig ge.example.com.
Alternatively, you can use the hostname generated by AWS. You can find the generated hostname by describing the NLB as shown in 2. Install the Nginx ingress controller.
You can use AWS’s DNS Service, Route 53, to easily route traffic to your NLB by following this guide. |
Installing Gradle Enterprise
In this section you will install Gradle Enterprise on your newly created instance. For full details on installation options, please see the Gradle Enterprise Helm Kubernetes Installation Manual.
1. Prepare a Helm values file
Create a Helm values file named values.yaml as shown below:
global:
hostname: ge.example.com (1)
storage:
data:
class: gp3 (2)
backup:
class: gp2 (3)
logs:
class: gp2 (3)
ingress:
enabled: true
ingressClassName: nginx (4)
1 | Use the hostname you decided on in 4. Configure the hostname or substitute it later as shown below. |
2 | Use the high-performance storage class you created in 3. Create a high performance storage class for data volumes. |
3 | Use a low-performance storage class (gp2 , which is preinstalled) for backup and log volumes. |
4 | Use the Nginx ingress controller you installed in 2. Install the Nginx ingress controller. |
When adding things to your Helm values file, merge any duplicate blocks. Alternatively, you can use separate files and pass all of them with --values «file» when running Helm commands. |
This file configures Gradle Enterprise and its installation. You can find more details on what is configurable in the Gradle Enterprise installation manual’s Helm configuration section.
If you want to use AWS’s autogenerated hostname, you can substitute it in the values.yaml file by running:
$ INGRESS_HOSTNAME=$(
kubectl get \
-n ingress-nginx service ingress-nginx-controller \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
)
$ sed -i "s/ge.example.com/${INGRESS_HOSTNAME}/g" path/to/values.yaml
On macOS, you may need to replace -i with -i '' when using sed . |
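If you want to rehearse the substitution before touching your real file, here is a self-contained sketch using a throwaway file and a made-up hostname (the -i.bak form works on both GNU and BSD sed):

```shell
# Demo of the hostname substitution against a throwaway file.
# The hostname below is made up for illustration.
cat > /tmp/values-demo.yaml <<'EOF'
global:
  hostname: ge.example.com
EOF
INGRESS_HOSTNAME="a6d2f554-demo.elb.us-west-2.amazonaws.com"
sed -i.bak "s/ge.example.com/${INGRESS_HOSTNAME}/g" /tmp/values-demo.yaml
grep "hostname:" /tmp/values-demo.yaml
```

The final grep should print the substituted hostname rather than ge.example.com.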
If you want to provide an SSL certificate instead of having Gradle Enterprise generate a self-signed one, follow the instructions in the installation manual. |
If you want to use Amazon RDS as your database instead of the embedded database, follow Using Amazon RDS as a Gradle Enterprise user-managed database. |
You can use S3 to store Build Scans, which can improve performance in high-traffic instances. To do so, follow Storing Build Scans in Amazon S3. |
2. Install the gradle-enterprise Helm chart
First, add the https://helm.gradle.com/ Helm repository and update it:
$ helm repo add gradle https://helm.gradle.com/
$ helm repo update gradle
Then run helm install with the following command:
$ helm install \
--create-namespace --namespace gradle-enterprise \
ge \
gradle/gradle-enterprise \
--values path/to/values.yaml \(1)
--set-file global.license.file=path/to/gradle-enterprise.license (2)
1 | The Helm values file you created in 1. Prepare a Helm values file. |
2 | The license you obtained in 2. A Gradle Enterprise license. |
You should see output similar to this:
NAME: ge
LAST DEPLOYED: Wed Jul 13 04:08:35 2022
NAMESPACE: gradle-enterprise
STATUS: deployed
REVISION: 1
TEST SUITE: None
3. Wait for Gradle Enterprise to start
You can see the status of Gradle Enterprise starting up by examining its pods.
$ kubectl --namespace gradle-enterprise get pods
NAME                                               READY   STATUS              RESTARTS   AGE
gradle-enterprise-operator-76694c949d-md5dh        1/1     Running             0          39s
gradle-database-65d975cf8-dk7kw                    0/2     Init:0/2            0          39s
gradle-build-cache-node-57b9bdd46d-2txf5           0/1     Init:0/1            0          39s
gradle-proxy-0                                     0/1     ContainerCreating   0          39s
gradle-metrics-cfcd8f7f7-zqds9                     0/1     Running             0          39s
gradle-test-distribution-broker-6fd84c6988-x6jvw   0/1     Init:0/1            0          39s
gradle-keycloak-0                                  0/1     Pending             0          39s
gradle-enterprise-app-0                            0/1     Pending             0          39s
Eventually the pods should all report as Running:
$ kubectl --namespace gradle-enterprise get pods
NAME                                               READY   STATUS    RESTARTS   AGE
gradle-enterprise-operator-76694c949d-md5dh        1/1     Running   0          4m
gradle-proxy-0                                     1/1     Running   0          3m
gradle-database-65d975cf8-dk7kw                    2/2     Running   0          3m
gradle-enterprise-app-0                            1/1     Running   0          3m
gradle-metrics-cfcd8f7f7-zqds9                     1/1     Running   0          3m
gradle-test-distribution-broker-6fd84c6988-x6jvw   1/1     Running   0          3m
gradle-build-cache-node-57b9bdd46d-2txf5           1/1     Running   0          4m
gradle-keycloak-0                                  1/1     Running   0          3m
Once all pods have a status of Running, the system is up and you can interact with it by visiting its URL in a web browser.
You can also visit the URL immediately; you will see a starting screen, which will redirect to a Build Scan list once the app has started.
Gradle Enterprise uses a self-signed SSL certificate by default, so most browsers will warn about an untrusted certificate. To avoid this, use a managed SSL certificate as described in 2. Install the Nginx ingress controller or use a custom trusted SSL certificate as described in 1. Prepare a Helm values file. |
If the pods do not all start correctly, please see the troubleshooting section in the administration manual.
4. Update the system user password
Unless you have configured the system password using unattended configuration, such as when configuring S3 scans storage, Gradle Enterprise will create a system user with elevated permissions and a default password. This is insecure, and you should change the password as soon as possible.
To change the password, simply visit Gradle Enterprise using a web browser and sign in (using the "Sign In" button at the top right of the page) with the system user (username system, password default). You will then be prompted to select a new password for the system user account. You should record the new password and keep it secret.
We recommend not using the system user account regularly. Instead, create real administrator accounts by configuring access control (see the next section).
Using Gradle Enterprise
Many features of Gradle Enterprise, including access control, database backups, and Build Scan retention, can be configured in Gradle Enterprise itself once it is running. The administration manual walks you through the various features you can configure post-installation; the section is worth a read.
For instructions on how to start using Gradle Enterprise in your builds, consult the Getting Started with Gradle Enterprise guide.
Further reading
-
Gradle Enterprise Helm Kubernetes Installation Manual — Full installation description and options for this type of installation.
-
Gradle Enterprise Admin Manual — Admin tasks around Gradle Enterprise.
-
The Gradle Enterprise tutorials — Covering reliability, caching, performance and insights.
-
Using the Build Cache guide — Improving cache performance and fixing common problems.
Appendix A: Using Amazon RDS as a Gradle Enterprise user-managed database
Gradle Enterprise can use a user-managed database instead of using its own embedded database. This can have a number of benefits, including easier resource scaling (and even autoscaling), easier backup and snapshot management, and failover support. For details on the pros and cons of using a user-managed database with Gradle Enterprise, see the database section of the installation manual. This appendix will walk you through using an Amazon RDS PostgreSQL instance as your database.
Obtain the required permissions
You will need permission to create and manage Amazon RDS instances. You will also need to create a security group, but you already have permissions to do that.
The necessary permissions can be easily granted by using the AmazonRDSFullAccess AWS managed policy.
Set up an RDS instance
Before starting, it is a good idea to review Gradle Enterprise’s supported Postgres versions and storage requirements.
1. Decide on a root username and password
Decide on a root username and password for the database instance. We will refer to them as «db-root-username» and «db-root-password», respectively. These are the credentials you will use for your database connection, so save them somewhere secure.
The superuser is only used by Gradle Enterprise to set up the database and create migrator and application users. You can avoid using the superuser from Gradle Enterprise by setting up the database yourself, as described in the database configuration section of Gradle Enterprise’s installation manual. Please contact Gradle support for help with this. |
2. Create a security group
Before creating the database, you have to create a security group in the VPC you want to use. In this tutorial you will use the eksctl-created VPC used by your cluster. You can use a different VPC, but you will need to make the RDS instance accessible from your cluster (e.g. by peering the VPCs).
To create the security group, run:
$ CLUSTER_VPC_ID=$(
aws ec2 describe-vpcs \
--filters Name=tag:aws:cloudformation:stack-name,Values=eksctl-gradle-enterprise-cluster \
--query Vpcs[0].VpcId \
--output text
)
$ aws ec2 create-security-group --group-name ge-db-sg \
--description "Gradle Enterprise DB security group" \
--vpc-id ${CLUSTER_VPC_ID}
3. Enable ingress from your EKS cluster
You can enable ingress to the RDS instance from your cluster for port 5432 by running:
$ CLUSTER_SECURITY_GROUP_ID=$(
aws eks describe-cluster --name gradle-enterprise \
--query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text
)
$ RDS_SECURITY_GROUP_ID=$(
aws ec2 describe-security-groups \
--filters Name=group-name,Values=ge-db-sg \
--query 'SecurityGroups[0].GroupId' --output text
)
$ aws ec2 authorize-security-group-ingress \
--protocol tcp --port 5432 \
--source-group ${CLUSTER_SECURITY_GROUP_ID} \
--group-id ${RDS_SECURITY_GROUP_ID}
4. Create a subnet group
Before creating the database, you need to create a subnet group to specify how the RDS instance will be networked. This subnet group must have subnets in two availability zones, and typically should use private subnets.
eksctl has already created private subnets we can use. Create a subnet group containing them by running:
$ CLUSTER_VPC_ID=$(
aws ec2 describe-vpcs \
--filters Name=tag:aws:cloudformation:stack-name,Values=eksctl-gradle-enterprise-cluster \
--query Vpcs[0].VpcId \
--output text
)
$ SUBNET_IDS=$(
aws ec2 describe-subnets \
--query 'Subnets[?!MapPublicIpOnLaunch].SubnetId' \
--filters Name=vpc-id,Values=${CLUSTER_VPC_ID} --output text
)
$ aws rds create-db-subnet-group --db-subnet-group-name ge-db-subnet-group \
--db-subnet-group-description "Gradle Enterprise DB subnet group" \
--subnet-ids ${SUBNET_IDS}
Consult RDS’s subnet group documentation for more details on subnet groups and their requirements. |
5. Create the RDS instance
Now create the RDS instance:
$ RDS_SECURITY_GROUP_ID=$(
aws ec2 describe-security-groups \
--filters Name=group-name,Values=ge-db-sg \
--query 'SecurityGroups[0].GroupId' --output text
)
$ aws rds create-db-instance \
--engine postgres \
--engine-version 14.3 \
--db-instance-identifier gradle-enterprise-database \
--db-name gradle_enterprise \
--allocated-storage 250 \(1)
--iops 3000 \(2)
--db-instance-class db.m5.large \
--db-subnet-group-name ge-db-subnet-group \
--backup-retention-period 3 \(3)
--no-publicly-accessible \
--vpc-security-group-ids ${RDS_SECURITY_GROUP_ID} \
--master-username «db-root-username» \
--master-user-password «db-root-password»
1 | Gradle Enterprise should be installed with 250GB of database storage to start with. |
2 | As discussed in 3. Create a high performance storage class, Gradle Enterprise’s data volumes and database should support at least 3,000 IOPS. |
3 | The backup retention period, in days. |
While you don’t configure it here, RDS supports storage autoscaling. |
Consult AWS’s database creation guide and the CLI command reference for more details on RDS instance creation. |
You can then view the status of your instance with:
$ aws rds describe-db-instances --db-instance-identifier gradle-enterprise-database
Wait until the DBInstanceStatus is available (you can block until then with aws rds wait db-instance-available --db-instance-identifier gradle-enterprise-database). You should then see the hostname of the instance under Endpoint. This is the address you will use to connect to the instance. We will refer to it as «database-address».
Configure Gradle Enterprise to use your RDS instance
Add the following configuration snippet to your Helm values file:
database:
location: user-managed
connection:
host: «database-address»
databaseName: gradle_enterprise
credentials:
superuser:
username: «db-root-username»
password: «db-root-password»
When adding things to your Helm values file, merge any duplicate blocks. Alternatively, you can use separate files and pass all of them with --values «file» when running Helm commands. |
You can substitute «database-address» in the Helm values file by running (verbatim):
$ DATABASE_ADDRESS=$(
aws rds describe-db-instances \
--db-instance-identifier gradle-enterprise-database \
--query 'DBInstances[0].Endpoint.Address' \
--output text
)
$ sed -i "s/«database-address»/${DATABASE_ADDRESS}/g" path/to/values.yaml
The superuser is only used to set up the database and create migrator and application users. You can avoid using the superuser by setting up the database yourself, as described in the database configuration section of Gradle Enterprise’s installation manual. Please contact Gradle support for help with this. |
This embeds your database superuser credentials in your Helm values file, meaning it must be kept secure. If you prefer to provide the credentials as a Kubernetes secret, consult Gradle Enterprise’s database configuration instructions. |
While we recommend completing this appendix before installing the Gradle Enterprise Helm chart, it is possible to do it afterwards and then update the Helm release. To do this, follow the instructions in the installation manual. |
Switching to a user-managed database after installing Gradle Enterprise will result in the loss of any data stored prior to the switch. This may not be an issue for new installations. If it is, follow the user-managed database migration guide. |
Appendix B: Storing Build Scans in Amazon S3
Gradle Enterprise can be configured to store most Build Scan data in an S3 bucket rather than its database. This can have a number of benefits, including better performance for very high-traffic instances, high scalability and fault tolerance, and cheaper storage costs. For details on the pros and cons of using S3 Build Scan storage with Gradle Enterprise, see the S3 storage section of the administration manual. This appendix will walk you through using an Amazon S3 bucket to store Build Scans.
Gradle Enterprise will still use its database to store other information. However, the size and performance requirements of the database will be much smaller. |
Obtain the required permissions
You will need permission to create and manage Amazon S3 buckets. You will also need to create IAM policies and roles, but you already have permission to do that from the eksctl policies.
The necessary permissions can be easily granted by using the AmazonS3FullAccess AWS managed policy.
Set up an S3 bucket and allow access from your EKS cluster
You need to create an S3 bucket and create an IAM policy that allows access to it. Then you need to associate that policy with a Kubernetes service account for Gradle Enterprise.
1. Create an S3 bucket
To create the S3 bucket, run:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ aws s3 mb s3://gradle-enterprise-build-scans-${ACCOUNT_ID} (1)
1 | S3 bucket names must be unique across all AWS accounts, within groups of regions. To comply with this, we use your account ID as a suffix. If you have multiple installations of Gradle Enterprise you want to use S3 storage with, either add a suffix or use the same bucket with a different scans object prefix. |
2. Create a policy allowing bucket access
To create a policy allowing access to your bucket, first create a policy.json file with the following content:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::gradle-enterprise-build-scans-«account-id»"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:AbortMultipartUpload"
],
"Resource": [
"arn:aws:s3:::gradle-enterprise-build-scans-«account-id»/*"
]
}
]
}
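Before creating the IAM policy, you can sanity-check that the file you wrote is valid JSON. This optional step is not part of the original instructions; it simply uses Python’s stdlib JSON parser, which exits non-zero on malformed input:

```shell
# Optional: validate policy.json before handing it to aws iam create-policy.
python3 -m json.tool policy.json > /dev/null \
  && echo "policy.json is valid JSON" \
  || echo "policy.json is malformed"
```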
Then run the following commands:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ sed -i "s/«account-id»/${ACCOUNT_ID}/g" ./policy.json
$ aws iam create-policy \
--policy-name "eksctl-gradle-enterprise-build-scan-access" \(1)
--policy-document file://policy.json(2)
1 | Even though we aren’t using eksctl to create this policy, using the eksctl- prefix allows us to avoid needing additional permissions. |
2 | The policy.json file you created. |
3. Create a role for EKS
To associate the service account with an AWS IAM role, we need an AWS OIDC provider. We already set one up when installing the EBS CSI driver, so we can reuse it here.
To create a role that can be used from EKS and uses the policy you just created, run:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ POLICY_ARN="arn:aws:iam::${ACCOUNT_ID}:policy/eksctl-gradle-enterprise-build-scan-access"
$ eksctl create iamserviceaccount \
--name gradle-enterprise-app \
--namespace gradle-enterprise \
--cluster gradle-enterprise \
--approve \
--role-only \
--role-name eksctl-managed-GradleEnterprise_BuildScans_S3_Role \
--attach-policy-arn ${POLICY_ARN}
Update your Helm values file
You need to configure Gradle Enterprise to use the role you created. You also need to increase Gradle Enterprise’s memory request and limit. These are both done by adding the following to your Helm values file:
enterprise:
resources:
requests:
memory: 6Gi (1)
limits:
memory: 6Gi (1)
serviceAccount:
annotations:
"eks.amazonaws.com/role-arn": "arn:aws:iam::«account-id»:role/eksctl-managed-GradleEnterprise_BuildScans_S3_Role" (2)
1 | If you have already set a custom value here, instead increase it by 2Gi . |
2 | «account-id» is the ID of your AWS account, which you will substitute in a moment. |
When adding things to your Helm values.yaml file, merge any duplicate blocks. Alternatively, you can use separate files and pass all of them with --values «file» when running Helm commands. |
Then substitute «account-id» in the Helm values file by running (verbatim):
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ sed -i "s/«account-id»/${ACCOUNT_ID}/g" path/to/values.yaml
You may need to scale up your cluster or use nodes with more memory to be able to satisfy the increased memory requirements. See 1. Create a cluster for scaling instructions. |
While we recommend completing this appendix before installing the Gradle Enterprise Helm chart, it is possible to do it afterwards and then update the Helm release. To do this, follow the instructions in the installation manual. |
Configure Gradle Enterprise to store Build Scans in your S3 bucket
Now that we have created an S3 bucket and allowed Gradle Enterprise to access it, we need to configure Gradle Enterprise to actually use it. There are two ways to do this: using Gradle Enterprise’s web UI or using the unattended configuration mechanism. While using the web UI is often easier, it requires starting your Gradle Enterprise instance before configuring Build Scan storage. The unattended configuration mechanism lets you configure it as part of your Helm values file. We provide instructions for both methods.
Using the web UI
To configure Gradle Enterprise to use your bucket using the web UI, follow the instructions in the administration manual with the following configuration:
- Bucket: gradle-enterprise-build-scans-«account-id». «account-id» can be found by running aws sts get-caller-identity --query Account --output text.
- Region: your current region. Viewable by running aws configure list | grep region.
- S3 credentials: Obtain from environment
To print the actual name of your bucket, run:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ echo "gradle-enterprise-build-scans-${ACCOUNT_ID}"
Using unattended configuration
Before using the unattended configuration mechanism, you should read the relevant section of the administration manual.
1. Choose and hash system password
First, you need to choose a system password and hash it. To do this, install the Gradle Enterprise Admin CLI. Then run:
$ gradle-enterprise-admin config-file hash -o secret.txt -s -
to hash your password from stdin and write it to secret.txt. We will refer to the hashed password as «hashed-system-password».
2. Modify your Helm values file
To use your S3 bucket, add the following to your Helm values file:
global:
unattended:
configuration:
version: 5
systemPassword: "«hashed-system-password»" (1)
buildScans:
storage:
incomingStorageType: s3
s3:
bucket: gradle-enterprise-build-scans-«account-id» (2)
region: «region» (3)
credentials:
source: environment
advanced:
app:
heapMemory: 5632 (4)
1 | Your hashed system password. |
2 | Your account ID, which we will substitute in below. |
3 | The region of your S3 bucket, which should be your current region. Viewable by running aws configure list | grep region . |
4 | If you have already set a custom value here, instead increase it by 2048 . |
When adding things to your Helm values file, merge any duplicate blocks. Alternatively, you can use separate files and pass all of them with --values «file» when running Helm commands. |
Then substitute «account-id» in the Helm values file by running (verbatim):
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ sed -i "s/«account-id»/${ACCOUNT_ID}/g" path/to/values.yaml
While we recommend completing this appendix before installing the Gradle Enterprise Helm chart, it is possible to do it afterwards and then update the Helm release. To do this, follow the instructions in the installation manual. |
Verifying S3 storage is used
Gradle Enterprise will start even if your S3 configuration is incorrect. Once Gradle Enterprise has started, you can verify that the S3 configuration is correct by using the "Test S3 Connection" button on the /admin/build-scans page, or by uploading a scan and then checking for its presence in your S3 bucket.
To view the build scans stored in your S3 bucket, run:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ aws s3 ls s3://gradle-enterprise-build-scans-${ACCOUNT_ID}/build-scans/ \(1)
--recursive --human-readable --summarize
2022-09-27 19:11:06    6.6 KiB build-scans/2022/09/27/aprvi3bnnxyzm

Total Objects: 1
   Total Size: 6.6 KiB
1 | If you used a custom prefix, use it here instead of build-scans . |
You won’t see anything at first, but once you upload a build scan (see Using Gradle Enterprise), you should see it there.
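As an additional cross-check, the trailing path segment of each listed key appears to correspond to the Build Scan ID shown in the scan’s URL. Here is a small local demo of extracting it from an `aws s3 ls` output line (no AWS access required; the sample line mirrors the listing output shown earlier):

```shell
# Sample line in the format printed by `aws s3 ls --recursive --human-readable`.
LINE="2022-09-27 19:11:06    6.6 KiB build-scans/2022/09/27/aprvi3bnnxyzm"
# The S3 key is the last whitespace-separated field; its basename is the scan ID.
SCAN_ID=$(basename "$(echo "${LINE}" | awk '{print $NF}')")
echo "${SCAN_ID}"
# prints: aprvi3bnnxyzm
```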
Appendix C: Teardown and Cleanup
This appendix will walk you through tearing down Gradle Enterprise and deleting the resources created by following this tutorial. To start, uninstall the Gradle Enterprise Helm chart:
$ helm uninstall --namespace gradle-enterprise ge
If you have other resources, like a user-managed database or S3 Build Scan storage, you should delete them now. RDS and S3 teardown instructions are in their respective sections, below.
If you are using other resources in the cluster’s VPC (such as an RDS instance), eksctl will fail to delete the VPC unless you delete those resources first. If this happens, the VPC and CloudFormation stack can be manually deleted. However, it’s generally easier to delete those resources first. |
Then you can delete your cluster by running:
$ eksctl delete cluster --name gradle-enterprise
RDS
If you followed Using Amazon RDS as a Gradle Enterprise user-managed database, you have some additional cleanup to do.
To delete the RDS instance, run:
Deleting an RDS instance also deletes any automated backups of its database. However, by default, the deletion command will create a final snapshot of the database. |
$ aws rds delete-db-instance \
--db-instance-identifier gradle-enterprise-database \
--final-db-snapshot-identifier gradle-enterprise-db-snapshot
The delete-db-instance command requires that your instance is running so that it can create a final snapshot. You can skip the final snapshot and avoid this requirement by passing --skip-final-snapshot instead of --final-db-snapshot-identifier gradle-enterprise-db-snapshot . |
The command will complete immediately, but deletion will likely take some time.
For more details on RDS instance deletion, consult AWS’s guide. |
To also delete the security group, run:
$ RDS_SECURITY_GROUP_ID=$(
aws ec2 describe-security-groups \
--filters Name=group-name,Values=ge-db-sg \
--query 'SecurityGroups[0].GroupId' --output text
)
$ aws ec2 delete-security-group --group-id ${RDS_SECURITY_GROUP_ID}
This will fail until deletion of the database instance has finished. |
And for the subnet group, run:
$ aws rds delete-db-subnet-group --db-subnet-group-name ge-db-subnet-group
This will fail until deletion of the database instance has finished. |
S3
If you followed Storing Build Scans in Amazon S3, you have some additional cleanup to do.
Deleting your S3 bucket will delete all stored Build Scans. Gradle Enterprise features that rely on historical analysis (e.g. test analytics, predictive test selection) will fail or will provide less useful information. |
To delete your S3 bucket, run:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ aws s3 rb s3://gradle-enterprise-build-scans-${ACCOUNT_ID} --force
The role you created will be deleted when you delete the cluster. Once the cluster is deleted, you can also delete your IAM policy by running the following commands:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ POLICY_ARN="arn:aws:iam::${ACCOUNT_ID}:policy/eksctl-gradle-enterprise-build-scan-access" (1)
$ aws iam delete-policy --policy-arn ${POLICY_ARN}
1 | The ARN of the policy you created. |