Creating an EKS cluster
After you have installed your system and configured your AWS account, you can create an EKS cluster using the AWS Management Console.
Step 1: Log in to the AWS Management Console
Log in with your account credentials at https://aws.amazon.com/console/ to get to the management console. Enter EKS in the search bar and select Elastic Kubernetes Service from the search results.
Step 2: Obtain the bootstrap kit
You install Hybrid Manager (HM) using a bootstrap kit. This kit is a collection of scripts and Helm charts that are used to install the EDB Software Deployment platform.
You need the following files:
Copy the file to a directory on your local machine and cd into that directory.
Then, using the AWS Management Console:
Select the AWS region you want to create the cluster in.
Make a note of the region you select. You will need this later in the process.
Go to the EKS page. Enter EKS in the search bar and select Elastic Kubernetes Service from the search results. On the EKS page, select Create Cluster.
Step 3: Configure the cluster
In the configuration options:
Select Quick configuration (with EKS Auto Mode) - new.
In the Cluster name field, enter a name for your cluster and make a note of it.
In the Kubernetes version field, select the version you want to use. (The default is 1.31 for EKS Auto Mode.)
In the Cluster IAM Role field, leave the value AmazonEKSAutoClusterRole.
In the Node IAM Role field, leave the value AmazonEKSAutoNodeRole.
Select or create a VPC for your cluster
You need to provide a VPC for your cluster. You can either create a VPC or use an existing one.
Select Create VPC.
On the Create VPC page, under Resources to create, select VPC and more.
Enter the following information:
Under Name tag auto-generation, enter a name for your VPC.
Under Number of Availability Zones, select 3.
Under Number of public subnets, ensure that 3 is selected.
Under Number of private subnets, ensure that 3 is selected.
Under NAT gateways, select In 1 AZ.
Select Create VPC. The Create VPC workflow page appears and steps through creating the VPC. When complete, select View VPC.
In the view of the VPC, for each public subnet in the resource map:
Hover over the subnet and select the square box with arrow icon that appears. A page opens where you can edit the subnet.
Select Actions > Edit subnet settings.
Select Auto-assign public IPv4 address > Enable auto-assign public IPv4 address.
Select Save.
Add tags for each public subnet:
- Select the Tags tab.
- Select Manage Tags.
- Select Add Tag.
- Enter kubernetes.io/role/elb in the Key field and 1 in the Value field.
- Select Save.
- Close the tab or window and return to the VPC view.
Add tags for each private subnet in the resource map:
- Hover over the subnet and select the square box with arrow icon that appears.
- Select the Tags tab.
- Select Manage Tags.
- Select Add Tag.
- Enter kubernetes.io/role/internal-elb in the Key field and 1 in the Value field.
- Select Save.
- Close the tab or window and return to the VPC view.
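If you prefer to apply these tags from the terminal, here's a minimal sketch using the AWS CLI (substitute your own subnet IDs):

aws ec2 create-tags --resources <PUBLIC-SUBNET-ID> --tags Key=kubernetes.io/role/elb,Value=1
aws ec2 create-tags --resources <PRIVATE-SUBNET-ID> --tags Key=kubernetes.io/role/internal-elb,Value=1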
If you have an existing VPC you want to use, select it from the dropdown.
The VPC must have at least two public subnets and two private subnets in different availability zones, and the subnets must have the following tags:
- Public subnets: kubernetes.io/role/elb: 1
- Private subnets: kubernetes.io/role/internal-elb: 1
Return to the EKS cluster creation page
If you created a new VPC, you need to refresh the EKS cluster creation page. To refresh the dropdown so you can select the VPC you just created, select the circular arrow icon next to Create VPC.
Select the subnets for your cluster. You must select at least two private subnets in different availability zones.
Note
These subnets must be private, not public, as their purpose is to facilitate communication between the Kubernetes control plane and the Kubernetes worker nodes. Public subnets could leave the cluster internals open to unnecessary security threats. The subnet tagging informs EKS to create the ingress load balancers on the correct subnets.
Select Create.
The cluster is now being created. This can take a few minutes. You can monitor the creation on the EKS console by selecting Clusters in the breadcrumb at the top of the page.
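You can also watch the creation from the terminal with the AWS CLI; the status reads CREATING until the control plane is ready, then ACTIVE:

aws eks describe-cluster --name <CLUSTER-NAME> --query "cluster.status" --output text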
Step 4: Configure kubectl
After you create the cluster, you need to configure kubectl to interact with it.
Open your terminal and run:
aws eks --region <REGION> update-kubeconfig --name <CLUSTER-NAME>
Where:
- <REGION> is the region in which you created the cluster.
- <CLUSTER-NAME> is the name you gave the cluster.
Note
If you haven't set the AWS_PROFILE environment variable, include the --profile <profile-name> option in the command.
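For example, for a hypothetical cluster named hm-cluster created in us-east-1:

aws eks --region us-east-1 update-kubeconfig --name hm-cluster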
You can now interact with your EKS cluster. To test, run:
kubectl get pods -A
Step 5: Create an S3 bucket and connect it to the cluster
HM requires an S3 bucket to store backups and other data. Create an S3 bucket and connect it to the cluster.
Create an S3 bucket
Open the S3 service in the AWS Management Console. Enter S3 in the search bar and select S3 from the search results. Select Create bucket.
In the Bucket name field, enter a name for the bucket. The name must be globally unique.
Select Create bucket.
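Alternatively, you can create the bucket with the AWS CLI. A minimal sketch; note that regions other than us-east-1 require an explicit location constraint:

aws s3api create-bucket --bucket <BUCKET-NAME> --region <REGION> \
  --create-bucket-configuration LocationConstraint=<REGION>

Omit the --create-bucket-configuration option if <REGION> is us-east-1.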
Create the role for access
Return to the EKS dashboard. Enter EKS in the search bar and select Elastic Kubernetes Service from the search results. Select the cluster you created.
In the Details section of the overview, copy the OpenID Connect provider URL.
Open the IAM service in the AWS Management Console. Enter IAM in the search bar and select IAM from the search results. In the left menu, select Identity Providers.
Select Add Provider.
On the Configure Provider page, select OpenID Connect, then paste the URL you copied into the Provider URL field.
In the Audience field, enter sts.amazonaws.com. Select Add Provider.
A green banner displays the message Provider created successfully, and an Assign Role button appears. Select Assign Role.
Under Role Options, select Create a New Role. Select Next.
Select a trusted entity:
On the Select trusted entity page, select Web identity.
In the Web identity section, the Identity Provider field shows the provider you just created.
Select sts.amazonaws.com in the Audience field and select Next.
Grant permissions to the new role:
On the Add Permissions page, enter s3full in the search field. Select the AmazonS3FullAccess policy and select Next.
Enter a role name, such as <clustername>-s3access, and select Create Role.
A green banner displays the message Role created successfully, and a View Role button appears. Select View Role.
In the view of the role, copy the role's ARN value.
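If you prefer the terminal, the OpenID Connect provider URL you copied at the start of this procedure can also be retrieved with the AWS CLI:

aws eks describe-cluster --name <CLUSTER-NAME> --query "cluster.identity.oidc.issuer" --output text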
Create the secret
You must create the secret in the default namespace (-n default) for it to function correctly.
In your terminal, run:
kubectl create secret generic -n default edb-object-storage \
  --from-literal auth_type=workloadIdentity \
  --from-literal aws_region=<REGION> \
  --from-literal aws_role_arn=<ROLE-ARN> \
  --from-literal bucket_name=<BUCKET-NAME>
Where:
- <REGION> is the region in which you created the cluster.
- <ROLE-ARN> is the ARN of the role you created.
- <BUCKET-NAME> is the name of the bucket you created.
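To confirm that the secret exists in the default namespace, run:

kubectl get secret edb-object-storage -n default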
Step 6: Add the EBS CSI driver and CSI snapshot controller
HM requires two CSI-related add-ons to manage snapshots correctly: the EBS CSI driver and the CSI snapshot controller.
AWS EBS CSI driver
The EBS CSI driver allows Kubernetes clusters to dynamically provision and manage storage using Amazon Elastic Block Store (EBS) volumes. The integration helps Kubernetes workloads seamlessly use EBS as persistent storage.
To install the EBS CSI driver:
On the page for the EKS cluster you're installing HM on, select the Add-ons tab.
Select Get More Addons.
On the Add-ons page, on the tile for EBS CSI Driver, select the check box, then select Next.
On the Configure page, select Next.
On the Review page, select Create.
The EBS CSI add-on driver is added to your cluster.
AWS CSI snapshot controller
The CSI snapshot controller is a Kubernetes controller that manages volume snapshots.
To install the CSI snapshot controller:
On the page for the EKS cluster you're installing HM on, select the Add-ons tab.
Select Get More Addons.
On the Add-ons page, on the tile for CSI Snapshot Controller, select the check box, then select Next.
On the Configure page, select Next.
On the Review page, select Create.
The CSI snapshot controller add-on is added to your cluster.
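Both add-ons can also be installed from the terminal. A sketch using the AWS CLI and the EKS add-on names aws-ebs-csi-driver and snapshot-controller:

aws eks create-addon --cluster-name <CLUSTER-NAME> --addon-name aws-ebs-csi-driver
aws eks create-addon --cluster-name <CLUSTER-NAME> --addon-name snapshot-controller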
Step 7: Update the cluster for compatibility
The EKS cluster created by the AWS Management Console isn't fully compatible with HM, so you need to make some changes to the cluster. A script is available to make those changes for you.
Download the Helm eks-fix-auto.sh script or the Operator eks-fix-auto.sh script and run it in your terminal:
$SHELL eks-fix-auto.sh
The script replaces the default nodepool and storage classes with ones more appropriate for the workload and configures a few other settings.
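To inspect the result, you can list the storage classes and node pools the script configured (assuming EKS Auto Mode's NodePool custom resource, which backs Auto Mode node management):

kubectl get storageclass
kubectl get nodepools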
Step 8: Install Hybrid Manager
With the cluster created and configured, you're ready to install Hybrid Manager. Continue to Install Hybrid Manager on EKS.