Kubernetes is contagious and nowadays hard to ignore, so we decided to look into EKS to see how it would work for our microservice suite. As of today, there are two official ways of creating an EKS cluster: eksctl via the CLI, or point-and-click through the web UI.
We have multiple accounts and use services in multiple regions, so I developed a custom CloudFormation template to build our EKS cluster with StackSets. While this worked perfectly, I ran into an issue: I could not access the cluster at all.
It turned out to be perfectly normal: when an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM entity (the one that initiated the cluster creation) can make calls to the Kubernetes API server using kubectl.
About StackSets
The basic idea of StackSets is that you create a role AWSCloudFormationStackSetAdministrationRole on your MASTER account (where the StackSet is launched from) and a role AWSCloudFormationStackSetExecutionRole on your TARGET account(s), where StackSets will manage CloudFormation stacks for you.
Assume role setup
Since the cluster was created through StackSets, the initial administrator is the AWSCloudFormationStackSetExecutionRole on the target account. So in this case, you need to make sure that your IAM user entity can assume (temporarily) the AWSCloudFormationStackSetExecutionRole. This can be done in multiple ways, but the easiest is to create a TEMP group, attach an inline policy to it and, finally, make your IAM user a member of this TEMP group.
The inline policy for your TEMP group:
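Something along these lines should work, assuming the default execution role name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AssumeStackSetExecutionRole",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<123456789100>:role/AWSCloudFormationStackSetExecutionRole"
    }
  ]
}
```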
The <123456789100> number is YOUR target account ID, where the EKS cluster is created.
Create a TEMP AWS config entry:
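In your ~/.aws/config, something like this (the profile name temp is just an example, source_profile should be your normal profile, and the region is wherever your cluster lives):

```ini
[profile temp]
role_arn = arn:aws:iam::<123456789100>:role/AWSCloudFormationStackSetExecutionRole
source_profile = default
region = eu-west-1
```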
Test your assume role before proceeding:
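A quick check is enough; the returned ARN should show the assumed AWSCloudFormationStackSetExecutionRole rather than your own user:

```sh
aws sts get-caller-identity --profile temp
```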
Update Kubernetes RBAC
Get the ConfigMap template from AWS:
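The exact template URL changes between EKS releases, so grab the current one from the Amazon EKS user guide; at the time of writing it looked like this:

```sh
curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml
```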
Edit it:
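A sketch of the edited file; the node InstanceRole ARN and the Administrator role are placeholders for your own values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # worker nodes: lets them register with the cluster
    - rolearn: arn:aws:iam::<123456789100>:role/<NodeInstanceRole>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    # admin access for the Administrator role
    - rolearn: arn:aws:iam::<123456789100>:role/Administrator
      username: admin
      groups:
        - system:masters
```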
The first element of the mapRoles is required for the worker nodes, so they can join the cluster automatically. For this, you need the ARN of the InstanceRole that is attached to your nodes; normally it is generated on the fly when the stack is created.
The second element holds our required admin access details. In my case it is a special role, Administrator, that only certain users can assume, but you could also map an IAM user ARN (via a mapUsers section).
When ready, get a copy of your cluster config with your new TEMP profile, then update the cluster:
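For example (the cluster name is a placeholder):

```sh
aws eks update-kubeconfig --name <cluster-name> --profile temp
kubectl apply -f aws-auth-cm.yaml
```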
At completion, delete the kube config you got with your TEMP profile and get a new one with your normal AWS profile. Alternatively, edit the config and update the AWS_PROFILE value at the bottom…
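Something like:

```sh
rm ~/.kube/config  # assuming this cluster is the only context in there
aws eks update-kubeconfig --name <cluster-name>
```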
From this point on, your normal IAM user profile should be able to access the cluster:
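For example:

```sh
kubectl get svc
kubectl get nodes
```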
Clean up
Do not forget to remove the TEMP group and the TEMP AWS config profile when you are done.