Thursday, 14 June 2018

Amazon EKS - Kubernetes on AWS


By Komal Devgaonkar

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a highly available and scalable AWS service, now available in the US East and US West regions. EKS creates a highly available Kubernetes control plane that runs an etcd cluster across multiple Availability Zones. Amazon EKS simplifies the process of building, operating, securing, and maintaining Kubernetes clusters.

How Does Amazon EKS Work?


 
Following are the steps to set up a cluster:

1. Create the Amazon EKS service role.
2. Create the Amazon EKS cluster VPC.
3. Create the EKS cluster.
4. Install and configure kubectl for EKS.
5. Install heptio-authenticator-aws for EKS.
6. Launch and configure EKS worker nodes.
7. Enable worker nodes to join your cluster.

Step 1: Create Amazon EKS service role

1. Open the IAM console at https://console.aws.amazon.com/iam/.

2. Choose Roles, then Create role.

3. Choose EKS from the list of services, then Next: Permissions.

4. Choose Next: Review.

5. For Role name, enter a unique name for your role, such as eks-service-role, then choose Create role.
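
The same role can also be created with the AWS CLI. The sketch below assumes working AWS credentials; the role name eks-service-role matches the example above, and the two managed EKS policies are the ones the console wizard attaches:

```shell
# Trust policy that lets the EKS service assume this role.
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Only attempt the IAM calls when the CLI is present and credentials work.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws iam create-role \
    --role-name eks-service-role \
    --assume-role-policy-document file://eks-trust-policy.json
  aws iam attach-role-policy --role-name eks-service-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
  aws iam attach-role-policy --role-name eks-service-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy
fi
```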


Step 2: Create Amazon EKS cluster VPC
1. Open the AWS CloudFormation console at LINK.

Note: Amazon EKS is available in the following Regions at this time: US West (Oregon) (us-west-2) and US East (N. Virginia) (us-east-1).

2. Choose Create stack.

3. For Choose a template, select Specify an Amazon S3 template URL.

4. Paste the following URL into the text area and choose Next:
 https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-vpc-sample.yaml

5. On the Specify Details page, fill out the parameters accordingly, and then choose Next.

6. (Optional) On the Options page, tag your stack resources. Choose Next.

7. On the Review page, choose Create.

8. When your stack is created, record the VpcId, SubnetIds, and SecurityGroups values from the stack outputs. You need these when you create your EKS cluster.
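
The same stack can be launched from the AWS CLI. A sketch, with eks-vpc as an example stack name and the sample template URL from step 4 above; it assumes working AWS credentials:

```shell
TEMPLATE_URL=https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-vpc-sample.yaml

# Requires working AWS credentials; skipped otherwise.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws cloudformation create-stack \
    --region us-west-2 \
    --stack-name eks-vpc \
    --template-url "$TEMPLATE_URL"

  # Once the stack is complete, the outputs hold the VpcId, SubnetIds,
  # and SecurityGroups values needed in step 3.
  aws cloudformation describe-stacks \
    --region us-west-2 \
    --stack-name eks-vpc \
    --query 'Stacks[0].Outputs'
fi
```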

Step 3:  Create EKS cluster

Note: Log in to the console as an IAM user. Do not use root credentials to create the cluster.
1. Open the Amazon EKS console at LINK.

2. Choose Create cluster.
3. On the Create cluster page, fill in the following fields and then choose Create:
·         Cluster Name: A unique name for your cluster.
·         Role ARN: Select the IAM role that you created in step 1.
·         VPC: Select the VPC that you created in step 2.
·         Subnets: Select the SubnetId values recorded in step 2.
·         Security Groups: Select the security group value recorded in step 2.
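
The console fields above map onto a single AWS CLI call. A sketch with placeholder IDs; substitute the values you recorded in step 2, and note that it needs working AWS credentials:

```shell
# Placeholder values; replace with the outputs recorded in step 2.
SUBNET_IDS=subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333
SECURITY_GROUP=sg-dddd4444
ROLE_ARN=arn:aws:iam::111122223333:role/eks-service-role

# Requires working AWS credentials; skipped otherwise.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws eks create-cluster \
    --region us-west-2 \
    --name cluster1 \
    --role-arn "$ROLE_ARN" \
    --resources-vpc-config subnetIds=$SUBNET_IDS,securityGroupIds=$SECURITY_GROUP

  # Cluster creation takes several minutes; wait for the status to be ACTIVE.
  aws eks describe-cluster --region us-west-2 --name cluster1 \
    --query cluster.status
fi
```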
           

Step 4: Install and configure kubectl for EKS

1. Download the Amazon EKS-vended kubectl binary from Amazon S3:
    Linux: (Link)

2. Apply execute permissions to the binary:
    # chmod +x ./kubectl

3. Copy the binary to a folder in your $PATH:
    # cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH

4. After you install kubectl, verify its version with the following command:
    # kubectl version --short --client

5. Create the default kubectl folder if it does not already exist:
    # mkdir -p ~/.kube

6. Open your favorite text editor and copy the following kubeconfig into it:

apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "<cluster-name>"

7. Replace <endpoint-url> with your cluster's endpoint URL, <base64-encoded-ca-cert> with the certificate authority data, and <cluster-name> with your cluster name.

8. Save the file to the default kubectl folder, with your cluster name in the file name. For example, if your cluster name is cluster1, save the file to ~/.kube/config-cluster1.

9. Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look for your cluster configuration:
    # export KUBECONFIG=$KUBECONFIG:~/.kube/config-cluster1

10. Test your configuration:
    # kubectl get svc
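
Rather than copying them from the console, the two placeholder values in the kubeconfig can be fetched with the AWS CLI. A sketch, assuming the example cluster name cluster1 and working AWS credentials:

```shell
CLUSTER=cluster1

# Requires working AWS credentials; skipped otherwise.
if aws sts get-caller-identity >/dev/null 2>&1; then
  # Value for <endpoint-url>:
  aws eks describe-cluster --region us-west-2 --name "$CLUSTER" \
    --query cluster.endpoint --output text
  # Value for <base64-encoded-ca-cert>:
  aws eks describe-cluster --region us-west-2 --name "$CLUSTER" \
    --query cluster.certificateAuthority.data --output text
fi
```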
           

Step 5: Install heptio-authenticator-aws for EKS

1. Download the Amazon EKS-vended heptio-authenticator-aws binary from Amazon S3:
    Linux: (Link)

2. Apply execute permissions to the binary:
    # chmod +x ./heptio-authenticator-aws

3. Copy the binary to a folder in your $PATH:
    # cp ./heptio-authenticator-aws $HOME/bin/heptio-authenticator-aws && export PATH=$HOME/bin:$PATH

4. Test that the heptio-authenticator-aws binary works:
    # heptio-authenticator-aws help

Step 6: Launch and configure EKS worker nodes

 To launch your worker nodes:
1. Open the AWS CloudFormation console (Link)

2. Select the region: US West (Oregon) (us-west-2) or US East (N. Virginia) (us-east-1).

3. Choose Create stack.

4. For Choose a template, select Specify an Amazon S3 template URL.
    URL: (Link)
            
5. On the Specify Details page, fill out the parameters accordingly, and choose Next.

6. On the Options page, you can choose to tag your stack resources. Choose Next.
            
7. Choose Create.

8. Record the NodeInstanceRole for the node group that was created.
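
The NodeInstanceRole output can also be read back with the CLI. A sketch; eks-worker-nodes is an example stack name, and the call assumes working AWS credentials:

```shell
STACK=eks-worker-nodes  # example name for the worker-node stack

# Requires working AWS credentials; skipped otherwise.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws cloudformation describe-stacks \
    --region us-west-2 \
    --stack-name "$STACK" \
    --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" \
    --output text
fi
```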

Step 7: Enable worker nodes to join your cluster

1. Download the config map:
    # curl -O (Link)

2. Open the file with any text editor. Replace <ARN of instance role (not instance profile)> with the NodeInstanceRole value recorded in step 6, and save the file.


apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
           

3. Apply the configuration:
    # kubectl apply -f aws-auth-cm.yaml

4. Check the status of your nodes with the following command:
    # kubectl get nodes
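
The placeholder substitution in step 2 can be scripted with sed instead of a text editor. A sketch; the ARN below is a made-up example, and the file is only touched if the download from step 1 is present:

```shell
# Example ARN; use the NodeInstanceRole value recorded in step 6.
NODE_ROLE_ARN=arn:aws:iam::111122223333:role/eks-worker-NodeInstanceRole-EXAMPLE

# Substitute the placeholder in the downloaded config map (if present),
# keeping a .bak copy of the original, then apply it.
if [ -f aws-auth-cm.yaml ]; then
  sed -i.bak \
    "s|<ARN of instance role (not instance profile)>|$NODE_ROLE_ARN|" \
    aws-auth-cm.yaml
  kubectl apply -f aws-auth-cm.yaml
fi
```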
   



