Thursday, 14 June 2018

Amazon EKS - Kubernetes on AWS


By Komal Devgaonkar

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a highly available and scalable AWS service, now available in the US East and US West regions. EKS creates a highly available Kubernetes control plane that runs an etcd cluster across multiple Availability Zones. Amazon EKS simplifies the process of building, operating, securing, and maintaining a k8s cluster.

How Does Amazon EKS Work?


 
Following are the steps to set up a cluster:

            1. Create Amazon EKS service role.
            2. Create Amazon EKS cluster VPC.
            3. Create EKS cluster.
            4. Install & Configure kubectl for EKS.
            5. Install heptio-authenticator-aws for EKS.
            6. Launch and configure EKS worker nodes.
            7. Enable worker nodes to join your cluster.

Step 1: Create Amazon EKS service role
 Open the IAM console at https://console.aws.amazon.com/iam/.
1. Choose Roles, then Create role.

2. Choose EKS from the list of services, then Next: Permissions.

3. Choose Next: Review.

4. For Role name, enter a unique name for your role, such as eks-service-role, then choose Create role.
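
If you prefer the command line, the same role can be created with the AWS CLI. This is a minimal sketch, not the official procedure; the role name eks-service-role and the file name eks-trust-policy.json are just examples:

# Trust policy that allows the EKS service to assume the role
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the two EKS managed policies
aws iam create-role --role-name eks-service-role \
  --assume-role-policy-document file://eks-trust-policy.json
aws iam attach-role-policy --role-name eks-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eks-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy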


Step 2: Create Amazon EKS cluster VPC
1. Open the AWS CloudFormation console at  - LINK.
 Note
 Amazon EKS is available in the following Regions at this time: US West (Oregon) (us-west-2) & US East (N. Virginia) (us-east-1) 

2. Choose Create stack.

3. For Choose a template, select Specify an Amazon S3 template URL.

4. Paste the following URL into the text area and choose Next:
 https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-vpc-sample.yaml

 5. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
 6. (Optional) On the Options page, tag your stack resources. Choose Next.

7. On the Review page, choose Create.

8. When your stack has been created, record the VpcId, SubnetIds, and SecurityGroups values from the stack outputs. You will need these when you create the EKS cluster.
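
The same stack can also be created and read from the AWS CLI. A minimal sketch, assuming you call the stack eks-vpc:

# Create the VPC stack from the sample template
aws cloudformation create-stack --stack-name eks-vpc \
  --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-vpc-sample.yaml

# Once the stack reaches CREATE_COMPLETE, read its outputs (VpcId, SubnetIds, SecurityGroups)
aws cloudformation describe-stacks --stack-name eks-vpc --query 'Stacks[0].Outputs'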

Step 3:  Create EKS cluster

Note:
Log in to the console as an IAM user. Do not use root credentials to create the cluster.
1. Open the Amazon EKS console at - LINK    
                                                                          
2. Choose Create cluster:




3. On the Create cluster page, fill in the following fields and then choose Create:
·         Cluster Name: A unique name for your cluster.
·         Role ARN: Select the IAM role that you created in step 1.
·         VPC: Select the VPC that you created in step 2.
·         Subnets: Select the SubnetIds values recorded in step 2.
·         Security Groups: Select the SecurityGroups value recorded in step 2.
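
Alternatively, the cluster can be created from the AWS CLI. A minimal sketch, assuming a cluster named cluster1; replace the placeholders with the role ARN, subnet IDs, and security group recorded in steps 1 and 2:

aws eks create-cluster --name cluster1 \
  --role-arn arn:aws:iam::<account-id>:role/eks-service-role \
  --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,<subnet-3>,securityGroupIds=<sg-id>

# Wait until the status changes from CREATING to ACTIVE
aws eks describe-cluster --name cluster1 --query cluster.status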
           

Step 4: Install and configure kubectl for EKS
 1. Download the Amazon EKS-vended kubectl binary from Amazon S3:
     Linux: (Link)

2. Apply execute permissions to the binary
     #chmod +x ./kubectl

3. Copy the binary to a folder in your $PATH.
     # cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH

4. After you install kubectl, you can verify its version with the following command:
     # kubectl version --short --client

5. Create the default kubectl folder if it does not already exist.
     #mkdir -p ~/.kube

6. Open your favorite text editor and copy the kubeconfig code below into it:

apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "<cluster-name>"

7. Replace <endpoint-url> with your cluster's API server endpoint URL, <base64-encoded-ca-cert> with the certificate authority data, and <cluster-name> with your cluster name (you can retrieve these values with the AWS CLI, as shown after this list).

8. Save the file to the default kubectl folder, with your cluster name in the file name. For example, if your cluster name is cluster1, save the file to ~/.kube/config-cluster1.

 9. Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look for your cluster configuration.
     # export KUBECONFIG=$KUBECONFIG:~/.kube/config-cluster1

10. Test your configuration.
    #kubectl get svc
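
The <endpoint-url> and <base64-encoded-ca-cert> values used in step 7 are shown on the cluster page in the console, but they can also be pulled with the AWS CLI. A minimal sketch, assuming your cluster is named cluster1:

# API server endpoint for <endpoint-url>
aws eks describe-cluster --name cluster1 --query cluster.endpoint --output text

# Certificate authority data for <base64-encoded-ca-cert>
aws eks describe-cluster --name cluster1 --query cluster.certificateAuthority.data --output text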
           

Step 5: Install heptio-authenticator-aws for EKS

 1. Download the Amazon EKS-vended heptio-authenticator-aws binary from Amazon S3:
    Linux: (Link)

 2. Apply execute permissions to the binary.
     #chmod +x ./heptio-authenticator-aws

 3. Copy the binary to a folder in your $PATH
    #cp ./heptio-authenticator-aws $HOME/bin/heptio-authenticator-aws && export PATH=$HOME/bin:$PATH

 4. Test that the heptio-authenticator-aws binary works.
    # heptio-authenticator-aws help
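
Beyond the help output, you can check that the authenticator can actually generate a token for your cluster, using the same arguments that the kubeconfig from step 4 passes to it (this assumes your AWS credentials are configured and your cluster is named cluster1):

    # heptio-authenticator-aws token -i cluster1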

Step 6: Launch and configure EKS worker nodes

 To launch your worker nodes:
1. Open the AWS CloudFormation console (Link)

2. Select region US West (Oregon) (us-west-2) OR US East (N. Virginia) (us-east-1)

3. Choose Create stack.

4. For Choose a template, select Specify an Amazon S3 template URL.
    URL: (Link)
            
5. On the Specify Details page, fill out the parameters accordingly, and choose Next.

6. On the Options page, you can choose to tag your stack resources. Choose Next.

7. Choose Create.

8. Record the NodeInstanceRole for the node group that was created.
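
If you prefer the command line, the NodeInstanceRole output can be read the same way as the VPC stack outputs in step 2. A minimal sketch, assuming you named the worker stack eks-worker-nodes:

aws cloudformation describe-stacks --stack-name eks-worker-nodes \
  --query 'Stacks[0].Outputs[?OutputKey==`NodeInstanceRole`].OutputValue' --output text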

Step 7: Enable worker nodes to join your cluster

1. Download the config map.
    # curl -O (Link)

2. Open the file with any text editor. Replace <ARN of instance role (not instance profile)> with the NodeInstanceRole value recorded in step 6 and save the file.


apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
           

3. Apply the configuration.
    #kubectl apply -f aws-auth-cm.yaml

4. Check the status of the nodes with the following command:
    # kubectl get nodes
   

REFERENCES - 
https://aws.amazon.com/eks
https://www.zdnet.com/article/amazon-eks-is-generally-available-bringing-fully-managed-kubernetes-to-aws/
https://aws.amazon.com/blogs/aws/amazon-eks-now-generally-available/
                           


Tuesday, 8 May 2018

Kubernetes Log Shipping to Elasticsearch



By Lokesh Jawane

As we all know, setting up infrastructure involves a lot of tasks: configuration, managing data and files, searching logs, troubleshooting, debugging, and so on. If you have a large and complex infrastructure, you have to ensure that your data is stored properly. This enables better data filtering and analysis, which helps with troubleshooting and keeps your infra environment stable.

Let us now talk about Kubernetes. Generally, when it comes to Kubernetes, we talk about testing, monitoring, and configuration management. Let's look at how to collect data about Kubernetes for data/log analysis.
Filebeat version 6.0.0 and later adds the add_kubernetes_metadata processor, which allows you to gather k8s container logs and send them to Elasticsearch.

PROCESS
add_kubernetes_metadata enriches logs with metadata from the source container: it adds the pod name, container name and image, Kubernetes labels and, optionally, annotations. It works by watching the Kubernetes API for pod events to build a local cache of running containers. When a new log line is read, it gets enriched with metadata from the local cache.
Configuration with Elasticsearch & Kibana


It's great if you already have Elasticsearch and Kibana in place; if not, don't worry, just follow the link below to set them up.
Note: make sure you have configured Elasticsearch basic auth for the elastic, kibana, and logstash users.

Now connect to the K8s workstation (kubectl) and download the manifest from: https://raw.githubusercontent.com/elastic/beats/6.0/deploy/kubernetes/filebeat-kubernetes.yaml

Change the auth details in the manifest:
# Update Elasticsearch connection details
- name: ELASTICSEARCH_HOST
  value: elasticsearch
- name: ELASTICSEARCH_PORT
  value: "9200"
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: <your elastic user password>

Now deploy the DaemonSet using the updated manifest.
kubectl create -f filebeat-kubernetes.yaml
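
To verify the rollout, check that the DaemonSet was created and that a Filebeat pod is running on each node. A quick sketch, assuming the manifest keeps its default name (filebeat) and namespace (kube-system):

kubectl -n kube-system get daemonset filebeat
kubectl -n kube-system get pods -l k8s-app=filebeat -o wide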

Now go to the Kibana dashboard and configure an index with the filebeat-* pattern; within a minute you should see the Kubernetes container logs.

Cheers!



Friday, 4 May 2018

Getting Started with AWS Lambda


DevOps is about speed. AWS Lambda and serverless architectures allow you to develop and deliver faster than before. When used effectively, they can lower your costs and allow you to embrace DevOps with ease.

Lambda is a serverless computing environment which allows you to connect your code with an event. Your code will be executed when an event fires. You can also put Lambda functions behind a REST API, which we’ll see how to do momentarily.
Lambda supports many different languages and execution environments. For this example, we’ll be using Node.js since it’s pretty simple to get a Lambda function up and running with that environment. You can also use Python, Go, C#, and Java. Let’s create our first Lambda function!
Tools We Need
There is no need for a local development environment; AWS Lambda will execute our code for us.
Creating your first Lambda function
Select “Lambda” from the “Services” menu. Alternately, you can type “Lambda” in the search box on AWS console and then select it.
Once you’re on the Lambda screen, click the “Create a Function” button to begin the process of creating your first function. Fill out the form as you see it below:

The role field refers to the permissions you want your Lambda function to have within AWS. You can further research AWS Identity and Access Management, but we won’t cover that here. For our purposes, we’ll create a role for our function based on the “Simple Microservice” permissions.
Click the “Create function” button to navigate to the Lambda function creation screen. This screen looks busy at first, so we’ll walk through it piece by piece and set what we need as we go along. For now, close the “Designer” section of the page by clicking on the section header. We will come back to that while integrating our code with another AWS service in order to make it available for use. Your screen should look like this:
AWS Lambda serverless screenshot
In the “Function code” section, you'll see an integrated code editor for you to use. Let's use a simple example to illustrate how the plumbing works in AWS Lambda. Edit the code as shown below:
exports.handler = (event, context, callback) => {
    const result = event.number1 + event.number2;
    callback(null, result);
};
Let's test our Lambda function to see if it works. The Lambda interface gives us the ability to do that. Click on the dropdown next to the “Test” and “Save” buttons and choose “Configure test events.” (Note: “Configure test events” will be your only option.)
In the “Configure Test Event” dialog box, select “Hello World” as the template and update the JSON inside the box to the following:
{
    "number1": 2,
    "number2": 3
}
Once done, your screen should look like this:
AWS Lambda serverless configure test screenshot
Click the “Create” button, and you’re ready to test. What we just did was create a test input to our function. Now we can execute the function and make sure the result is what we expect.
Click the “Save” button if it’s active. Make sure your new test function is selected in the dropdown and click the “Test” button. Your function will execute, and you’ll see the results displayed at the bottom of the page. In this case, the result returned from the function should be 5.
AWS Lambda test showing 5 screenshot
Congratulations! We’ve created a Lambda function and successfully tested it so we can be confident it will do what we think it will do.
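
You can also exercise the function from the AWS CLI instead of the console. A rough sketch, assuming the function was named addNumbers and your CLI credentials are configured:

aws lambda invoke --function-name addNumbers \
  --payload '{"number1": 2, "number2": 3}' response.json
cat response.json   # should print 5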

So now we have a working Lambda function. We can invoke it from events/triggers. An event can be many different things, such as database records being updated or files being uploaded to an S3 bucket.

What's Next?
Awesome, you just configured, wrote, and executed your first code on AWS Lambda!
What else can you do with Node.js and Lambda? Stay tuned to Crevise Blogs!
We can understand the real power of AWS Lambda when we connect a trigger to it, so our code will be executed based on the events. We'll take a look at that in the next tutorial.


Tuesday, 17 April 2018

Deploy a Kubernetes Cluster using Kubespray

By Alok Patra


Image from Gordon Smith’s post in Maritimeherald.com

Kubernetes is the next big thing. If you are here, I am assuming you already know of Kubernetes. If you don't, you'd better get started soon.
Kubernetes, also called K8s, is a system for automating deployment, scaling, and management of containerized applications; basically, a container orchestration tool.
Google had been running containers for years on Borg, an in-house orchestration tool they had built, and in 2014 they open-sourced Kubernetes, which builds on that experience, later moving it to the Cloud Native Computing Foundation under the Linux Foundation.


So let’s get started..
There are multiple ways to set up a Kubernetes cluster. One of them is Kubespray, which uses Ansible. The official GitHub guide for installation via Kubespray is crisp but involves a lot of reading between the lines. I spent days getting this right. If you are getting started with Kubernetes, you can follow these steps.
We would be going through the following to deploy the cluster.
  1. Infra Requirements
  2. Network Configuration
  3. Installations
  4. Set configuration parameters for Kube cluster
  5. Deploy the Kube Cluster
I have created a 1 master 2 node cluster.
I have used another machine to deploy the whole cluster, which I call my base machine.
So, I would require 4 VMs (1 base machine and 3 for my Kubernetes cluster).
Since I already have an AWS account, I will be using it to spin up 4 Ubuntu machines. You may choose to use Google Cloud or Microsoft Azure.

Infra Requirements

Create the following infra on AWS.
Base machine: Used to clone the kubespray repo and trigger the ansible playbooks from there.

Base Machine

Type: t2.micro — 1 Core x 1 GB Memory
OS: Ubuntu 16.04
Number of Instances: 1

Cluster Machines

Create the 3 instances in one shot so that they remain in the same security group and subsequent changes in the security group will reflect on the whole cluster.
Type: t2.small — 1 Core x 2 GB RAM
OS: Ubuntu 16.04
Number of Instances: 3
Using t2.micro for the cluster machines will fail. There is a check in the installation which fails further installation if the memory is not sufficient.
Also, when you create your instances on AWS, create a new pem file for the cluster. Pem file is like a private key used for authentication. Save this pem file as K8s.pem on your local machine. This will be used later by you/ansible to ssh into the cluster machines.

Network Configurations

On the AWS console, in the EC2 section, click on the security group corresponding to any instance of the cluster (since they all belong to the same security group).
Click on Inbound rules.
Click on Edit and, under Type, select "All Traffic" to allow internal communication within the cluster.

Installations

Tools to be installed on the Base Machine
  1. Ansible v2.4
  2. Python-netaddr
  3. Jinja 2.9

Install latest ansible on debian based distributions.

$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

Check Installation

$ ansible --version

Install Jinja 2.9 (or newer)

Execute below commands to install Jinja 2.9 or upgrade existing Jinja to version 2.9
$ sudo apt-get install python-pip
$ pip2 install jinja2 --upgrade

Install python-netaddr

$ sudo apt-get install python-netaddr

Allow IPv4 forwarding

You can check whether IPv4 forwarding is enabled or disabled by executing the command below.
$ sudo sysctl net.ipv4.ip_forward
If the value is 0, IPv4 forwarding is disabled. Execute the command below to enable it.
$ sudo sysctl -w net.ipv4.ip_forward=1
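
The sysctl -w change lasts only until the next reboot. Optionally, to make it persistent (a minimal sketch), add it to /etc/sysctl.conf as well:

$ echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p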

Check Firewall Status

$ sudo ufw status
If the status is active, then disable it using the following:
$ sudo ufw disable

Set configuration parameters for the Kube Cluster

Clone the kubespray github repository

Clone the repo onto this base machine https://github.com/kubernetes-incubator/kubespray
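
For example, assuming git is already installed on the base machine:

$ git clone https://github.com/kubernetes-incubator/kubespray.git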

Copy the key file into the Base machine

Navigate into the kubespray folder
$ cd kubespray
Now you can either copy the pem file which you used to create the cluster on AWS into this directory from your local machine OR just copy the contents into a new file on the base machine.
Navigate to the location where you have downloaded your pem file from AWS when you created your cluster. This I have downloaded on my local machine (which is different from the base machine and the cluster machines).
View the contents of K8s.pem file on your local machine using the command line.
$ cat K8s.pem
Copy the contents of the file
Connect / ssh onto the Base machine
On Base Machine
$ vim K8s.pem
This will create and open a new file by the name K8s.pem. Paste the contents here.
To save, hit the Esc key and then type :wq
Change permissions of this file.
$ chmod 600 K8s.pem

Modify the inventory file as per your cluster

Copy the sample inventory and create your own copy for your cluster.
$ cp -rfp inventory/sample inventory/mycluster
Since I will be creating a 1 master, 2 node cluster, I have updated the inventory file accordingly. Update the Ansible inventory file with the inventory builder by running the following two commands.
Replace the sample IPs with the private IPs of the newly created instances before running them:
$ declare -a IPS=(172.31.66.164 172.31.72.173 172.31.67.194)
$ CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
Now edit/verify the hosts.ini file to ensure there is one master and 2 nodes as shown below. Keep only node1 under [kube-master] group and node2 and node3 under [kube-node] group.
Hosts.ini file
[all]
node1 ansible_host=172.31.66.164 ip=172.31.66.164
node2 ansible_host=172.31.67.194 ip=172.31.67.194
node3 ansible_host=172.31.72.173 ip=172.31.72.173
[kube-master]
node1
[kube-node]
node2
node3
[etcd]
node1
node2
node3
[k8s-cluster:children]
kube-node
kube-master
[calico-rr]
[vault]
node1
node2
node3
The above is how the file finally looks.

Verify other kube cluster configuration parameters

Review and change parameters under ``inventory/mycluster/group_vars``
$ vim inventory/mycluster/group_vars/all.yml
Change the value of the variable 'bootstrap_os' from 'none' to 'ubuntu' in the file all.yml.
Save and exit the file.
Make necessary changes in the k8s-cluster.yml file if any.
$ vim inventory/mycluster/group_vars/k8s-cluster.yml
Save and exit the file

Deploy Kubespray with Ansible Playbook

$ ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml --private-key=K8s.pem --flush-cache -s

Check your Deployment

Now SSH into the Master Node and check your installation
Command to fetch the cluster nodes (nodes are cluster-scoped, so no namespace flag is needed):
$ kubectl get nodes
Command to fetch services in the namespace ‘kube-system’
$ kubectl -n kube-system get services
Wohhoooo!!! We are done!!!
You now have your kubernetes cluster up and running.
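
As an optional smoke test (a minimal sketch, not part of the Kubespray docs), you can launch a throwaway deployment from the master node and watch it get scheduled onto the worker nodes:

$ kubectl run nginx --image=nginx --replicas=2
$ kubectl get pods -o wide
$ kubectl delete deployment nginx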
