By Lokesh Jawane
As we all know, setting up infrastructure involves a lot of tasks: configuration, managing data and files, searching logs, troubleshooting, debugging, and so on. If you have a large and complex infrastructure, you have to ensure that your data is stored properly. That makes filtering and analysing the data easier, which in turn simplifies troubleshooting and keeps your infra environment stable.
Let us now talk about Kubernetes. Generally, when it comes to Kubernetes, we talk about testing, monitoring, and configuration management. Let's look at how to collect data about Kubernetes for data/log analysis.
Filebeat version 6.0.0 and later ships with the add_kubernetes_metadata processor, which lets you gather Kubernetes container logs and send them to Elasticsearch.
add_kubernetes_metadata enriches logs with metadata from the source container: it adds the pod name, container name and image, Kubernetes labels and, optionally, annotations. It works by watching the Kubernetes API for pod events to build a local cache of running containers. When a new log line is read, it gets enriched with metadata from the local cache.
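As a minimal sketch, this is roughly how the processor is enabled in filebeat.yml (the DaemonSet manifest we use below ships a similar configuration, so you don't have to write this by hand):

processors:
- add_kubernetes_metadata:
    # use the pod's service account to reach the Kubernetes API
    in_cluster: true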
Configuration with Elasticsearch & Kibana
It’s great if you already have Elasticsearch & Kibana in place; if not, don't worry, just follow the link below to set them up.
Note: make sure you have configured Elasticsearch Basic auth for the elastic, kibana, and logstash users.
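One way to do this (assuming X-Pack security is enabled) is the bundled setup-passwords tool, which prompts you for a password for each built-in user. On Elasticsearch 6.0–6.2 it lives under the x-pack directory; on 6.3+ it is bin/elasticsearch-setup-passwords:

bin/x-pack/setup-passwords interactive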
Now connect to the K8s workstation (kubectl) and download the manifest from: https://raw.githubusercontent.com/elastic/beats/6.0/deploy/kubernetes/filebeat-kubernetes.yaml
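For example, with curl:

curl -L -O https://raw.githubusercontent.com/elastic/beats/6.0/deploy/kubernetes/filebeat-kubernetes.yaml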
Change the auth details in the manifest:
# Update Elasticsearch connection details
env:
- name: ELASTICSEARCH_HOST
  value: "<your elasticsearch host>"
- name: ELASTICSEARCH_PORT
  value: "9200"   # default Elasticsearch HTTP port
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: "<your elastic user password>"
Now deploy the DaemonSet using the updated manifest.
kubectl create -f filebeat-kubernetes.yaml
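You can then check that the Filebeat pods are running. The upstream manifest deploys them into the kube-system namespace with the label k8s-app: filebeat; adjust the command if you changed either:

kubectl get pods -n kube-system -l k8s-app=filebeat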
Now go to the Kibana dashboard and configure an index pattern matching filebeat-*; within a minute you should see the Kubernetes container logs.
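If no logs show up, a quick sanity check (replace the host and password placeholders with the values you configured above) is to list the Filebeat indices directly in Elasticsearch:

curl -u elastic:<your password> 'http://<your elasticsearch host>:9200/_cat/indices/filebeat-*?v'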