Kubernetes Overview and Solution For Deployment And Security

Understanding Kubernetes and Cloud Native Applications

Kubernetes is an open-source container orchestration engine and an abstraction layer for managing the full-stack operations of hosts and containers: deployment, scaling, load balancing, and rolling updates of containerized applications across multiple hosts within a cluster. Kubernetes makes sure that your applications stay in the desired state.

Kubernetes 1.8 was released on September 28, 2017, with new features that address the most demanding enterprise environments, including improvements related to security, stateful applications, and extensibility. Since Kubernetes 1.7 we can also store secrets in namespaces in a much better way; we'll discuss that below.


12 Factor App Methodology and Microservices

In the modern era, software is commonly delivered as a service: web apps, or software as a service. The twelve-factor app is a methodology for building software-as-a-service applications.

The 12-factor methodology can be applied to apps written in any programming language, and which use any combination of backing services (database, queue, memory cache, etc.).

Best practices for building cloud-native applications

12-Factor App Principles and Application Patterns

Codebase

Dependencies

Config

Backing Services

Build, Release, Run

Processes

Port Binding

Concurrency

Disposability

Development / Production Parity

Logs

Admin Processes
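To make one of these factors concrete, the Config principle says that configuration should live in the environment rather than in the codebase. Below is a minimal, illustrative sketch of how a containerized app could receive its configuration through environment variables in a Kubernetes pod; the names, image, and values are assumptions, not taken from this article.

# Illustrative only: config is injected via environment variables,
# keeping it separate from the codebase (the 12-factor "Config" principle).
apiVersion: v1
kind: Pod
metadata:
  name: sample-twelve-factor-app      # hypothetical name
spec:
  containers:
    - name: web
      image: example/sample-app:1.0   # hypothetical image
      env:
        - name: DATABASE_URL          # backing service location comes from the environment
          value: "postgres://db.example.internal:5432/app"
        - name: CACHE_HOST
          value: "redis.example.internal"

The same pattern extends to the Backing Services factor: the database and the cache are attached resources whose locations are supplied through config rather than hard-coded.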




Kubernetes Architecture and Deployment

A Kubernetes cluster operates in a master-worker architecture: the Kubernetes master receives all management tasks and dispatches them to the appropriate Kubernetes worker nodes based on the given constraints.

 

Kubernetes Components

Below I have created two sections so that you can better understand the components of the Kubernetes architecture and where exactly each of them is used.

Kubernetes Master Node Architecture

 

Kube API Server

The Kubernetes API server is the central point of contact for the Kubernetes cluster; authentication, authorization, and every other operation against the cluster go through it. The API server stores all of its information in etcd, a distributed data store.

 

Setting up Etcd Cluster

Etcd is a database that stores data in the form of key-value pairs. It supports a distributed architecture and high availability with a strong consistency model. Etcd is developed by CoreOS and written in Go. Kubernetes components store all kinds of information in etcd, such as metrics, configurations, and other metadata about the pods, services, and deployments of the Kubernetes cluster.

 

Kubernetes kube-controller-manager

The kube-controller-manager is the component of a Kubernetes cluster that manages replication and scaling of pods. It continuously works to bring the Kubernetes system to the desired state by using the Kubernetes API server.

There are other controllers in the Kubernetes system as well, such as the node controller, endpoints controller, namespace controller, and service account controller.

Kubernetes kube-scheduler

The kube-scheduler is another main component of the Kubernetes architecture. It checks the availability, performance, and capacity of the Kubernetes worker nodes and plans the creation and destruction of pods within the cluster, so that the cluster remains stable in terms of performance, capacity, and availability.

It analyses the cluster and reports back to the API server, which stores the metrics related to cluster resource utilisation, availability, and performance.

It also schedules pods onto specific nodes according to the submitted pod manifest, as in the sketch below.
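For instance, a pod manifest can constrain scheduling with a nodeSelector, so the scheduler will only place the pod on a node carrying the matching label. This is a minimal sketch with illustrative names:

# Illustrative only: the scheduler will place this pod on a node
# that carries the label disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-pod              # hypothetical name
spec:
  nodeSelector:
    disktype: ssd                  # the target node must have this label
  containers:
    - name: app
      image: nginx:1.15            # example image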


Kubernetes Worker Node Architecture

 

Kubernetes kubelet

The Kubernetes kubelet is the worker-node component of the Kubernetes architecture responsible for node-level pod management.

The API server sends HTTP requests to the kubelet API; the kubelet executes the pod definitions from the manifest on its worker node and also makes sure the containers are running and healthy. The kubelet talks directly to container runtimes such as Docker or rkt.
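One concrete way the kubelet keeps containers healthy is by running the probes declared in the pod spec and restarting containers that fail them. A hedged sketch, where the endpoint, port, and timings are illustrative assumptions:

# Illustrative only: the kubelet runs this livenessProbe and restarts
# the container if the probe keeps failing.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app                 # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:1.0       # hypothetical image
      livenessProbe:
        httpGet:
          path: /healthz           # assumed health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15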

Kubernetes kube-proxy

The kube-proxy is the networking component of the Kubernetes architecture. It runs on every node of the Kubernetes cluster and maintains the network rules that allow traffic to reach Services and their backing pods.

Kubernetes Docker

Docker is an open-source container runtime developed by Docker, Inc. to build, run, and share containerized applications. Docker focuses on running a single application per container, with the container as the atomic unit of the building block.

 

rkt, a security-minded, standards-based container engine - CoreOS

rkt is another container runtime for containerized applications. rkt (originally called Rocket) is developed by CoreOS, has a stronger focus on security, and follows open standards for building the runtime.

Kubernetes Supervisor

kubernetes-supervisor is a lightweight process management system that keeps the kubelet and the container engine in a running state.

Kubernetes Logging with Fluentd

Fluentd is an open source data collector for kubernetes cluster logs.


Understanding Basic Kubernetes Concepts

Kubernetes Nodes

Kubernetes nodes are the worker machines in the Kubernetes cluster. A worker node can be a virtual machine or a bare-metal server.

Each node has all the services required to run any kind of pod and is managed by the master node of the Kubernetes cluster.

The following are a few of the services running on a node:

1. Docker

2. Kubelet

3. Kube-Proxy

4. Fluentd

Docker Containers

A container is a standalone, executable package of a piece of software that includes everything needed to run it: code, runtime, libraries, and configuration.

1. Supports both Linux and Windows-based apps

2. Independent of the underlying infrastructure.

Docker and CoreOS are the main leaders in the container race.

Kubernetes Pods

Pods are the smallest deployable units in the Kubernetes architecture. A single pod can contain one or more containers. A pod is modelled as a group of Docker containers with shared namespaces and shared volumes.

Example: pod.yml
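A minimal sketch of what such a pod.yml could contain (the names and image are illustrative, not the article's original file):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod                  # hypothetical name
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.15            # example image
      ports:
        - containerPort: 80

The pod can then be created with kubectl create -f pod.yml.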

Kubernetes Deployment

A Deployment is a JSON or YAML file in which we declare the Pod and ReplicaSet definitions. We only need to describe the desired state in a Deployment object, and the Deployment controller will change the actual state to the desired state at a controlled rate for us.

With Deployments we can roll out new versions of an application, scale it up or down, and roll back to an earlier revision.

Example: deployment.yml
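A minimal sketch of what such a deployment.yml could contain (names are illustrative; the apiVersion may differ on older clusters, since apps/v1 only became GA in Kubernetes 1.9, as noted later in this article):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment           # hypothetical name
spec:
  replicas: 3                      # desired number of pod replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.15        # example image
          ports:
            - containerPort: 80

The Deployment controller will then keep three replicas of this pod running and roll out any changes at a controlled rate.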

Kubernetes Service YAML/JSON

A Kubernetes Service definition is also written in YAML or JSON format. A Service defines a logical set of pods and a policy for accessing them, such as which ports are exposed and what kind of IP address is assigned. The Service identifies its set of target Pods by using a label selector.

Example: service.yml
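A minimal sketch of what such a service.yml could contain (illustrative names); the selector below matches pods labelled app: nginx, such as those in the Deployment sketch above:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service              # hypothetical name
spec:
  selector:
    app: nginx                     # label selector picks the target pods
  ports:
    - protocol: TCP
      port: 80                     # port exposed by the Service
      targetPort: 80               # port on the pods
  type: ClusterIP                  # internal cluster IP (the default)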

Kubernetes Replication Controller

A Replication Controller is a controller that ensures a specified number of pod “replicas” are running at any one time.

Example: rc.yml
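A minimal sketch of an rc.yml (illustrative names; in newer clusters a ReplicaSet or Deployment is usually preferred over a Replication Controller):

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc                   # hypothetical name
spec:
  replicas: 3                      # keep three pod replicas running
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.15        # example image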

Kubernetes Labels

Labels are key/value pairs that can be attached to any Kubernetes object, such as pods, services, and deployments. Labels are very simple to use in a Kubernetes configuration file.

The code snippet below shows labels attached to an object's metadata.
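This is an illustrative sketch only; the specific keys and values are assumptions rather than the article's original snippet:

apiVersion: v1
kind: Pod
metadata:
  name: labelled-pod               # hypothetical name
  labels:
    app: payment-service           # hypothetical application name
    environment: production        # which environment the object belongs to
    release: stable
spec:
  containers:
    - name: app
      image: example/payment:1.0   # hypothetical image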

Labels provide meaningful and relevant information to operations teams as well as developers. They are very helpful when we want to roll out, update, or restore an application in a specific environment only, and they can act as filters for Kubernetes objects. Labels can be attached to Kubernetes objects at any time and can also be modified at any time.

Non-identifying information should be recorded using annotations.

Container Registry

A container registry is private or public online storage that holds container images and lets us distribute them. There are many container registries on the market.


Deploying Microservices Application with Kubernetes

Kubernetes is a collection of APIs that interact with compute, network, and storage.

There are many ways to interact with a Kubernetes cluster.

The Kubernetes API itself is available to perform every task on the cluster, from deployment to maintenance of anything running inside it.

The Kubernetes Dashboard is simple and intuitive for daily tasks; we can also manage our Kubernetes cluster from the dashboard.

 

The Kubernetes CLI is known as kubectl. It is written in Go and is the most widely used tool for interacting with a local or remote Kubernetes cluster.


Continuous Delivery for Application on Kubernetes

Deployment guides can be used to bring applications written in most of the popular languages onto Kubernetes.


Kubernetes Monitoring: Best Practices, Methods

Kubernetes makes infrastructure easier to manage by creating several levels of abstraction, such as nodes, pods, replication controllers, and services. Because of this, we no longer need to worry about exactly where applications are running or whether they have the resources they need to work properly. But to ensure good performance, we still need to monitor our deployed applications and containers.

There are many tools, such as cAdvisor and Grafana, available to monitor the Kubernetes environment with visualisation. Grafana in particular has become very popular in the industry for monitoring Kubernetes environments.

Using cAdvisor to Monitor Kubernetes

cAdvisor is an open-source tool for monitoring Kubernetes resource usage and performance. cAdvisor discovers all the containers deployed on the Kubernetes nodes and collects information such as CPU, memory, network, and file-system usage. cAdvisor also provides a web dashboard for visual monitoring.

 

Monitoring Kubernetes Using Grafana

Grafana is an open-source metrics analytics and visualisation suite, commonly used for visualising time-series data for application analytics. Alongside Grafana we need a time-series database such as InfluxDB and a cluster-wide aggregator of monitoring and event data such as Heapster.

There are four steps to collecting information from Kubernetes and visualising it in a Grafana dashboard.

Step 1: Heapster collects the cluster-wide data from the Kubernetes environment.

Step 2: After collecting the data, Heapster writes it to InfluxDB.

Step 3: Grafana then queries InfluxDB through its InfluxDB data source to fetch the required data.

Step 4: After getting the required data, Grafana visualises it in graphs.

You can create a custom dashboard on Grafana as per your requirement.
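As a rough sketch of how steps 1 and 2 are wired together, Heapster is typically started with a flag pointing at the cluster API as its source and at InfluxDB as its sink. The image tag, namespace, service account, and InfluxDB service name below are assumptions for illustration only:

# Illustrative only: a Heapster deployment that scrapes the cluster via the
# API server and pushes metrics into InfluxDB running in kube-system.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: heapster
  template:
    metadata:
      labels:
        k8s-app: heapster
    spec:
      serviceAccountName: heapster            # assumed service account with read access
      containers:
        - name: heapster
          image: k8s.gcr.io/heapster-amd64:v1.5.4   # assumed image/tag
          command:
            - /heapster
            - --source=kubernetes:https://kubernetes.default                          # read metrics via the API server
            - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086         # push metrics to InfluxDB

Grafana is then pointed at the same InfluxDB instance as a data source for its dashboards.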


Enterprise Kubernetes & Production Grade Cluster

Workloads API GA in Kubernetes 1.9

Kubernetes 1.9 introduced General Availability (GA) of the apps/v1 Workloads API, which is now enabled by default.  The Apps Workloads API groups the DaemonSet, Deployment, ReplicaSet, and StatefulSet APIs together to form the foundation for long-running stateless and stateful workloads in Kubernetes. 

Deployment and ReplicaSet, two of the most commonly used objects in Kubernetes, are now stabilized after more than a year of real-world use and feedback.

Windows Support (Beta) For Kubernetes 1.9 

Kubernetes 1.9 introduces beta support for running Windows workloads, developed through SIG-Windows.

Storage Enhancements in Kubernetes 1.9

Kubernetes 1.9 introduces an alpha implementation of the Container Storage Interface (CSI), which will make installing new volume plugins as easy as deploying a pod, and enables third-party storage providers to develop their solutions without needing to add to the core Kubernetes codebase.


How Can Don Help You?

Don Kubernetes Consulting Services For Enterprises and Startups - 

Migrating to Cloud-Native Application Architectures

Migrating to cloud-native application architectures means re-platforming, re-hosting, recoding, rearchitecting, and re-engineering legacy software applications for current business needs, and making them interoperable. Application modernization services enable the migration of monolithic applications to a new microservices architecture. An application based on a microservices architecture is small, messaging-enabled, bounded by contexts, autonomously developed, independently deployable, decentralized, language-agnostic, and built and released with automated processes.

Building Big Data Stack On Kubernetes

Deploy, manage, and monitor your big data stack infrastructure on Kubernetes. Run large-scale multi-tenant Hadoop clusters and Spark jobs on Kubernetes with proper resource utilization and security.

Private Cloud with Kubernetes

Deploy a Kubernetes cluster on an existing OpenStack setup with a single button click or API call. Auto-scale your Kubernetes cluster so that it grows and shrinks automatically according to load and demand.

Kubernetes Managed Services

Amazon Elastic Container Service for Kubernetes is a managed service that makes it easy to run Kubernetes on AWS without needing to operate your own Kubernetes cluster.