Kubernetes is a popular container orchestration tool used by DevOps engineers, and major companies such as Huawei, SAP, and SoundCloud rely on it for software development. If you have an interview for a position that requires Kubernetes knowledge, you have to know the basics.
So, you need to brush up on your Kubernetes knowledge. Here is a list of the most commonly asked Kubernetes interview questions.
What is Kubernetes?
Kubernetes is an open-source container orchestration platform. Companies use it to automate tasks such as the deployment, scaling, monitoring, and management of containerized apps. It is an extensible and portable platform.
You can use it to manage containerized workloads and services. It has a large, fast-growing ecosystem, and its services, tools, and support are widely available.
What are the primary features of Kubernetes?
The main features of Kubernetes include –
- It automates tasks such as launching containers and hosting them on servers
- Allows vertical and horizontal scaling of resources
- It works with hybrid, public cloud, and on-premise environments, allowing you to move your workloads between them
- It can replace, reschedule, and restart failed containers
- Supports automatic rollouts and rollbacks for your applications
- It offers you a consistent environment for the development, testing, and production of applications
- It can manage your command-line and batch workloads
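Several of these features (replica management, self-healing, rollouts, and rollbacks) are driven by a Deployment object. A minimal sketch, assuming an illustrative name `nginx-demo` and a stock nginx image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo        # illustrative name
spec:
  replicas: 3             # Kubernetes keeps three Pods running, restarting any that fail
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

Changing the image and re-applying the manifest triggers an automatic rollout, and `kubectl rollout undo deployment/nginx-demo` rolls it back.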
What are the main components of the Kubernetes architecture?
The primary components of Kubernetes are –
- Master node
This is required for managing the Kubernetes cluster. For fault tolerance, there may be more than one master node. It has components such as kube-controller-manager, kube-apiserver, kube-scheduler, and etcd.
- Worker node
These contain the services required to run containers and to communicate with the master node. Each worker node runs the kubelet and kube-proxy.
- Kube-scheduler
This handles scheduling Pods onto worker nodes. While doing so, it takes resource limitations, affinity, and anti-affinity specifications into consideration.
- Etcd
This is a key-value store that serves as the backing store for your cluster data. It stores configuration details and the cluster's state.
What is POD & Node in Kubernetes?
In Kubernetes, a Pod is the smallest execution unit and runs on a node. If a Pod crashes or stops working, Kubernetes will create a new replica of it (provided the Pod is managed by a controller such as a Deployment) and continue executing operations.
A node in Kubernetes can be a virtual machine or a physical machine. It is managed by the master node and can contain many Pods.
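A Pod is declared in YAML. A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod        # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25   # the container this Pod runs
```

Applying the manifest with `kubectl apply -f pod.yaml` causes the scheduler to place the Pod on a node.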
Which process runs on the Kubernetes Master Node?
The Kube-api server process runs on the master node.
Define DaemonSets.
DaemonSets in Kubernetes ensure that some or all nodes run a copy of a Pod. This way, you can run a daemon on every node. They are typically used for node monitoring, cluster storage, and log collection.
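A DaemonSet manifest looks much like a Deployment but has no replica count, since one copy runs per node. A minimal sketch for a hypothetical log-collection agent (the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent       # illustrative name for a log-collection daemon
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: fluent/fluentd:v1.16   # one copy of this Pod runs on every node
```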
How are Kubernetes and Docker related?
Kubernetes and Docker both manage containerized applications, but at different levels.
Docker is used to build, run and distribute Docker containers. You can use it to package and ship the application.
Kubernetes is a container orchestration platform. You can use it to manage and scale the application.
What is a Kubernetes Cluster?
A cluster is a set of node machines used for running containerized applications. You can run containers across different environments: on-premise, virtual, cloud, or physical. Because containers package their own dependencies, clusters are largely OS independent.
A cluster consists of at least one master node and several worker nodes.
What are the components of Kubernetes cluster?
A Kubernetes cluster contains six main components –
- API server – It works as the front end of the Kubernetes control plane.
- Scheduler – This assigns newly created Pods to compute nodes according to their resource requirements.
- Controller manager – Runs controllers such as the node, replication, and endpoint controllers. It ensures that the correct number of Pods is running.
- Kubelet – Communicates with the container runtime (such as the Docker engine) and ensures that the containers described in a Pod are running. It receives instructions from the control plane about which actions to execute.
- Kube-proxy – This is a network proxy that facilitates Kubernetes networking. It implements the Kubernetes Service concept on every node in the cluster.
- Etcd – Stores cluster data such as state information and configuration information.
What is the Google Container Engine?
Google Container Engine (now called Google Kubernetes Engine, or GKE) is Google Cloud's managed cluster management and container orchestration service, built on open-source Kubernetes. It is designed to run and manage Docker containers, scheduling them onto a cluster.
You can interact with it using the gcloud command-line interface or the Google Cloud Platform Console.
What are the recommended security measures for Kubernetes?
Standard Kubernetes security measures include –
- Always keep the software updated to the latest version
- Enable Role-Based Access Control (RBAC) to check who accesses the Kubernetes API and their permissions
- Use namespaces to establish security boundaries and isolate sensitive workloads
- Control traffic between Pods and clusters using network segmentation policies
- Secure sensitive cloud metadata using your provider's metadata concealment feature (available in Google Kubernetes Engine, for example)
- Disable anonymous access to the Kubernetes API server and encrypt its traffic with TLS
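The network segmentation measure above is expressed as a NetworkPolicy object. A minimal sketch, assuming illustrative names (`payments`, `api`, `frontend`) for a sensitive namespace and its workloads:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only   # illustrative name
  namespace: payments         # assumed sensitive namespace
spec:
  podSelector:
    matchLabels:
      app: api                # the Pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend       # only frontend Pods may reach the api Pods
```

Note that a network plugin that supports NetworkPolicy (such as Calico or Cilium) must be installed for the policy to be enforced.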
What is Kube-proxy?
Kube-proxy is a network proxy that runs on each node in your cluster and implements part of the Kubernetes Service concept. It load-balances traffic from a Service to the appropriate backend Pods.
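The Service abstraction that kube-proxy implements can be sketched as follows; the names and ports here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web           # illustrative name
spec:
  selector:
    app: web          # traffic is balanced across all Pods carrying this label
  ports:
  - port: 80          # the Service's stable port
    targetPort: 8080  # the port the backend Pods actually listen on
```

On each node, kube-proxy programs forwarding rules (via iptables or IPVS) so that connections to the Service's cluster IP on port 80 reach one of the matching Pods on port 8080.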
Is Kubernetes IaaS or PaaS?
Kubernetes is neither PaaS nor IaaS. It is a container orchestration engine, so it is best described as Container as a Service (CaaS).
How do you get a static IP address for a Kubernetes load balancer?
Changing DNS records alone is unreliable, because the load balancer's IP address can change. Instead, reserve a static IP address with your cloud provider and assign it to the load balancer in the Service specification.
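On GKE, for example, you would first reserve a regional static address (with `gcloud compute addresses create`) and then reference it in the Service. A sketch, where the address below is a placeholder and the `loadBalancerIP` field is provider-dependent (some providers use an annotation instead):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb                   # illustrative name
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder; substitute your reserved static IP
  selector:
    app: web
  ports:
  - port: 80
```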
Mention the different types of controller managers.
Types of controllers in Kubernetes are -
- namespace controller
- replication controller
- serviceaccounts controller
- endpoints controller
What are the differences between Kubernetes and Docker Swarm?
| Basis of comparison | Kubernetes | Docker Swarm |
| --- | --- | --- |
| Installation | Time-consuming; installation instructions differ based on OS and provider. | Easy to install; a node can flexibly join an existing cluster. |
| Application deployment | Apps can be deployed using deployments, microservices, and pods. | Apps can be deployed only as microservices (services) in a Swarm cluster. |
| Scalability | Its comprehensive set of APIs and strong guarantees add complexity, slowing down the rate of scaling. | Deploys containers faster, making scaling faster too. |
| High availability | Achieved by tolerating application failures and distributing pods among nodes. | Achieved because all services can be cloned across Swarm nodes. |
| Load balancing | Pods are load-balanced within a cluster, but manual service configuration is needed. | A DNS element within Swarm mode handles incoming requests. |
| Networking | TLS authentication for container networking has to be configured manually. | Inter-node connections use TLS that is configured automatically. |
| Data volumes | Volumes can be shared among containers within the same pod. | Volumes can be shared between many containers, but they exist only locally on the node where they are created. |
| Service discovery | Containers can be defined as services, which simplifies service discovery. | Also simple: containers communicate with each other using private IP addresses. |
What is Kubectl?
kubectl is a command-line tool that you can use to control Kubernetes clusters. It lets you perform different operations, including create, delete, apply, annotate, attach, explain, and expose.
It is a client of the Kubernetes API.
What are Secrets in Kubernetes?
You can store sensitive information, such as SSH keys and passwords, in Kubernetes Secrets. A container running in a Pod can access them via an environment variable or a mounted volume.
You can create a Secret from a literal value, a text file, or a YAML manifest.
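A Secret declared as a YAML manifest can be sketched as follows; the name and values are illustrative only:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # illustrative name
type: Opaque
stringData:              # plain-text values; the API server stores them base64-encoded
  username: admin        # example value only
  password: s3cr3t       # example value only
```

Equivalently, `kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=s3cr3t` creates the same Secret from literals.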
Mention important kubectl commands.
Important kubectl commands include –
- kubectl apply
- kubectl annotate
- kubectl attach
- kubectl api-versions
- kubectl autoscale
- kubectl config set
- kubectl edit
- kubectl cluster-info dump
- kubectl config set-cluster
- kubectl config get-clusters
- kubectl config set-credentials
- kubectl config
- kubectl cluster-info
- kubectl config current-context
- kubectl drain NODE
What is Autoscaling in Kubernetes?
Autoscaling automatically adjusts capacity to match service demand. The Horizontal Pod Autoscaler (HPA) scales the number of Pod replicas horizontally, while the Cluster Autoscaler increases or decreases the number of nodes in the cluster.
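An HPA is itself declared as an object that targets a workload. A minimal sketch targeting a hypothetical Deployment named `web`:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa       # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web         # assumed existing Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add Pods when average CPU use exceeds 70%
```

The HPA relies on the metrics server being installed in the cluster to read CPU utilization.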
What is Container Orchestration?
Container orchestration is the process of automatically managing the containers that make up an application. The activities include the deployment, scheduling, networking, and management of containers. Using this process, you can deploy the same application consistently across different environments. It also helps with the following –
- Allocating resources between containers
- Service discovery and load balancing between containers
- Monitoring the health of hosts and containers
- Managing application load across host infrastructure by scaling or removing containers
Can Pods in different namespaces communicate?
Yes, pods in different namespaces can communicate using their IP addresses. The IP address of each pod can be seen using this command –
kubectl get pods -o wide --all-namespaces
What will happen while adding new API to Kubernetes?
Adding a new API to Kubernetes extends its functionality. However, every new API also adds to the cost and complexity of the system, so you should clearly define its scope and use cases before adding it, to keep both in check.