Created: 2023-03-26 10:08
The Problem:
what problem does Kubernetes help us to solve?
Problem:
- Example: Hosting a personal E-Commerce website
- The site becomes so popular that the host may crash under heavy incoming traffic at peak times
- One idea as a solution would be to just bring on more hosts
- add load balancers to distribute traffic
- changes have to be done individually for each host/container
- quickly becomes tedious
In Comes Kubernetes!:
Kubernetes aims to solve horizontal scaling problems like this
what is Kubernetes and why do you need it
- Containers provide a good way to bundle up and run an application
- Kubernetes provides a systemized way of managing those containers and running distributed systems resiliently
- For example: if one container goes down, Kubernetes can start up another
some of the things Kubernetes provides you with
- Service discovery and load balancing
- expose a container using DNS naming or its own IP
- if traffic is too high to a container, Kubernetes will load balance to distribute network traffic
- Storage orchestration
- mount storage systems of your choice
- local storage
- cloud providers
- Automated rollouts and rollbacks
- control over the desired state of deployed containers
- automation available to update the desired state
- Automatic bin packing
- You provide Kubernetes with nodes that can run containerized tasks
- Tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
- Self-healing
- restarts failed containers automatically
- replaces containers
- kills containers that do not respond
- Secret and configuration management
- store and manage sensitive info
- deploy and update secrets w/o rebuilding container images
- without exposing secrets
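The bin packing and secrets features above can be tied together in a single pod spec. A minimal sketch, assuming a hypothetical app (every name here — shop, example/shop:1.0, shop-secrets — is made up for illustration):

```shell
# Create a secret, then apply a pod spec that requests resources
# (used by the scheduler for bin packing) and injects the secret
# as an env var -- no image rebuild needed to rotate it.
kubectl create secret generic shop-secrets --from-literal=db-password='s3cr3t'
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shop
spec:
  containers:
    - name: shop
      image: example/shop:1.0
      resources:
        requests:            # scheduler bin-packs pods onto nodes using these
          cpu: 250m
          memory: 128Mi
        limits:              # hard caps enforced at runtime
          cpu: 500m
          memory: 256Mi
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:    # pulled from the Secret object at runtime
              name: shop-secrets
              key: db-password
EOF
```

Requires a running cluster; requests guide scheduling, while limits are what self-healing enforces (a container exceeding its memory limit gets killed and restarted).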
container orchestration
- Utilizes master server
- Kubernetes API Server
- how we talk to the master server
- Scheduler
- Controller Manager
- etcd
- Worker Nodes consist of:
- kubelet
- kube-proxy
- Docker
- OS
- Hardware
Kubernetes Components:
when you deploy Kubernetes you get a cluster; each cluster consists of:
worker nodes
- Worker nodes are machines that run containerized applications
- Each cluster has at least one worker node
- Worker nodes host the Pods that are the components of the application
- The control plane manages the worker nodes and their pods
control plane (master server) components
- Makes global decisions about the cluster
- ex: scheduling
- Detects and responds to cluster events
- ex: starting new pods when a deployment's replicas field is unsatisfied
kube-apiserver
- exposes the Kubernetes API
- API is the front end of the control plane
- scales horizontally
- several instances can be run and traffic balanced across instances
etcd
- highly available key-value store backing all cluster data
scheduler
- watches for newly created pods with no assigned node and selects a node for them to run on
kube-controller-manager
- Runs the control loop that watches the shared state of the cluster
- Makes changes attempting to move the current state toward the desired state
- types of controllers:
- Node Controller
- notices and responds when nodes go down
- Job Controller
- Watches for Job objects that represent one-off tasks and creates pods to run them to completion
- EndpointSlice Controller
- Populates EndpointSlice objects
- the link between services and pods
- ServiceAccount Controller
- create default service accounts for new namespaces
cloud-controller-manager
- embeds cloud-specific control logic
- allows you to link your cluster to your cloud provider’s API
Node Components
kubelet
- the agent running on each node in the cluster
- makes sure containers are running in a pod
- takes a set of pod specs and ensures that the containers described in the pod spec are running and healthy
kube-proxy
- a network proxy running on each node to implement the Kubernetes service concept
- maintains network rules on nodes
- uses the OS packet filtering layer if one is available; otherwise it forwards the traffic itself
container runtime
- container runtime software responsible for running containers
- ex: docker
kubectl
- command line tool used to interact with the cluster through the API server
- usage feels similar to docker commands
- kubectl commands
get nodes
- list nodes
cluster-info
- info on the cluster
run pod_name --image=docker_image --port=port_number
- spins up a pod running the given image
describe pods
- get more info on pods
delete pods
- deletes pods
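The commands above in order, as a sketch against a running cluster (my-pod and nginx are placeholder names):

```shell
kubectl get nodes                            # list nodes in the cluster
kubectl cluster-info                         # addresses of control plane services
kubectl run my-pod --image=nginx --port=80   # spin up a single pod
kubectl describe pods my-pod                 # detailed pod state and recent events
kubectl delete pods my-pod                   # remove the pod
```

Each command talks to the API server using the credentials in your kubeconfig.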
deployments (described in manifest files)
- Contains the specifics on how you want Kubernetes to run your pods
- To apply the deployment:
kubectl apply -f deployment_file.yml
- You can change the specifics of a deployment with the kubectl edit deployment command
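A minimal sketch of such a deployment manifest, written via a heredoc and then applied; all names (shop-web, example/shop:1.0) are hypothetical:

```shell
cat <<'EOF' > deployment_file.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-web
spec:
  replicas: 3                 # control plane keeps 3 pods running at all times
  selector:
    matchLabels:
      app: shop-web           # must match the pod template labels below
  template:
    metadata:
      labels:
        app: shop-web
    spec:
      containers:
        - name: shop
          image: example/shop:1.0
          ports:
            - containerPort: 8080
EOF
kubectl apply -f deployment_file.yml
```

If a pod dies, the controller manager notices the replicas count is unsatisfied and starts a replacement.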
Exposing applications to the internet:
the application that the worker nodes are running will only be available within the Kubernetes network unless exposed
to expose an application to a network (the internet) you need to deploy a service
- the service also acts as a load balancer across the pods
- done by creating a manifest (.yml) similar to the deployment manifests
- Apply the service manifest with the command:
kubectl apply -f <service_file>
- Updating docker images with Kubernetes:
kubectl edit deployment deployment_name
- change the container image field and save; Kubernetes rolls out the change
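A service manifest sketch matching the style above; the names and ports are hypothetical, and the selector is assumed to match the labels on the pods you want to expose:

```shell
cat <<'EOF' > service_file.yml
apiVersion: v1
kind: Service
metadata:
  name: shop-web
spec:
  type: LoadBalancer          # asks the cloud provider for an external IP;
                              # NodePort works without a cloud provider
  selector:
    app: shop-web             # traffic is balanced across pods with this label
  ports:
    - port: 80                # port the service listens on
      targetPort: 8080        # port the container serves on
EOF
kubectl apply -f service_file.yml
```

After applying, kubectl get services shows the assigned external IP once the provider provisions it.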
References:
- https://www.youtube.com/watch?v=7bA0gTroJjw
- https://kubernetes.io/docs/concepts/overview/components/