Article from: https://www.cnblogs.com/fxjwind/p/9969129.html

Traditional applications are “monoliths”: big applications in which all logic and modules are coupled together.

This obviously causes problems: the application must be upgraded as a whole, and it can only be scaled as a whole.

So we want to break big applications down into small, independent modules or components, so that each component can be upgraded and scaled independently. Components can also be implemented in different languages. Components communicate through protocols, and each component is a microservice.

Microservice technology can be seen as an evolution of traditional component technologies such as COM and CORBA, built on Docker: the hardware and implementation language become transparent to the user, and components interact through RESTful or RPC interfaces.

 

The question is: when you break a large application down into many small ones, doesn’t it become harder to maintain and deploy?

For instance, if you deploy many small applications on the same physical machine and upgrade them independently, what happens when their dependencies conflict?

And each component may require a different execution environment. Do all your machines need to have all of these environments installed? The combinations explode quickly.

So any design has its pros and cons: monolithic applications have their problems, but splitting into microservices introduces a host of new problems.

The reason microservices are so popular now (COM and CORBA were also popular for a while and then faded) is the emergence of Docker: Docker and related frameworks such as Kubernetes solve these problems much better.

Based on these frameworks, we can achieve continuous delivery, and even DevOps or NoOps.

Here we need to understand DevOps correctly: it does not mean killing the ops team and handing ops work to developers. It means that, with the support of these frameworks, maintenance and release become so simple that developers can do them more efficiently themselves, and ops is freed to provide a better underlying platform. That is a virtuous circle; merely talking about DevOps without that support is a vicious one.

 

So what is so great about container technology? Why don’t traditional VMs work? The key word is “lightweight”.

Because microservices split a big application into many small applications that are mixed together on the same machines, the first consideration is isolation.

Of course, if you use VMs, isolation is no problem, but it wastes resources: running each application in its own VM may consume an order of magnitude more resources than running the application directly.

So container technology is essentially a lightweight VM.

The biggest difference is that each VM runs its own OS and kernel, while containers share the kernel of the host OS.

So for a VM, since the OS is independent, its isolation is intuitive.

So, since containers share the OS, what technology do they use for isolation?

Linux containers mainly rely on the following two technologies to achieve isolation.

First, Linux namespaces.

By default Linux has one namespace of each kind, and you can create additional ones. Resources are segregated per namespace: a process in one namespace cannot see the PIDs, user IDs, or other resources belonging to another namespace. The kinds of namespaces are listed below (a quick demo follows the list).

 Mount (mnt)

 Process ID (pid)

 Network (net)

 Inter-process communication (ipc)

 UTS (host name,domain name)

 User ID (user)

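As a quick illustration of namespace isolation (a minimal sketch, assuming a Linux host with util-linux’s unshare available and root privileges), we can start a shell in its own PID and mount namespaces and see that it cannot see the host’s processes:

sudo unshare --fork --pid --mount-proc bash   # start a shell in new PID + mount namespaces
ps aux                                        # inside: the shell is PID 1 and sees only its own processes
exit                                          # leave the namespace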
Second, cgroups: a Linux kernel feature that limits the resource usage of a process (or a group of processes).

The CPU, memory, and other resources used by the restricted processes can be confined to a specified range.
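As a minimal sketch of what cgroups do (assuming a host with cgroup v2 mounted at /sys/fs/cgroup and root privileges), we can create a group by hand, cap its memory, and move a process into it:

sudo mkdir /sys/fs/cgroup/demo                         # create a new cgroup
echo 100M | sudo tee /sys/fs/cgroup/demo/memory.max    # cap memory at 100 MB
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs    # move the current shell into the cgroup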

Cgroups are not perfect, though. For I/O it is hard to limit the real traffic, whether on disk or on the network. For CPU it cannot handle burst traffic: utilization can momentarily exceed the limit before throttling kicks in, which easily affects other processes unless cores are explicitly pinned.

 

Container technology has existed for a long time, but it only drew wide attention and acceptance when Docker appeared, because Docker made containers “portable” through the Docker image.

Docker was the first container system that made containers easily portable across different machines.
It simplified the process of packaging up not only the application but also all its libraries and other dependencies,
even the whole OS file system, into a simple, portable package that can be used to provision the application to any other machine running Docker

A Docker image packages the entire execution environment, including the OS file system, so it doesn’t matter if the image and the host are different distributions, such as CentOS and Debian (they still share the host’s kernel).

The Docker image idea is itself borrowed from VM images, but Docker images are much lighter.

A nice design point of Docker images is that they are layered: if many images share a layer, that layer only needs to be downloaded once.

A big difference between Docker-based container images and VM images is that
container images are composed of layers, which can be shared and reused across multiple images.

So understand that the core of Docker is packaging technology, not isolation. Isolation is guaranteed by the Linux kernel features namespaces and cgroups; Docker itself does not implement isolation.

So in general, Docker is a packaging and management technology, much like Maven, except that what it manages is not Java jars but images.

 

The difference between containers and VMs has been mentioned above. Now let’s compare a Docker container with a VM in more detail to deepen our understanding.

Comparing the two, we can see the following.

First, as mentioned before, each VM carries its own OS kernel and is completely independent; Docker containers share the host’s OS and are managed by a Docker daemon process.

Furthermore, with VMs, if applications A and B need the same execution environment we have to put them in the same VM, where they are not isolated from each other; with Docker, A and B run in separate containers yet can still share the execution environment.

The key is that Docker images are layered: Docker can start multiple containers from the same layers. Image layers are read-only, so if a container changes its environment, Docker adds a new writable layer on top and places all changes in that new layer.

 

Let’s talk about k8s.

With Docker, containers can move from machine to machine. But if I have many containers and many machines, how do I manage them? Relying on manual migration and management is clearly not practical.

Kubernetes does this. It can be seen as a cluster operating system, providing service discovery, scaling, load balancing, self-healing, and even leader election.

The structure of Kubernetes is as follows.

First, Kubernetes nodes are divided into master nodes and worker nodes.

The master forms the control plane and includes: the API server, which handles communication between clients and the control plane and among control-plane components; the Scheduler, which, as its name implies, schedules applications onto worker nodes; etcd, which, similar to ZooKeeper, stores configuration and guarantees consistency; and the Controller Manager, which is responsible for cluster-level management, such as monitoring worker nodes and handling node failover.

Each worker node needs: a container runtime, such as the Docker daemon, to execute containers; a kubelet, which communicates with the master and manages all the containers on its worker node; and a kube-proxy, which, much like an SLB, load-balances access to services.

Here’s an example of how users submit applications through kubernetes.

1. The user first pushes the Docker images associated with the application to an image registry.

2. Then the user writes an app descriptor describing how the containers in the application are organized. The key concept here is the pod, which can be understood as a grouping of containers: the containers in a pod must run together, are scheduled as a whole, and are not completely isolated from each other. So in the descriptor we divide the containers into pods and specify the number of replicas for each pod (a sketch of such a descriptor follows this list).

3. The descriptor is then submitted to the master, which schedules the pods onto worker nodes; the kubelet on each worker then tells the Docker runtime on that node to start the containers.

4. The Docker runtime pulls the image from the registry and then starts the container.
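As a rough sketch of such a descriptor (a hypothetical example, reusing the luksa/kubia image that appears later in this article), a ReplicationController that keeps three replicas of a single-container pod could be submitted like this:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        ports:
        - containerPort: 8080
EOF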

 

Now let’s get hands-on, starting with Docker.

1. Start a container

docker run <image> 

docker run <image>:<tag>

For example, run the busybox image, passing in the command echo “hello world”.
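In full, that is:

docker run busybox echo "hello world"   # prints "hello world" and exits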

 

2. Create a Docker image

First, you need a program to run in Docker. Here we use a small Node.js app.

// app.js: a trivial HTTP server that responds with the container's hostname
const http = require('http');
const os = require('os');
console.log("Kubia server starting...");
var handler = function(request, response) {
  console.log("Received request from " + request.connection.remoteAddress);
  response.writeHead(200);
  response.end("You've hit " + os.hostname() + "\n");
};
var www = http.createServer(handler);
www.listen(8080);

Then you need to write a Dockerfile.

# Base image layer: the "node" container image, tag 7
FROM node:7

# Copy app.js into the root directory of the container's filesystem
ADD app.js /app.js

# Command to execute when the container starts: "node app.js"
ENTRYPOINT ["node", "app.js"]

Finally, docker build creates the image.

docker build -t kubia .

Let’s take a closer look at how the image layers are organized.

As you can see, a layer is generated for each command in the Dockerfile.

You may think that each Dockerfile creates only a single new layer, but that’s not the case. When building an image, a new layer is created for each individual command in the Dockerfile.

At this point, we can view the image we just created.
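For example (assuming the kubia tag from the build above):

docker images          # list local images; kubia should appear here
docker history kubia   # show the individual layers that make up the image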

Now you can start the container with docker run.

docker run --name kubia-container -p 8080:8080 -d kubia

--name: the container name.
-p: port 8080 on the local machine is mapped to port 8080 inside the container; Docker’s ports are isolated, so to access them from outside you must map them to ports on the host.
-d: run as a daemon, i.e. in the background.

After the container is started, it can be accessed through http://localhost:8080
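For example:

curl http://localhost:8080   # prints "You've hit <hostname>", where the hostname is the container ID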

 

3. View container

 docker ps   #View Container Basic Information

 docker inspect kubia-container  #View all container-related information

 Log in to the container.

 docker exec -it kubia-container bash 

-i, which makes sure STDIN is kept open. You need this for entering commands into the shell.

-t, which allocates a pseudo terminal (TTY).

It should be noted here that because the container process actually runs on the host’s OS, it can be seen from the host; but thanks to the PID namespace, the PID seen inside the container differs from the PID on the host.
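A rough way to see this (assuming a ps utility is available both on the host and inside the node:7 image):

ps aux | grep "[a]pp.js"             # on the host: the node process with its host PID
docker exec kubia-container ps aux   # inside the container: the same process with a namespaced PID (typically 1)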

 

4. Stop and delete containers

docker stop kubia-container  #Stop the container; it can still be seen with docker ps -a

docker rm kubia-container   #Delete container

 

5. Upload and Register Container Image

After the above steps, the container can already be used locally, but if you want to use it across machines, you need to push the image to the Docker Hub registry.

First tag the image, because Docker Hub only allows you to push images whose names start with your Docker Hub ID.

docker tag kubia luksa/kubia #If dockerhub ID is luksa

docker push luksa/kubia  #upload

So you can start the image on other machines.

docker run -p 8080:8080 -d luksa/kubia

 

Kubernetes

Setting up a full Kubernetes cluster is cumbersome, so Minikube is usually used for local experiments. First, start Minikube.

minikube start

Then you can use kubectl to interact with Kubernetes; kubectl is a client that talks to the Kubernetes API server.

You can view the status of the cluster.

You can also see all the nodes.
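The usual commands for both are:

kubectl cluster-info   # addresses of the master and core cluster services
kubectl get nodes      # list all nodes and their status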

Look at a node.

kubectl describe node gke-kubia-85f6-node-0rr

Now start deploying applications to kubernetes.

$ kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1

replicationcontroller "kubia" created

Starting an application this way creates a ReplicationController, because the application may need to run multiple instances, and the RC manages the replicas of the application.

After deployment, how do we look at the deployed application? In Kubernetes, the unit of deployment is the pod, not the container.

A pod is a group of one or more tightly related containers that will always run together on the same worker node and in the same Linux namespace(s).

A pod is like a logical machine: containers within a pod are not completely isolated from each other, but share the same Linux namespaces.

The relationship among containers, pods, and worker nodes is as follows.

We can then look at the state of the pods, and get detailed information about a specific pod.
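For example (the pod name is a placeholder; the actual name is generated by the RC, e.g. kubia-xxxxx):

kubectl get pods                 # list pods and their status
kubectl describe pod <pod-name>  # detailed information about one pod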

 

Then, how do we expose the service provided by the pods to external users? We need to create a Service to expose the RC: the pods managed by an RC are dynamic and ephemeral (if one dies, a new one is brought up with a new IP), while the Service’s IP must stay stable, so the Service acts as a stable proxy layer in front of the pods.

kubectl expose rc kubia --type=LoadBalancer --name kubia-http

service "kubia-http" exposed

View the status of services.
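For example (note that on Minikube the EXTERNAL-IP of a LoadBalancer service typically stays pending; minikube service kubia-http can be used to open it instead):

kubectl get services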

So the component diagram of the whole application is as follows.

 

Kubernetes can scale applications dynamically. Here we can see the situation before and after scaling out.
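For example, scaling the ReplicationController out to three replicas and observing the result:

kubectl scale rc kubia --replicas=3   # ask for 3 replicas of the pod
kubectl get rc                        # shows DESIRED/CURRENT counts for the RC
kubectl get pods                      # three kubia pods should now be running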

 
Link of this Article: kubernetes in action – Overview
