docker network create --driver overlay --subnet 10.10.1.0/24 --opt encrypted services
Now we need to fill the cluster with containers. To do this, we create not a container but a service, which is a template for creating containers on different nodes. The number of replicas to create is specified at creation time with the --replicas flag, and the distribution over the nodes is random but as uniform as possible. Besides the replicas, the service includes a load balancer: the ports on which it accepts requests and proxies them to the replicas are specified with the -p flag, and Service Discovery (discovering working replicas, determining their IP addresses, restarting them) is performed by the balancer on its own.
docker service create -p 80:80 --name busybox --replicas 2 --network services busybox sleep 3000
Let's check the state of the service with docker service ls, the state and distribution of its container replicas with docker service ps busybox, and its work with wget -O- 10.10.1.2. A service is a higher-level abstraction which includes the container and organizes its updating (and far from only that): to update the container's parameters you do not need to delete and recreate the container, you simply update the service, and the service first creates a new container with the updated configuration and deletes the old one only after the new one starts.
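For example, a rolling update to a different image version can be triggered like this (a sketch; the image tag here is arbitrary):
docker service update --image busybox:1.32 busybox
docker service ps busybox # during the update both the old and the new replicas are visible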
Docker Swarm has its own load balancer, ingress load balancing, which balances the load between replicas on the port declared when creating the service, in our case port 80. The entry point can be any server with this port, but the response will be received from the server to which the request was forwarded by the balancer.
We can also save data to the host machine, as with a regular container; there are two options for this:
docker service create --mount type=bind,src=...,dst=... --name .... ..... # bind mount of a host path
docker service create --mount type=volume,src=...,dst=... --name .... ..... # named volume managed by Docker
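A concrete example might look like this (a sketch; the volume and service names are arbitrary):
docker service create --mount type=volume,src=test_volume,dst=/data --name test --replicas 2 busybox sleep 3000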
The application is deployed via Docker-compose running on the nodes (replicas). When the Docker-compose configuration is updated, you need to update the Docker stack, and the cluster will be updated in a rolling fashion: one replica will be deleted and a new one will be created in its place according to the new config, then the next one. If an error occurs, a rollback to the previous configuration will be made. Well, let's get started:
docker stack deploy -c docker-compose.yml test_stack
docker service update --label-add foo=bar Nginx
docker service update --label-rm foo Nginx
docker service update --publish-rm 80 Nginx
docker node update --availability=drain swarm-node-3
Docker swarm
$ sudo docker pull swarm
$ sudo docker run --rm swarm create
docker secret create test_secret
docker service create --secret test_secret
cat /run/secrets/test_secret
health check: hello-check-cobbalt
example pipeline: TravisCI -> Jenkins -> config:
https://www.youtube.com/watch?v=UgUuF_qZmWc
https://www.youtube.com/watch?v=6uVgR9WPjYM
Docker company-wide
Let's take a company-wide view: we have containers and we have servers. It doesn't matter whether these are two virtual machines and several containers or hundreds of servers and thousands of containers; the problem is to distribute the containers over the servers, and that takes a system administrator and time. If there is little time and a lot of containers, you need many system administrators, otherwise the containers will be distributed suboptimally: the servers (virtual machines) will be running, but not at full capacity, and resources will be wasted. In this situation, container orchestration systems are designed to optimize the distribution and save human resources.
Consider evolution:
* The developer creates the necessary containers by hand.
* The developer creates the necessary containers using previously prepared scripts.
* The system administrator, using a configuration management and deployment system such as Chef, Puppet, Ansible, or Salt, sets the topology of the system. The topology indicates which container is located in which place.
* Orchestration (schedulers) – semi-automatic distribution, maintenance of the state and reaction to system changes. For example: Google Kubernetes, Apache Mesos, Hashicorp Nomad, Docker Swarm mode, and YARN, which we'll cover. New ones also appear: Flocker (https://github.com/ClusterHQ/Flocker/), Helios (https://github.com/spotify/helios/).
There is a native solution, Docker Swarm. Of the mature systems, Kubernetes and Mesos have proven the most popular: the former is a universal and completely ready-to-use system, the latter a complex of various projects combined into a single package that allows you to replace or change their components. There is also a huge number of less popular solutions that are not promoted by such giants as Google, Twitter and others: for example Nomad, with its scheduling, scaling, upgrades and service discovery, but we will not consider them. Here we will consider the most ready-made solution – Kubernetes, which has gained great popularity for its low entry threshold, support, and sufficient flexibility in most cases, pushing Mesos into the niche of customizable solutions where customization and development are economically justified.
Kubernetes has several ready-made configurations:
* MiniKube – a cluster of one local machine, designed to overcome the threshold of entry and experiments;
* kubeadm;
* kops;
* Kubernetes-Ansible;
* microKubernetes;
* OKD;
* MicroK8s.
To start a cluster yourself, you can use KubeSai – free Kubernetes.
The smallest structural unit is called a POD, which corresponds to a YML file in Docker-compose. The process of creating a POD, like other entities, is done declaratively: by writing or changing a configuration YML file and applying it to the cluster. So, let's create a POD:
# test_pod.yml
# kubectl create -f test_pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: debian
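Whether the POD has been created and what state it is in can be checked, for example, like this (test is the POD name from the config above):
kubectl get pods
kubectl describe pod test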
To run multiple replicas:
# test_replica_controller.yml
# kubectl create -f test_replica_controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx # label by which the controller determines the presence of running containers
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: test
          image: debian
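The number of replicas can be changed later without editing the file, for example (nginx is the controller name from the config above):
kubectl scale rc nginx --replicas=5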
For balancing, a type of service (a logical entity) is used – LoadBalancer; besides it there are also ClusterIP and NodePort:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: WEB
Overlay network plugins (created and configured automatically): Contiv, Flannel, GCE networking, Linux bridging, Calico, Kube-DNS, SkyDNS.
#configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: config_name
data:
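To get the values of such a ConfigMap into a POD, they can, for example, be exposed as environment variables (a sketch; the data section above is left empty, so the keys are up to you):
....
containers:
  - name: test
    image: debian
    envFrom:
      - configMapRef:
          name: config_name
....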
Similar to secrets in Docker-swarm, there is a secret for Kubernetes, an example of which can be NGINX settings:
#secrets
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  password: ....
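The values in the data section must be base64-encoded; they can be prepared, for example, like this (my_password is a placeholder):
echo -n 'my_password' | base64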
And to add a secret to POD, you need to specify it in the POD config:
....
volumes:
  - name: secret-volume # a volume name is required; hypothetical here
    secret:
      secretName: test-secret
....
Kubernetes has several more types of Volumes:
* emptyDir;
* hostPath;
* gcePersistentDisk – a disk on Google Cloud;
* awsElasticBlockStore – a disk on Amazon AWS.
volumeMounts:
  - name: app
    mountPath: ""
volumes:
  - name: app
    hostPath:
      ....
Feature for UI: Dashboard UI
Additionally available:
* Main metrics – collection of metrics;
* Logs collect – collection of logs;
* Scheduled JOBs;
* Authentication;
* Federation – distribution across data centers;
* Helm – a package manager similar to Docker Hub.
https://www.youtube.com/watch?v=FvlwBWvI-Zg
Docker commands
Docker is a more modern counterpart to RKT containers.
In Linux, when the process with PID = 1 terminates, its namespace is destroyed as well, which leads to the shutdown of the OS; the same happens in the case of a container, since it is a special case of an OS. The delimitation of processes in itself does not add extra overhead, nor does monitoring and limiting resources for processes, because systemd provides the same configuration options in the host OS. Network virtualization is complete: both localhost and a bridge, which allows you to connect several containers via a bridge to one localhost and thereby make it shared between them, which is actively used in Kubernetes PODs.
Run a temporary container interactively with -it. To exit, press Ctrl+D, which will send a shutdown signal, after which the container will be removed thanks to --rm, to avoid clogging the system with stopped containers. If the image is created in such a way that the application is launched in a shell inside the container, which is wrong, then the signal will be sent to that application and the container will continue to run with the shell; in that case, to exit, you will need to kill it from a separate terminal by its name, set with --name name_container. For instance:
docker run --rm -it --name name_container ubuntu bash
In the beginning, the Docker CLI had a simple set of commands to manage the lifecycle of containers. Among them:
* docker run to run a container;
* docker ps to view running containers;
* docker rm to remove a container;
* docker build to create your own image;
* docker images to view existing images;
* docker rmi to remove an image.
But with growing popularity the number of commands grew and grew, and it was decided to group them: instead of the simple docker run, the docker container command group appeared, which has 25 subcommands in Docker version 19: cleanup, stop and restore, logs, and various kinds of connections to the container. The same fate befell working with images. But the old commands have remained so far for compatibility and convenience, since in most cases the basic set below is enough.
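Each old command now has a grouped equivalent, for example:
docker container ls # the same as docker ps
docker image ls # the same as docker images
docker container rm name_container # the same as docker rm name_container
Now the basic set itself: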
Starting a container:
docker run -d --name name_container ubuntu bash
Remove a running container:
docker rm -f name_container
Output of all containers:
docker ps -a
Output of running containers:
docker ps
Output of containers with consumed resources:
docker stats
Displaying processes in a container:
docker top {name_container}
Connect to the container through the sh shell (there is no BASH in alpine containers):
docker exec -it name_container sh
Cleaning the system from unused images:
docker image prune
Remove hanging images:
docker rmi $(docker images -f "dangling=true" -q)
Show image:
docker images
Create image in dir folder with Dockerfile:
docker build -t docker_user/name_image dir
Delete image:
docker rmi docker_user/name_image
Connect to Docker hub:
docker login
Push the latest revision of the image to Docker Hub (the tag is added and shifted automatically, if not specified otherwise):
docker push docker_user/name_image:latest
For a broader list, see https://niqdev.github.io/devops/docker/.
Building a Docker Machine can be described in the following steps:
Creating a VirtualBox virtual machine
docker-machine create name_virtual_system
Creating a generic virtual machine
docker-machine create -d generic name_virtual_system
List of virtual machines:
docker-machine ls
Stop the virtual machine:
docker-machine stop name_virtual_system
Start a stopped virtual machine:
docker-machine start name_virtual_system
Delete virtual machine:
docker-machine rm name_virtual_system
Connect to virtual machine:
eval "$ (docker-machine env name_virtual_system)"
Disconnect Docker from VM:
eval $(docker-machine env -u)
Login via SSH:
docker-machine ssh name_virtual_system
Quit the virtual machine:
exit
Run the sleep 10 command in the virtual machine:
docker-machine ssh name_virtual_system 'sleep 10'
Running commands in BASH environment:
docker-machine ssh dev 'bash -c "sleep 10 && echo 1"'
Copy the dir folder to the virtual machine:
docker-machine scp -r /dir name_virtual_system:/dir
Make a request to the containers of the virtual machine:
curl $(docker-machine ip name_virtual_system):9000
Forward port 9005 of the host machine to port 9007 of the virtual machine:
docker-machine ssh name_virtual_system -f -N -L 9005:0.0.0.0:9007
Master initialization:
docker swarm init
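After initialization, the master prints a ready-made join command for worker nodes; it can also be requested again later (the token and address are node-specific placeholders here):
docker swarm join-token worker
docker swarm join --token <token> <ip_master>:2377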
Running multiple containers with the same EXPOSE:
essh@kubernetes-master:~/mongo-rs$ docker run --name redis -p 6379 -d redis
f3916da35b6ba5cd393c21d5305002b78c32b089a6cc01e3e2425930c9310cba
essh@kubernetes-master:~/mongo-rs$ docker ps | grep redis
f3916da35b6b redis "docker-entrypoint.s…" 8 seconds ago Up 6 seconds 0.0.0.0:32769->6379/tcp redis
essh@kubernetes-master:~/mongo-rs$ docker port reids
Error: No such container: reids
essh@kubernetes-master:~/mongo-rs$ docker port redis
6379/tcp -> 0.0.0.0:32769
essh@kubernetes-master:~/mongo-rs$ docker port redis 6379
0.0.0.0:32769
The first solution for the build is to copy all the files and run the installation. As a result, when any file changes, all packages will be reinstalled:
COPY ./ /src/app
WORKDIR /src/app
RUN npm install
Let's use caching and split the static files and the installation:
COPY ./package.json /src/app/package.json
WORKDIR /src/app
RUN npm install
COPY . /src/app
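A complete minimal Dockerfile of this kind might look as follows (a sketch; the base image version and the start command are assumptions):
FROM node:7
COPY ./package.json /src/app/package.json
WORKDIR /src/app
RUN npm install
COPY . /src/app
CMD ["npm", "start"]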
Using the base image template node:7-onbuild:
$ cat Dockerfile
FROM node:7-onbuild
EXPOSE 3000
$ docker build .
In this case, files that do not need to be included in the image, such as system files (for example, Dockerfile, .git, node_modules, files with keys), need to be added to .dockerignore.
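A .dockerignore for such a project might look like this (a sketch):
.git
node_modules
Dockerfile
*.key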
-v /config
docker cp config.conf name_container:/config/
Real-time statistics of used resources:
essh@kubernetes-master:~/mongo-rs$ docker ps -q | docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
c8222b91737e mongo-rs_slave_1 19.83% 44.12MiB / 15.55GiB 0.28% 54.5kB / 78.8kB 12.7MB / 5.42MB 31
aa12810d16f5 mongo-rs_backup_1 0.81% 44.64MiB / 15.55GiB 0.28% 12.7kB / 0B 24.6kB / 4.83MB 26
7537c906a7ef mongo-rs_master_1 20.09% 47.67MiB / 15.55GiB 0.30% 140kB / 70.7kB 19.2MB / 7.5MB 57
f3916da35b6b redis 0.15% 3.043MiB / 15.55GiB 0.02% 13.2kB / 0B 2.97MB / 0B 4
f97e0697db61 node_api 0.00% 65.52MiB / 15.55GiB 0.41% 862kB / 8.23kB 137MB / 24.6kB 20
8c0d1adc9b9c portainer 0.00% 8.859MiB / 15.55GiB 0.06% 102kB / 3.87MB 57.8MB / 122MB 20
6018b7e3d9cd node_payin 0.00% 9.297MiB / 15.55GiB 0.06% 222kB / 3.04kB 82.4MB / 24.6kB 11
^C
When creating images, you need to consider:
* when changing a large layer, it will be recreated entirely, so it is often better to split it, for example, create one layer with 'npm i' and copy the code in a second one;
* if a file in the image is large and the container changes it, then from the read-only image layer the file will be completely copied to the editing layer; therefore containers are supposed to be lightweight, and content is usually placed in a special storage.
Code-as-a-service: the 12 factors (12factor.net):
* Codebase – one service – one repository;
* Dependencies – all dependent services in the config;
* Config – configs are available through the environment;
* Backing services – data exchange with other services goes over the network via an API;
* Processes – one service – one process, which in the event of a crash makes it possible to track it unambiguously (the container itself terminates) and restart it;
* Independence from the environment and no influence on it;
* CI/CD – code control (git) – build (Jenkins, GitLab) – release (Docker, Jenkins) – deploy (Helm, Kubernetes). Keeping the service lightweight is important, but there are programs not designed to run in containers, such as databases. Due to their peculiarities, certain requirements are imposed on their launch, and the profit is limited: because of big data they are not only slow to scale, rolling updates are unlikely, and the restart must be performed on the same nodes as their data for reasons of access performance.
* Config – service relationships are defined in the configuration, for example, docker-compose.yml;
* Port binding – services communicate through ports, and the port can be chosen automatically; for example, if EXPOSE PORT is specified in the Dockerfile, then when the container is run with the -P flag it will be mapped to a free port automatically;
* Env – environment settings are passed through environment variables, not through configs, which allows them to be added to the service configuration, for example, docker-compose.yml (see the sketch after this list);
* Logs – logs are streamed over the network, for example, ELK, or printed to the output, which is already streamed by Docker.
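As an illustration of the Config and Env factors, a docker-compose.yml fragment passing settings through environment variables (a sketch; the service name and the variables are hypothetical):
version: '2'
services:
  api:
    image: node_api
    ports:
      - 8080:8080
    environment:
      - DB_HOST=db
      - LOG_LEVEL=info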
Dockerd internals:
essh@kubernetes-master:~/mongo-rs$ ps aux | grep dockerd
root 6345 1.1 0.7 3257968 123640 ? Ssl Jul05 76:11 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
essh 16650 0.0 0.0 21536 1036 pts/6 S+ 23:37 0:00 grep --color=auto dockerd
essh@kubernetes-master:~/mongo-rs$ pgrep dockerd
6345
essh@kubernetes-master:~/mongo-rs$ pstree -c -p -A $(pgrep dockerd)
dockerd(6345)-+-docker-proxy(720)-+-{docker-proxy}(721)
              |                   |-{docker-proxy}(722)
              |                   |-{docker-proxy}(723)
              |                   |-{docker-proxy}(724)
              |                   |-{docker-proxy}(725)
              |                   |-{docker-proxy}(726)
              |                   |-{docker-proxy}(727)
              |                   `-{docker-proxy}(728)
              |-docker-proxy(7794)-+-{docker-proxy}(7808)
Dockerfile recommendations:
* clean the caches of package managers (apt-get, pip and others): this cache is not needed in production, it only takes up space and loads the network; nowadays, though, this is often no longer relevant, since there are multi-stage builds, but more on that below;
* group commands on the same entities, for example, getting the APT cache, installing programs and deleting the cache: in a single instruction the layer contains only the programs' code, while with them spread out – the programs' code plus the cache, because if you do not delete the cache in the same instruction, it will be saved in the layer regardless of subsequent actions;
* separate instructions by frequency of change: for example, if you do not split the installation of software and the code, then when you change something in the code, instead of reusing the ready-made layer with the programs, they will be reinstalled, which will entail significant image preparation time, which is critical for developers:
ADD ./app/package.json /app
RUN npm install
ADD ./app /app
Docker alternatives
* Rocket or rkt – containers from CoreOS (now part of Red Hat), for an operating environment specially designed to use containers.
* Hyper-V – an environment for running Docker on the Windows operating system, which is a wrapper (a lightweight virtual machine) around the container.
Docker has split off its core components, which it uses as primitives and which have become standard components for implementing containers such as rkt, into the containerd project:
* CRI-O – an OpenSource project aimed from the beginning at fully supporting the CRI (Container Runtime Interface) standards, the Runtime Specification (github.com/opencontainers/runtime-spec) and the Image Specification (github.com/opencontainers/image-spec), as a general interface for the interaction of the orchestration system with containers. Along with Docker, support for CRI-O 1.0 has been added to Kubernetes (more on this below) since version 1.7 in 2017, as well as to MiniKube and Kubic. It has a CLI (Command Line Interface) implementation in the Podman project, which almost completely repeats the Docker commands, but without orchestration (Docker Swarm), and which is the default tool in Linux Fedora.
* CRI (Container Runtime Interface, Kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-Kubernetes/) – an environment for running containers, universally providing primitives (Executor, Supervisor, Metadata, Content, Snapshot, Events and Metrics) for working with Linux containers (process namespaces, cgroups, etc.).
* CNI (Container Networking Interface) – work with the network.
Portainer
The simplest monitoring option would be Portainer:
essh@kubernetes-master:~/microKubernetes$ cat << EOF > docker-compose.monitoring.yml
version: '2'
>
services:
  portainer:
    image: portainer/portainer
    command: -H unix:///var/run/docker.sock
    restart: always
    ports:
      - 9000:9000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./portainer_data:/data
>
EOF
essh@kubernetes-master:~/microKubernetes$ docker-compose -f docker-compose.monitoring.yml up -d
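If everything started successfully, the Portainer web interface will answer on port 9000 of the host machine:
curl -I localhost:9000 # the UI itself is opened in a browser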
Monitoring with Prometheus
Monitoring – maintaining the continuity of operation, tracking the current situation (identifying, localizing and notifying about incidents, for example, in SaaS PagerDuty), predicting possible situations, visualization, and building models for the normal operation of AIOps (Artificial Intelligence for IT Operations, https://www.gartner.com/en/information-technology/glossary/aiops-artificial-intelligence-operations).
Monitoring contains the following steps:
* identification of the incident;
* notification of the incident;
* localization;
* resolution.
Monitoring can be classified by level into the following types:
* infrastructure (operating system, servers, Kubernetes, DBMS);
* application (application logs, traces, application events);
* business processes (points in transactions, traces of transactions).
Monitoring can be classified according to the principle:
* distributed (traces);
* synthetic (availability);
* AIOps (forecasting, anomalies).
By the depth of analysis, monitoring is divided into two parts: logging systems and incident investigation systems. An example of a logging system is the ELK stack, of an incident investigation system – Sentry (SaaS). For microservices, a request tracing system such as Jaeger or Zipkin is also added. The logging system simply writes all the logs that are available. The incident investigation system writes much more information, but writes it only in case of errors in the application: for example, environment parameters, versions of installed packages, stack trace and so on, which allows you to get maximum information when viewing the error, rather than collecting it piece by piece from the server and the GIT repository. But the set and format of information depends on the environment, therefore the incident system needs to be integrated with various language platforms, and even better with specific frameworks. So Sentry sends environment variables, a piece of code with an indication of where the error occurred, parameters of the program and platform environment, and method calls.
Ecosystem monitoring can be divided into:
* built into the cloud platform: Azure Monitoring, Amazon CloudWatch, Google Cloud Monitoring;
* provided as a service with support for various SaaS integrations: DataDog, NewRelic;
* CloudNative: Prometheus (see the sketch after this list);
* for dedicated on-premises servers: Zabbix.
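A minimal launch of Prometheus in a container might look like this (a sketch; prometheus.yml here contains a single job scraping Prometheus itself):
cat << EOF > prometheus.yml
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
EOF
docker run -d --name prometheus -p 9090:9090 -v $PWD/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus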
Zabbix was developed in 1998 and released to OpenSource under the GPL in 2001. At that time it had the traditional interface: without any design, with a lot of tabs, selectors and the like. Since it was developed for its own needs, it contains specific solutions. It is oriented towards monitoring devices and their components, such as disks, networks, printers, routers and the like. For interaction, the following can be used:
Agents – installed on servers; they collect many metrics and send them to the Zabbix server