
  load_config_file = false

}

essh@kubernetes-master:~/node-cluster$ cat nodejs/main.tf

resource "kubernetes_deployment" "nodejs" {
  metadata {
    name = "terraform-nodejs"
    labels = {
      app = "NodeJS"
    }
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        app = "NodeJS"
      }
    }

    template {
      metadata {
        labels = {
          app = "NodeJS"
        }
      }

      spec {
        container {
          image   = "nginx:1.17.0"
          name    = "node-js"
          command = ["/bin/bash"]
          args    = ["-c", "echo $HOSTNAME > /usr/share/nginx/html/index.html && /usr/sbin/nginx -g 'daemon off;'"]
        }
      }
    }
  }
}

resource "kubernetes_service" "nodejs" {
  metadata {
    name = "terraform-nodejs"
  }

  spec {
    selector = {
      app = kubernetes_deployment.nodejs.metadata.0.labels.app
    }

    port {
      port        = 80
      target_port = var.target_port
    }

    type = "LoadBalancer"
  }
}
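The Service above takes its target port from a module input variable rather than hard-coding it. A minimal declaration of that variable in the module (the default value of 80 is an assumption for this sketch, matching the port the NGINX container listens on) could look like this:

variable "target_port" {
  # Port the pods listen on; the default of 80 is assumed for this sketch.
  default = 80
}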

Let's check the result using kubectl; to do this, we pull the cluster credentials from gcloud into kubectl.

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

essh@kubernetes-master:~/node-cluster$ gcloud container clusters get-credentials node-ks --region=europe-west2-a
Fetching cluster endpoint and auth data.
kubeconfig entry generated for node-ks.

essh@kubernetes-master:~/node-cluster$ kubectl get deployments -o wide
NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE  CONTAINERS  IMAGES        SELECTOR
terraform-nodejs  3        3        3           3          25m  node-js     nginx:1.17.0  app=NodeJS

essh@kubernetes-master:~/node-cluster$ kubectl get pods -o wide
NAME                               READY  STATUS   RESTARTS  AGE    IP         NODE                                    NOMINATED NODE
terraform-nodejs-6bd565dc6c-8768b  1/1    Running  0         4m45s  10.4.3.15  gke-node-ks-node-ks-pool-07115c5b-bw15  <none>
terraform-nodejs-6bd565dc6c-hr5vg  1/1    Running  0         4m42s  10.4.5.13  gke-node-ks-node-ks-pool-27e2e52c-9q5b  <none>
terraform-nodejs-6bd565dc6c-mm7lh  1/1    Running  0         4m43s  10.4.2.6   gke-node-ks-default-pool-2dc50760-757p  <none>

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ docker ps | grep node-js_terraform
152e3c0ed940 719cd2e3ed04 "/bin/bash -c 'ech…" 8 minutes ago Up 8 minutes k8s_node-js_terraform-nodejs-6bd565dc6c-8768b_default_7a87ae4a-9379-11e9-a78e-42010a9a0114_0

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ docker exec -it 152e3c0ed940 cat /usr/share/nginx/html/index.html
terraform-nodejs-6bd565dc6c-8768b
esschtolts@gke-node-ks-node-ks-pool-27e2e52c-9q5b ~ $ docker exec -it c282135be446 cat /usr/share/nginx/html/index.html
terraform-nodejs-6bd565dc6c-hr5vg
esschtolts@gke-node-ks-default-pool-2dc50760-757p ~ $ docker exec -it 8d1cf9ef44e6 cat /usr/share/nginx/html/index.html
terraform-nodejs-6bd565dc6c-mm7lh
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.2.6
terraform-nodejs-6bd565dc6c-mm7lh
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.5.13
terraform-nodejs-6bd565dc6c-hr5vg
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.3.15
terraform-nodejs-6bd565dc6c-8768b

The load balancer distributes traffic between the pods whose labels match the selector given in the spec section of the service description. All nodes are connected to one common network, so you can connect to any node (I did this over SSH from the GCP web interface, in the Compute Engine VM instances section). You can address a pod by its IP address either from a container or from the node host, and you can also reach the terraform-nodejs service by name (curl terraform-nodejs:80 from inside a container), since the service name is resolved by the cluster's internal DNS. The external IP address (EXTERNAL-IP) can be seen both with kubectl on the service and in the web interface: GCP -> Kubernetes Engine -> Services:

essh@kubernetes-master:~/node-cluster$ kubectl get service
NAME              TYPE          CLUSTER-IP    EXTERNAL-IP     PORT(S)       AGE
kubernetes        ClusterIP     10.7.240.1    <none>          443/TCP       6h58m
terraform-nodejs  LoadBalancer  10.7.246.234  35.197.220.103  80:32085/TCP  5m27s

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-mm7lh
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-mm7lh
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-hr5vg
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-hr5vg
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-8768b
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-mm7lh
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-mm7lh
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-mm7lh
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-8768b
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-hr5vg
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-8768b
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-mm7lh
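Instead of looking the external address up by hand each time, it can also be exposed as a Terraform output. A sketch, assuming the older Kubernetes provider used here, where the service exposes the assigned address through the load_balancer_ingress attribute (check the attribute path against your provider version):

output "nodejs_external_ip" {
  # External IP assigned by GCP to the LoadBalancer service (attribute path assumed for provider 1.x).
  value = kubernetes_service.nodejs.load_balancer_ingress.0.ip
}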

Now let's move on to implementing the NodeJS server:

essh@kubernetes-master:~/node-cluster$ sudo ./terraform destroy
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
essh@kubernetes-master:~/node-cluster$ sudo docker run -it --rm node:12 which node
/usr/local/bin/node
sudo docker run -it --rm -p 8222:80 node:12 /bin/bash -c 'cd /usr/src/ && git clone https://github.com/fhinkel/nodejs-hello-world.git &&
/usr/local/bin/node /usr/src/nodejs-hello-world/index.js'
firefox http://localhost:8222

Let's replace the container block in our configuration with:

container {
  image   = "node:12"
  name    = "node-js"
  command = ["/bin/bash"]
  args = [
    "-c",
    "cd /usr/src/ && git clone https://github.com/fhinkel/nodejs-hello-world.git && /usr/local/bin/node /usr/src/nodejs-hello-world/index.js"
  ]
}

If you comment out the Kubernetes module while its resources remain in the state, you need to remove the stale entries from the state:

essh@kubernetes-master:~/node-cluster$ ./terraform apply
Error: Provider configuration not present

essh@kubernetes-master:~/node-cluster$ ./terraform state list
data.google_client_config.default
module.Kubernetes.google_container_cluster.node-ks
module.Kubernetes.google_container_node_pool.node-ks-pool
module.nodejs.kubernetes_deployment.nodejs
module.nodejs.kubernetes_service.nodejs

essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_deployment.nodejs
Removed module.nodejs.kubernetes_deployment.nodejs
Successfully removed 1 resource instance(s).

essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_service.nodejs
Removed module.nodejs.kubernetes_service.nodejs
Successfully removed 1 resource instance(s).

essh@kubernetes-master:~/node-cluster$ ./terraform apply
module.Kubernetes.google_container_cluster.node-ks: Refreshing state… [id=node-ks]
module.Kubernetes.google_container_node_pool.node-ks-pool: Refreshing state… [id=europe-west2-a/node-ks/node-ks-pool]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Terraform Cluster Reliability and Automation

For a general overview of automation, see https://codelabs.developers.google.com/codelabs/cloud-builder-gke-continuous-deploy/index.html#0; here we will dwell on it in more detail. If we now run ./terraform destroy and try to recreate the entire infrastructure from scratch, we will get errors. They occur because the order in which services are created is not specified, and Terraform by default sends requests to the API in 10 parallel threads (this can be changed with the -parallelism switch when applying or destroying). As a result, Terraform tries to create the Kubernetes resources (Deployment and Service) on servers (the node pool) that do not yet exist, and the same happens when a Service that has to proxy a Deployment is created before that Deployment. By telling Terraform to query the API in a single thread, ./terraform apply -parallelism=1, we reduce possible provider-side limits on the frequency of API calls, but we do not solve the problem of the missing creation order. We will not comment out dependent blocks and gradually uncomment them while re-running ./terraform apply, nor will we apply the system piece by piece by specifying individual resources with ./terraform apply -target=module.nodejs.kubernetes_deployment.nodejs. Instead, we will express the dependencies in the code itself through variables: the first is already defined as the external var.endpoint, and the second we will create locally:

locals {
  app = kubernetes_deployment.nodejs.metadata.0.labels.app
}

Now we can add the dependencies to the code: depends_on = [var.endpoint] and depends_on = [kubernetes_deployment.nodejs].
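As a sketch of where these dependencies could live (the exact placement is an assumption; the resources are the ones defined in nodejs/main.tf above), the Service can wait for the Deployment and reuse the local in its selector:

resource "kubernetes_service" "nodejs" {
  # Do not create the Service until the Deployment it proxies exists.
  depends_on = [kubernetes_deployment.nodejs]

  metadata {
    name = "terraform-nodejs"
  }

  spec {
    selector = {
      app = local.app   # the local defined above
    }

    port {
      port        = 80
      target_port = var.target_port
    }

    type = "LoadBalancer"
  }
}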

A service unavailability error may also appear: Error: Get https://35.197.228.3/api/v1…: dial tcp 35.197.228.3:443: connect: connection refused. It means the connection timeout (6 minutes by default) has been exceeded; in this case you can simply try again.

Now let's move on to solving the problem of the container's reliability: its main process currently runs in a command shell. The first thing we will do is separate building the application from launching the container. To do this, we move the entire process of preparing the service into the creation of an image, which can be tested, and from which a service container can be created. So let's create an image:

essh@kubernetes-master:~/node-cluster$ cat app/server.js

const http = require('http');

const server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end(`Nodejs_cluster is working! My host is ${process.env.HOSTNAME}`);
});

server.listen(80);

essh@kubernetes-master:~/node-cluster$ cat Dockerfile

FROM node:12
WORKDIR /usr/src/
ADD ./app /usr/src/
RUN npm install
EXPOSE 3000
ENTRYPOINT ["node", "server.js"]

essh@kubernetes-master:~/node-cluster$ sudo docker image build -t nodejs_cluster .

Sending build context to Docker daemon 257.4MB
Step 1/6 : FROM node:12
 ---> b074182f4154
Step 2/6 : WORKDIR /usr/src/
 ---> Using cache
 ---> 06666b54afba
Step 3/6 : ADD ./app /usr/src/
 ---> Using cache
 ---> 13fa01953b4a
Step 4/6 : RUN npm install
 ---> Using cache
 ---> dd074632659c
Step 5/6 : EXPOSE 3000
 ---> Using cache
 ---> ba3b7745b8e3
Step 6/6 : ENTRYPOINT ["node", "server.js"]
 ---> Using cache
 ---> a957fa7a1efa
Successfully built a957fa7a1efa
Successfully tagged nodejs_cluster:latest

essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster
nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB

Now let's push our image to the GCP Container Registry rather than Docker Hub, because this immediately gives us a private repository to which our services automatically have access:

essh@kubernetes-master:~/node-cluster$ IMAGE_ID="nodejs_cluster"
essh@kubernetes-master:~/node-cluster$ sudo docker tag $IMAGE_ID:latest gcr.io/$PROJECT_ID/$IMAGE_ID:latest
essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster
nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB
gcr.io/node-cluster-243923/nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB

essh@kubernetes-master:~/node-cluster$ gcloud auth configure-docker
gcloud credential helpers already registered correctly.

essh@kubernetes-master:~/node-cluster$ docker push gcr.io/$PROJECT_ID/$IMAGE_ID:latest
The push refers to repository [gcr.io/node-cluster-243923/nodejs_cluster]
194f3d074f36: Pushed
b91e71cc9778: Pushed
640fdb25c9d7: Layer already exists
b0b300677afe: Layer already exists
5667af297e60: Layer already exists
84d0c4b192e8: Layer already exists
a637c551a0da: Layer already exists
2c8d31157b81: Layer already exists
7b76d801397d: Layer already exists
f32868cde90b: Layer already exists
0db06dff9d9a: Layer already exists
latest: digest: sha256:912938003a93c53b7c8f806cded3f9bffae7b5553b9350c75791ff7acd1dad0b size: 2629

essh@kubernetes-master:~/node-cluster$ gcloud container images list
NAME
gcr.io/node-cluster-243923/nodejs_cluster
Only listing images in gcr.io/node-cluster-243923. Use --repository to list images in other repositories.

Now we can see it in the GCP admin panel: Container Registry -> Images. Let's replace our container code with code that uses this image. For production, the deployed image should be pinned to a version in order to avoid it being updated automatically when Kubernetes re-creates PODs, for example, when a POD is moved from one node to another while a machine with our node is taken down for maintenance. For development it is more convenient to use the latest tag, which updates the service whenever the image is updated. When the image is updated, the service must be re-created, that is, deleted and created anew, because otherwise Terraform simply updates the parameters rather than re-creating the container with the new image. Even if we update the image and mark the service as modified with ./terraform taint ${NAME_SERVICE}, the service will only be updated in place, which can be seen with ./terraform plan. Therefore, for now, updating requires ./terraform destroy -target=${NAME_SERVICE} followed by ./terraform apply; the resource names can be found with ./terraform state list:

essh@kubernetes-master:~/node-cluster$ ./terraform state list
data.google_client_config.default
module.kubernetes.google_container_cluster.node-ks
module.kubernetes.google_container_node_pool.node-ks-pool
module.Nginx.kubernetes_deployment.nodejs
module.Nginx.kubernetes_service.nodejs

essh@kubernetes-master:~/node-cluster$ ./terraform destroy -target=module.nodejs.kubernetes_deployment.nodejs
essh@kubernetes-master:~/node-cluster$ ./terraform apply

Now let's replace our container block:

container {
  image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
  name  = "node-js"
}
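Because the latest tag is only convenient for development, it is worth making Kubernetes re-pull the image whenever a POD is created. A sketch of the same block with image_pull_policy added (this argument is my addition, not part of the original configuration):

container {
  image             = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
  name              = "node-js"
  # Always pull the image so that re-created PODs pick up a re-pushed "latest".
  image_pull_policy = "Always"
}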

Let's check that requests are balanced across different nodes (note that there is no line break at the end of the output):

essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$
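To push the reliability theme a little further, now that the container runs a single well-defined HTTP server, a liveness probe could be added so that Kubernetes restarts a POD whose server stops answering. A sketch only; the probe and its values are assumptions, not part of the original configuration:

container {
  image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
  name  = "node-js"

  liveness_probe {
    http_get {
      path = "/"
      port = 80            # the port server.js listens on
    }
    initial_delay_seconds = 5
    period_seconds        = 10
  }
}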

We will automate the process of building images using the Google Cloud Build service (free for up to 5 users and up to 50 GB of traffic), so that a new image is built whenever a new version (tag) is created in a Cloud Source Repositories repository (free on the Google Cloud Platform Free Tier). Google Cloud Platform -> Menu -> Tools -> Cloud Build -> Triggers -> Enable Cloud Build API -> Get Started -> Create a repository, which will then be available under Google Cloud Platform -> Menu -> Tools -> Source Repositories (Cloud Source Repositories):

essh@kubernetes-master:~/node-cluster$ cd app/
essh@kubernetes-master:~/node-cluster/app$ ls
server.js
essh@kubernetes-master:~/node-cluster/app$ mv ./server.js ../
essh@kubernetes-master:~/node-cluster/app$ gcloud source repos clone nodejs --project=node-cluster-243923
Cloning into '/home/essh/node-cluster/app/nodejs'…
warning: You appear to have cloned an empty repository.
Project [node-cluster-243923] repository [nodejs] was cloned to [/home/essh/node-cluster/app/nodejs].
essh@kubernetes-master:~/node-cluster/app$ ls -a
. .. nodejs
essh@kubernetes-master:~/node-cluster/app$ ls nodejs/
essh@kubernetes-master:~/node-cluster/app$ ls -a nodejs/
. .. .git
essh@kubernetes-master:~/node-cluster/app$ cd nodejs/
essh@kubernetes-master:~/node-cluster/app/nodejs$ mv ../../server.js .
essh@kubernetes-master:~/node-cluster/app/nodejs$ git add server.js
essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'test server'
[master (root-commit) 46dd957] test server
1 file changed, 7 insertions(+)
create mode 100644 server.js
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push -u origin master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 408 bytes | 408.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new branch] master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.

Now it's time to set up image building for every new version of the product: go to GCP -> Cloud Build -> Triggers -> Create trigger -> Google Cloud Source Repositories -> nodejs. Set the trigger type to tag so that the image is not built on ordinary commits. I will change the image name to gcr.io/node-cluster-243923/nodejs:$SHORT_SHA and the timeout to 60 seconds. Now I'll commit and add a tag:

essh@kubernetes-master:~/node-cluster/app/nodejs$ cp ../../Dockerfile .
essh@kubernetes-master:~/node-cluster/app/nodejs$ git add Dockerfile
essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'add Dockerfile'

essh@kubernetes-master:~/node-cluster/app/nodejs$ git remote -v
origin https://source.developers.google.com/p/node-cluster-243923/r/nodejs (fetch)
origin https://source.developers.google.com/p/node-cluster-243923/r/nodejs (push)
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 380 bytes | 380.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
46dd957..b86c01d master -> master
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a v0.0.1 -m 'test to run'
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin v0.0.1
Counting objects: 1, done.
Writing objects: 100% (1/1), 161 bytes | 161.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new tag] v0.0.1 -> v0.0.1

Now, if we click the Run trigger button, we will see the image in the Container Registry with our tag:

essh@kubernetes-master:~/node-cluster/app/nodejs$ gcloud container images list
NAME
gcr.io/node-cluster-243923/nodejs
gcr.io/node-cluster-243923/nodejs_cluster
Only listing images in gcr.io/node-cluster-243923. Use --repository to list images in other repositories.

Now, if we simply push changes and a new tag, the image is built automatically:

essh@kubernetes-master:~/node-cluster/app/nodejs$ sed -i 's/HOSTNAME\}/HOSTNAME\}\n/' server.js
essh@kubernetes-master:~/node-cluster/app/nodejs$ git add server.js
essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'fix'
[master 230d67e] fix
1 file changed, 2 insertions(+), 1 deletion(-)
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 304 bytes | 304.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
b86c01d..230d67e master -> master
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a v0.0.2 -m 'fix'
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin v0.0.2
Counting objects: 1, done.
Writing objects: 100% (1/1), 158 bytes | 158.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new tag] v0.0.2 -> v0.0.2
essh@kubernetes-master:~/node-cluster/app/nodejs$ sleep 60
essh@kubernetes-master:~/node-cluster/app/nodejs$ gcloud builds list
ID                                    CREATE_TIME                DURATION  SOURCE         IMAGES                                    STATUS
2b024d7e-87a9-4d2a-980b-4e7c108c5fad  2019-06-22T17:13:14+00:00  28S       nodejs@v0.0.2  gcr.io/node-cluster-243923/nodejs:v0.0.2  SUCCESS
6b4ae6ff-2f4a-481b-9f4e-219fafb5d572  2019-06-22T16:57:11+00:00  29S       nodejs@v0.0.1  gcr.io/node-cluster-243923/nodejs:v0.0.1  SUCCESS
e50df082-31a4-463b-abb2-d0f72fbf62cb  2019-06-22T16:56:48+00:00  29S       nodejs@v0.0.1  gcr.io/node-cluster-243923/nodejs:v0.0.1  SUCCESS
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a latest -m 'fix'
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin latest
Counting objects: 1, done.
Writing objects: 100% (1/1), 156 bytes | 156.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new tag] latest -> latest
essh@kubernetes-master:~/node-cluster/app/nodejs$ cd ../..
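The trigger we created through the console could also be described in Terraform itself with a google_cloudbuild_trigger resource. This is a sketch only: the field values are assumptions, and the $TAG_NAME substitution is chosen to match the image tags seen in gcloud builds list rather than taken from the original setup:

resource "google_cloudbuild_trigger" "nodejs" {
  description = "Build the nodejs image on every tag"

  trigger_template {
    project_id = "node-cluster-243923"
    repo_name  = "nodejs"
    tag_name   = ".*"   # fire on any tag, not on ordinary commits
  }

  build {
    step {
      name = "gcr.io/cloud-builders/docker"
      args = ["build", "-t", "gcr.io/node-cluster-243923/nodejs:$TAG_NAME", "."]
    }
    images = ["gcr.io/node-cluster-243923/nodejs:$TAG_NAME"]
  }
}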

Creating multiple environments with Terraform clusters

When we try to create several clusters from the same configuration, we run into identifiers that must be unique, so we isolate the clusters from each other by creating them in different projects. To create a project manually, go to GCP -> Products -> IAM and administration -> Resource management, create a NodeJS-prod project, switch to it and wait for its activation. Let's look at the state of the current project:
