diff --git a/configmap/README.md b/configmap/README.md
new file mode 100644
index 0000000..86d91bd
--- /dev/null
+++ b/configmap/README.md
@@ -0,0 +1,90 @@
+# ConfigMap
+
+## 01 - Environment variables
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: envar-demo
+  labels:
+    purpose: demonstrate-envars
+spec:
+  containers:
+  - name: envar-demo-container
+    image: gcr.io/google-samples/node-hello:1.0
+    env:
+    - name: DEMO_GREETING
+      value: "Hello from the environment"
+    - name: DEMO_FAREWELL
+      value: "Such a sweet sorrow"
+```
+
+```
+kubectl exec -it envar-demo -- env
+```
+
+## 02 - ConfigMaps
+
+A ConfigMap is an API object used to store non-confidential data as key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
+
+```
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: game-demo
+data:
+  # property-like keys; each key maps to a simple value
+  player_initial_lives: "3"
+  ui_properties_file_name: "user-interface.properties"
+
+  # file-like keys
+  game.properties: |
+    enemy.types=aliens,monsters
+    player.maximum-lives=5
+  user-interface.properties: |
+    color.good=purple
+    color.bad=yellow
+    allow.textmode=true
+```
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-demo-pod
+spec:
+  containers:
+  - name: demo
+    image: alpine
+    command: ["sleep", "3600"]
+    env:
+    # Define the environment variable
+    - name: PLAYER_INITIAL_LIVES # Notice that the case is different here
+                                 # from the key name in the ConfigMap.
+      valueFrom:
+        configMapKeyRef:
+          name: game-demo           # The ConfigMap this value comes from.
+          key: player_initial_lives # The key to fetch.
+ - name: UI_PROPERTIES_FILE_NAME + valueFrom: + configMapKeyRef: + name: game-demo + key: ui_properties_file_name + volumeMounts: + - name: config + mountPath: "/config" + readOnly: true + volumes: + # You set volumes at the Pod level, then mount them into containers inside that Pod + - name: config + configMap: + # Provide the name of the ConfigMap you want to mount. + name: game-demo + # An array of keys from the ConfigMap to create as files + items: + - key: "game.properties" + path: "game.properties" + - key: "user-interface.properties" + path: "user-interface.properties" +``` diff --git a/crojob/README.md b/crojob/README.md new file mode 100644 index 0000000..d2a990d --- /dev/null +++ b/crojob/README.md @@ -0,0 +1,27 @@ +# Cronjobs + +``` +apiVersion: batch/v1 +kind: CronJob +metadata: + name: hello +spec: + schedule: "* * * * *" + jobTemplate: + spec: + template: + spec: + containers: + - name: hello + image: busybox:1.28 + imagePullPolicy: IfNotPresent + command: + - /bin/sh + - -c + - date; echo Hello from the Kubernetes cluster + restartPolicy: OnFailure +``` + +``` +watch kubectl get jobs,pods,cj +``` diff --git a/daemonset/README.md b/daemonset/README.md new file mode 100644 index 0000000..92a832e --- /dev/null +++ b/daemonset/README.md @@ -0,0 +1,20 @@ +# Daemonsets + +``` +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: exemplo-ds +spec: + selector: + matchLabels: + app: exemplo-ds + template: + metadata: + labels: + app: exemplo-ds + spec: + containers: + - image: nginx:alpine + name: nginx +``` diff --git a/deployment/README.md b/deployment/README.md index 2cb66bc..cf2840a 100644 --- a/deployment/README.md +++ b/deployment/README.md @@ -70,93 +70,3 @@ Escale o deployment para cinco réplicas: ``` kubectl scale deployment/nginx-deployment --replicas=5 ``` - -#### 02 - HorizontalPodAutoscaler (HPA) - -Para utilização do HPA precisaremos instalar o [metrics-server](https://github.com/kubernetes-sigs/metrics-server) no cluster: - -``` -kubectl 
apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
-```
-
-Para concluir a instalação, edite o deployment adicionando um campo `--kubelet-insecure-tls` da lista de `args` do container.
-
-Instale o deployment de exemplo abaixo e inspecione os objetos no cluster:
-
-```
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: hpa-demo-deployment
-spec:
-  selector:
-    matchLabels:
-      run: hpa-demo-deployment
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        run: hpa-demo-deployment
-    spec:
-      containers:
-      - name: hpa-demo-deployment
-        image: k8s.gcr.io/hpa-example
-        ports:
-        - containerPort: 80
-        resources:
-          limits:
-            cpu: 500m
-          requests:
-            cpu: 200m
-```
-
-Crie o service no cluster:
-
-```
-apiVersion: v1
-kind: Service
-metadata:
-  name: hpa-demo-deployment
-  labels:
-    run: hpa-demo-deployment
-spec:
-  ports:
-  - port: 80
-  selector:
-    run: hpa-demo-deployment
-```
-
-Crie o HPA no cluster:
-
-```
-apiVersion: autoscaling/v1
-kind: HorizontalPodAutoscaler
-metadata:
-  name: hpa-demo-deployment
-spec:
-  scaleTargetRef:
-    apiVersion: apps/v1
-    kind: Deployment
-    name: hpa-demo-deployment
-  minReplicas: 1
-  maxReplicas: 10
-  targetCPUUtilizationPercentage: 50
-```
-
-Suba um container usando a image busybox para adicionar carga de CPU no pod:
-
-```
-kubectl run -i --tty load-generator --rm --image=busybox --restart=Never
-```
-
-Dentro do container rode o seguinte comando:
-
-```
-while sleep 0.01; do wget -q -O- http://hpa-demo-deployment; done
-```
-
-Monitore o HPA e o quantitativo de pods enquanto o comando acima é executado:
-
-```
-watch kubectl get pods,hpa
-```
diff --git a/hpa/README.md b/hpa/README.md
new file mode 100644
index 0000000..35817f8
--- /dev/null
+++ b/hpa/README.md
@@ -0,0 +1,89 @@
+# HorizontalPodAutoscaler (HPA)
+
+To use the HPA we first need to install the [metrics-server](https://github.com/kubernetes-sigs/metrics-server) in the cluster:
+
+```
+kubectl apply -f
https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
+```
+
+To finish the installation, edit the deployment and add the `--kubelet-insecure-tls` flag to the container's `args` list.
+
+Install the example deployment below and inspect the objects in the cluster:
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: hpa-demo-deployment
+spec:
+  selector:
+    matchLabels:
+      run: hpa-demo-deployment
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        run: hpa-demo-deployment
+    spec:
+      containers:
+      - name: hpa-demo-deployment
+        image: k8s.gcr.io/hpa-example
+        ports:
+        - containerPort: 80
+        resources:
+          limits:
+            cpu: 500m
+          requests:
+            cpu: 200m
+```
+
+Create the service in the cluster:
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+  name: hpa-demo-deployment
+  labels:
+    run: hpa-demo-deployment
+spec:
+  ports:
+  - port: 80
+  selector:
+    run: hpa-demo-deployment
+```
+
+Create the HPA in the cluster:
+
+```
+apiVersion: autoscaling/v1
+kind: HorizontalPodAutoscaler
+metadata:
+  name: hpa-demo-deployment
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: hpa-demo-deployment
+  minReplicas: 1
+  maxReplicas: 10
+  targetCPUUtilizationPercentage: 50
+```
+
+Start a container with the busybox image to put CPU load on the pod:
+
+```
+kubectl run -i --tty load-generator --rm --image=busybox --restart=Never
+```
+
+Inside the container, run the following command:
+
+```
+while sleep 0.01; do wget -q -O- http://hpa-demo-deployment; done
+```
+
+Monitor the HPA and the number of pods while the command above runs:
+
+```
+watch kubectl get pods,hpa
+```
diff --git a/ingress/README.md b/ingress/README.md
new file mode 100644
index 0000000..e69de29
diff --git a/namespaces/README.md b/namespace/README.md
similarity index 100%
rename from namespaces/README.md
rename to namespace/README.md
diff --git a/operators/README.md b/operators/README.md
new file mode 100644
index 0000000..a67810d
--- /dev/null
+++
b/operators/README.md
@@ -0,0 +1,23 @@
+
+# Operators
+
+```
+curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.21.2/install.sh | bash -s v0.21.2
+```
+
+```
+kubectl create -f https://operatorhub.io/install/cloud-native-postgresql.yaml
+```
+
+```
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-sample
+spec:
+  instances: 3
+  logLevel: info
+  primaryUpdateStrategy: unsupervised
+  storage:
+    size: 1Gi
+```
diff --git a/secret/README.md b/secret/README.md
new file mode 100644
index 0000000..4f92259
--- /dev/null
+++ b/secret/README.md
@@ -0,0 +1,178 @@
+# Secrets
+
+## Introduction
+
+Secrets are objects for storing sensitive information, such as passwords or tokens.
+
+### Using Secrets
+
+
+#### 01 - Opaque
+
+```
+kubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'
+```
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: envfrom-secret
+spec:
+  containers:
+  - name: envars-test-container
+    image: nginx
+    envFrom:
+    - secretRef:
+        name: test-secret
+```
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: env-secretkeyref
+spec:
+  containers:
+  - name: envars-test-container
+    image: nginx
+    env:
+    - name: username
+      valueFrom:
+        secretKeyRef:
+          name: test-secret
+          key: username
+    - name: password
+      valueFrom:
+        secretKeyRef:
+          name: test-secret
+          key: password
+```
+
+#### 02 - Dockerconfigjson
+
+```
+kubectl create secret docker-registry regcred --docker-server= --docker-username= --docker-password= --docker-email=
+```
+
+Fill in some fictitious data and apply it to the cluster. Inspect the created object to see the format of the generated YAML.
+
+There are two ways to use this type of Secret: directly in the Pod/Deployment, or in the ServiceAccount.
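Under the hood, `kubectl create secret docker-registry` packs those flags into a base64-encoded `.dockerconfigjson` payload. A minimal local sketch of that encoding; `registry.example.com`, `user`, and `pass` are placeholder values, not taken from the workshop:

```shell
# Reproduce the .dockerconfigjson payload that
# `kubectl create secret docker-registry` generates (placeholder values).
auth=$(printf 'user:pass' | base64)   # the "auth" field is base64(user:pass)
printf '{"auths":{"registry.example.com":{"username":"user","password":"pass","auth":"%s"}}}\n' "$auth"
```

Decoding the created Secret's `.dockerconfigjson` key with `base64 -d` should show the same JSON structure.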
+
+Example of use in a Pod:
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: private-reg
+spec:
+  containers:
+  - name: private-reg-container
+    image:
+  imagePullSecrets:
+  - name: regcred
+```
+
+Example of use in a ServiceAccount:
+
+```
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: default
+imagePullSecrets:
+- name: regcred
+```
+
+#### 03 - TLS secrets
+
+Kubernetes provides a Secret type specifically for storing application TLS certificates (`kubernetes.io/tls`).
+
+Create a file (`confs.cnf`) with the certificate settings:
+
+```
+[req]
+distinguished_name = req_distinguished_name
+x509_extensions = v3_req
+prompt = no
+[req_distinguished_name]
+C = BR
+ST = RJ
+L = Rio de Janeiro
+O = Sua Empresa
+OU = Departamento de TI
+CN = *.workshop.getup.local
+[v3_req]
+keyUsage = critical, digitalSignature, keyAgreement
+extendedKeyUsage = serverAuth
+```
+
+Run openssl to create the certificate (`tls.crt`) and the private key (`tls.key`):
+
+```
+openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout tls.key -out tls.crt -config confs.cnf -sha256
+```
+
+Install ingress-nginx in the cluster:
+
+```
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+```
+
+```
+helm repo update
+```
+
+```
+helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace \
+--set controller.extraArgs.default-ssl-certificate=ingress-nginx/default-certificate
+```
+
+Once the ingress controller is installed, just create the default secret in the same namespace as the controller:
+
+```
+kubectl create secret tls default-certificate -n ingress-nginx \
+  --cert=tls.crt \
+  --key=tls.key
+```
+
+For the final test, create an nginx deployment and expose it through an ingress:
+
+```
+kubectl create deploy nginx --port=80 --image=nginx:alpine
+```
+
+```
+kubectl expose deploy/nginx --port=80 --target-port=80
+```
+
+```
+kubectl create ingress nginx --class=nginx --rule="nginx.workshop.getup.local/=nginx:80,tls"
+```
+
+```
+curl -kv https://nginx.workshop.getup.local
+```
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+  name: secret-tls
+type: kubernetes.io/tls
+data:
+  # the data is abbreviated in this example
+  tls.crt: |
+    MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
+  tls.key: |
+    MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ...
+```
+
+How to create a `kubernetes.io/tls` Secret:
+
+```
+kubectl create secret tls my-tls-secret \
+  --cert=path/to/cert/file \
+  --key=path/to/key/file
+```
diff --git a/service/README.md b/service/README.md
new file mode 100644
index 0000000..e9c86d3
--- /dev/null
+++ b/service/README.md
@@ -0,0 +1,216 @@
+# Services
+
+## Introduction
+
+A Service is a network-level abstraction that exposes Pods in a durable way, making them reachable through a name and creating a load balancer that distributes traffic evenly across the Pods. Which Pods are balanced by the Service is defined by a selector, which matches Pods carrying a given label.
+
+![service](service-example.png)
+
+### Using Services
+
+#### 01 - Exposing a Pod
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+  labels:
+    app.kubernetes.io/name: proxy
+spec:
+  containers:
+  - name: nginx
+    image: nginx:stable
+    ports:
+    - containerPort: 80
+      name: http-web-svc
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx-service
+spec:
+  selector:
+    app.kubernetes.io/name: proxy
+  ports:
+  - name: name-of-service-port
+    protocol: TCP
+    port: 80
+    targetPort: http-web-svc
+```
+
+#### 02 - Exposing a Deployment - ClusterIP
+
+![clusterip](clusterip.png)
+
+```
+kubectl create deploy nginx --image=nginx:alpine
+```
+
+```
+kubectl expose deploy/nginx --port=80 --target-port=80 --type=ClusterIP
+```
+
+#### 03 - NodePort
+
+Start a kind cluster with three nodes ([config reference](https://raw.githubusercontent.com/mmmarceleza/devops/main/kubernetes/kind/config.yaml)):
+
+```
+kind create cluster --config config.yaml
+```
+
+Create a deployment with 2 replicas:
+
+```
+kubectl
create deploy nginx --image=nginx:alpine --replicas=2
+```
+
+Create a NodePort service for this deployment:
+
+```
+kubectl expose deploy/nginx --port=80 --target-port=80 --type=NodePort
+```
+
+Identify the IP address of each node:
+
+```
+docker inspect ID
+```
+
+Check that a connection to the service's high port (NodePort) succeeds:
+
+```
+nc -vz IP PORT
+
+or
+
+curl IP:PORT
+```
+
+#### 04 - Port-forward a service
+
+```
+kubectl create deploy nginx --image=nginx:alpine
+```
+
+```
+kubectl expose deploy/nginx --port=80 --target-port=80 --type=ClusterIP
+```
+
+```
+kubectl port-forward svc/nginx 8080:80
+```
+
+#### 05 - LoadBalancer
+
+Install metallb. First, enable strict ARP mode in kube-proxy:
+
+```
+kubectl edit configmap -n kube-system kube-proxy
+```
+The `strictARP` field must be set to `true`.
+```
+apiVersion: kubeproxy.config.k8s.io/v1alpha1
+kind: KubeProxyConfiguration
+mode: "ipvs"
+ipvs:
+  strictARP: true
+```
+
+```
+kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.4/config/manifests/metallb-native.yaml
+```
+
+Apply the following manifest to the cluster:
+
+```
+---
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: first-pool
+  namespace: metallb-system
+spec:
+  addresses:
+  - 172.18.0.100-172.18.0.200
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: example
+  namespace: metallb-system
+spec:
+  ipAddressPools:
+  - first-pool
+```
+
+Create an nginx deployment with a LoadBalancer service:
+
+```
+kubectl create deploy nginx --image=nginx:alpine
+```
+
+```
+kubectl expose deploy/nginx --port=80 --target-port=80 --type=LoadBalancer
+```
+
+#### 06 - Headless
+
+Create the following resources in the cluster:
+
+```
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: app
+  labels:
+    app: server
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: web
+  template:
+    metadata:
+      labels:
+        app: web
+    spec:
+      containers:
+      - name: nginx
+        image: nginx
+        ports:
+        - containerPort: 80
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: regular-service
+spec:
+  selector:
+    app: web
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 80
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: headless-svc
+spec:
+  clusterIP: None
+  selector:
+    app: web
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 80
+```
+
+Run a pod to check DNS resolution for each of the created services:
+
+```
+kubectl run -it --rm testedns --image=nicolaka/netshoot
+```
diff --git a/service/clusterip.png b/service/clusterip.png
new file mode 100644
index 0000000..b6d1974
Binary files /dev/null and b/service/clusterip.png differ
diff --git a/service/service-example.png b/service/service-example.png
new file mode 100644
index 0000000..d1f8eae
Binary files /dev/null and b/service/service-example.png differ
diff --git a/statefulset/README.md b/statefulset/README.md
new file mode 100644
index 0000000..da69f9a
--- /dev/null
+++ b/statefulset/README.md
@@ -0,0 +1,62 @@
+
+# StatefulSets
+
+StatefulSet is the Kubernetes API object responsible for managing stateful applications, such as databases and the like.
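Unlike a Deployment, a StatefulSet gives each replica a stable ordinal identity and a per-pod DNS entry under its headless Service. A small illustration of the names that a 3-replica StatefulSet named `web` with a headless Service `nginx` produces, assuming the `default` namespace and the default `cluster.local` cluster domain:

```shell
# Print the stable per-pod DNS names: <pod>.<service>.<namespace>.svc.<domain>
for i in 0 1 2; do
  echo "web-$i.nginx.default.svc.cluster.local"
done
```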
+
+```
+helm repo add openebs https://openebs.github.io/charts
+helm repo update
+helm install openebs -n openebs --create-namespace openebs/openebs
+kubectl get storageclass
+```
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx
+  labels:
+    app: nginx
+spec:
+  ports:
+  - port: 80
+    name: web
+  clusterIP: None
+  selector:
+    app: nginx
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: web
+spec:
+  selector:
+    matchLabels:
+      app: nginx
+  serviceName: "nginx"
+  replicas: 3
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      terminationGracePeriodSeconds: 10
+      containers:
+      - name: nginx
+        image: k8s.gcr.io/nginx-slim:0.8
+        ports:
+        - containerPort: 80
+          name: web
+        volumeMounts:
+        - name: www
+          mountPath: /usr/share/nginx/html
+  volumeClaimTemplates:
+  - metadata:
+      name: www
+    spec:
+      accessModes: [ "ReadWriteOnce" ]
+      storageClassName: "openebs-hostpath"
+      resources:
+        requests:
+          storage: 1Gi
+```
diff --git a/volumes/README.md b/volumes/README.md
new file mode 100644
index 0000000..27364ef
--- /dev/null
+++ b/volumes/README.md
@@ -0,0 +1,209 @@
+# Volumes
+
+## Introduction
+
+### Using Volumes
+
+#### 01 - EmptyDir
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: exemplo-01
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: emptydir
+  template:
+    metadata:
+      labels:
+        app: emptydir
+    spec:
+      containers:
+      - name: escrita
+        image: k8s.gcr.io/busybox
+        args:
+        - /bin/sh
+        - -c
+        - while sleep 30; do touch /escrita/$(date +%H)-$(date +%M)-$(date +%S); done
+        volumeMounts:
+        - mountPath: /escrita
+          name: pasta-comum
+      - name: leitura
+        image: k8s.gcr.io/busybox
+        args:
+        - /bin/sh
+        - -c
+        - while sleep 30; do ls -l /leitura; done
+        volumeMounts:
+        - mountPath: /leitura
+          name: pasta-comum
+      volumes:
+      - name: pasta-comum
+        emptyDir: {}
+```
+
+Check the log output of the `leitura` container. Delete the Pod and check again.
Note that the files written by the first Pod are gone: an `emptyDir` volume is created with the Pod and deleted along with it.
+
+#### 02 - HostPath
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: exemplo-02
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: hostpath
+  template:
+    metadata:
+      labels:
+        app: hostpath
+    spec:
+      containers:
+      - name: escrita
+        image: k8s.gcr.io/busybox
+        args:
+        - /bin/sh
+        - -c
+        - while sleep 30; do touch /app/$(date +%H)-$(date +%M)-$(date +%S); done
+        volumeMounts:
+        - mountPath: /app
+          name: temp
+      volumes:
+      - name: temp
+        hostPath:
+          path: /tmp
+```
+
+Let the pod run for at least one minute. Check for the generated files in the pod's `/app` directory. Kill the pod and repeat the procedure with the next one. Note that the files generated by the first Pod were persisted and mounted into the second.
+
+
+#### 03 - PV and PVC - Static provisioning
+
+PV:
+
+```
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv-exemplo-03
+  labels:
+    type: local
+spec:
+  storageClassName: manual
+  capacity:
+    storage: 1Gi
+  accessModes:
+    - ReadWriteOnce
+  hostPath:
+    path: "/mnt/data"
+```
+
+PVC:
+
+```
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-exemplo-03
+spec:
+  storageClassName: manual
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+```
+
+POD:
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-exemplo-03
+spec:
+  volumes:
+  - name: pv
+    persistentVolumeClaim:
+      claimName: pvc-exemplo-03
+  containers:
+  - name: nginx
+    image: nginx
+    ports:
+    - containerPort: 80
+      name: "http-server"
+    volumeMounts:
+    - mountPath: "/usr/share/nginx/html"
+      name: pv
+```
+
+Create a file named index.html and put your name inside it:
+
+```
+kubectl exec -it pod-exemplo-03 -- bash
+echo YOUR_NAME >> /usr/share/nginx/html/index.html
+curl http://localhost
+```
+
+Delete the Pod, create it again, and run the `curl` command once more to check whether the file created in the previous step was persisted to disk.
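The file survives because it lives under the node's `/mnt/data` directory, whose lifecycle is independent of any Pod. A local sketch of that behavior; the temp directory stands in for the hostPath, and `YOUR_NAME` is a placeholder:

```shell
# Simulate hostPath persistence: data written under the node path
# outlives the "pod" that wrote it.
mnt=$(mktemp -d)                       # stands in for /mnt/data on the node
echo "YOUR_NAME" >> "$mnt/index.html"  # what the first pod wrote
# ...pod deleted and recreated; the same directory is mounted again:
cat "$mnt/index.html"                  # the file is still there
```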
+
+#### 04 - PV and PVC - Dynamic provisioning
+
+For this dynamic scenario, the administrator needs to configure a `StorageClass` so that the API can manage and create the volumes automatically.
+
+Install OpenEBS with Helm to make a StorageClass available in your cluster:
+
+```
+helm repo add openebs https://openebs.github.io/charts
+helm repo update
+helm install openebs -n openebs --create-namespace openebs/openebs
+kubectl get storageclass
+```
+
+Install the PVC in the cluster:
+
+```
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: pvc-exemplo-04
+spec:
+  storageClassName: openebs-hostpath
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1G
+```
+
+Install the Pod in the cluster:
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-exemplo-04
+spec:
+  volumes:
+  - name: local-storage
+    persistentVolumeClaim:
+      claimName: pvc-exemplo-04
+  containers:
+  - name: hello-container
+    image: busybox
+    command:
+    - sh
+    - -c
+    - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done'
+    volumeMounts:
+    - mountPath: /mnt/store
+      name: local-storage
+```
+
+Identify the PV that was created and locate the file written by the pod.
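The container's loop appends a timestamped line to `greet.txt` roughly every five minutes. One iteration, run locally for illustration (the temp directory stands in for the volume mounted at `/mnt/store`), shows the line format you should find in the file on the node:

```shell
# One iteration of the pod's write loop, run locally.
store=$(mktemp -d)   # stands in for /mnt/store inside the pod
echo "$(date) [$(hostname)] Hello from OpenEBS Local PV." >> "$store/greet.txt"
cat "$store/greet.txt"
```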