class: title, self-paced Cloud Native
Continuous Deployment
with GitLab, Helm, and
Linode Kubernetes Engine
.nav[*Self-paced version*] .debug[ ``` ``` These slides have been built from commit: 1292168 [shared/title.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/title.md)] --- class: title, in-person Cloud Native
Continuous Deployment
with GitLab, Helm, and
Linode Kubernetes Engine
.footnote[ **Slides[:](https://www.youtube.com/watch?v=h16zyxiwDLY) https://2021-03-lke.container.training/** ] .debug[[shared/title.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/title.md)] --- ## Intros - Hello! I'm Jérôme Petazzoni ([@jpetazzo](https://twitter.com/jpetazzo) on Twitter) - I worked at Docker from \~2011 to 2018 - I'm now doing consulting, training, etc. on Docker & Kubernetes (check out [container.training](https://container.training/)!) - I'll show you how to deploy a complete CI/CD pipeline on LKE! (Linode Kubernetes Engine 😎) .debug[[logistics.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/logistics.md)] --- ## Accessing these slides now - We recommend that you open these slides in your browser: https://2021-03-lke.container.training/ - Use arrows to move to next/previous slide (up, down, left, right, page up, page down) - Type a slide number + ENTER to go to that slide - The slide number is also visible in the URL bar (e.g. .../#123 for slide 123) .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/about-slides.md)] --- ## Accessing these slides later - Slides will remain online so you can review them later if needed (let's say we'll keep them online at least 1 year, how about that?) - You can download the slides using that URL: https://2021-03-lke.container.training/slides.zip (then open the file `lke.yml.html`) - You will find new versions of these slides on: https://container.training/ .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/about-slides.md)] --- ## These slides are open source - You are welcome to use, re-use, share these slides - These slides are written in markdown - The sources of these slides are available in a public GitHub repository: https://github.com/jpetazzo/container.training - Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ... .footnote[.emoji[👇] Try it! The source file will be shown and you can view it on GitHub and fork and edit it.] 
.debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/about-slides.md)] --- class: extra-details ## Extra details - This slide has a little magnifying glass in the top left corner - This magnifying glass indicates slides that provide extra details - Feel free to skip them if: - you are in a hurry - you are new to this and want to avoid cognitive overload - you want only the most essential information - You can review these slides another time if you want, they'll be waiting for you ☺ .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/about-slides.md)] --- name: toc-module-1 ## Module 1 - [Get ready!](#toc-get-ready) - [Our sample application](#toc-our-sample-application) - [Deploying our LKE cluster](#toc-deploying-our-lke-cluster) - [Quick Kubernetes review](#toc-quick-kubernetes-review) - [Accessing internal services](#toc-accessing-internal-services) - [DNS, Ingress, Metrics](#toc-dns-ingress-metrics) .debug[(auto-generated TOC)] --- name: toc-module-2 ## Module 2 - [Managing stacks with Helm](#toc-managing-stacks-with-helm) - [[ExternalDNS](https://github.com/kubernetes-sigs/external-dns)](#toc-externaldnshttpsgithubcomkubernetes-sigsexternal-dns) - [Installing Traefik](#toc-installing-traefik) - [Installing metrics-server](#toc-installing-metrics-server) - [Prometheus and Grafana](#toc-prometheus-and-grafana) - [cert-manager](#toc-cert-manager) - [CI/CD with GitLab](#toc-cicd-with-gitlab) .debug[(auto-generated TOC)] .debug[[shared/toc.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/toc.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/Container-Ship-Freighter-Navigation-Elbe-Romance-1782991.jpg)] --- name: toc-get-ready class: title Get ready! .nav[ [Previous section](#toc-) | [Back to table of contents](#toc-module-1) | [Next section](#toc-our-sample-application) ] .debug[(automatically generated title slide)] --- # Get ready! - We're going to set up a whole Continous Deployment pipeline - ... for Kubernetes apps - ... on a Kubernetes cluster - Ingredients: cert-manager, GitLab, Helm, Linode DNS, LKE, Traefik .debug[[lke/intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/intro.md)] --- ## Philosophy - "Do one thing, do it well" -- - ... But a CD pipeline is a complex system with interconnected parts! - GitLab is no exception to that rule - Let's have a look at its components! .debug[[lke/intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/intro.md)] --- ## GitLab components - GitLab dependencies listed in the GitLab official Helm chart - External dependencies: cert-manager, grafana, minio, nginx-ingress, postgresql, prometheus, redis, registry, shared-secrets (these dependencies correspond to external charts not created by GitLab) - Internal dependencies: geo-logcursor, gitaly, gitlab-exporter, gitlab-grafana, gitlab-pages, gitlab-shell, kas, mailroom, migrations, operator, praefect, sidekiq, task-runner, webservice (these dependencies correspond to subcharts embedded in the GitLab chart) .debug[[lke/intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/intro.md)] --- ## Philosophy - Use the GitLab chart to deploy everything that is specific to GitLab - Deploy cluster-wide components separately (cert-manager, ExternalDNS, Ingress Controller...) 
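- Concretely, that split looks something like this (a rough sketch; the chart repositories are the real ones, but treat the release names and values as placeholders for now):
```bash
# Cluster-wide components: one Helm release each, in their own namespace
helm repo add traefik https://helm.traefik.io/traefik
helm upgrade --install traefik traefik/traefik \
     --create-namespace --namespace traefik

# GitLab itself: everything GitLab-specific comes from the official GitLab chart
helm repo add gitlab https://charts.gitlab.io/
helm upgrade --install gitlab gitlab/gitlab \
     --create-namespace --namespace gitlab \
     --set global.hosts.domain=cloudnative.party
```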
.debug[[lke/intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/intro.md)] --- ## What we're going to do - Spin up an LKE cluster - Run a simple test app - Install a few extras (the cluster-wide components mentioned earlier) - Set up GitLab - Push an app with a CD pipeline to GitLab .debug[[lke/intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/intro.md)] --- ## What you need to know - If you just want to follow along and watch... - container basics (what's an image, what's a container...) - Kubernetes basics (what are Deployments, Namespaces, Pods, Services) - If you want to run this on your own Kubernetes cluster... - intermediate Kubernetes concepts (annotations, Ingresses) - Helm basic concepts (how to install/upgrade releases; how to set "values") - basic Kubernetes troubleshooting commands (view logs, events) - There will be a lot of explanations and reminders along the way .debug[[lke/intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/intro.md)] --- ## What you need to have If you want to run this on your own... - A Linode account - A domain name that you will point to Linode DNS (I got cloudnative.party for $5) - Local tools to control your Kubernetes cluster: - [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) - [helm](https://helm.sh/docs/intro/install/) - Patience, as many operations will require us to wait a few minutes! .debug[[lke/intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/intro.md)] --- ## Do I really need a Linode account? - *Can I use a local cluster, e.g. with Minikube?* It will be very difficult to get valid TLS certs with a local cluster. Also, GitLab needs quite a bit of resources. - *Can I use another Kubernetes provider?* You certainly can: Kubernetes is a standard platform! But you'll have to adjust a few things. (I'll try my best to tell you what as we go along.) .debug[[lke/intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/intro.md)] --- ## Why do I need a domain name? - Because accessing gitlab.cloudnative.party is easier than 102.34.55.67 - Because we'll need TLS certificates (and it's very easy to obtain certs with Let's Encrypt when we have a domain) - We'll illustrate automatic DNS configuration with ExternalDNS, too! (Kubernetes will automatically create DNS entries in our domain) .debug[[lke/intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/intro.md)] --- ## Nice-to-haves Here are a few tools that I like... - [linode-cli](https://github.com/linode/linode-cli#installation) to manage Linode resources from the command line - [stern](https://github.com/stern/stern) to comfortably view logs of Kubernetes pods - [k9s](https://k9scli.io/topics/install/) to manage Kubernetes resources with that retro BBS look and feel 😎 - [kube-ps1](https://github.com/jonmosco/kube-ps1) to keep track of which Kubernetes cluster and namespace we're working on - [kubectx](https://github.com/ahmetb/kubectx) to easily switch between clusters, contexts, and namespaces .debug[[lke/intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/intro.md)] --- ## Warning ⚠️💸 - We're going to spin up cloud resources - Remember to shut them down when you're down! 
- In the immortal words of Cloud Economist [Corey Quinn](https://twitter.com/QuinnyPig): *[You're charged for what you forget to turn off.](https://www.theregister.com/2020/09/03/cloud_control_costs/)* .debug[[lke/intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/intro.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/ShippingContainerSFBay.jpg)] --- name: toc-our-sample-application class: title Our sample application .nav[ [Previous section](#toc-get-ready) | [Back to table of contents](#toc-module-1) | [Next section](#toc-deploying-our-lke-cluster) ] .debug[(automatically generated title slide)] --- # Our sample application - I'm going to run our demo app locally, with Docker (you don't have to do that; do it if you like!) .exercise[ - Clone the repository: ```bash git clone https://github.com/jpetazzo/container.training ``` ] (You can also fork the repository on GitHub and clone your fork if you prefer that.) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- ## Downloading and running the application Let's start this before we look around, as downloading will take a little time... .exercise[ - Go to the `dockercoins` directory, in the cloned repo: ```bash cd container.training/dockercoins ``` - Use Compose to build and run all containers: ```bash docker-compose up ``` ] Compose tells Docker to build all container images (pulling the corresponding base images), then starts all containers, and displays aggregated logs. .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- ## What's this application? -- - It is a DockerCoin miner! .emoji[💰🐳📦🚢] -- - No, you can't buy coffee with DockerCoins -- - How DockerCoins works: - generate a few random bytes - hash these bytes - increment a counter (to keep track of speed) - repeat forever! 
-- - DockerCoins is *not* a cryptocurrency (the only common points are "randomness," "hashing," and "coins" in the name) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- ## DockerCoins in the microservices era - DockerCoins is made of 5 services: - `rng` = web service generating random bytes - `hasher` = web service computing hash of POSTed data - `worker` = background process calling `rng` and `hasher` - `webui` = web interface to watch progress - `redis` = data store (holds a counter updated by `worker`) - These 5 services are visible in the application's Compose file, [docker-compose.yml]( https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- ## How DockerCoins works - `worker` invokes web service `rng` to generate random bytes - `worker` invokes web service `hasher` to hash these bytes - `worker` does this in an infinite loop - every second, `worker` updates `redis` to indicate how many loops were done - `webui` queries `redis`, and computes and exposes "hashing speed" in our browser *(See diagram on next slide!)* .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- class: pic ![Diagram showing the 5 containers of the applications](images/dockercoins-diagram.svg) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- ## Service discovery in container-land How does each service find out the address of the other ones? -- - We do not hard-code IP addresses in the code - We do not hard-code FQDNs in the code, either - We just connect to a service name, and container-magic does the rest (And by container-magic, we mean "a crafty, dynamic, embedded DNS server") .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- ## Example in `worker/worker.py` ```python redis = Redis("`redis`") def get_random_bytes(): r = requests.get("http://`rng`/32") return r.content def hash_bytes(data): r = requests.post("http://`hasher`/", data=data, headers={"Content-Type": "application/octet-stream"}) ``` (Full source code available [here]( https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17 )) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- class: extra-details ## Links, naming, and service discovery - Containers can have network aliases (resolvable through DNS) - Compose file version 2+ makes each container reachable through its service name - Compose file version 1 required "links" sections to accomplish this - Network aliases are automatically namespaced - you can have multiple apps declaring and using a service named `database` - containers in the blue app will resolve `database` to the IP of the blue database - containers in the green app will resolve `database` to the IP of the green database .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- ## Show me the code! - You can check the GitHub repository with all the materials of this workshop:
https://github.com/jpetazzo/container.training - The application is in the [dockercoins]( https://github.com/jpetazzo/container.training/tree/master/dockercoins) subdirectory - The Compose file ([docker-compose.yml]( https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml)) lists all 5 services - `redis` is using an official image from the Docker Hub - `hasher`, `rng`, `worker`, `webui` are each built from a Dockerfile - Each service's Dockerfile and source code is in its own directory (`hasher` is in the [hasher](https://github.com/jpetazzo/container.training/blob/master/dockercoins/hasher/) directory, `rng` is in the [rng](https://github.com/jpetazzo/container.training/blob/master/dockercoins/rng/) directory, etc.) .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- class: extra-details ## Compose file format version *This is relevant only if you have used Compose before 2016...* - Compose 1.6 introduced support for a new Compose file format (aka "v2") - Services are no longer at the top level, but under a `services` section - There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer) - Containers are placed on a dedicated network, making links unnecessary - There are other minor differences, but upgrade is easy and straightforward .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- ## Our application at work - On the left-hand side, the "rainbow strip" shows the container names - On the right-hand side, we see the output of our containers - We can see the `worker` service making requests to `rng` and `hasher` - For `rng` and `hasher`, we see HTTP access logs .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- ## Connecting to the web UI - "Logs are exciting and fun!" (No-one, ever) - The `webui` container exposes a web dashboard; let's view it .exercise[ - With a web browser, connect to `node1` on port 8000 - Remember: the `nodeX` aliases are valid only on the nodes themselves - In your browser, you need to enter the IP address of your node ] A drawing area should show up, and after a few seconds, a blue graph will appear. .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- class: self-paced, extra-details ## If the graph doesn't load If you just see a `Page not found` error, it might be because your Docker Engine is running on a different machine. This can be the case if: - you are using the Docker Toolbox - you are using a VM (local or remote) created with Docker Machine - you are controlling a remote Docker Engine When you run DockerCoins in development mode, the web UI static files are mapped to the container using a volume. Alas, volumes can only work on a local environment, or when using Docker Desktop for Mac or Windows. How to fix this? Stop the app with `^C`, edit `dockercoins.yml`, comment out the `volumes` section, and try again. .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- class: extra-details ## Why does the speed seem irregular? - It *looks like* the speed is approximately 4 hashes/second - Or more precisely: 4 hashes/second, with regular dips down to zero - Why? 
-- class: extra-details - The app actually has a constant, steady speed: 3.33 hashes/second
(which corresponds to 1 hash every 0.3 seconds, for *reasons*) - Yes, and? .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- class: extra-details ## The reason why this graph is *not awesome* - The worker doesn't update the counter after every loop, but up to once per second - The speed is computed by the browser, checking the counter about once per second - Between two consecutive updates, the counter will increase either by 4, or by 0 - The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc. - What can we conclude from this? -- class: extra-details - "I'm clearly incapable of writing good frontend code!" 😀 — Jérôme .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- ## Stopping the application - If we interrupt Compose (with `^C`), it will politely ask the Docker Engine to stop the app - The Docker Engine will send a `TERM` signal to the containers - If the containers do not exit in a timely manner, the Engine sends a `KILL` signal .exercise[ - Stop the application by hitting `^C` ] -- Some containers exit immediately, others take longer. The containers that do not handle `SIGTERM` end up being killed after a 10s timeout. If we are very impatient, we can hit `^C` a second time! .debug[[shared/sampleapp.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/sampleapp.md)] --- ## Clean up - Before moving on, let's remove those containers .exercise[ - Tell Compose to remove everything: ```bash docker-compose down ``` ] .debug[[shared/composedown.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/composedown.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/aerial-view-of-containers.jpg)] --- name: toc-deploying-our-lke-cluster class: title Deploying our LKE cluster .nav[ [Previous section](#toc-our-sample-application) | [Back to table of contents](#toc-module-1) | [Next section](#toc-quick-kubernetes-review) ] .debug[(automatically generated title slide)] --- # Deploying our LKE cluster - *If we wanted to deploy Kubernetes manually*, what would we need to do? (not that I recommend doing that...) - Control plane (etcd, API server, scheduler, controllers) - Nodes (VMs with a container engine + the Kubelet agent; CNI setup) - High availability (etcd clustering, API load balancer) - Security (CA and TLS certificates everywhere) - Cloud integration (to provision LoadBalancer services, storage...) 
*And that's just to get a basic cluster!* .debug[[lke/deploy-cluster.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/deploy-cluster.md)] --- ## The best way to deploy Kubernetes *The best way to deploy Kubernetes is to get someone else to do it for us.* (Me, ever since I've been working with Kubernetes) .debug[[lke/deploy-cluster.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/deploy-cluster.md)] --- ## Managed Kubernetes - Cloud provider runs the control plane (including etcd, API load balancer, TLS setup, cloud integration) - We run nodes (the cloud provider generally gives us an easy way to provision them) - Get started in *minutes* - We're going to use [Linode Kubernetes Engine](https://www.linode.com/products/kubernetes/) .debug[[lke/deploy-cluster.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/deploy-cluster.md)] --- ## Creating a cluster - With the web console: https://cloud.linode.com/kubernetes/clusters - Pick the region of your choice - Pick the latest available Kubernetes version - Pick 3 nodes with 8 GB of RAM - Click! ✨ - Wait a few minutes... ⌚️ - Download the kubeconfig file 💾 .debug[[lke/deploy-cluster.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/deploy-cluster.md)] --- ## With the CLI - View available regions with `linode-cli regions list` - View available server types with `linode-cli linodes types` - View available Kubernetes versions with `linode-cli lke versions-list` - Create cluster: ```bash linode-cli lke cluster-create --label=hello-lke --region=us-east \ --k8s_version=1.20 --node_pools.type=g6-standard-4 --node_pools.count=3 ``` - Note the cluster ID (e.g.: 12345) - Download the kubeconfig file: ```bash linode-cli lke kubeconfig-view `12345` --text --no-headers | base64 -d ``` .debug[[lke/deploy-cluster.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/deploy-cluster.md)] --- ## Communicating with the cluster - All the Kubernetes tools (`kubectl`, but also `helm` etc) use the same config file - That file is (by default) `$HOME/.kube/config` - It can hold multiple cluster definitions (or *contexts*) - Or, we can have multiple config files and switch between them: - by adding the `--kubeconfig` flag each time we invoke a tool (🙄) - or by setting the `KUBECONFIG` environment variable (☺️) .debug[[lke/deploy-cluster.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/deploy-cluster.md)] --- ## Using the kubeconfig file Option 1: - move the kubeconfig file to e.g. `~/.kube/config.lke` - set the environment variable: `export KUBECONFIG=~/.kube/config.lke` Option 2: - directly move the kubeconfig file to `~/.kube/config` - **do not** do that if you already have a file there! Option 3: - merge the new kubeconfig file with our existing file .debug[[lke/deploy-cluster.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/deploy-cluster.md)] --- ## Merging kubeconfig - Assuming that we want to merge `~/.kube/config` and `~/.kube/config.lke` ... 
- Move our existing kubeconfig file: ```bash cp ~/.kube/config ~/.kube/config.old ``` - Merge both files: ```bash KUBECONFIG=~/.kube/config.old:~/.kube/config.lke kubectl config \ view --raw > ~/.kube/config ``` - Check that everything is there: ```bash kubectl config get-contexts ``` .debug[[lke/deploy-cluster.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/deploy-cluster.md)] --- ## Are we there yet? - Let's check if our control plane is available: ```bash kubectl get services ``` → This should show the `kubernetes` `ClusterIP` service - Look for our nodes: ```bash kubectl get nodes ``` → This should show 3 nodes (or whatever amount we picked earlier) - If the nodes aren't visible yet, give them a minute to join the cluster .debug[[lke/deploy-cluster.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/deploy-cluster.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/blue-containers.jpg)] --- name: toc-quick-kubernetes-review class: title Quick Kubernetes review .nav[ [Previous section](#toc-deploying-our-lke-cluster) | [Back to table of contents](#toc-module-1) | [Next section](#toc-accessing-internal-services) ] .debug[(automatically generated title slide)] --- # Quick Kubernetes review - Let's deploy a simple HTTP server - And expose it to the outside world! - Feel free to skip this section if you're familiar with Kubernetes .debug[[lke/kubernetes-review.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/kubernetes-review.md)] --- ## Creating a container - On Kubernetes, one doesn't simply run a container - We need to create a "Pod" - A Pod will be a group of containers running together (often, it will be a group of *one* container) - We can create a standalone Pod, but generally, we'll use a *controller* (for instance: Deployment, Replica Set, Daemon Set, Job, Stateful Set...) - The *controller* will take care of scaling and recreating the Pod if needed (note that within a Pod, containers can also be restarted automatically if needed) .debug[[lke/kubernetes-review.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/kubernetes-review.md)] --- ## A *controller*, you said? - We're going to use one of the most common controllers: a *Deployment* - Deployments... - can be scaled (will create the requested number of Pods) - will recreate Pods if e.g. they get evicted or their Node is down - handle rolling updates - Deployments actually delegate a lot of these tasks to *Replica Sets* - We will generally have the following hierarchy: Deployment → Replica Set → Pod .debug[[lke/kubernetes-review.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/kubernetes-review.md)] --- ## Creating a Deployment - Without further ado: ```bash kubectl create deployment web --image=nginx ``` - Check what happened: ```bash kubectl get all ``` - Wait until the NGINX Pod is "Running"! - Note: `kubectl create deployment` is great when getting started... - ... But later, we will probably write YAML instead! .debug[[lke/kubernetes-review.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/kubernetes-review.md)] --- ## Exposing the Deployment - We need to create a Service - We can use `kubectl expose` for that (but, again, we will probably use YAML later!) 
- For *internal* use, we can use the default Service type, ClusterIP: ```bash kubectl expose deployment web --port=80 ``` - For *external* use, we can use a Service of type LoadBalancer: ```bash kubectl expose deployment web --port=80 --type=LoadBalancer ``` .debug[[lke/kubernetes-review.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/kubernetes-review.md)] --- ## Changing the Service type - We can `kubectl delete service web` and recreate it - Or, `kubectl edit service web` and dive into the YAML - Or, `kubectl patch service web --patch '{"spec": {"type": "LoadBalancer"}}'` - ... These are just a few "classic" methods; there are many ways to do this! .debug[[lke/kubernetes-review.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/kubernetes-review.md)] --- ## Deployment → Pod - Can we check exactly what's going on when the Pod is created? - Option 1: `watch kubectl get all` - displays all object types - refreshes every 2 seconds - puts a high load on the API server when there are many objects - Option 2: `kubectl get pods --watch --output-watch-events` - can only display one type of object - will show all modifications happening (à la `tail -f`) - doesn't put a high load on the API server (except for initial display) .debug[[lke/kubernetes-review.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/kubernetes-review.md)] --- ## Recreating the Deployment - Let's delete our Deployment: ```bash kubectl delete deployment web ``` - Watch Pod updates: ```bash kubectl get pods --watch --output-watch-events ``` - Recreate the Deployment and see what Pods do: ```bash kubectl create deployment web --image=nginx ``` .debug[[lke/kubernetes-review.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/kubernetes-review.md)] --- ## Service stability - Our Service *still works* even though we deleted and re-created the Deployment - It wouldn't have worked while the Deployment was deleted, though - A Service is a *stable endpoint* ??? :T: Warming up with a quick Kubernetes review :Q: In Kubernetes, what is a Pod? :A: ✔️A basic unit of scaling that can contain one or more containers :A: An abstraction for an application and its dependencies :A: It's just a fancy name for "container" but they're the same :A: A group of cluster nodes used for scheduling purposes :Q: In Kubernetes, what is a Replica Set? :A: ✔️A controller used to create one or multiple identical Pods :A: A numeric parameter in a Pod specification, used to scale that Pod :A: A group of containers running on the same node :A: A group of containers running on different nodes :Q: In Kubernetes, what is a Deployment? :A: ✔️A controller that can manage Replica Sets corresponding to different configurations :A: A manifest telling Kubernetes how to deploy an app and its dependencies :A: A list of instructions executed in a container to configure that container :A: A basic unit of work for the Kubernetes scheduler .debug[[lke/kubernetes-review.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/kubernetes-review.md)] --- ## 19,000 words They say, "a picture is worth one thousand words." 
The following 19 slides show what really happens when we run: ```bash kubectl create deployment web --image=nginx ``` .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/01.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/02.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/03.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/04.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/05.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/06.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/07.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/08.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/09.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/10.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/11.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/12.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/13.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/14.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/15.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/16.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- 
class: pic ![](images/kubectl-create-deployment-slideshow/17.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/18.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/19.svg) .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/deploymentslideshow.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/chinook-helicopter-container.jpg)] --- name: toc-accessing-internal-services class: title Accessing internal services .nav[ [Previous section](#toc-quick-kubernetes-review) | [Back to table of contents](#toc-module-1) | [Next section](#toc-dns-ingress-metrics) ] .debug[(automatically generated title slide)] --- # Accessing internal services - How can we temporarily access a service without exposing it to everyone? - `kubectl proxy`: gives us access to the API, which includes a proxy for HTTP resources - `kubectl port-forward`: allows forwarding of TCP ports to arbitrary pods, services, ... .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/accessinternal.md)] --- ## `kubectl proxy` in theory - Running `kubectl proxy` gives us access to the entire Kubernetes API - The API includes routes to proxy HTTP traffic - These routes look like the following: `/api/v1/namespaces/
<namespace>/services/<service>/proxy` - We just add the URI to the end of the request, for instance: `/api/v1/namespaces/<namespace>/services/<service>
/proxy/index.html` - We can access `services` and `pods` this way .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/accessinternal.md)] --- ## `kubectl proxy` in practice - Let's access the `web` service through `kubectl proxy` .exercise[ - Run an API proxy in the background: ```bash kubectl proxy & ``` - Access the `web` service: ```bash curl localhost:8001/api/v1/namespaces/default/services/web/proxy/ ``` - Terminate the proxy: ```bash kill %1 ``` ] .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/accessinternal.md)] --- ## `kubectl port-forward` in theory - What if we want to access a TCP service? - We can use `kubectl port-forward` instead - It will create a TCP relay to forward connections to a specific port (of a pod, service, deployment...) - The syntax is: `kubectl port-forward service/name_of_service local_port:remote_port` - If only one port number is specified, it is used for both local and remote ports .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/accessinternal.md)] --- ## `kubectl port-forward` in practice - Let's access our remote NGINX server .exercise[ - Forward connections from local port 1234 to remote port 80: ```bash kubectl port-forward svc/web 1234:80 & ``` - Connect to the NGINX server: ```bash curl localhost:1234 ``` - Terminate the port forwarder: ```bash kill %1 ``` ] ??? :EN:- Securely accessing internal services :FR:- Accès sécurisé aux services internes :T: Accessing internal services from our local machine :Q: What's the advantage of "kubectl port-forward" compared to a NodePort? :A: It can forward arbitrary protocols :A: It doesn't require Kubernetes API credentials :A: It offers deterministic load balancing (instead of random) :A: ✔️It doesn't expose the service to the public :Q: What's the security concept behind "kubectl port-forward"? :A: ✔️We authenticate with the Kubernetes API, and it forwards connections on our behalf :A: It detects our source IP address, and only allows connections coming from it :A: It uses end-to-end mTLS (mutual TLS) to authenticate our connections :A: There is no security (as long as it's running, anyone can connect from anywhere) .debug[[k8s/accessinternal.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/accessinternal.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/container-cranes.jpg)] --- name: toc-dns-ingress-metrics class: title DNS, Ingress, Metrics .nav[ [Previous section](#toc-accessing-internal-services) | [Back to table of contents](#toc-module-1) | [Next section](#toc-managing-stacks-with-helm) ] .debug[(automatically generated title slide)] --- # DNS, Ingress, Metrics - We got a basic app up and running - We accessed it over a raw IP address - Can we do better? (i.e. access it with a domain name!) - How much resources is it using? .debug[[lke/what-is-missing.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/what-is-missing.md)] --- ## DNS - We'd like to associate a fancy name to that LoadBalancer Service (e.g. 
`nginx.cloudnative.party` → `A.B.C.D`) - option 1: manually add a DNS record - option 2: find a way to create DNS records automatically - We will install ExternalDNS to automate DNS record creation - ExternalDNS supports Linode DNS and dozens of other providers .debug[[lke/what-is-missing.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/what-is-missing.md)] --- ## Ingress - What if we have multiple web services to expose? - We could create one LoadBalancer Service for each of them - This would create a lot of cloud load balancers (and they typically incur a cost, even if it's a small one) - Instead, we can use an *Ingress Controller* - Ingress Controller = HTTP load balancer / reverse proxy - Put all our HTTP services behind a single LoadBalancer Service - Can also do fancy "content-based" routing (using headers, request path...) - We will install Traefik as our Ingress Controller .debug[[lke/what-is-missing.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/what-is-missing.md)] --- ## Metrics - How many resources are we using right now? - When will we need to scale up our cluster? - We need metrics! - We're going to install the *metrics server* - It's a very basic metrics system (no retention, no graphs, no alerting...) - But it's lightweight, and it is used internally by Kubernetes for autoscaling .debug[[lke/what-is-missing.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/what-is-missing.md)] --- ## What's next - We're going to install all these components - Very often, things can be installed with a simple YAML file - Very often, that YAML file needs to be customized a little bit (add command-line parameters, provide API tokens...) - Instead, we're going to use Helm charts - Helm charts give us a way to customize what we deploy - Helm can also keep track of what we install (for easier uninstall and updates) .debug[[lke/what-is-missing.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/what-is-missing.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/container-housing.jpg)] --- name: toc-managing-stacks-with-helm class: title Managing stacks with Helm .nav[ [Previous section](#toc-dns-ingress-metrics) | [Back to table of contents](#toc-module-2) | [Next section](#toc-externaldnshttpsgithubcomkubernetes-sigsexternal-dns) ] .debug[(automatically generated title slide)] --- # Managing stacks with Helm - Helm is a (kind of!) package manager for Kubernetes - We can use it to: - find existing packages (called "charts") created by other folks - install these packages, configuring them for our particular setup - package our own things (for distribution or for internal use) - manage the lifecycle of these installs (rollback to previous version etc.) - It's a "CNCF graduate project", indicating a certain level of maturity (more on that later) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## From `kubectl run` to YAML - We can create resources with one-line commands (`kubectl run`, `kubectl create deployment`, `kubectl expose`...) - We can also create resources by loading YAML files (with `kubectl apply -f`, `kubectl create -f`...)
- There can be multiple resources in a single YAML files (making them convenient to deploy entire stacks) - However, these YAML bundles often need to be customized (e.g.: number of replicas, image version to use, features to enable...) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Beyond YAML - Very often, after putting together our first `app.yaml`, we end up with: - `app-prod.yaml` - `app-staging.yaml` - `app-dev.yaml` - instructions indicating to users "please tweak this and that in the YAML" - That's where using something like [CUE](https://github.com/cuelang/cue/blob/v0.3.2/doc/tutorial/kubernetes/README.md), [Kustomize](https://kustomize.io/), or [Helm](https://helm.sh/) can help! - Now we can do something like this: ```bash helm install app ... --set this.parameter=that.value ``` .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Other features of Helm - With Helm, we create "charts" - These charts can be used internally or distributed publicly - Public charts can be indexed through the [Artifact Hub](https://artifacthub.io/) - This gives us a way to find and install other folks' charts - Helm also gives us ways to manage the lifecycle of what we install: - keep track of what we have installed - upgrade versions, change parameters, roll back, uninstall - Furthermore, even if it's not "the" standard, it's definitely "a" standard! .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## CNCF graduation status - On April 30th 2020, Helm was the 10th project to *graduate* within the CNCF .emoji[🎉] (alongside Containerd, Prometheus, and Kubernetes itself) - This is an acknowledgement by the CNCF for projects that *demonstrate thriving adoption, an open governance process,
and a strong commitment to community, sustainability, and inclusivity.* - See [CNCF announcement](https://www.cncf.io/announcement/2020/04/30/cloud-native-computing-foundation-announces-helm-graduation/) and [Helm announcement](https://helm.sh/blog/celebrating-helms-cncf-graduation/) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Helm concepts - `helm` is a CLI tool - It is used to find, install, upgrade *charts* - A chart is an archive containing templatized YAML bundles - Charts are versioned - Charts can be stored on private or public repositories .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Differences between charts and packages - A package (deb, rpm...) contains binaries, libraries, etc. - A chart contains YAML manifests (the binaries, libraries, etc. are in the images referenced by the chart) - On most distributions, a package can only be installed once (installing another version replaces the installed one) - A chart can be installed multiple times - Each installation is called a *release* - This allows to install e.g. 10 instances of MongoDB (with potentially different versions and configurations) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- class: extra-details ## Wait a minute ... *But, on my Debian system, I have Python 2 **and** Python 3.
Also, I have multiple versions of the Postgres database engine!* Yes! But they have different package names: - `python2.7`, `python3.8` - `postgresql-10`, `postgresql-11` Good to know: the Postgres package in Debian includes provisions to deploy multiple Postgres servers on the same system, but it's an exception (and it's a lot of work done by the package maintainer, not by the `dpkg` or `apt` tools). .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Helm 2 vs Helm 3 - Helm 3 was released [November 13, 2019](https://helm.sh/blog/helm-3-released/) - Charts remain compatible between Helm 2 and Helm 3 - The CLI is very similar (with minor changes to some commands) - The main difference is that Helm 2 uses `tiller`, a server-side component - Helm 3 doesn't use `tiller` at all, making it simpler (yay!) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- class: extra-details ## With or without `tiller` - With Helm 3: - the `helm` CLI communicates directly with the Kubernetes API - it creates resources (deployments, services...) with our credentials - With Helm 2: - the `helm` CLI communicates with `tiller`, telling `tiller` what to do - `tiller` then communicates with the Kubernetes API, using its own credentials - This indirect model caused significant permissions headaches (`tiller` required very broad permissions to function) - `tiller` was removed in Helm 3 to simplify the security aspects .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Installing Helm - If the `helm` CLI is not installed in your environment, install it .exercise[ - Check if `helm` is installed: ```bash helm ``` - If it's not installed, run the following command: ```bash curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 \ | bash ``` ] (To install Helm 2, replace `get-helm-3` with `get`.) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- class: extra-details ## Only if using Helm 2 ... - We need to install Tiller and give it some permissions - Tiller is composed of a *service* and a *deployment* in the `kube-system` namespace - They can be managed (installed, upgraded...) with the `helm` CLI .exercise[ - Deploy Tiller: ```bash helm init ``` ] At the end of the install process, you will see: ``` Happy Helming! ``` .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- class: extra-details ## Only if using Helm 2 ... - Tiller needs permissions to create Kubernetes resources - In a more realistic deployment, you might create per-user or per-team service accounts, roles, and role bindings .exercise[ - Grant `cluster-admin` role to `kube-system:default` service account: ```bash kubectl create clusterrolebinding add-on-cluster-admin \ --clusterrole=cluster-admin --serviceaccount=kube-system:default ``` ] (Defining the exact roles and permissions on your cluster requires a deeper knowledge of Kubernetes' RBAC model. The command above is fine for personal and development clusters.) 
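- For example, a more restrictive setup could look like this (a sketch only; the account and namespace names are placeholders, and the exact permissions would still need tuning):
```bash
# Give Tiller its own service account instead of using kube-system:default
kubectl --namespace kube-system create serviceaccount tiller

# Grant that service account admin rights on a single namespace only
kubectl create rolebinding tiller-staging --namespace staging \
        --clusterrole=admin --serviceaccount=kube-system:tiller

# Tell Tiller to use that service account
helm init --service-account tiller
```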
.debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Charts and repositories - A *repository* (or repo in short) is a collection of charts - It's just a bunch of files (they can be hosted by a static HTTP server, or on a local directory) - We can add "repos" to Helm, giving them a nickname - The nickname is used when referring to charts on that repo (for instance, if we try to install `hello/world`, that means the chart `world` on the repo `hello`; and that repo `hello` might be something like https://blahblah.hello.io/charts/) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- class: extra-details ## How to find charts, the old way - Helm 2 came with one pre-configured repo, the "stable" repo (located at https://charts.helm.sh/stable) - Helm 3 doesn't have any pre-configured repo - The "stable" repo mentioned above is now being deprecated - The new approach is to have fully decentralized repos - Repos can be indexed in the Artifact Hub (which supersedes the Helm Hub) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## How to find charts, the new way - Go to the [Artifact Hub](https://artifacthub.io/packages/search?kind=0) (https://artifacthub.io) - Or use `helm search hub ...` from the CLI - Let's try to find a Helm chart for something called "OWASP Juice Shop"! (it is a famous demo app used in security challenges) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Finding charts from the CLI - We can use `helm search hub
` .exercise[ - Look for the OWASP Juice Shop app: ```bash helm search hub owasp juice ``` - Since the URLs are truncated, try with the YAML output: ```bash helm search hub owasp juice -o yaml ``` ] Then go to → https://artifacthub.io/packages/helm/seccurecodebox/juice-shop .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Finding charts on the web - We can also use the Artifact Hub search feature .exercise[ - Go to https://artifacthub.io/ - In the search box on top, enter "owasp juice" - Click on the "juice-shop" result (not "multi-juicer" or "juicy-ctf") ] .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Installing the chart - Click on the "Install" button, it will show instructions .exercise[ - First, add the repository for that chart: ```bash helm repo add juice https://charts.securecodebox.io ``` - Then, install the chart: ```bash helm install my-juice-shop juice/juice-shop ``` ] Note: it is also possible to install directly a chart, with `--repo https://...` .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Charts and releases - "Installing a chart" means creating a *release* - In the previous exemple, the release was named "my-juice-shop" - We can also use `--generate-name` to ask Helm to generate a name for us .exercise[ - List the releases: ```bash helm list ``` - Check that we have a `my-juice-shop-...` Pod up and running: ```bash kubectl get pods ``` ] .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- class: extra-details ## Searching and installing with Helm 2 - Helm 2 doesn't have support for the Helm Hub - The `helm search` command only takes a search string argument (e.g. `helm search juice-shop`) - With Helm 2, the name is optional: `helm install juice/juice-shop` will automatically generate a name `helm install --name my-juice-shop juice/juice-shop` will specify a name .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Viewing resources of a release - This specific chart labels all its resources with a `release` label - We can use a selector to see these resources .exercise[ - List all the resources created by this release: ```bash kubectl get all --selector=app.kubernetes.io/instance=my-juice-shop ``` ] Note: this label wasn't added automatically by Helm.
It is defined in that chart. In other words, not all charts will provide this label. .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Configuring a release - By default, `juice/juice-shop` creates a service of type `ClusterIP` - We would like to change that to a `NodePort` - We could use `kubectl edit service my-juice-shop`, but ... ... our changes would get overwritten next time we update that chart! - Instead, we are going to *set a value* - Values are parameters that the chart can use to change its behavior - Values have default values - Each chart is free to define its own values and their defaults .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Checking possible values - We can inspect a chart with `helm show` or `helm inspect` .exercise[ - Look at the README for the app: ```bash helm show readme juice/juice-shop ``` - Look at the values and their defaults: ```bash helm show values juice/juice-shop ``` ] The `values` may or may not have useful comments. The `readme` may or may not have (accurate) explanations for the values. (If we're unlucky, there won't be any indication about how to use the values!) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Setting values - Values can be set when installing a chart, or when upgrading it - We are going to update `my-juice-shop` to change the type of the service .exercise[ - Update `my-juice-shop`: ```bash helm upgrade my-juice-shop juice/juice-shop --set service.type=NodePort ``` ] Note that we have to specify the chart that we use (`juice/juice-shop`), even if we just want to update some values. We can set multiple values. If we want to set many values, we can use `-f`/`--values` and pass a YAML file with all the values. All unspecified values will take the default values defined in the chart. .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- ## Connecting to the Juice Shop - Let's check the app that we just installed .exercise[ - Check the node port allocated to the service: ```bash kubectl get service my-juice-shop PORT=$(kubectl get service my-juice-shop -o jsonpath={..nodePort}) ``` - Connect to it: ```bash curl localhost:$PORT/ ``` ] ??? :EN:- Helm concepts :EN:- Installing software with Helm :EN:- Helm 2, Helm 3, and the Helm Hub :FR:- Fonctionnement général de Helm :FR:- Installer des composants via Helm :FR:- Helm 2, Helm 3, et le *Helm Hub* :T: Getting started with Helm and its concepts :Q: Which comparison is the most adequate? :A: Helm is a firewall, charts are access lists :A: ✔️Helm is a package manager, charts are packages :A: Helm is an artefact repository, charts are artefacts :A: Helm is a CI/CD platform, charts are CI/CD pipelines :Q: What's required to distribute a Helm chart?
:A: A Helm commercial license :A: A Docker registry :A: An account on the Helm Hub :A: ✔️An HTTP server .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/helm-intro.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/containers-by-the-water.jpg)] --- name: toc-externaldnshttpsgithubcomkubernetes-sigsexternal-dns class: title [ExternalDNS](https://github.com/kubernetes-sigs/external-dns) .nav[ [Previous section](#toc-managing-stacks-with-helm) | [Back to table of contents](#toc-module-2) | [Next section](#toc-installing-traefik) ] .debug[(automatically generated title slide)] --- # [ExternalDNS](https://github.com/kubernetes-sigs/external-dns) - ExternalDNS will automatically create DNS records from Kubernetes resources - Services (with the annotation `external-dns.alpha.kubernetes.io/hostname`) - Ingresses (automatically) - It requires a domain name (obviously) - ... And that domain name should be configurable through an API - As of April 2021, it supports [a few dozens of providers](https://github.com/kubernetes-sigs/external-dns#status-of-providers) - We're going to use Linode DNS .debug[[lke/external-dns.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/external-dns.md)] --- ## Prep work - We need a domain name (if you need a cheap one, look e.g. at [GANDI](https://shop.gandi.net/?search=funwithlinode); there are many options below $10) - That domain name should be configured to point to Linode DNS servers (ns1.linode.com to ns5.linode.com) - We need to generate a Linode API token with DNS API access - Pro-tip: reduce the default TTL of the domain to 5 minutes! .debug[[lke/external-dns.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/external-dns.md)] --- ## Deploying ExternalDNS - The ExternalDNS documentation has a [tutorial](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/linode.md) for Linode - ... It's basically a lot of YAML! - That's where using a Helm chart will be very helpful - There are a few ExternalDNS charts available out there - We will use the one from Bitnami (these folks maintain *a lot* of great Helm charts!) .debug[[lke/external-dns.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/external-dns.md)] --- ## How we'll install things with Helm - We will install each chart in its own namespace (this is not mandatory, but it helps to see what belongs to what) - We will use `helm upgrade --install` instead of `helm install` (that way, if we want to change something, we can just re-run the command) - We will use the `--create-namespace` and `--namespace ...` options - To keep things boring and predictible, if we are installing chart `xyz`: - we will install it in namespace `xyz` - we will name the release `xyz` as well .debug[[lke/external-dns.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/external-dns.md)] --- ## Installing ExternalDNS - First, let's add the Bitnami repo: ```bash helm repo add bitnami https://charts.bitnami.com/bitnami ``` - Then, install ExternalDNS: ```bash LINODE_API_TOKEN=`1234abcd...6789` helm upgrade --install external-dns bitnami/external-dns \ --namespace external-dns --create-namespace \ --set provider=linode \ --set linode.apiToken=$LINODE_API_TOKEN ``` (Make sure to update your API token above!) 
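- Before moving on, we can check that the release is deployed and its pod is running (the exact pod name will vary):
```bash
# The release should show up with a "deployed" status
helm list --namespace external-dns

# The ExternalDNS pod should reach the "Running" state after a minute or so
kubectl get pods --namespace external-dns
```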
.debug[[lke/external-dns.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/external-dns.md)] --- ## Testing ExternalDNS - Let's annotate our NGINX service to expose it with a DNS record: ```bash kubectl annotate service web \ external-dns.alpha.kubernetes.io/hostname=nginx.`cloudnative.party` ``` (make sure to use *your* domain name above, otherwise that won't work!) - Check ExternalDNS logs: ```bash kubectl logs -n external-dns -l app.kubernetes.io/name=external-dns ``` - It might take a few minutes for ExternalDNS to start, patience! - Then try to access `nginx.cloudnative.party` (or whatever domain you picked) .debug[[lke/external-dns.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/external-dns.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/distillery-containers.jpg)] --- name: toc-installing-traefik class: title Installing Traefik .nav[ [Previous section](#toc-externaldnshttpsgithubcomkubernetes-sigsexternal-dns) | [Back to table of contents](#toc-module-2) | [Next section](#toc-installing-metrics-server) ] .debug[(automatically generated title slide)] --- # Installing Traefik - Traefik is going to be our Ingress Controller - Let's install it with a Helm chart, in its own namespace - First, let's add the Traefik chart repository: ```bash helm repo add traefik https://helm.traefik.io/traefik ``` - Then, install the chart: ```bash helm upgrade --install traefik traefik/traefik \ --create-namespace --namespace traefik \ --set "ports.websecure.tls.enabled=true" ``` (the option we added enables HTTPS; it will be useful later!) .debug[[lke/traefik.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/traefik.md)] --- ## Testing Traefik - Let's create an Ingress resource! - If we're using Kubernetes 1.20 or later, we can simply do this: ```bash kubectl create ingress web \ --rule=`ingress-is-fun.cloudnative.party`/*=web:80 ``` (make sure to update and use your own domain) - Check that the Ingress was correctly created: ```bash kubectl get ingress kubectl describe ingress ``` - If we're using Kubernetes 1.19 or earlier, we'll need some YAML .debug[[lke/traefik.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/lke/traefik.md)] --- ## Creating an Ingress with YAML - This is how we do it with YAML: ```bash kubectl apply -f- <
/.well-known/acme-challenge/
` .exercise[ - Check the *path* of the Ingress in particular: ```bash kubectl describe ingress --selector=acme.cert-manager.io/http01-solver=true ``` ] .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/cert-manager.md)] --- ## And then... - A little bit later, we will have a `kubernetes.io/tls` Secret: ```bash kubectl get secrets ``` - Note that this might take a few minutes, because of the DNS integration! .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/cert-manager.md)] --- class: extra-details ## Using the secret - For bonus points, try to use the secret in an Ingress! - This is what the manifest would look like: ```yaml apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: xyz spec: tls: - secretName: xyz.A.B.C.D.nip.io hosts: - xyz.A.B.C.D.nip.io rules: ... ``` .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/cert-manager.md)] --- class: extra-details ## Automatic TLS Ingress with annotations - It is also possible to annotate Ingress resources for cert-manager - If we annotate an Ingress resource with `cert-manager.io/cluster-issuer=xxx`: - cert-manager will detect that annotation - it will obtain a certificate using the specified ClusterIssuer (`xxx`) - it will store the key and certificate in the specified Secret - Note: the Ingress still needs the `tls` section with `secretName` and `hosts` ??? :EN:- Obtaining certificates with cert-manager :FR:- Obtenir des certificats avec cert-manager :T: Obtaining TLS certificates with cert-manager .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/cert-manager.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-2.jpg)] --- name: toc-cicd-with-gitlab class: title CI/CD with GitLab .nav[ [Previous section](#toc-cert-manager) | [Back to table of contents](#toc-module-2) | [Next section](#toc-) ] .debug[(automatically generated title slide)] --- # CI/CD with GitLab - In this section, we will see how to set up a CI/CD pipeline with GitLab (using a "self-hosted" GitLab; i.e. running on our Kubernetes cluster) - The big picture: - each time we push code to GitLab, it will be deployed in a staging environment - each time we push the `production` tag, it will be deployed in production .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Disclaimers - We'll use GitLab here as an example, but there are many other options (e.g. some combination of Argo, Harbor, Tekton ...) - There are also hosted options (e.g. GitHub Actions and many others) - We'll use a specific pipeline and workflow, but it's purely arbitrary (treat it as a source of inspiration, not a model to be copied!) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Workflow overview - Push code to GitLab's git server - GitLab notices the `.gitlab-ci.yml` file, which defines our pipeline - Our pipeline can have multiple *stages* executed sequentially (e.g. lint, build, test, deploy ...) - Each stage can have multiple *jobs* executed in parallel (e.g.
build images in parallel) - Each job will be executed in an independent *runner* pod .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Pipeline overview - Our repository holds source code, Dockerfiles, and a Helm chart - *Lint* stage will check the Helm chart validity - *Build* stage will build container images (and push them to GitLab's integrated registry) - *Deploy* stage will deploy the Helm chart, using these images - Pushes to `production` will deploy to "the" production namespace - Pushes to other tags/branches will deploy to a namespace created on the fly - We will discuss shortcomings and alternatives at the end of this chapter! .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Lots of requirements - We need *a lot* of components to pull this off: - a domain name - a storage class - a TLS-capable ingress controller - the cert-manager operator - GitLab itself - the GitLab pipeline - Wow, why?!? .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## I find your lack of TLS disturbing - We need a container registry (obviously!) - Docker (and other container engines) *require* TLS on the registry (with valid certificates) - A few options: - use a "real" TLS certificate (e.g. obtained with Let's Encrypt) - use a self-signed TLS certificate - communicate with the registry over localhost (TLS isn't required then) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- class: extra-details ## Why not self-signed certs? - When using self-signed certs, we need to either: - add the cert (or CA) to trusted certs - disable cert validation - This needs to be done on *every client* connecting to the registry: - CI/CD pipeline (building and pushing images) - container engine (deploying the images) - other tools (e.g. container security scanner) - It's doable, but it's a lot of hacks (especially when adding more tools!) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- class: extra-details ## Why not localhost? - TLS is usually not required when the registry is on localhost - We could expose the registry e.g. on a `NodePort` - ... And then tweak the CI/CD pipeline to use that instead - This is great when obtaining valid certs is difficult: - air-gapped or internal environments (that can't use Let's Encrypt) - no domain name available - Downside: the registry isn't easily or safely available from outside (the `NodePort` essentially defeats TLS) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- class: extra-details ## Can we use `nip.io`? - We will use Let's Encrypt - Let's Encrypt has a quota of certificates per domain (in 2020, that was [50 certificates per week per domain](https://letsencrypt.org/docs/rate-limits/)) - So if we all use `nip.io`, we will probably run into that limit - But you can try and see if it works! .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Install GitLab itself - We will deploy GitLab with its official Helm chart - It will still require a bunch of parameters and customization - Brace!
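Before running the big installation command on the next slide, it can help to skim the chart's default values, like we did earlier with `helm show`; a quick sketch (the next slide adds the same chart repository as part of the full command):

```bash
# Add the official GitLab chart repository and refresh the local cache.
helm repo add gitlab https://charts.gitlab.io/
helm repo update

# Page through the (very long) list of default values for the chart.
helm show values gitlab/gitlab | less
```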
.debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Installing the GitLab chart ```bash helm repo add gitlab https://charts.gitlab.io/ DOMAIN=`cloudnative.party` ISSUER=letsencrypt-production helm upgrade --install gitlab gitlab/gitlab \ --create-namespace --namespace gitlab \ --set global.hosts.domain=$DOMAIN \ --set certmanager.install=false \ --set nginx-ingress.enabled=false \ --set global.ingress.class=traefik \ --set global.ingress.provider=traefik \ --set global.ingress.configureCertmanager=false \ --set global.ingress.annotations."cert-manager\.io/cluster-issuer"=$ISSUER \ --set gitlab.webservice.ingress.tls.secretName=gitlab-gitlab-tls \ --set registry.ingress.tls.secretName=gitlab-registry-tls \ --set minio.ingress.tls.secretName=gitlab-minio-tls ``` 😰 Can we talk about all these parameters? .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Breaking down all these parameters - `certmanager.install=false` do not install cert-manager, we already have it - `nginx-ingress.enabled=false` do not install the NGINX ingress controller, we already have Traefik - `global.ingress.class=traefik`, `global.ingress.provider=traefik` these merely enable creation of Ingress resources - `global.ingress.configureCertmanager=false` do not create a cert-manager Issuer or ClusterIssuer, we have ours .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## More parameters - `global.ingress.annotations."cert-manager\.io/cluster-issuer"=$ISSUER` this annotation tells cert-manager to automatically issue certs - `gitlab.webservice.ingress.tls.secretName=gitlab-gitlab-tls`,
`registry.ingress.tls.secretName=gitlab-registry-tls`,
`minio.ingress.tls.secretName=gitlab-minio-tls` these values tell each Ingress (webservice, registry, minio) which Secret holds its TLS certificate, which is what enables TLS for those hostnames .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Wait for GitLab to come up - Let's watch what's happening in the GitLab namespace: ```bash watch kubectl get all --namespace gitlab ``` - We want to wait for all the Pods to be "Running" or "Completed" - This will take a few minutes (10-15 minutes for me) - Don't worry if you see Pods crashing and restarting (it happens when they are waiting on a dependency which isn't up yet) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Things that could go wrong - Symptom: Pods remain "Pending" or "ContainerCreating" for a while - Investigate these pods (with `kubectl describe pod ...`) - Also look at events: ```bash kubectl get events \ --field-selector=type=Warning --sort-by=metadata.creationTimestamp ``` - Make sure your cluster is big enough (I use 3 `g6-standard-4` nodes) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Log into GitLab - First, let's check that we can connect to GitLab (with TLS): `https://gitlab.$DOMAIN` - It's asking us for a login and password! - The login is `root`, and the password is stored in a Secret: ```bash kubectl get secrets --namespace=gitlab gitlab-gitlab-initial-root-password \ -o jsonpath={.data.password} | base64 -d ``` .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Configure GitLab - For simplicity, we're going to use that "root" user (but later, you can create multiple users, teams, etc.) - First, let's add our SSH key (top-right user menu → settings, then SSH keys on the left) - Then, create a project (using the + menu next to the search bar on top) - Let's call it `kubecoin` (you can change it, but you'll have to adjust Git paths later on) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Try to push our repository - This is the repository that we're going to use: https://github.com/jpetazzo/kubecoin - Let's clone that repository locally first: ```bash git clone https://github.com/jpetazzo/kubecoin ``` - Add our GitLab instance as a remote: ```bash git remote add gitlab git@gitlab.$DOMAIN:root/kubecoin.git ``` - Try to push: ```bash git push -u gitlab ``` .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Connection refused? - Normally, we get the following error: `port 22: Connection refused` - Why? 🤔 -- - What does `gitlab.$DOMAIN` point to? -- - Our Ingress Controller! (i.e. Traefik) 💡 - Our Ingress Controller has nothing to do with port 22 - So how do we solve this?
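(Before we answer that, we can double-check where `gitlab.$DOMAIN` points; a quick sketch, assuming the Traefik release installed earlier exposes a `LoadBalancer` service named `traefik` in the `traefik` namespace:)

```bash
# The EXTERNAL-IP of the Traefik LoadBalancer service...
kubectl get service traefik --namespace traefik

# ...should match the address that our DNS record resolves to.
dig +short gitlab.$DOMAIN
```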
.debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Routing port 22 - Whatever is on `gitlab.$DOMAIN` needs to have the following "routing": - port 80 → GitLab web service - port 443 → GitLab web service, with TLS - port 22 → GitLab shell service - Currently, Traefik is managing `gitlab.$DOMAIN` - We are going to tell Traefik to: - accept connections on port 22 - send them to GitLab .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## TCP routing - The technique that we are going to use is specific to Traefik - Other Ingress Controllers may or may not have similar features - When they have similar features, they will be enabled very differently .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Telling Traefik to open port 22 - Let's reconfigure Traefik: ```bash helm upgrade --install traefik traefik/traefik \ --create-namespace --namespace traefik \ --set "ports.websecure.tls.enabled=true" \ --set "providers.kubernetesIngress.publishedService.enabled=true" \ --set "ports.ssh.port=2222" \ --set "ports.ssh.exposedPort=22" \ --set "ports.ssh.expose=true" \ --set "ports.ssh.protocol=TCP" ``` - This creates a new "port" on Traefik, called "ssh", listening on port 22 - Internally, Traefik listens on port 2222 (for permission reasons) - Note: Traefik docs also call these ports "entrypoints" (these entrypoints are totally unrelated to the `ENTRYPOINT` in Dockerfiles) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Knocking on port 22 - What happens if we try to connect to that port 22 right now? ```bash curl gitlab.$DOMAIN:22 ``` - We hit GitLab's web service! - We need to tell Traefik what to do with connections to that port 22 - For that, we will create a "TCP route" .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Traefik TCP route The following custom resource tells Traefik to route the `ssh` port that we created earlier, to the `gitlab-gitlab-shell` service belonging to GitLab. ```yaml apiVersion: traefik.containo.us/v1alpha1 kind: IngressRouteTCP metadata: name: gitlab-shell namespace: gitlab spec: entryPoints: - ssh routes: - match: HostSNI(\`*`) services: - name: gitlab-gitlab-shell port: 22 ``` The `HostSNI` wildcard is the magic option to define a "default route". .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Creating the TCP route Since our manifest has backticks, we must pay attention to quoting: ```bash kubectl apply -f- << "EOF" apiVersion: traefik.containo.us/v1alpha1 kind: IngressRouteTCP metadata: name: gitlab-shell namespace: gitlab spec: entryPoints: - ssh routes: - match: HostSNI(\`*`) services: - name: gitlab-gitlab-shell port: 22 EOF ``` .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Knocking on port 22, again - Let's see what happens if we try port 22 now: ```bash curl gitlab.$DOMAIN:22 ``` - This should tell us something like `Received HTTP/0.9 when not allowed` (because we're no longer talking to an HTTP server, but to SSH!) 
- Try with SSH: ```bash ssh git@gitlab.$DOMAIN ``` - After accepting the key fingerprint, we should see `Welcome to GitLab, @root!` .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Pushing again - Now we can try to push our repository again: ```bash git push -u gitlab ``` - Reload the project page in GitLab - We should see our repository! .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## CI/CD - Click on the CI/CD tab on the left (the one with the shuttle / space rocket icon) - Our pipeline was detected... - But it failed 😕 - Let's click on one of the failed jobs - This is a permission issue! .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Fixing permissions - GitLab needs to do a few things in our cluster: - create Pods to build our container images with BuildKit - create Namespaces to deploy staging and production versions of our app - create and update resources in these Namespaces - For the time being, we're going to grant broad permissions (and we will revisit and discuss what to do later) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Granting permissions - Let's give `cluster-admin` permissions to the GitLab ServiceAccount: ```bash kubectl create clusterrolebinding gitlab \ --clusterrole=cluster-admin --serviceaccount=gitlab:default ``` - Then retry the CI/CD pipeline - The build steps will now succeed, but the deploy steps will fail - We need to set the `REGISTRY_USER` and `REGISTRY_PASSWORD` variables - Let's explain what this is about! .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## GitLab container registry access - A registry access token is created for the duration of the CI/CD pipeline (it is exposed through the `$CI_JOB_TOKEN` environment variable) - This token gives access only to a specific repository in the registry - It is valid only during the execution of the CI/CD pipeline - We can (and we do!) use it to *push* images to the registry - We cannot use it to *pull* images when running in staging or production (because Kubernetes might need to pull images *after* the token expires) - We need to create a separate read-only registry access token .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Creating the registry access token - Let's go to "Settings" (the cog wheel on the left) / "Access Tokens" - Create a token with `read_registry` permission - Save the token name and the token value - Then go to "Settings" / "CI/CD" - In the "Variables" section, add two variables: - `REGISTRY_USER` → token name - `REGISTRY_PASSWORD` → token value - Make sure that they are **not** protected! (otherwise, they won't be available in non-default tags and branches) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Trying again - Go back to the CI/CD pipeline view, and hit "Retry" - The deploy stage should now work correctly!
🎉 .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Our CI/CD pipeline - Let's have a look at the [.gitlab-ci.yml](https://github.com/jpetazzo/kubecoin/blob/107dac5066087c52747e557babc97e57f42dd71d/.gitlab-ci.yml) file - We have multiple *stages*: - lint (currently doesn't do much, it's mostly there as an example) - build (currently uses BuildKit) - deploy - "Deploy" behaves differently in staging and production - Let's investigate that! .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Staging vs production - In our pipeline, "production" means "a tag or branch named `production`" (see the `except:` and `only:` sections) - Everything else is "staging" - In "staging": - we build and push images - we create a staging Namespace and deploy a copy of the app there - In "production": - we do not build anything - we deploy (or update) a copy of the app in the production Namespace .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Namespace naming - GitLab will create Namespaces named `gl-<user>-<project>-<branch-or-tag>` - At the end of the deployment, the web UI will be available at: `http://<user>-<project>-<branch-or-tag>-gitlab.<domain>` - The "production" Namespace will be `<user>-<project>` - And it will be available on its own domain as well: `http://<user>-<project>-gitlab.<domain>` .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Production - `git tag -f production && git push -f --tags` - Our CI/CD pipeline will deploy to the production URL (`http://<user>-<project>-gitlab.<domain>`) - It will do it *only* if that same git commit was pushed to staging first (because the "production" pipeline skips the build phase) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Let's talk about build - There are many ways to build container images on Kubernetes - ~~And they all suck~~ Many of them have inconvenient limitations - Let's do a quick review! .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Docker-based approaches - Bind-mount the Docker socket - very easy, but requires Docker Engine - build resource usage "evades" Kubernetes scheduler - insecure - Docker-in-Docker in a pod - requires privileged pod - insecure - approaches like rootless or sysbox might help in the future - External build host - more secure - requires resources outside of the Kubernetes cluster .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Non-privileged builders - Kaniko - each build runs in its own container or pod - no caching by default - registry-based caching is possible - BuildKit / `docker buildx` - can leverage Docker Engine or long-running Kubernetes worker pod - supports distributed, multi-arch build farms - basic caching out of the box - can also leverage registry-based caching .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Other approaches - Ditch the Dockerfile! - bazel - jib - ko - etc. .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Discussion - Our CI/CD workflow is just *one* of the many possibilities - It would be nice to add some actual unit or e2e tests - Map the production namespace to a "real" domain name - Automatically remove older staging environments (see e.g. [kube-janitor](https://codeberg.org/hjacobs/kube-janitor)) - Deploy production to a separate cluster - Better segregate permissions (don't give `cluster-admin` to the GitLab pipeline) .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- ## Why not use GitLab's Kubernetes integration? - "All-in-one" approach (deploys its own Ingress, cert-manager, Prometheus, and much more) - I wanted to show you something flexible and customizable instead - But feel free to explore it now that we have shown the basics! ??? :EN:- CI/CD with GitLab :FR:- CI/CD avec GitLab .debug[[k8s/gitlab.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/k8s/gitlab.md)] --- class: title, self-paced Thank you! .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/thankyou.md)] --- class: title, in-person That's all, folks!
Thank you ✨ ![end](images/end.jpg) .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/2021-03-lke/slides/shared/thankyou.md)]