Installing kubeadm on Debian 12
This article is a quick install guide, without much more information than the official docs.
I still managed to find some issues during the steps that had me puzzled, so let's write them down so they won't be forgotten.
Fresh new VM
First step, getting a VM. AWS, GCP, any provider will do! In my case, it will be a Debian 12.
Installation of kubeadm prerequisites
Let's follow the official installation guide (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/). Some of the prerequisites are:
- Disable swap
- Install a Container Runtime Interface (CRI)
Disable swap
Comment out the line mentioning swap in the file at /etc/fstab:
root@kubevm:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# systemd generates mount units based on this file, see systemd.mount(5).
# Please run 'systemctl daemon-reload' after making changes here.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda2 during installation
UUID=1b0327db-31d5-4e2e-95de-2838e45d852c /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/sda1 during installation
UUID=9DBF-27DC  /boot/efi       vfat    umask=0077      0       1
# swap was on /dev/sda3 during installation
# UUID=549f1211-8a23-4b9a-b405-16764da60045 none            swap    sw              0       0
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
Run systemctl daemon-reload (or reboot if you prefer the nuclear option) to apply the changes.
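Note that commenting out the fstab entry only prevents swap from coming back at the next boot. To turn swap off immediately, without waiting for a reboot, the usual commands are:

swapoff -a        # disable all active swap devices right away
swapon --show     # verify: no output means no active swap

(As we'll see below, this VM needed a bit more convincing anyway.)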
Install a container runtime
For my install, I chose containerd. I will not install Docker on this VM.
So, according to the documentation, I should have a file at /etc/containerd/config.toml. Let's check it out.
root@kubevm:~# cat /etc/containerd/config.toml
cat: /etc/containerd/config.toml: No such file or directory
Nope. OK, so I need to install containerd.
Go.
root@kubevm:~# apt install containerd -y
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  criu libnet1 python3-protobuf runc
Suggested packages:
  containernetworking-plugins
The following NEW packages will be installed:
  containerd criu libnet1 python3-protobuf runc
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 29.6 MB of archives.
[...]
root@kubevm:~# cat /etc/containerd/config.toml
version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/usr/lib/cni"
      conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.internal.v1.opt"]
    path = "/var/lib/containerd/opt"
OK, now we're in agreement. Still following the official documentation, let's adapt this file for using the systemd cgroup driver.
root@kubevm:~# cat /etc/containerd/config.toml
version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/usr/lib/cni"
      conf_dir = "/etc/cni/net.d"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
  [plugins."io.containerd.internal.v1.opt"]
    path = "/var/lib/containerd/opt"
Done.
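One step that is easy to miss: containerd has to be restarted for the modified config.toml to be taken into account. With the systemd unit shipped by the Debian package:

systemctl restart containerd    # reload the updated config.toml
systemctl status containerd     # check that it came back up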
Installing kubeadm
And we continue following the doc...
root@kubevm:~# sudo apt-get update
Hit:1 http://security.debian.org/debian-security bookworm-security InRelease
Hit:2 http://ftp.fr.debian.org/debian bookworm InRelease
Get:3 http://ftp.fr.debian.org/debian bookworm-updates InRelease [55.4 kB]
Fetched 55.4 kB in 0s (142 kB/s)
Reading package lists... Done
root@kubevm:~# sudo apt-get install -y apt-transport-https ca-certificates curl gpg
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ca-certificates is already the newest version (20230311).
gpg is already the newest version (2.2.40-1.1).
gpg set to manually installed.
The following NEW packages will be installed:
  apt-transport-https curl
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 341 kB of archives.
After this operation, 537 kB of additional disk space will be used.
Get:1 http://ftp.fr.debian.org/debian bookworm/main amd64 apt-transport-https all 2.6.1 [25.2 kB]
Get:2 http://ftp.fr.debian.org/debian bookworm/main amd64 curl amd64 7.88.1-10+deb12u12 [315 kB]
Fetched 341 kB in 0s (2,231 kB/s)
Selecting previously unselected package apt-transport-https.
(Reading database ... 153713 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_2.6.1_all.deb ...
Unpacking apt-transport-https (2.6.1) ...
Selecting previously unselected package curl.
Preparing to unpack .../curl_7.88.1-10+deb12u12_amd64.deb ...
Unpacking curl (7.88.1-10+deb12u12) ...
Setting up apt-transport-https (2.6.1) ...
Setting up curl (7.88.1-10+deb12u12) ...
Processing triggers for man-db (2.11.2-2) ...
root@kubevm:~# curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
root@kubevm:~# echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /
root@kubevm:~# sudo apt-get update
Hit:1 http://security.debian.org/debian-security bookworm-security InRelease
Hit:2 http://ftp.fr.debian.org/debian bookworm InRelease
Hit:3 http://ftp.fr.debian.org/debian bookworm-updates InRelease
Get:4 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.33/deb InRelease [1,186 B]
Get:5 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.33/deb Packages [2,718 B]
Fetched 3,904 B in 0s (9,965 B/s)
Reading package lists... Done
root@kubevm:~# sudo apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  conntrack cri-tools ethtool iptables kubernetes-cni libip6tc2
Suggested packages:
  firewalld
The following NEW packages will be installed:
  conntrack cri-tools ethtool iptables kubeadm kubectl kubelet kubernetes-cni libip6tc2
0 upgraded, 9 newly installed, 0 to remove and 0 not upgraded.
Need to get 96.0 MB of archives.
After this operation, 354 MB of additional disk space will be used.
Get:1 http://ftp.fr.debian.org/debian bookworm/main amd64 conntrack amd64 1:1.4.7-1+b2 [35.2 kB]
Get:2 http://ftp.fr.debian.org/debian bookworm/main amd64 ethtool amd64 1:6.1-1 [197 kB]
Get:3 http://ftp.fr.debian.org/debian bookworm/main amd64 libip6tc2 amd64 1.8.9-2 [19.4 kB]
Get:4 http://ftp.fr.debian.org/debian bookworm/main amd64 iptables amd64 1.8.9-2 [360 kB]
Get:5 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.33/deb cri-tools 1.33.0-1.1 [17.3 MB]
Get:6 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.33/deb kubeadm 1.33.0-1.1 [12.7 MB]
Get:7 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.33/deb kubectl 1.33.0-1.1 [11.7 MB]
Get:8 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.33/deb kubernetes-cni 1.6.0-1.1 [37.8 MB]
Get:9 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.33/deb kubelet 1.33.0-1.1 [15.9 MB]
Fetched 96.0 MB in 2s (44.7 MB/s)
Selecting previously unselected package conntrack.
(Reading database ... 153724 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.7-1+b2_amd64.deb ...
Unpacking conntrack (1:1.4.7-1+b2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.33.0-1.1_amd64.deb ...
Unpacking cri-tools (1.33.0-1.1) ...
Selecting previously unselected package ethtool.
Preparing to unpack .../2-ethtool_1%3a6.1-1_amd64.deb ...
Unpacking ethtool (1:6.1-1) ...
Selecting previously unselected package libip6tc2:amd64.
Preparing to unpack .../3-libip6tc2_1.8.9-2_amd64.deb ...
Unpacking libip6tc2:amd64 (1.8.9-2) ...
Selecting previously unselected package iptables.
Preparing to unpack .../4-iptables_1.8.9-2_amd64.deb ...
Unpacking iptables (1.8.9-2) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../5-kubeadm_1.33.0-1.1_amd64.deb ...
Unpacking kubeadm (1.33.0-1.1) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../6-kubectl_1.33.0-1.1_amd64.deb ...
Unpacking kubectl (1.33.0-1.1) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../7-kubernetes-cni_1.6.0-1.1_amd64.deb ...
Unpacking kubernetes-cni (1.6.0-1.1) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../8-kubelet_1.33.0-1.1_amd64.deb ...
Unpacking kubelet (1.33.0-1.1) ...
Setting up libip6tc2:amd64 (1.8.9-2) ...
Setting up conntrack (1:1.4.7-1+b2) ...
Setting up kubectl (1.33.0-1.1) ...
Setting up cri-tools (1.33.0-1.1) ...
Setting up kubernetes-cni (1.6.0-1.1) ...
Setting up ethtool (1:6.1-1) ...
Setting up kubeadm (1.33.0-1.1) ...
Setting up iptables (1.8.9-2) ...
update-alternatives: using /usr/sbin/iptables-legacy to provide /usr/sbin/iptables (iptables) in auto mode
update-alternatives: using /usr/sbin/ip6tables-legacy to provide /usr/sbin/ip6tables (ip6tables) in auto mode
update-alternatives: using /usr/sbin/iptables-nft to provide /usr/sbin/iptables (iptables) in auto mode
update-alternatives: using /usr/sbin/ip6tables-nft to provide /usr/sbin/ip6tables (ip6tables) in auto mode
update-alternatives: using /usr/sbin/arptables-nft to provide /usr/sbin/arptables (arptables) in auto mode
update-alternatives: using /usr/sbin/ebtables-nft to provide /usr/sbin/ebtables (ebtables) in auto mode
Setting up kubelet (1.33.0-1.1) ...
Processing triggers for man-db (2.11.2-2) ...
Processing triggers for libc-bin (2.36-9+deb12u10) ...
root@kubevm:~# sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
root@kubevm:~# sudo systemctl enable --now kubelet
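Before going further, a quick sanity check doesn't hurt; these commands only print versions and hold status, so they are harmless:

kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold    # should list kubeadm, kubectl and kubelet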
Kubeadm: init the cluster (part #1)
Well, everything seems to be installed. How about initiating a (one-node) cluster now?
root@kubevm:~# kubeadm init
[init] Using Kubernetes version: v1.33.0
[preflight] Running pre-flight checks
	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Aaand ... Errors. Of course.
Debugging swap issue
After running systemctl daemon-reload, you can run swapon to check whether swap is deactivated (in which case, the swapon command will return nothing).
If this doesn't work, use systemctl list-unit-files | grep swap to see which unit could be responsible for it, and disable it. See the example below.
root@kubevm:~# systemctl list-unit-files | grep swap
dev-sda3.swap                              generated       -
swap.target                                static          -
root@kubevm:~# sudo systemctl mask dev-sda3.swap
Created symlink /etc/systemd/system/dev-sda3.swap → /dev/null.
root@kubevm:~# swapon
root@kubevm:~#
Debugging IP forwarding
Edit /etc/sysctl.conf and search for the line containing net.ipv4.ip_forward. Make sure it is uncommented, and set its value to 1.
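To make it concrete, this is the line to end up with in /etc/sysctl.conf, plus the commands to apply and verify it without rebooting:

# in /etc/sysctl.conf, uncomment (or add):
net.ipv4.ip_forward = 1

sysctl -p                             # reload /etc/sysctl.conf
cat /proc/sys/net/ipv4/ip_forward     # should now print 1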
Kubeadm: init the cluster (part #2)
This should be good now. Let's run the init process again.
root@kubevm:~# kubeadm init
[init] Using Kubernetes version: v1.33.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0512 08:14:36.859646    1436 checks.go:846] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local kubevm] and IPs [10.96.0.1 192.168.10.71]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubevm localhost] and IPs [192.168.10.71 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubevm localhost] and IPs [192.168.10.71 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.528364ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.10.71:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000346539s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000285239s
[control-plane-check] kube-scheduler is not healthy after 4m0.000502765s
A control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.10.71:6443/livez: Get "https://192.168.10.71:6443/livez?timeout=10s": dial tcp 192.168.10.71:6443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
Well, this is much better, but still not it.
Debugging kube-apiserver issue
Well, let's see what containers we have here.
root@kubevm:~# crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a
WARN[0000] Config "/etc/crictl.yaml" does not exist, trying next: "/usr/bin/crictl.yaml"
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
Well, something is clearly wrong.
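A side note on the WARN line above: crictl reads its defaults from /etc/crictl.yaml, so creating that file spares you the --runtime-endpoint flag on every call. A minimal version, assuming containerd's standard socket path:

cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
EOF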
To resolve this, we will have to look at our /etc/containerd/config.toml. It's a bit minimalist; let's make it better, using the default configuration.
root@kubevm:~# containerd config default > /etc/containerd/config.toml
root@kubevm:~# cat /etc/containerd/config.toml
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]
  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"
  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.k8s.io/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""
[...]
[ttrpc]
  address = ""
  gid = 0
  uid = 0
Well, clearly, the file has changed! Yet, don't forget to (re)set SystemdCgroup to true inside the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] section.
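By the way, since the freshly generated default config contains SystemdCgroup = false under that exact section, a sed one-liner can do the edit; either way, restart containerd afterwards so the change is picked up:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd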
Kubeadm: init the cluster (part #3)
If you went through part #2 of this article, you will probably need to run kubeadm reset first. Then, try to run kubeadm init again.
root@kubevm:~# kubeadm init
[init] Using Kubernetes version: v1.33.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0512 12:04:04.546716    1914 checks.go:846] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local kubevm] and IPs [10.96.0.1 192.168.10.71]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubevm localhost] and IPs [192.168.10.71 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubevm localhost] and IPs [192.168.10.71 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.464501ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.10.71:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 1.502020657s
[control-plane-check] kube-scheduler is healthy after 1.757098466s
[control-plane-check] kube-apiserver is healthy after 3.501716779s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubevm as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubevm as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: vejzef.dq6l5fqh3y2c2223
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Well, now it's all OK! Don't forget to run the last commands (see logs above) to create the kubeconfig file. After this, kubectl will let you interact with your cluster!
root@kubevm:~# mkdir -p $HOME/.kube
root@kubevm:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@kubevm:~# chown $(id -u):$(id -g) $HOME/.kube/config
root@kubevm:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-674b8bbfcf-6jzr2         0/1     Pending   0          3m6s
kube-system   coredns-674b8bbfcf-6v9gh         0/1     Pending   0          3m6s
kube-system   etcd-kubevm                      1/1     Running   1          3m12s
kube-system   kube-apiserver-kubevm            1/1     Running   1          3m14s
kube-system   kube-controller-manager-kubevm   1/1     Running   1          3m12s
kube-system   kube-proxy-wjfgt                 1/1     Running   0          3m6s
kube-system   kube-scheduler-kubevm            1/1     Running   1          3m14s
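The two coredns pods stuck in Pending are expected at this stage: they will stay Pending until a pod network add-on is deployed, as the init output said. For example, with Flannel (one option among many; see the add-ons page linked above):

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

And since this is a one-node cluster, remember that regular pods won't be scheduled on the control plane until its taint is removed:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-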