
Installing Kubernetes locally.


To install Kubernetes locally we will use two virtual machines in VirtualBox, named SRV and K1, each with 2 vCPUs, 2 GB of RAM and one network interface in bridged mode.
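
For reference, the same VMs can also be created from the command line; a minimal sketch with VBoxManage (the OS type and the bridged host adapter name are assumptions that depend on your host):

VBoxManage createvm --name SRV --ostype Ubuntu_64 --register
VBoxManage modifyvm SRV --cpus 2 --memory 2048 --nic1 bridged --bridgeadapter1 eth0

The same two commands, with the name K1, create the second machine.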

We used the Ubuntu Server 16.04 image, due to compatibility problems with newer releases.

ubuntu-16.04.6-server-amd64.iso

We performed minimal package installations on both Linux VMs, assigning static IPs in both cases (192.168.10.200 and 192.168.10.201) and creating a personal account and password.
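
On Ubuntu 16.04 the static addressing lives in /etc/network/interfaces; a sketch for SRV (the interface name, gateway and DNS server are assumptions for this lab):

auto enp0s3
iface enp0s3 inet static
    address 192.168.10.200
    netmask 255.255.255.0
    gateway 192.168.10.1
    dns-nameservers 8.8.8.8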

An important point is to disable swap on both servers; in our case we edited the /etc/fstab file and commented out the swap line.
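
A minimal way to do both things at once (the sed expression simply comments out every fstab line that mentions swap):

swapoff -a
sed -i '/swap/ s/^/#/' /etc/fstab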

We will install some basic packages and define the official Kubernetes repository, which will let us keep the product updated over time. On the server we will install the kubelet, kubeadm and kubectl packages.

Also, as mentioned, Kubernetes runs on Docker, so we need to install the latest version of it (the docker.io package).

# Add the signing key for the Kubernetes packages
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# Define the official Kubernetes repository
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

# Install the Kubernetes tools plus Docker
apt update
apt install -y apt-transport-https
apt install -y kubelet kubeadm kubectl docker.io

We verify that Docker is running correctly:

# docker version
Client:
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        6247962
 Built:             Tue Feb 26 23:52:23 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       6247962
  Built:            Wed Feb 13 00:24:14 2019
  OS/Arch:          linux/amd64
  Experimental:     false

We configure Docker to start automatically on every server reboot:

# systemctl enable docker.service

Now we create the cluster. The process takes several minutes, since it has to download the packages, containers and components needed to start.

At this point we can also choose the network through which the other nodes will join: if the server has more than one network card and IP address, we can bind node-to-node communication to one of them.

For our tests, since every node needs to download content from the Internet, we used the same network that has Internet access. In professional environments we recommend isolated networks.
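
For example, on a host with more than one interface the control plane can be pinned to a specific address with the standard kubeadm flag below; the address here is simply the lab IP of SRV:

# kubeadm init --pod-network-cidr=192.168.10.0/24 --apiserver-advertise-address=192.168.10.200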

# kubeadm init --pod-network-cidr=192.168.10.0/24
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
root@ubuntu:~# rm /var/lib/etcd/ -Rf
root@ubuntu:~# kubeadm init --pod-network-cidr=192.168.10.0/24
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ubuntu localhost] and IPs [192.168.10.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ubuntu localhost] and IPs [192.168.10.200 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.505338 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node ubuntu as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: q3hj5r.smm0vh0krmidc9r4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.200:6443 --token q3hj5r.smm0vh0krmidc9r4 \
    --discovery-token-ca-cert-hash sha256:3d58eeb7647406496cb5f9f3031f0329c0dabe7bd2198afaa002c4ffa3caca3c 

At the end we will see the command that we must paste on every machine we want to join as a node of this cluster. Something similar to this:

# kubeadm join 192.168.10.200:6443 --token q3hj5r.smm0vh0krmidc9r4 \
    --discovery-token-ca-cert-hash sha256:3d58eeb3c 

It is recommended that a user other than "root" be the manager of the Kubernetes cluster.

su - usuario1
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

From this point on, management tasks should be performed with this account.
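
As the init output above indicates, a pod network add-on must also be deployed before the cluster is usable. The pod listing later in this post shows a calico-kube-controllers pod, so Calico was the add-on used here; a sketch of that step (the manifest URL is the one Calico published around this release and may have moved since):

$ kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml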

Since cluster creation can fail, the whole configuration can be wiped in order to start over. You may also have to stop the kubelet service and delete some directories:

kubeadm reset
systemctl stop kubelet
rm /var/lib/docker -Rf
rm /etc/kubernetes -Rf

If we want to install the Dashboard, we do it with the kubectl tool and then enable a proxy to reach the web interface:

# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

# kubectl proxy
Starting to serve on 127.0.0.1:8001

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
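
The dashboard login screen asks for a bearer token. A minimal way to obtain one, assuming the kubernetes-dashboard service account that the manifest above creates (the exact secret name varies, hence the grep):

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}')

Keep in mind that this token only carries the limited permissions of that service account.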

On the other server (K1) we must run through a similar package installation and then join that server to the manager node (SRV).

Remember to disable swap there as well.

# kubeadm join 192.168.10.200:6443 --token q3hj5r.smm0vh0krmidc9r4 \
    --discovery-token-ca-cert-hash sha256:3d58eeb3c 
    
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

At this point our cluster has one manager node and one worker node.

We get information about the cluster:

$ su - usuario1
$ kubectl cluster-info
Kubernetes master is running at https://192.168.10.200:6443
KubeDNS is running at https://192.168.10.200:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

We can see the pods that are running:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7bb7db9df8-w2gdh   0/1     Pending   0          2m5s
kube-system   coredns-fb8b8dccf-64mvt                    0/1     Pending   0          18m
kube-system   coredns-fb8b8dccf-jjrqq                    0/1     Pending   0          18m
kube-system   etcd-ubuntu                                1/1     Running   0          18m
kube-system   kube-apiserver-ubuntu                      1/1     Running   0          18m
kube-system   kube-controller-manager-ubuntu             1/1     Running   0          18m
kube-system   kube-proxy-nw4k6                           1/1     Running   0          16m
kube-system   kube-proxy-ph6p7                           1/1     Running   0          18m
kube-system   kube-scheduler-ubuntu                      1/1     Running   0          17m

We can also see the images that were downloaded to get the cluster running:

$ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.14.2
k8s.gcr.io/kube-controller-manager:v1.14.2
k8s.gcr.io/kube-scheduler:v1.14.2
k8s.gcr.io/kube-proxy:v1.14.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

If we want to verify that we are running the correct versions:

$ kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.14.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.14.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.14.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.14.2
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.10
[config/images] Pulled k8s.gcr.io/coredns:1.3.1

We can view the cluster configuration:

$ kubeadm config view
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.2
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.10.0/24
  serviceSubnet: 10.96.0.0/12
scheduler: {}

The containers running on the manager node, with their resource usage:

$ docker stats
CONTAINER ID        NAME                                                                                                        CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
589c4ed3e5b4        k8s_kube-proxy_kube-proxy-ph6p7_kube-system_8a47c8b4-78da-11e9-a6cb-0800270b51e2_0                          0.07%               13.09MiB / 992MiB   1.32%               0B / 0B             31.5MB / 0B         8
d9a53578d681        k8s_POD_kube-proxy-ph6p7_kube-system_8a47c8b4-78da-11e9-a6cb-0800270b51e2_0                                 0.00%               636KiB / 992MiB     0.06%               0B / 0B             729kB / 0B          1
ce779a6ae099        k8s_kube-apiserver_kube-apiserver-ubuntu_kube-system_14e1b2019a0d2d159521966386b1becf_0                     2.26%               214.5MiB / 992MiB   21.62%              0B / 0B             29.1MB / 0B         11
6a8cfa5b7ad4        k8s_kube-controller-manager_kube-controller-manager-ubuntu_kube-system_35fa2d848675222dfbb1d331f6a9e118_0   1.32%               43.91MiB / 992MiB   4.43%               0B / 0B             14MB / 0B           10
6f6aea66d0ab        k8s_kube-scheduler_kube-scheduler-ubuntu_kube-system_9b290132363a92652555896288ca3f88_0                     0.28%               12.63MiB / 992MiB   1.27%               0B / 0B             7.18MB / 0B         10
bd879130c5b8        k8s_etcd_etcd-ubuntu_kube-system_ed060ecbf494a0a438e89fc55c42fef3_0                                         1.61%               42.12MiB / 992MiB   4.25%               0B / 0B             43.4MB / 73.1MB     13
64f1657de243        k8s_POD_kube-controller-manager-ubuntu_kube-system_35fa2d848675222dfbb1d331f6a9e118_0                       0.00%               404KiB / 992MiB     0.04%               0B / 0B             0B / 0B             1
7e75ff7e8450        k8s_POD_kube-apiserver-ubuntu_kube-system_14e1b2019a0d2d159521966386b1becf_0                                0.00%               500KiB / 992MiB     0.05%               0B / 0B             0B / 0B             1
37c52c1a95d6        k8s_POD_kube-scheduler-ubuntu_kube-system_9b290132363a92652555896288ca3f88_0                                0.00%               520KiB / 992MiB     0.05%               0B / 0B             0B / 0B             1
bee36b00d8a3        k8s_POD_etcd-ubuntu_kube-system_ed060ecbf494a0a438e89fc55c42fef3_0                                          0.00%               688KiB / 992MiB     0.07%               0B / 0B             0B / 0B             1

On the worker node (K1) there are also containers running:

$ docker stats
CONTAINER ID        NAME                                                                                 CPU %               MEM USAGE / LIMIT    MEM %               NET I/O             BLOCK I/O           PIDS
4bb9c7ad1066        k8s_kube-proxy_kube-proxy-nw4k6_kube-system_d2ee4ac9-78da-11e9-a6cb-0800270b51e2_0   3.70%               9.98MiB / 992.1MiB   1.01%               0B / 0B             2.04MB / 0B         6
79914e2b37cf        k8s_POD_kube-proxy-nw4k6_kube-system_d2ee4ac9-78da-11e9-a6cb-0800270b51e2_0          0.00%               544KiB / 992.1MiB    0.05%               0B / 0B             3.1MB / 0B          1

Information about the nodes:

$ kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k1       Ready    <none>   30h   v1.14.2   192.168.10.201   <none>        Ubuntu 16.04.6 LTS   4.4.0-148-generic   docker://18.9.2
ubuntu   Ready    master   30h   v1.14.2   192.168.10.200   <none>        Ubuntu 16.04.6 LTS   4.4.0-148-generic   docker://18.9.2
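
The worker appears with ROLES <none> because kubeadm only labels the control-plane node. If we want a more descriptive listing we can label it ourselves; the label below uses the standard node-role prefix that kubectl reads, with an empty value:

$ kubectl label node k1 node-role.kubernetes.io/worker=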

In the next post we will see how to deploy pods and inspect their properties.
