Friday, March 15, 2019

Setup Kubernetes on CentOS 7

I set up three CentOS 7 VMs with 2 GB RAM and 2 cores each; all VMs have static IPs.
Then install the packages with yum:
# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
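The kubelet/kubeadm/kubectl packages come from the upstream Kubernetes yum repository, which a stock CentOS 7 install does not have. A repo file along these lines (contents assumed from the kubeadm install docs of that era; check the current docs for up-to-date URLs) makes the packages available, and its `exclude` line is why the install command above needs `--disableexcludes=kubernetes`:

```shell
# assumed repo definition; verify URLs against the current kubeadm install docs
cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
```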

Make sure swap is off and host entries for the three VMs are properly set in /etc/hosts.
The value of /proc/sys/net/bridge/bridge-nf-call-iptables should be 1.
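Disabling swap and setting the bridge sysctl might look like the following sketch (run as root; the sed pattern assumes the swap volume is listed in /etc/fstab):

```shell
# turn swap off for this boot, and comment out the fstab entry so it stays off
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# make bridged traffic visible to iptables, and persist the setting
modprobe br_netfilter
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf
sysctl --system
```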

Make sure that ports 6443 (API server) and 10250 (kubelet) are open on the firewall:

# firewall-cmd --list-ports

# firewall-cmd --list-all

# firewall-cmd --zone=public --add-port=6443/tcp --permanent

# firewall-cmd --zone=public --add-port=6443/udp --permanent

# firewall-cmd --zone=public --add-port=10250/tcp --permanent

# firewall-cmd --zone=public --add-port=10250/udp --permanent

# firewall-cmd --reload


# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.213.7

192.168.213.7 is the IP of the first VM, which I am configuring as the master node.

Next, run these commands:
# cp /etc/kubernetes/admin.conf $HOME/
# chown $(id -u):$(id -g) $HOME/admin.conf
# export KUBECONFIG=$HOME/admin.conf
# kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If you don't run the cp, chown and export commands before running the above kubectl command, you will get this error:
"The connection to the server localhost:8080 was refused - did you specify the right host or port?"

If all goes well up to this point, the commands below should work:

# kubectl get pods
No resources found.
# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
centos05-04   NotReady   master   11m   v1.13.4

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-7x5bb              1/1     Running   0          27m
kube-system   coredns-86c58d9df4-wcddl              1/1     Running   0          27m
kube-system   etcd-centos05-04                      1/1     Running   0          26m
kube-system   kube-apiserver-centos05-04            1/1     Running   0          26m
kube-system   kube-controller-manager-centos05-04   1/1     Running   0          26m
kube-system   kube-flannel-ds-amd64-nwchq           1/1     Running   0          18m
kube-system   kube-proxy-6znn4                      1/1     Running   0          27m
kube-system   kube-scheduler-centos05-04            1/1     Running   0          26m

On to the worker nodes now.
Start the docker and kubelet services.
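Starting both services so they also come up on every boot can be done with systemctl (a sketch, assuming docker and kubelet were installed on the workers as above):

```shell
# start the container runtime and the kubelet, and enable them at boot
systemctl enable --now docker
systemctl enable --now kubelet
```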
Now run the join command that kubeadm init printed on the master:
kubeadm join 192.168.253.10:6443 --token gh4858585dkdk --discovery-token-ca-cert-hash sha256:c1ffff3838383833

The end of the output should look like this:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Run this on the master node to ensure the worker nodes are available:
# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
centos05-04   Ready      master   33m     v1.13.4
centos05-05   NotReady   <none>   2m30s   v1.13.4
centos05-06   NotReady   <none>   9s      v1.13.4

It takes some time for the worker nodes to sync up with the master.
I was getting some error messages like this:

Mar 15 03:53:39 centos05-05 kubelet: W0315 03:53:39.430340    5737 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 15 03:53:39 centos05-05 kubelet: E0315 03:53:39.430556    5737 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

but it got resolved automatically:

Mar 15 03:53:51 centos05-05 containerd: time="2019-03-15T03:53:51.261353081-04:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/12c65130c840c1aa86cc3ca8141aebe525bcaa670a33e97c92bed5da0a4e8c09/shim.sock" debug=false pid=6447
Mar 15 03:53:52 centos05-05 containerd: time="2019-03-15T03:53:52.143233726-04:00" level=info msg="shim reaped" id=12c65130c840c1aa86cc3ca8141aebe525bcaa670a33e97c92bed5da0a4e8c09
Mar 15 03:53:52 centos05-05 dockerd: time="2019-03-15T03:53:52.153117014-04:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 03:53:53 centos05-05 containerd: time="2019-03-15T03:53:53.062007920-04:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a0b98a280903a05f3310c0d306e24b175c7483aa3b4f0c5ca6a0ebe792b955b6/shim.sock" debug=false pid=6516
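Those messages come from the kubelet's journal; on a worker that stays NotReady, you can follow them live with:

```shell
# follow the kubelet logs on the worker (Ctrl-C to stop)
journalctl -u kubelet -f
```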


# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
centos05-04   Ready    master   37m     v1.13.4
centos05-05   Ready    <none>   6m45s   v1.13.4
centos05-06   Ready    <none>   4m24s   v1.13.4

Now it's time to deploy some pods:
# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
nginx-7cdbd8cdc9-bm4zx   0/1     ContainerCreating   0          8s
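To see which worker the pod was scheduled on, the wide output format adds the pod IP and a NODE column:

```shell
# wide output shows the pod IP and the node the pod landed on
kubectl get pods -o wide
```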

# kubectl delete deployment/nginx
deployment.extensions "nginx" deleted


