Sunday, April 24, 2022

Not able to log in to a server from PuTTY using a .ppk file

If you are not able to log in to a Linux server from PuTTY using a .ppk file, you can regenerate the file from the original .pem key using the steps below.



Open PuTTYgen and make sure RSA is selected under the Parameters section.

Click the Load button ("Load an existing private key file").

Select the .pem file in the load private key dialog box. Make sure "All Files (*.*)" is selected so that the .pem file is visible.

A dialog box will confirm the import; click OK.

From the Key menu at the top, select "Parameters for saving key files", choose the "PPK file version 2" option, and click OK.

Click "Save private key" and click Yes in the PuTTYgen warning dialog box. Save the key and authenticate using it.
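
If you have the command-line puttygen on Linux (from the putty/putty-tools package, version 0.75 or later), the same conversion can be scripted. A minimal sketch, with mykey.pem and mykey.ppk as placeholder file names:

# puttygen mykey.pem -O private -o mykey.ppk --ppk-param version=2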

Sunday, March 31, 2019

deployments

Contents of my dep-definition.yml file
# cat deployment/dep-definition.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-deployment
    type: frontend
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      name: myapp-pod
    spec:
      containers:
        - name: nginx-container
          image: nginx

Create the deployment
# kubectl create -f dep-definition.yml

Update the image in the dep-definition.yml file
image: nginx:1.7.1

Check the status of the rollout
# kubectl rollout status deployment/myapp-deployment

Get the rollout history
# kubectl rollout history deployment/myapp-deployment
deployment.extensions/myapp-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

Delete the deployment
# kubectl delete deployment myapp-deployment
deployment.extensions "myapp-deployment" deleted

Recreate the deployment, this time with --record so the change cause is captured (as seen in the history below):
# kubectl create -f dep-definition.yml --record

# kubectl rollout status deployment/myapp-deployment
deployment "myapp-deployment" successfully rolled out

# kubectl rollout history deployment/myapp-deployment
deployment.extensions/myapp-deployment
REVISION  CHANGE-CAUSE
1         kubectl create --filename=dep-definition.yml --record=true

# kubectl apply -f deployment/dep-definition.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/myapp-deployment configured


Update the nginx version in dep-definition.yml
image: nginx:1.14.2
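
Re-apply the manifest so the deployment picks up the new image:
# kubectl apply -f deployment/dep-definition.yml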

# kubectl rollout status deployment/myapp-deployment
Waiting for deployment "myapp-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myapp-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "myapp-deployment" successfully rolled out


# kubectl rollout history deployment/myapp-deployment
deployment.extensions/myapp-deployment
REVISION  CHANGE-CAUSE
1         kubectl create --filename=dep-definition.yml --record=true
2         kubectl create --filename=dep-definition.yml --record=true





# kubectl describe deployments
Name:                   myapp-deployment
Namespace:              default
CreationTimestamp:      Sun, 31 Mar 2019 20:47:52 -0400
Labels:                 app=myapp-deployment
                        type=frontend
Annotations:            deployment.kubernetes.io/revision: 2
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"myapp-deployment","type":"frontend"},"name":"mya...
                        kubernetes.io/change-cause: kubectl create --filename=dep-definition.yml --record=true
Selector:               app=myapp
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=myapp
  Containers:
   nginx-container:
    Image:        nginx:1.14.2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   myapp-deployment-5854fc6749 (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  7m32s  deployment-controller  Scaled up replica set myapp-deployment-cf874bdcd to 3
  Normal  ScalingReplicaSet  2m19s  deployment-controller  Scaled up replica set myapp-deployment-5854fc6749 to 1
  Normal  ScalingReplicaSet  112s   deployment-controller  Scaled down replica set myapp-deployment-cf874bdcd to 2
  Normal  ScalingReplicaSet  112s   deployment-controller  Scaled up replica set myapp-deployment-5854fc6749 to 2
  Normal  ScalingReplicaSet  88s    deployment-controller  Scaled down replica set myapp-deployment-cf874bdcd to 1
  Normal  ScalingReplicaSet  88s    deployment-controller  Scaled up replica set myapp-deployment-5854fc6749 to 3
  Normal  ScalingReplicaSet  86s    deployment-controller  Scaled down replica set myapp-deployment-cf874bdcd to 0

# kubectl set image deployment/myapp-deployment nginx-container=nginx:1.12-perl
deployment.extensions/myapp-deployment image updated
# kubectl rollout status deployment/myapp-deployment
Waiting for deployment "myapp-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
myapp-deployment-5854fc6749-2pr6v   1/1     Running             0          5m44s
myapp-deployment-5854fc6749-hjxnt   1/1     Running             0          6m11s
myapp-deployment-5bcf5cbbf9-557d4   1/1     Running             0          56s
myapp-deployment-5bcf5cbbf9-5hhgm   0/1     ContainerCreating   0          9s
[root@centos05-04 ~]# kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
myapp-deployment   3/3     2            3           11m
# kubectl rollout status deployment/myapp-deployment
Waiting for deployment "myapp-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
# kubectl rollout history deployment/myapp-deployment
deployment.extensions/myapp-deployment
REVISION  CHANGE-CAUSE
1         kubectl create --filename=dep-definition.yml --record=true
2         kubectl create --filename=dep-definition.yml --record=true
3         kubectl create --filename=dep-definition.yml --record=true
# kubectl rollout status deployment/myapp-deployment
Waiting for deployment "myapp-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "myapp-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myapp-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "myapp-deployment" successfully rolled out
# kubectl rollout history deployment/myapp-deployment
deployment.extensions/myapp-deployment
REVISION  CHANGE-CAUSE
1         kubectl create --filename=dep-definition.yml --record=true
2         kubectl create --filename=dep-definition.yml --record=true
3         kubectl create --filename=dep-definition.yml --record=true

# kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
myapp-deployment   3/3     3            3           14m
# kubectl describe deployments
Name:                   myapp-deployment
Namespace:              default
CreationTimestamp:      Sun, 31 Mar 2019 20:47:52 -0400
Labels:                 app=myapp-deployment
                        type=frontend
Annotations:            deployment.kubernetes.io/revision: 3
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"myapp-deployment","type":"frontend"},"name":"mya...
                        kubernetes.io/change-cause: kubectl create --filename=dep-definition.yml --record=true
Selector:               app=myapp
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=myapp
  Containers:
   nginx-container:
    Image:        nginx:1.12-perl
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   myapp-deployment-5bcf5cbbf9 (3/3 replicas created)
Events:
  Type    Reason             Age                   From                   Message
  ----    ------             ----                  ----                   -------
  Normal  ScalingReplicaSet  15m                   deployment-controller  Scaled up replica set myapp-deployment-cf874bdcd to 3
  Normal  ScalingReplicaSet  9m54s                 deployment-controller  Scaled up replica set myapp-deployment-5854fc6749 to 1
  Normal  ScalingReplicaSet  9m27s                 deployment-controller  Scaled down replica set myapp-deployment-cf874bdcd to 2
  Normal  ScalingReplicaSet  9m27s                 deployment-controller  Scaled up replica set myapp-deployment-5854fc6749 to 2
  Normal  ScalingReplicaSet  9m3s                  deployment-controller  Scaled down replica set myapp-deployment-cf874bdcd to 1
  Normal  ScalingReplicaSet  9m3s                  deployment-controller  Scaled up replica set myapp-deployment-5854fc6749 to 3
  Normal  ScalingReplicaSet  9m1s                  deployment-controller  Scaled down replica set myapp-deployment-cf874bdcd to 0
  Normal  ScalingReplicaSet  4m39s                 deployment-controller  Scaled up replica set myapp-deployment-5bcf5cbbf9 to 1
  Normal  ScalingReplicaSet  3m52s                 deployment-controller  Scaled down replica set myapp-deployment-5854fc6749 to 2
  Normal  ScalingReplicaSet  3m3s (x4 over 3m52s)  deployment-controller  (combined from similar events): Scaled down replica set myapp-deployment-5854fc6749 to 0
# kubectl rollout history deployment/myapp-deployment
deployment.extensions/myapp-deployment
REVISION  CHANGE-CAUSE
1         kubectl create --filename=dep-definition.yml --record=true
2         kubectl create --filename=dep-definition.yml --record=true
3         kubectl create --filename=dep-definition.yml --record=true

# kubectl rollout undo deployment/myapp-deployment
deployment.extensions/myapp-deployment rolled back
# kubectl rollout history deployment/myapp-deployment
deployment.extensions/myapp-deployment
REVISION  CHANGE-CAUSE
1         kubectl create --filename=dep-definition.yml --record=true
3         kubectl create --filename=dep-definition.yml --record=true
4         kubectl create --filename=dep-definition.yml --record=true
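
kubectl rollout undo can also roll back to a specific revision; for example, revision 1 of this deployment could be restored with:

# kubectl rollout undo deployment/myapp-deployment --to-revision=1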

[root@centos05-04 ~]# kubectl describe deployments
Name:                   myapp-deployment
Namespace:              default
CreationTimestamp:      Sun, 31 Mar 2019 20:47:52 -0400
Labels:                 app=myapp-deployment
                        type=frontend
Annotations:            deployment.kubernetes.io/revision: 4
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"myapp-deployment","type":"frontend"},"name":"mya...
                        kubernetes.io/change-cause: kubectl create --filename=dep-definition.yml --record=true
Selector:               app=myapp
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=myapp
  Containers:
   nginx-container:
    Image:        nginx:1.14.2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   myapp-deployment-5854fc6749 (3/3 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  16m                 deployment-controller  Scaled up replica set myapp-deployment-cf874bdcd to 3
  Normal  ScalingReplicaSet  10m                 deployment-controller  Scaled down replica set myapp-deployment-cf874bdcd to 2
  Normal  ScalingReplicaSet  10m                 deployment-controller  Scaled down replica set myapp-deployment-cf874bdcd to 1
  Normal  ScalingReplicaSet  10m                 deployment-controller  Scaled up replica set myapp-deployment-5854fc6749 to 3
  Normal  ScalingReplicaSet  10m                 deployment-controller  Scaled down replica set myapp-deployment-cf874bdcd to 0
  Normal  ScalingReplicaSet  5m38s               deployment-controller  Scaled up replica set myapp-deployment-5bcf5cbbf9 to 1
  Normal  ScalingReplicaSet  4m51s               deployment-controller  Scaled down replica set myapp-deployment-5854fc6749 to 2
  Normal  ScalingReplicaSet  10s (x2 over 10m)   deployment-controller  Scaled up replica set myapp-deployment-5854fc6749 to 1
  Normal  ScalingReplicaSet  8s (x2 over 10m)    deployment-controller  Scaled up replica set myapp-deployment-5854fc6749 to 2
  Normal  ScalingReplicaSet  3s (x8 over 4m51s)  deployment-controller  (combined from similar events): Scaled down replica set myapp-deployment-5bcf5cbbf9 to 0


Now test a failed rollout by updating the manifest with a non-existent image tag:
image: nginx:1.4.2-err

# kubectl apply -f deployment/dep-definition.yml
deployment.apps/myapp-deployment configured
# kubectl rollout status deployment/myapp-deployment
Waiting for deployment "myapp-deployment" rollout to finish: 1 out of 3 new replicas have been updated...

The rollout never completes, because the new image cannot be pulled.

# kubectl rollout history deployment/myapp-deployment
deployment.extensions/myapp-deployment
REVISION  CHANGE-CAUSE
1         kubectl create --filename=dep-definition.yml --record=true
3         kubectl create --filename=dep-definition.yml --record=true
4         kubectl create --filename=dep-definition.yml --record=true
5         kubectl create --filename=dep-definition.yml --record=true

# kubectl get pods
NAME                                READY   STATUS         RESTARTS   AGE
myapp-deployment-5854fc6749-8jvw8   1/1     Running        0          6m39s
myapp-deployment-5854fc6749-fh2s9   1/1     Running        0          6m37s
myapp-deployment-5854fc6749-gm5x6   1/1     Running        0          6m34s
myapp-deployment-85db5cb47b-ttjd5   0/1     ErrImagePull   0          48s
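
To see why the new pod is stuck, describe it (using the failing pod name from the listing above); the Events section at the bottom should show the failed image pull:

# kubectl describe pod myapp-deployment-85db5cb47b-ttjd5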
# kubectl rollout undo deployment/myapp-deployment
deployment.extensions/myapp-deployment rolled back
# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-5854fc6749-8jvw8   1/1     Running   0          7m6s
myapp-deployment-5854fc6749-fh2s9   1/1     Running   0          7m4s
myapp-deployment-5854fc6749-gm5x6   1/1     Running   0          7m1s


Monday, March 25, 2019

Replication Controller

Create the rc-definition.yml file. Be careful with tabs and indentation.

# cat rc-definition.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: front-end

spec:
  template:
    metadata:
      name: myapp-prod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
        - name: nginx-controller
          image: nginx
  replicas: 2

Run the command below to create the ReplicationController and its pods

# kubectl create -f rc-definition.yml
replicationcontroller/myapp-rc created

When you run the command above, errors like the ones below may appear if the indentation is incorrect or if a list (array) is supplied where a map is expected:

error: error parsing rc-definition.yml: error converting YAML to JSON: yaml: line 6: found character that cannot start any token

error: error parsing rc-definition.yml: error converting YAML to JSON: yaml: line 16: found a tab character that violates indentation

error: error validating "rc-definition.yml": error validating data: ValidationError(ReplicationController.spec): invalid type for io.k8s.api.core.v1.ReplicationControllerSpec: got "array", expected "map"; if you choose to ignore these errors, turn validation off with --validate=false


# kubectl get replicationcontroller
NAME       DESIRED   CURRENT   READY   AGE
myapp-rc   2         2         2       14m

# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
myapp-rc-69v2d             1/1     Running   0          14m
myapp-rc-fw8z4             1/1     Running   0          14m
mynginx-6b7b7bcd75-l7jwv   1/1     Running   1          25h
nginx-7cdbd8cdc9-g2tcv     1/1     Running   2          9d

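A ReplicationController can also be scaled after creation; for example, this should bring it to three replicas:

# kubectl scale replicationcontroller myapp-rc --replicas=3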

Friday, March 15, 2019

Setup kubernetes on centos7

I set up three CentOS VMs with 2 GB RAM and 2 cores; all VMs have static IPs.
Then install the packages with yum:
# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
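
This assumes the upstream Kubernetes yum repository is already configured; at the time, a typical /etc/yum.repos.d/kubernetes.repo looked roughly like this (the exclude line is why --disableexcludes is needed above):

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl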

Make sure swap is off and host entries are properly set for the three VMs.
The value of /proc/sys/net/bridge/bridge-nf-call-iptables should be 1.
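
A quick way to satisfy both on a stock CentOS 7 box (also remove the swap entry from /etc/fstab so this survives a reboot):

# swapoff -a
# sysctl -w net.bridge.bridge-nf-call-iptables=1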

Make sure that ports 6443 (API server) and 10250 (kubelet) are open on the firewall:

# firewall-cmd --list-ports

# firewall-cmd --list-all

# firewall-cmd --zone=public --add-port=6443/tcp --permanent

# firewall-cmd --zone=public --add-port=6443/udp --permanent

# firewall-cmd --zone=public --add-port=10250/tcp --permanent

# firewall-cmd --zone=public --add-port=10250/udp --permanent

# firewall-cmd --reload


# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.213.7

192.168.213.7 is the IP of the first VM, which I am configuring as the master node.

Next, run the following:
# cp /etc/kubernetes/admin.conf $HOME/
# chown $(id -u):$(id -g) $HOME/admin.conf
# export KUBECONFIG=$HOME/admin.conf
# kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If you don't run the cp, chown, and export commands before running the above kubectl command, you will get this error:
"The connection to the server localhost:8080 was refused - did you specify the right host or port?"

If all goes well to this point, the commands below should work:

# kubectl get pods
No resources found.
# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
centos05-04   NotReady   master   11m   v1.13.4

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-7x5bb              1/1     Running   0          27m
kube-system   coredns-86c58d9df4-wcddl              1/1     Running   0          27m
kube-system   etcd-centos05-04                      1/1     Running   0          26m
kube-system   kube-apiserver-centos05-04            1/1     Running   0          26m
kube-system   kube-controller-manager-centos05-04   1/1     Running   0          26m
kube-system   kube-flannel-ds-amd64-nwchq           1/1     Running   0          18m
kube-system   kube-proxy-6znn4                      1/1     Running   0          27m
kube-system   kube-scheduler-centos05-04            1/1     Running   0          26m

On the worker nodes now, start the docker and kubelet services.
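With the systemd units from the yum packages, that is typically:
# systemctl enable --now docker kubelet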
Then run the join command that kubeadm init printed on the master:
kubeadm join 192.168.253.10:6443 --token gh4858585dkdk --discovery-token-ca-cert-hash sha256:c1ffff3838383833

The end of the output should look like this:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Back on the master node, ensure the worker nodes are visible:
# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
centos05-04   Ready      master   33m     v1.13.4
centos05-05   NotReady   <none>   2m30s   v1.13.4
centos05-06   NotReady   <none>   9s      v1.13.4

It takes some time for the worker nodes to sync up with the master.
I was getting some error messages like this:

Mar 15 03:53:39 centos05-05 kubelet: W0315 03:53:39.430340    5737 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 15 03:53:39 centos05-05 kubelet: E0315 03:53:39.430556    5737 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

but it resolved automatically:

Mar 15 03:53:51 centos05-05 containerd: time="2019-03-15T03:53:51.261353081-04:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/12c65130c840c1aa86cc3ca8141aebe525bcaa670a33e97c92bed5da0a4e8c09/shim.sock" debug=false pid=6447
Mar 15 03:53:52 centos05-05 containerd: time="2019-03-15T03:53:52.143233726-04:00" level=info msg="shim reaped" id=12c65130c840c1aa86cc3ca8141aebe525bcaa670a33e97c92bed5da0a4e8c09
Mar 15 03:53:52 centos05-05 dockerd: time="2019-03-15T03:53:52.153117014-04:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 03:53:53 centos05-05 containerd: time="2019-03-15T03:53:53.062007920-04:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a0b98a280903a05f3310c0d306e24b175c7483aa3b4f0c5ca6a0ebe792b955b6/shim.sock" debug=false pid=6516


# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
centos05-04   Ready    master   37m     v1.13.4
centos05-05   Ready    <none>   6m45s   v1.13.4
centos05-06   Ready    <none>   4m24s   v1.13.4

Now it's time to deploy some pods.
# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
nginx-7cdbd8cdc9-bm4zx   0/1     ContainerCreating   0          8s

# kubectl delete deployment/nginx
deployment.extensions "nginx" deleted



Monday, March 11, 2019

minikube upgrade


sudo yum -y install epel-release


sudo yum -y install libvirt qemu-kvm virt-install virt-top libguestfs-tools bridge-utils

Assuming you can access virsh, do the following:

Type virsh
Type net-list --all - you should see that minikube-net is inactive
Type net-start minikube-net - you should get an error message about "Network is already in use by interface virbr1" or similar
Quit virsh
Type sudo ifconfig virbr1 down
Type sudo brctl delbr virbr1
Type virsh
Type net-start minikube-net - it should now start up
Quit virsh
Type minikube start

$ minikube start
o   minikube v0.35.0 on linux (amd64)
>   Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
@   Downloading Minikube ISO ...
 184.42 MB / 184.42 MB [============================================] 100.00% 0s
!   Unable to start VM: create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path

-   Make sure to install all necessary requirements, according to the documentation:
-   https://kubernetes.io/docs/tasks/tools/install-minikube/


$ minikube start --vm-driver kvm2
o   minikube v0.35.0 on linux (amd64)
-   minikube will upgrade the local cluster from Kubernetes 1.13.2 to 1.13.4

!   Ignoring --vm-driver=kvm2, as the existing "minikube" VM was created using the none driver.
!   To switch drivers, you may create a new VM using `minikube start -p <name> --vm-driver=kvm2`
!   Alternatively, you may delete the existing VM using `minikube delete -p minikube`

:   Re-using the currently running none VM for "minikube" ...
:   Waiting for SSH access ...
-   "minikube" IP address is 192.168.213.4
-   Configuring Docker as the container runtime ...
-   Preparing Kubernetes environment ...
@   Downloading kubeadm v1.13.4
@   Downloading kubelet v1.13.4
-   Pulling images required by Kubernetes v1.13.4 ...
:   Relaunching Kubernetes v1.13.4 using kubeadm ...
:   Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns
:   Updating kube-proxy configuration ...
-   Verifying component health .....
+   kubectl is now configured to use "minikube"
=   Done! Thank you for using minikube!


$  minikube delete -p minikube
x   Deleting "minikube" from none ...
-   The "minikube" cluster has been deleted.

$ sudo /usr/local/bin/minikube start --vm-driver=kvm2
o   minikube v0.35.0 on linux (amd64)
>   Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
@   Downloading Minikube ISO ...
 184.42 MB / 184.42 MB [============================================] 100.00% 0s
!   Unable to start VM: new host: Driver "kvm2" not found. Do you have the plugin binary "docker-machine-driver-kvm2" accessible in your PATH?

-   Make sure to install all necessary requirements, according to the documentation:
-   https://kubernetes.io/docs/tasks/tools/install-minikube/
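
My notes don't show the fix here, but the usual resolution is to install the kvm2 driver binary from the minikube releases (URL assumed to be the standard release location) and put it on the PATH:

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
$ sudo install docker-machine-driver-kvm2 /usr/local/bin/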
[prabhu@localhost ~]$ sudo su -
Last login: Sat Mar  9 22:11:26 EST 2019 on pts/0
[root@localhost ~]# minikube start --vm-driver=kvm2
o   minikube v0.35.0 on linux (amd64)
>   Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
!   Unable to start VM: create: Error creating machine: Error in driver during machine creation: ensuring active networks: checking network default: virError(Code=43, Domain=19, Message='Network not found: no network with matching name 'default'')

*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
-   https://github.com/kubernetes/minikube/issues/new
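
A common fix for this particular error is to define and start libvirt's default network; on CentOS the definition ships at a standard path (assumed below):

# virsh net-define /usr/share/libvirt/networks/default.xml
# virsh net-start default
# virsh net-autostart default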

After working on this for three days, I found that setting up minikube on kvm2 through nested virtualization is unreliable; some people have claimed it works on Fedora but not on CentOS. I was on the verge of giving up but managed to troubleshoot the issues and bring up minikube on kvm2.

[root@localhost ~]# minikube start --vm-driver kvm2
o   minikube v0.35.0 on linux (amd64)
>   Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
-   "minikube" IP address is 192.168.39.184
-   Configuring Docker as the container runtime ...
-   Preparing Kubernetes environment ...
@   Downloading kubeadm v1.13.4
@   Downloading kubelet v1.13.4
-   Pulling images required by Kubernetes v1.13.4 ...
-   Launching Kubernetes v1.13.4 using kubeadm ...
:   Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns
-   Configuring cluster permissions ...
-   Verifying component health .....
+   kubectl is now configured to use "minikube"
=   Done! Thank you for using minikube!


But I think I have to get an external bootable disk soon to avoid such issues in the future. Another big advantage of booting Linux directly is that I can give Kubernetes more resources.


Wednesday, March 6, 2019

Dockerfile

This blog post shows a simple hello-world Dockerfile example.

ubuntu:~/app/myapp1$ cat Dockerfile
FROM alpine
CMD ["echo", "hello world!"]

ubuntu:~/app/myapp1$ sudo docker build .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM alpine
latest: Pulling from library/alpine
6c40cc604d8e: Pull complete
Digest: sha256:b3dbf31b77fd99d9c08f780ce6f5282aba076d70a513a8be859d8d3a4d0c92b8
Status: Downloaded newer image for alpine:latest
 ---> caf27325b298
Step 2/2 : CMD ["echo", "hello world!"]
 ---> Running in 6b813f4405dd
Removing intermediate container 6b813f4405dd
 ---> 6aaaa0ac6697
Successfully built 6aaaa0ac6697

ubuntu:~/app/myapp1$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
<none>              <none>              30dcbba17c50        58 seconds ago      5.53MB
alpine              latest              caf27325b298        5 weeks ago         5.53MB
nginx               1.10.0              16666ff3a57f        2 years ago         183MB
nginx               1.9.3               ea4b88a656c9        3 years ago         133MB
prabhan_world@ubuntu:~/app/myapp1$ sudo docker run --name test 30dcbba17c50
hello world!
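
Building without -t leaves the image untagged (the <none> repository in the listing above). A small variation, with myapp1 as an assumed tag name, makes running it friendlier:

ubuntu:~/app/myapp1$ sudo docker build -t myapp1 .
ubuntu:~/app/myapp1$ sudo docker run --rm myapp1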
