
Showing posts from August, 2021

Terraform Certification Details

1. The duration of the Terraform exam is 1 hour.
2. You will have 50 to 60 questions, and you will be tested on Terraform version 0.12 and higher; if you have only worked on versions older than 0.12, note that there have been considerable changes in both the syntax and the logic.
3. The exam is online proctored, and the whole certification process is quite hands-off.
4. You will have to register on the HashiCorp website, from where you will be redirected to the exam portal, and you will have to make sure your system meets the requirements for the online exam.
5. The certification expires 2 years from the day you pass the exam.

[Solved] kubelet isn't running or healthy. The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error

Description:- The issue came up while I was setting up a Kubernetes cluster on an AWS CentOS VM. Although the same steps had been followed every time, this particular run resulted in the error below:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Issue:- If you google this ...
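The excerpt is cut off above; as a general, non-authoritative starting point for this error (assuming a systemd-managed kubelet on CentOS, which may differ from the fix described in the full post), the checks below usually narrow it down to the kubelet not starting at all, most often because swap is still enabled or because of a cgroup-driver mismatch with the container runtime:

# Is the kubelet service even running? What do its last log lines say?
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50

# Two frequent causes on CentOS: swap still on, or a cgroup-driver mismatch
swapoff -a                                   # kubelet will not start with swap enabled by default
docker info 2>/dev/null | grep -i cgroup     # compare against cgroupDriver in /var/lib/kubelet/config.yaml

# Once the kubelet is up, the health endpoint from the error message should answer
curl -sSL http://localhost:10248/healthz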

Preventing Googlebot from crawling certain website pages via Nginx

Sometimes you might want to stop Googlebot from crawling certain pages, and for that you can use the robots.txt file to disallow them. But during a migration, or when testing newer changes, you may allocate a small share of traffic to new endpoints to verify whether things are working fine. These newer pages might not yet have certain components that Googlebot relies on from an SEO perspective. Also, sending only a limited share of traffic to the new pages might cause the bot to view them differently and mark them as copied content, which can affect search results. So you can also prevent and control Googlebot from crawling pages at the Nginx webserver itself. Two important things first:- 1. Google has multiple bots, and nobody knows all of them exactly, although Google gives some idea about its bots; one thing is common, they all have "google" in the user agent. 2. This is not a replacement for robots.txt; rather, we are implementing it because of the partitioning/allocation of a small amount of traffic to new s...
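Below is a rough sketch of the Nginx-side approach the post describes, written as a shell step that drops a config snippet into conf.d and reloads Nginx. The file path, the example.com server name, the /new-endpoint location, and the backend address are placeholders, not values taken from the post:

# Write a snippet that blocks any Google crawler (their user agents all contain "google")
cat > /etc/nginx/conf.d/block-googlebot.conf <<'EOF'
map $http_user_agent $is_googlebot {
    default      0;
    "~*google"   1;
}

server {
    listen 80;
    server_name example.com;                 # placeholder domain

    # hypothetical path receiving the small test allocation of traffic
    location /new-endpoint {
        if ($is_googlebot) {
            return 403;
        }
        proxy_pass http://127.0.0.1:8080;    # assumed backend
    }
}
EOF

# Validate the configuration and reload Nginx
nginx -t && systemctl reload nginx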

[Resolved] Kubernetes showing older version of master after successful upgrade

Issue:- I recently upgraded my Kubernetes cluster:
[root@k8smaster ~]# kubeadm upgrade apply v1.21.3
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.3". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
So the upgrade message clearly shows the cluster was upgraded to "v1.21.3" on the master node. However, when I run the command to verify:
[centos@k8smaster ~]$ kubectl get nodes -o wide
NAME                          STATUS  ROLES                 AGE  VERSION  INTERNAL-IP  EXTERNAL-IP  OS-IMAGE  KERNEL-VERSION  CONTAINER-RUNTIME
k8smaster.unixcloudfusion.in  Ready   control-plane,master  9d   ...
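The excerpt is truncated, but the [upgrade/kubelet] line in the output already hints at the usual explanation: the VERSION column of kubectl get nodes reports each node's kubelet version, not the control-plane version, so it keeps showing the old value until the kubelet itself is upgraded. A minimal sketch of that step on a yum-based host follows (the exact package versions are assumptions matching the post's target release; worker nodes are normally also drained and run kubeadm upgrade node first):

# Upgrade the kubelet and kubectl packages to the release the control plane was upgraded to
yum install -y kubelet-1.21.3-0 kubectl-1.21.3-0 --disableexcludes=kubernetes

# Restart the kubelet so the new version registers with the API server
systemctl daemon-reload
systemctl restart kubelet

# The VERSION column should now report v1.21.3 for this node
kubectl get nodes -o wide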