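Running the application container with the host network driver, so it shares the host's network namespace directly instead of getting a bridged one: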
docker run -d --network=host project_app1:latest
Issue:- Installing containerd on a CentOS 7 Kubernetes node fails. The default repositories do not ship the package:

[root@kubemaster ~]# yum install containerd
No package containerd available.

Downloading the RPM directly from the Docker repository and installing it also fails, this time on a missing dependency:

[root@kubemaster ~]# wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.6.8-3.1.el7.x86_64.rpm
[root@kubemaster ~]# rpm -ivh containerd.io-1.6.8-3.1.el7.x86_64.rpm
warning: containerd.io-1.6.8-3.1.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
error: Failed dependencies:
        container-selinux >= 2:2.74 is needed by containerd.io-1.6.8-3.1.el7.x86_64
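Solution:- The missing dependency ships in the CentOS 7 extras repository. A likely fix, assuming that repository is enabled, is to install it first and then retry the RPM:

yum install -y container-selinux
rpm -ivh containerd.io-1.6.8-3.1.el7.x86_64.rpm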
Issue:- kubeadm prints a join command (including a token) when you first create a Kubernetes cluster. But what if you no longer have that token handy when you later need to add worker nodes to increase cluster capacity?
Solution:- Run the following command on the master to generate a fresh token together with the full join command, which can then be used to add worker nodes at any time.
[centos@kubemaster ~]$ kubeadm token create --print-join-command
kubeadm join 172.31.98.106:6443 --token ix1ien.29glfz1p04d7ymtd --discovery-token-ca-cert-hash sha256:1f202db500d698032d075433176dd62f5d0074453daa12ccdfffd637a966a771
Once the token has been generated, run the printed command on each worker node to join it to the Kubernetes cluster. (Tokens created this way expire after 24 hours by default.)
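The following values file (these keys match the official Elasticsearch Helm chart) runs a single-node cluster and adds init containers that pre-create the data directory and fix its ownership before Elasticsearch starts: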
replicas: 1
minimumMasterNodes: 1
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
extraInitContainers: |
  - name: create
    image: busybox:1.35.0
    command: ['mkdir', '-p', '/usr/share/elasticsearch/data/nodes/']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.35.0
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
[root@aafe920be71c ~]# snap install terragrunt
error: too early for operation, device not yet seeded or device model not acknowledged
[root@aafe920be71c ~]# systemctl status snapd.seeded.service
● snapd.seeded.service - Wait until snapd is fully seeded
Loaded: loaded (/usr/lib/systemd/system/snapd.seeded.service; disabled; vendor preset: disabled)
Active: inactive (dead)
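The unit has simply never been started, so snapd never finished seeding. Starting it (a likely fix, assuming systemd and snapd are otherwise healthy in this environment) unblocks the install:

systemctl start snapd.seeded.service

After that, the same status check and the install succeed: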
[root@aafe920be71c ~]# systemctl status snapd.seeded.service
● snapd.seeded.service - Wait until snapd is fully seeded
   Loaded: loaded (/usr/lib/systemd/system/snapd.seeded.service; disabled; vendor preset: disabled)
   Active: active (exited) since Mon 2022-07-18 16:12:34 UTC; 2s ago
  Process: 6425 ExecStart=/usr/bin/snap wait system seed.loaded (code=exited, status=0/SUCCESS)
 Main PID: 6425 (code=exited, status=0/SUCCESS)

Jul 18 16:12:33 aafe920be71c.mylabserver.com systemd[1]: Starting Wait until snapd is fully seeded...
Jul 18 16:12:34 aafe920be71c.mylabserver.com systemd[1]: Started Wait until snapd is fully seeded.

[root@aafe920be71c ~]# snap install terragrunt
2022-07-18T16:13:10Z INFO Waiting for automatic snapd restart...
terragrunt 0+git.ae675d6 from dt9394 (terraform-snap) installed
java.lang.OutOfMemoryError: Java heap space
ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1': (org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
        at java.base/java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:61)
        at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:348)
        at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
        at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1131)
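This heap blow-up is the classic symptom of a plaintext client talking to a TLS listener (port 9094 is the TLS endpoint on Amazon MSK): the client misreads the TLS handshake bytes as a huge message length and tries to allocate a receive buffer of that size. The fix is to point the tool at a client config that enables TLS, which is what the --command-config flag does in the working command below. A minimal client.properties, assuming the brokers' certificates chain to a CA in the default JVM truststore:

security.protocol=SSL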
./kafka-topics.sh --bootstrap-server b-1.test-kafka.q15lx0.c10.kafka.us-west-2.amazonaws.com:9094,b-2.test-kafka.q15lx0.c10.kafka.us-west-2.amazonaws.com:9094,b-3.test-kafka.q15lx0.c10.kafka.us-west-2.amazonaws.com:9094 --delete --topic <topic-name> --command-config /Users/amittal/kafka/kafka_2.12-2.2.1/bin/client.properties
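Issue:- A Kubernetes Service name will not resolve from the master node itself, and the usual DNS lookup tools are not even installed there: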
[centos@kubemaster service]$ nslookup my-service.default.svc
-bash: nslookup: command not found
[centos@kubemaster service]$ dig my-service.default.svc
-bash: dig: command not found
[centos@kubemaster service]$ ping my-service.default.svc
ping: my-service.default.svc: Name or service not known
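Solution:- Service names like my-service.default.svc resolve only through the cluster DNS, which pods are wired into but the node itself is not. Deploying the upstream dnsutils pod and running the query from inside the cluster works: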
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
[centos@kubemaster service]$ kubectl exec -it dnsutils -- nslookup my-service.default.svc
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: my-service.default.svc.cluster.local
Address: 10.111.144.147
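Another common pitfall: YAML keys are case-sensitive. The Pod manifest below spells the apiVersion key in lower case, so as far as the API server is concerned the required field is missing and the manifest is rejected: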
apiversion: v1
kind: Pod
metadata:
  name: pod2
spec:
  containers:
  - name: c1
    image: nginx
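Correcting the key's capitalization fixes the manifest: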
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  containers:
  - name: c1
    image: nginx
Issue:- More than 1 million event logs are being posted per hour, making disk I/O a bottleneck and pushing bursts of events into New Relic/ELK.
Solution:- Lower the application log level so that only errors and the logs actually needed for troubleshooting are printed; this should eliminate the intermittent flooding of logs into New Relic/ELK.
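As an illustration only (assuming the service logs through log4j 1.x; the stdout appender name here is hypothetical), dropping the root logger from INFO to ERROR is a one-line change in log4j.properties:

# Only ERROR and FATAL events reach the appender, cutting event volume drastically
log4j.rootLogger=ERROR, stdout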