I have a Kubernetes cluster (Client Version: v1.21.3 / Server Version: v1.21.3) and it's working. I set up a Rancher server and wanted to import the Kubernetes cluster, but the agent pods that get created fail with this:
kubectl get pods --all-namespaces

NAMESPACE       NAME                                    READY   STATUS             RESTARTS   AGE
cattle-system   cattle-cluster-agent-7d69dc8885-2lmf2   0/1     CrashLoopBackOff   22         101m
cattle-system   cattle-cluster-agent-dbdcdb586-v4g6g    0/1     CrashLoopBackOff   28         139m
kubectl describe pod cattle-cluster-agent-7d69dc8885-2lmf2

The pod's IP is 192.168.135.3 for cattle-cluster-agent-7d69dc8885-2lmf2:
Warning Unhealthy 39m (x229 over 79m) kubelet Readiness probe failed: Get "http://192.168.135.3:8080/health": dial tcp 192.168.135.3:8080: connect: connection refused
Warning BackOff 34m (x117 over 77m) kubelet Back-off restarting failed container
Normal Pulled 34m kubelet Container image "rancher/rancher-agent:v2.5.9" already present on machine
Normal Created 34m kubelet Created container cluster-register
Normal Started 34m kubelet Started container cluster-register
Warning Unhealthy 9m11s (x171 over 34m) kubelet Readiness probe failed: Get "http://192.168.135.3:8080/health": dial tcp 192.168.135.3:8080: connect: connection refused
Warning BackOff 4m13s (x70 over 32m) kubelet Back-off restarting failed container
kubectl logs --follow pod/cattle-cluster-agent-7d69dc8885-2lmf2 -n cattle-system

INFO: Environment: CATTLE_ADDRESS=192.168.135.3 CATTLE_CA_CHECKSUM=1db5ccc6e975206c64c8fcc3280549a26bdd656d8ee13ac5de159af06c33c5a8 CATTLE_CLUSTER=true CATTLE_CLUSTER_REGISTRY= CATTLE_FEATURES= CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true
CATTLE_NODE_NAME=cattle-cluster-agent-7d69dc8885-2lmf2 CATTLE_SERVER=https://192.168.16.20
INFO: Using resolv.conf: nameserver 10.96.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5
ERROR: https://192.168.16.20/ping is not accessible (Failed to connect to 192.168.16.20 port 443: Connection timed out)
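One way to confirm whether this is pure network reachability (rather than a Rancher misconfiguration) is to launch a throwaway pod and hit the same /ping endpoint the agent uses. A sketch, assuming cluster access and the addresses from this post (the CATTLE_SERVER above points at 192.168.16.20, the HAProxy; 192.168.16.21 is the Rancher host itself):

```shell
# Throwaway pod with curl; --rm deletes it when the command exits.
# Test the URL the agent is actually configured with (the HAProxy address):
kubectl run net-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -kv --connect-timeout 5 https://192.168.16.20/ping

# Then test the Rancher host directly, bypassing the load balancer:
kubectl run net-test2 --rm -it --restart=Never --image=curlimages/curl -- \
  curl -kv --connect-timeout 5 https://192.168.16.21/ping
```

If the second command returns "pong" but the first times out, the agent was registered with the wrong server URL (the HAProxy only forwards 6443, not 443).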
All my servers are in the 192.168.16.0/24 range and they can talk to each other. The check only fails from inside the pod on the Kubernetes cluster.
I tried from the machine hosting the Docker instance of Rancher, but I think the Docker instance itself also cannot connect to https://192.168.16.20/ping. It's the same problem for both cattle agent pods.
Rancher is running version v2.5.9
Could it be that the pods can't resolve or communicate directly with 192.168.16.20, which is on my local network? Should it have a public domain name instead, so the agents connect from the outside into Rancher?
ASKER
My home network has the CIDR 192.168.16.0/24:

NODE1 KMASTER1 = 192.168.16.22
NODE2 KMASTER2 = 192.168.16.23
NODE3 KWORKER1 = 192.168.16.24
LOAD BALANCER HAPROXY = 192.168.16.20 | forwards all traffic to the two KMASTERS on port 6443
RANCHER SERVER = 192.168.16.21
OWN PC = 192.168.16.180
The Kubernetes test pods can ping the two master nodes and the kworker, but not the Rancher server or my own PC.
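Since the pods can reach the cluster nodes but not other hosts on the same subnet, the usual suspects are the CNI's egress/NAT configuration or a host firewall on the Rancher box. A way to tell them apart, sketched here assuming the pod CIDR is under 192.168.128.0/17 (the pod above got 192.168.135.3) and run on the Rancher server at 192.168.16.21:

```shell
# Watch for connection attempts on port 443 while the agent retries.
# If no packets show up at all, the traffic is dropped before leaving the
# cluster (CNI/NAT problem on the nodes). If SYNs arrive but get no reply,
# a local firewall on this host is blocking them.
sudo tcpdump -ni any 'tcp port 443 and net 192.168.0.0/16'
```

Checking the host firewall (e.g. `sudo iptables -L -n` or `sudo ufw status`, depending on the distribution) would be the natural next step if SYNs arrive unanswered.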
Should the Rancher setup have a public DNS name and sit outside the Kubernetes subnet, given that the deployed rancher-agent can't reach 192.168.16.21? I think the error is that the rancher agent pod can't reach the Rancher server, even though it is on the same subnet as the nodes in the Kubernetes cluster.
Hope someone will take the time to help me.