When setting up an HA control plane with kubeadm, one would expect the
controlPlaneEndpoint to be used in all places.
However, this is not the case (I suspect it will be fixed at some point, but I
have not tracked down the corresponding change in the code).
Thanks to my colleague mbb I was spared tracking down one of these:
- the kube-proxy ConfigMap in the kube-system namespace
- the cluster-info ConfigMap in the kube-public namespace
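
To see what is actually in there, both ConfigMaps can be inspected with
kubectl; each embeds a kubeconfig containing the offending server entry:

```sh
# Show the kubeconfig embedded in the kube-proxy ConfigMap
kubectl -n kube-system get configmap kube-proxy -o yaml

# Show the kubeconfig embedded in the cluster-info ConfigMap
kubectl -n kube-public get configmap cluster-info -o yaml
```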
In both ConfigMaps the server entry is set to the address of the first member
of the control plane instead of the controlPlaneEndpoint, which leads to
trouble when that first member leaves the cluster for some reason.
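
One manual way out, sketched here under the assumption that your
controlPlaneEndpoint is something like cp.example.com:6443 (a placeholder,
substitute your own), is to fix the server entry in both ConfigMaps by hand:

```sh
# In each embedded kubeconfig, change the server: line to point at the
# controlPlaneEndpoint (cp.example.com:6443 is a placeholder here)
kubectl -n kube-system edit configmap kube-proxy
kubectl -n kube-public edit configmap cluster-info
```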
The erroneous entry in the latter ConfigMap shows itself in a quite obvious
error: when trying to join new nodes to the cluster with kubeadm, it
complains about being unable to reach the API server on this former node.
So one is left wondering why on earth the kubeadm join command tries to
connect to the wrong IP to fetch the kubeadm-config ConfigMap.
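
Whether a cluster is affected can be checked directly, since kubeadm join
takes the API server address from the kubeconfig stored in the cluster-info
ConfigMap:

```sh
# Print the server entry that kubeadm join will use during discovery
kubectl -n kube-public get configmap cluster-info \
  -o jsonpath='{.data.kubeconfig}' | grep server
```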
An erroneous entry in the kube-proxy ConfigMap produces more subtle networking
issues, such as failing name resolution of internal DNS names or failing
access to service addresses.
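
Note that after correcting the kube-proxy ConfigMap, the running kube-proxy
pods still use the kubeconfig they read at startup, so they have to be
restarted to pick up the change:

```sh
# Restart the kube-proxy DaemonSet so the pods pick up the corrected ConfigMap
kubectl -n kube-system rollout restart daemonset kube-proxy
```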
Well, maybe this helps someone someday find these issues a bit faster.