Provision nodes with kubeadm #654
Comments
@c-knowles Thanks for chiming in 👍
I think it seems aligned in some aspects, like self-hosting. If we are sticking with CloudFormation, then kubeadm doesn't seem like a 100% natural fit based on what the docs say. It depends on your view on how much we bake in and how many moving parts we allow on node initialisation.
Thanks a lot for the kind follow-up, @luxas!
What files should be copied? Also, did you mean that the files should be copied from the node running the kubeadm master to the nodes running kubeadm followers (I'm not entirely sure if the terminology is correct)? Would you mind guiding me on one more thing: how should kubeadm be run on Container Linux?
I wish I could, but unfortunately no. It is 2-3 am in my timezone, which doesn't work for me (it is just impossible for me to wake up at such a time, in an Nth non-REM sleep after I've finally put my son to sleep 😴). I'd really appreciate it if I could attend WG meetings like that asynchronously via recorded videos, meeting notes, etc.
So, should we at least sync the following files among controller nodes?
Ref: https://kubernetes.io/docs/admin/kubeadm/. Would there be anything else?
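(For reference, and tying this to the later note in this thread about moving certificates between masters: a minimal sketch of what such a sync could look like, assuming the default kubeadm layout under `/etc/kubernetes/pki`. The hostnames and ssh user are placeholders, not an actual kube-aws mechanism.)

```bash
# Minimal sketch, not an actual kube-aws mechanism: run on the first controller
# after `kubeadm init` to push the shared CA and service-account key material
# to the other controllers. Hostnames and the ssh user are placeholders.
set -euo pipefail

OTHER_CONTROLLERS=("controller-1" "controller-2")

for node in "${OTHER_CONTROLLERS[@]}"; do
  # /etc/kubernetes/pki holds ca.crt/ca.key, sa.key/sa.pub and the front-proxy CA,
  # which every API server in an HA cluster must share.
  sudo rsync -a --rsync-path="sudo rsync" \
    /etc/kubernetes/pki/ "core@${node}:/etc/kubernetes/pki/"
done
```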
Also, can we instruct kubeadm to run kubelets in rkt pods rather than in Docker containers, like we do currently? How? (I know the apiserver and controller pods are run as static pods via the CRI runtime configured for the kubelets.) According to the "Use Kubeadm with other CRI runtimes" section of the kubeadm doc, it seems to support rkt through CRI. However, isn't that only about configuring the runtime the kubelets use for pods? What is unclear to me is how we could configure the runtimes of the kubelets themselves. Update:
Perhaps we must prepare a kubelet systemd unit per node before running kubeadm.
However, this sentence seems to imply that we would need a running kubelet, which requires a kubeconfig, before that.
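(Side note on the CRI part of the question: as far as I understand, the CRI settings only change what runtime the kubelet uses for pods, not what the kubelet binary itself runs in. A rough sketch of a per-node drop-in written before kubeadm runs follows; the rktlet socket path and the use of `KUBELET_EXTRA_ARGS` are assumptions about how the node's kubelet unit is wired up, not something kubeadm provides on Container Linux.)

```bash
# Rough sketch: point the kubelet at a remote CRI runtime before running kubeadm.
# Assumes the kubelet unit reads KUBELET_EXTRA_ARGS (as the kubeadm-packaged
# drop-ins of that era did) and that an rktlet socket exists at the path below.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/20-cri.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/rktlet.sock"
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```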
@mumoshu Thanks for writing this up. Very unfortunate on the timezones! However, I'm in the GMT+2 timezone, so we should be able to sync 1:1 at least, I think... I can act as a proxy to the others in the kubeadm adoption working group later.
@luxas Thanks for the suggestion 👍 I'm still catching up on things, so let's talk after that.
According to the info gathered so far, this issue can be solved with or without it, but I've anyway submitted a feature request to add support for dedicated etcd nodes to kubeadm in kubernetes/kubeadm#491. Also note that, at first glance, I assumed kubeadm would allow us to provision every kube-aws node once kubernetes/kubeadm#261 is addressed. However, that doesn't seem to include dedicated etcd nodes bootstrapped by kubeadm as of today.
Doesn't the kubeadm model imply SSHing into already up-and-running nodes and installing packages/updating configs? That doesn't play well with autoscaling groups. It would also need to support CoreOS.
@mumoshu kubeadm supports external etcd. That should probably work for your case (setting up etcd yourself, delegating k8s bootstrap to kubeadm). Regarding high availability -- it is totally possible to set up HA clusters with kubeadm if you can a) move the certificates for the cluster to all your masters and b) set up an LB in front of the API servers. I think you have those capabilities, so you should be good to go.

@redbaron kubeadm handles bootstrapping of Kubernetes on a machine that already exists. You can install / set up kubeadm in a boot script, or afterwards by executing commands via ssh, or whatever. How do you currently set up k8s after the machines are created?
We just don't provide packages for CoreOS, but you can indeed use kubeadm on CoreOS. Please read the design doc here for the technical details: https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.8.md
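(To illustrate the external-etcd path described above: a rough sketch of running kubeadm against externally managed etcd. The schema assumes the `kubeadm.k8s.io/v1alpha1` MasterConfiguration of the 1.8/1.9-era kubeadm; the endpoints, certificate paths, and load-balancer name are placeholders.)

```bash
# Rough sketch: bootstrap a controller with kubeadm against external etcd.
# Field names assume the v1alpha1 MasterConfiguration; all values are placeholders.
cat <<'EOF' >/tmp/kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://etcd0.example.internal:2379
  - https://etcd1.example.internal:2379
  - https://etcd2.example.internal:2379
  caFile: /etc/kubernetes/pki/etcd/ca.crt
  certFile: /etc/kubernetes/pki/etcd/client.crt
  keyFile: /etc/kubernetes/pki/etcd/client.key
# Extra SAN for the load balancer sitting in front of the API servers.
apiServerCertSANs:
- api.example.internal
EOF
sudo kubeadm init --config=/tmp/kubeadm.yaml
```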
@luxas Thanks for the clarifications! I'm still looking forward to working on this soon. I've studied kubeadm a bit - it seems to just write various files required for a master/worker node into well-known locations on the local filesystem, so that the kubelet on the node is able to read them and deploy the static pods, deployments, and daemonsets that form a k8s cluster.
I expect kubeadm to do steps 2 and 3.
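(For anyone following along, a quick sketch of those well-known locations as described in the 1.8-era design doc linked above; the exact file set varies with the configuration, e.g. the etcd manifest only appears when etcd is not external.)

```bash
# Sketch of the layout kubeadm leaves on a master (per the 1.8-era design doc):
ls /etc/kubernetes
#   admin.conf  kubelet.conf  controller-manager.conf  scheduler.conf  manifests/  pki/
ls /etc/kubernetes/manifests     # static pods the kubelet deploys
#   kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml  etcd.yaml
ls /etc/kubernetes/pki           # CA, API server and service-account key material
#   ca.crt  ca.key  apiserver.crt  apiserver.key  sa.key  sa.pub  front-proxy-ca.crt ...
```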
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Probably after kubeadm starts supporting multi-master and dedicated-etcd-node setups.
Relevant PR (which may or may not be merged): kubernetes/kubernetes#44793