Make kubeadm deploy HA kubernetes cluster #328
Comments
@cookeem Hi! Are you interested in attending our meetings? |
It looks like the meeting has already finished :( |
@cookeem They are organized every week. Come and join us the next time on Tuesday https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.3lizw9e2c8mi :) |
@luxas got it, thx :) |
@cookeem I am also struggling to create an HA cluster on CentOS. I set up my cluster as in the attached file. Can you please advise how I can modify the current cluster now? I want 2 masters and 1 worker just to showcase to management, get approval, and make Kubernetes the standard to replace plain Docker. Please help me. Best Regards |
@kumarganesh2814 You can follow my instructions. But I think 2 master nodes is not a good idea; the number of master nodes should be odd and greater than 1, for example 3 master nodes. |
@cookeem |
@cookeem Earlier I was able to bring the HA cluster up somehow, but while scaling or deploying any app I was getting errors, such as:
Now I started this setup yet again, and after many hours I see the node still has not joined:
@cookeem
@kumarganesh2814 It seems there is something wrong with the certificates. Before setting up again, did you tear down your cluster first? |
Hi, yes, I removed everything and reset with kubeadm reset on all 3 masters. After rebooting the VMs and cleaning some files I am able to get the cluster back now. Thank you very much for your help so far. I am having a few issues with the VIP.
@cookeem Best Regards |
@kumarganesh2814 Sorry for the late reply. So your problem is how to use keepalived to create a VIP, right? If your cluster is published on the internet, each host probably has two IP addresses (one internet-facing IP and one ethernet IP used for cluster communication), so you should first allocate a new ethernet IP address for the keepalived VIP, and make sure this VIP can be reached by every node. |
@cookeem I will try to follow the info; you have been kind enough to answer all my queries, I really appreciate it. Best Regards |
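To illustrate, a minimal keepalived configuration for such a VIP might look like the sketch below; the interface name, IP address and password are hypothetical placeholders, not values from this thread:

```
# /etc/keepalived/keepalived.conf - minimal sketch; eth0, 192.168.60.80 and the
# auth_pass value are hypothetical and must be replaced with your own values.
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the other master nodes
    interface eth0          # NIC used for cluster communication
    virtual_router_id 51
    priority 100            # lower priority (e.g. 90, 80) on the other masters
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.60.80       # the keepalived VIP that fronts the apiservers
    }
}
```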
Can you please also advise which ports I should ask my network team to configure for this VIP? Is the certificate offloading for port 8443 done on the F5 (VIP) or on the master nodes? Best Regards |
@cookeem I changed the port from 8443 to 8080 for the nginx container and then joined the 3 nodes; it worked fine, and now I see the app is accessible from all 3 master IPs. Thanks for your support, you rock!!! Best Regards |
@kumarganesh2814 Great, you are welcome 😁 |
Sorry to trouble you again. In the cluster I have deployed I see one strange thing: I can access my app on every master and worker node, even though I haven't specified hostPort. Is this a bug/issue or a feature? My YAML for the sample app:
So now I applied the YAML files with kubectl apply. As far as I know we should only be able to access the app via the master node IPs, so I am not sure why the workers are also serving traffic. I see this process on a worker node: `netstat -plant | grep 30006` shows `tcp6 0 0 :::30006 :::* LISTEN 1972/kube-proxy`. kubectl get nodes output:
These are the kube-proxy and NodePort features: a kube-proxy process runs on every node and makes the same NodePort reachable on all nodes. This is the documentation about NodePort: |
@cookeem Or if I set hostNetwork to true, I can access the same service on the specified port; if I shut down all 3 masters, will the pods still be running? Best Regards |
@kumarganesh2814 NodePort is supported by the kube-proxy component. If all masters are down, the docker containers keep running, but all NodePorts become unavailable and the network functionality is lost, so you cannot access any service from your host. |
Thanks man, appreciate your support. Best Regards |
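For illustration, a NodePort Service of the kind described above might look like the following sketch (the service name, labels and ports are hypothetical, not taken from this thread); kube-proxy then opens the same nodePort on every node, master or worker, and forwards traffic to the matching pods:

```yaml
# Hypothetical sample Service; adjust the name, selector and ports to your app.
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  type: NodePort
  selector:
    app: sample-app        # must match the labels on your pods
  ports:
  - port: 80               # cluster-internal service port
    targetPort: 8080       # container port
    nodePort: 30006        # opened by kube-proxy on every node
```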
Hi @cookeem, thanks for your solution. 1. After setting up the cluster in HA, how can the cluster be upgraded? Say this setup was done with Kubernetes 1.8.x and 1.9 was released yesterday; what would be a simple way to upgrade? 2. If one of the masters goes down, will there be any impact on the pods deployed on the nodes? 3. Can this setup be used in production? Regards, |
@cookeem I wrapped your solution in an ansible playbook (https://github.com/sv01a/ansible-kubeadm-ha-cluster) |
@kcao3 Do you mean converting an existing non-HA kubeadm cluster to an HA cluster? If so, I haven't tried that. |
@sv01a No. Once you have deployed an HA cluster using your ansible playbooks, how do you expand it, i.e. add one or more masters to your current HA cluster? Do you use the same certs on each master? |
With the current playbook implementation it's not possible, because the playbook will re-init the cluster on a second run. Honestly I didn't think about this case. But you can add a master by hand; simply repeat the master steps from the instructions. |
Hi man, sorry to keep bugging you on a closed thread, but I guess this will help others too. The new issue is that a properly set up master node, which was working absolutely fine, stopped working after a reboot. Services which were accessible via Master1 (the rebooted VM) are now not accessible. I tried to recreate the DNS and flannel pods again but it's still the same. The only message I see is:
But it doesn't give much info.
Does this happen in your environment too? Also, on another cluster I see that if the kubeadm/kubectl version is upgraded, the nodes become NotReady and the pods go into Unknown state. How do we address these 2 issues in an HA setup?
Best Regards |
Hi @cookeem, I found the issue: it was a change to an iptables rule that was introduced as a workaround during installation. Putting it here for reference so others can benefit in such a case. I compared the iptables rules on a VM which runs fine and one which fails to connect. Diff:
@kumarganesh2814 According to the official documentation, Kubernetes does not work well with firewalld on CentOS, so stopping and disabling firewalld will most likely fix your problem. |
I am having issues joining master02 and master03 after creating master01. What am I missing? I'm using 1.8. |
Never mind, I didn't see this detail. My masters are now joining the cluster. The roles still show as NONE, but I will keep working on it. |
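For example, on each CentOS 7 node (standard systemd commands; kube-proxy and flannel manage the needed iptables rules themselves):

```bash
# Stop and disable the CentOS 7 firewall on every node
systemctl stop firewalld
systemctl disable firewalld
```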
/kind feature
kubeadm currently does not support HA, so we cannot use kubeadm to set up a production Kubernetes cluster. But creating an HA cluster from scratch is too complicated, and when I google the keyword "kubeadm HA", only a few articles or rough drafts explain how to do it.
So I tried lots of ways to reform "kubeadm init", and finally I made a kubeadm cluster support HA. I hope this approach will help "kubeadm init" support creating an HA production cluster.
Detailed operational guidelines are here: https://github.com/cookeem/kubeadm-ha
Summary
Linux version: CentOS 7.3.1611
docker version: 1.12.6
kubeadm version: v1.6.4
kubelet version: v1.6.4
kubernetes version: v1.6.4
Hosts list
Critical steps
1. Deploy an independent etcd TLS cluster on all master nodes.
2. On k8s-master1: use `kubeadm init` to create a master that connects to the independent etcd TLS cluster.
3. Copy the k8s-master1 /etc/kubernetes directory to k8s-master2 and k8s-master3.
4. Use ca.key and ca.crt to re-create apiserver.key and apiserver.crt on every master node. Modify the apiserver.crt `X509v3 Subject Alternative Name` DNS names and IPs to the current hostname and IP address, and add the keepalived virtual IP address.
5. Edit admin.conf, controller-manager.conf and scheduler.conf on all master nodes, replacing `server` so it points to the current node's IP address.
6. Set up keepalived, and create a virtual IP that redirects to all master nodes.
7. Set up nginx as the load balancer for all masters' apiservers (see the nginx sketch below).
8. Update configmap/kube-proxy, replacing `server` so it points to the virtual IP, i.e. the apiserver load balancer.
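As an illustration of step 7, an nginx TCP load balancer for the apiservers could be configured roughly like this (the master IPs are placeholders, and the 8443 listen port is an assumption based on the port mentioned elsewhere in this thread):

```
# nginx.conf (stream block) - hypothetical master IPs; nginx listens on 8443 on each
# master and balances across the three apiservers on 6443.
stream {
    upstream apiserver {
        server 192.168.60.71:6443;
        server 192.168.60.72:6443;
        server 192.168.60.73:6443;
    }
    server {
        listen 8443;
        proxy_pass apiserver;
    }
}
```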
How to make kubeadm support HA
2. On k8s-master1: use `kubeadm init --config=/root/kubeadm-ha/kubeadm-init.yaml` to create a master node. kubeadm will create the etcd/nginx/keepalived pods and all certificates and *.conf files.
3. On k8s-master1: copy the /etc/kubernetes/pki directory to k8s-master2 and k8s-master3.
4. On k8s-master2 and k8s-master3: replace the kubeadm-init.yaml ha.ip setting with the current node's IP address.
5. On k8s-master2 and k8s-master3: use `kubeadm init --config=/root/kubeadm-ha/kubeadm-init.yaml` to create the remaining 2 master nodes. kubeadm will create the etcd/nginx/keepalived pods and all certificates and *.conf files, then k8s-master2 and k8s-master3 will join the HA cluster automatically. (A sketch of a possible kubeadm-init.yaml follows below.)
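For reference, a kubeadm-init.yaml along these lines might be used. This is only a sketch: the IP addresses are placeholders, the ha.ip setting is the author's custom extension from the patched kubeadm, and field names varied across early kubeadm versions, so treat the names below as assumptions rather than the exact schema:

```yaml
# Hypothetical sketch of /root/kubeadm-ha/kubeadm-init.yaml (kubeadm v1.6-era MasterConfiguration).
# All IP addresses are placeholders; SAN and HA field names are assumptions, not confirmed schema.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.6.4
networking:
  podSubnet: 10.244.0.0/16            # e.g. flannel's default pod network
etcd:
  endpoints:                          # the independent etcd TLS cluster from the critical steps
  - https://192.168.60.71:2379
  - https://192.168.60.72:2379
  - https://192.168.60.73:2379
  # plus the etcd CA/cert/key file paths for TLS
# The apiserver certificate must also carry every master hostname/IP plus the keepalived VIP
# (apiServerCertSANs in later kubeadm versions), and the patched kubeadm reads a custom
# ha.ip setting that should be set to the current node's IP on each master.
```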