k8s HA cluster setup #7
@kcao3
Borrowing this thread... @cookeem, two questions for the author: 1. The etcd cluster here does not use certificates? etcd can run without certificates anyway, right? Without certificates I can bring up the masters and create applications normally. 2. The architecture diagram is great 💯, but one doubt: it uses both keepalived and nginx. keepalived checks the apiserver, so the high-availability part is fine, but what does the nginx proxy contribute as a load balancer? From the diagram, nginx sits below keepalived, so keepalived and nginx both monitor the apiserver? Wouldn't it be more appropriate to use haproxy as a layer-4 proxy and load balancer checking the apiserver, with keepalived checking haproxy? And if nginx does the load balancing, then keepalived should be checking nginx. Just some thoughts, hoping for a reply!
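For reference, the haproxy-under-keepalived arrangement proposed here can be sketched roughly as below. These fragments are illustrative only; every address, interface name, and port is a placeholder, not taken from the project:

```
# /etc/haproxy/haproxy.cfg -- layer-4 proxy and load balancer in front of the apiservers
frontend k8s-api
    bind *:8443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 192.168.0.1:6443 check
    server master2 192.168.0.2:6443 check
    server master3 192.168.0.3:6443 check

# /etc/keepalived/keepalived.conf -- keepalived tracks haproxy, not the apiservers
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.0.100
    }
    track_script {
        chk_haproxy
    }
}
```

With this split, a failed apiserver check makes haproxy stop routing to that master, while a failure of haproxy itself makes keepalived move the VIP to another node.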
I gave it a try, and the author's document is really good 👍 Environment: kube version: 1.8.3. I did not disable node verification; the two Node verification parameters were left enabled by default:
At this point, the other two masters can join the cluster, as follows:
The cluster was up and running and looked fine, and the dashboard opened normally. Unfortunately, a fatal problem still appeared: the controller-manager and scheduler leader-election requests to the apiserver were refused. The logs are as follows:
Not sure whether this is related to the Node policy... sad
Update: after some exploration, it finally works!!! I manually generated a separate set of certificates for each of the other two nodes. I used openssl directly; many guides online use cfssl, and upstream seems to use easyrsa.
The remaining piece is load balancing and high availability, which I plan to do with lvs + keepalived. Many thanks for the author's document!!!
@KeithTt So you kept all the default admission-control policies and solved the node intercommunication problem by creating your own certificates?
Yes, I kept the Node policy and generated a separate set of certificates for each of the two newly added nodes. You could try generating a separate certificate set for each node, keeping the CA and the sa public/private keys unchanged; the thing to watch out for is the SAN of the apiserver certificate. If you are not familiar with openssl, this may take some careful study. Looking forward to your updated v1.8 document.
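The per-node certificate approach described above can be sketched with openssl roughly as follows. This is a hypothetical dry run, not the commenter's actual script: in a real cluster you would reuse the existing ca.crt/ca.key from /etc/kubernetes/pki rather than create a throwaway CA, and the SAN list (all IPs below are placeholders) must cover the node's own address, the VIP, and the in-cluster service IP.

```shell
# (standalone dry run only: create a throwaway CA; in a real cluster, reuse the cluster CA)
openssl req -x509 -new -nodes -newkey rsa:2048 -keyout ca.key -out ca.crt \
    -days 3650 -subj "/CN=kubernetes-ca"

# SAN config for this node's apiserver certificate (placeholder addresses)
cat > apiserver-san.cnf <<'EOF'
[req]
distinguished_name = dn
req_extensions = v3_req
prompt = no
[dn]
CN = kube-apiserver
[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 192.168.0.2
IP.3 = 192.168.0.100
EOF

# key and CSR for this master
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -out apiserver.csr -config apiserver-san.cnf

# sign with the (unchanged) CA, copying the SAN extension into the cert
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out apiserver.crt -days 3650 \
    -extensions v3_req -extfile apiserver-san.cnf

# sanity check: the SANs really made it into the signed certificate
openssl x509 -in apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
```

A missing entry in the SAN list is exactly what produces TLS failures when other components reach the apiserver through the VIP instead of the node's own IP.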
@KeithTt
This is exactly the problem left over from when I did the v1.7 HA setup. Can you describe your steps in detail?
Yes, the default policies are unchanged. Apart from the certificate communication issue, the concrete steps are the same as in your document. In other words, on top of the existing document, generating a set of certificates for each node is all it takes.
While joining a worker node using the virtual IP address, I get the following error.
Is the 104.236.222.113:8443 port active already?
Yes, the port is active. Is there any extra configuration needed for the virtual IP node?
@vishalcloudyuga Check your apiserver logs, and make sure your certificates were created correctly.
@KeithTt Could you explain how you did this? I know how to create the apiserver certificate by hand with openssl, but I really don't know how to create the ones for controller-manager and scheduler. Is there a concrete guide? https://kubernetes.io/docs/concepts/cluster-administration/certificates/#distributing-self-signed-ca-certificate
@cookeem I hadn't realized the official docs were that detailed... only just saw it. Impressive. Once you know how to make the apiserver certificate, the controller-manager and scheduler ones are even simpler: the only differences are that those two use client verification and do not need a SAN configured, so a small change to the config file is enough. This is what I did:
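A minimal sketch of that client-certificate step, under the assumption that the CNs follow the usual Kubernetes component user names (system:kube-controller-manager and system:kube-scheduler); again a throwaway CA stands in for the real cluster CA, which you would keep unchanged:

```shell
# (standalone dry run only: throwaway CA; in a real cluster, reuse the cluster CA)
openssl req -x509 -new -nodes -newkey rsa:2048 -keyout ca.key -out ca.crt \
    -days 3650 -subj "/CN=kubernetes-ca"

# client certs need no SAN; the CN is what the apiserver maps to a user name
for comp in kube-controller-manager kube-scheduler; do
    openssl genrsa -out ${comp}.key 2048
    openssl req -new -key ${comp}.key -out ${comp}.csr -subj "/CN=system:${comp}"
    openssl x509 -req -in ${comp}.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -out ${comp}.crt -days 3650
done
```

The resulting key/cert pairs would then be embedded into controller-manager.conf and scheduler.conf (for example with `kubectl config set-credentials ... --embed-certs=true`) so each component authenticates to the apiserver as its own user.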
@KeithTt |
@cookeem The config file is just the default one with small modifications. What you refer to in the manifests is:
@cookeem Sorry, I misunderstood the setup. I was trying this setup in a cloud environment and had created a dedicated node for the virtual IP. I will redo it and let you know.
@KeithTt |
To tidy things up, here is my configuration script; the places to set and substitute are visible in the script:
@KeithTt Impressive, you solved the admission policy problem I had never managed to solve. Nicely done. I'll test it tomorrow.
As for the openssl config file, that is hard to explain in a few words; without prior hands-on exposure you may need to pick up a book on it. This mutual authentication splits into server-side verification and client-side verification.
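As an illustration of that split, the openssl extension sections for the two kinds of certificate differ mainly in extendedKeyUsage; a hypothetical sketch (section names are placeholders):

```
# server certificate (kube-apiserver): verified by clients, needs SANs
[ v3_server ]
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

# client certificate (controller-manager / scheduler): verified by the apiserver, no SAN needed
[ v3_client ]
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
```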
Mm, looking forward to the updated document ~ 👍
@KeithTt 😁 |
@kcao3 I only regenerated the certificates for controller-manager and scheduler, without changing the CA certificate, and updated controller-manager.conf and scheduler.conf, but I still get the connection refused error.
Unfortunately, even though the certificates were generated, the other master nodes still fail to connect. How did you set up your cnf configuration?
kubeadm v1.9 now supports HA officially, so this issue can be closed. A new document about kubeadm v1.9 HA will be published soon.
@cookeem
I have a few questions about creating a k8s HA cluster using kubeadm.
Kubernetes uses the NodeRestriction admission controller, which prevents the other masters from joining the cluster.
As a workaround, you reset the kube-apiserver's admission-control settings to the v1.6.x recommended config.
So, did you figure out how to make it work with the NodeRestriction admission controller enabled?
It appears to me that your solution works. I also noticed there has been some work to make kubeadm HA available in 1.9: Kubeadm HA (high availability) checklist kubernetes/kubeadm#261
Do you know exactly how your HA setup differs from the one being worked on there?
I also noticed there is another approach for creating a k8s HA cluster:
https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ha-mode.md
Just curious how you would compare that approach with yours. Any thoughts?
Thank you for your time.