
Only parse ClusterCIDR, ServiceCIDR if AllocateNodeCIDRs #54934

Merged
1 commit merged into kubernetes:master on Nov 3, 2017

Conversation

akosiaris
Contributor

What this PR does / why we need it:

Avoid unnecessary spam in the kube-controller-manager log if --cluster-cidr is not specified and --allocate-node-cidrs is false. Add a clarification about this to the kube-controller-manager help text.

Release note

Avoid unnecessary spam in kube-controller-manager log if --cluster-cidr is not specified and --allocate-node-cidrs is false.
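
For context (an illustrative aside, not part of the PR): the messages being silenced come from handing an empty string to Go's CIDR parser. A minimal standalone reproduction of just that underlying error:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// An unset --cluster-cidr reaches the parser as an empty string,
	// and parsing an empty string is what produces the log noise.
	_, _, err := net.ParseCIDR("")
	fmt.Println(err) // invalid CIDR address:
}
```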

@k8s-ci-robot added the release-note, size/M, and needs-ok-to-test labels on Nov 1, 2017
@k8s-ci-robot
Contributor

Hi @akosiaris. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the cncf-cla: yes label on Nov 1, 2017
@tengqm
Contributor

I don't think this is a valid change; maybe I'm misunderstanding something.
AllocateNodeCIDRs is supposed to be used with cloud providers. It can be set to false when deploying a cluster on a fixed set of nodes or on bare metal.
However, ServiceClusterCIDR serves a different purpose: it is basically a separate pool of IPs for services. Most of the time it has to be set for kube-proxy to function properly.

@akosiaris
Contributor Author

AllocateNodeCIDRs is supposed to be used with cloud providers. It can be set to false when deploying a cluster on a fixed set of nodes or on bare metal.

That's exactly our case: a bare-metal cluster. In our case AllocateNodeCIDRs is false (which is the default anyway), and on every kube-controller-manager startup we get the following:

core.go:76] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
core.go:80] Unsuccessful parsing of service CIDR : invalid CIDR address:

That's 1.7.4. In 1.8.0 that has been addressed via #48797. That being said, given the code in https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-controller-manager/app/core.go#L131 and https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/node/node_controller.go#L256, I'd argue that if-guarding ClusterCIDR with AllocateNodeCIDRs is technically correct. Feel free to overrule, of course; given that it adds a tad more complexity, I am fine with amending my PR and removing those if-guards.

However, ServiceClusterCIDR serves a different purpose: it is basically a separate pool of IPs for services. Most of the time it has to be set for kube-proxy to function properly.

As far as ServiceCIDR goes, I only added the if-guard, and most importantly the doc change, because of https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/node/node_controller.go#L340, which guards the only instantiation I can find (correct me if I am wrong) of the IPAM controller, which from what I gather (again, correct me as appropriate) is the only user of ServiceCIDR. I'll admit I don't yet see how kube-proxy is related here, but I've started getting more familiar with the code, so any pointers would be greatly appreciated.
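
To make the guard being discussed concrete, here is a minimal standalone sketch (illustrative struct and function names only, not the PR's actual diff): parsing of ClusterCIDR and ServiceCIDR is skipped entirely unless AllocateNodeCIDRs is true, so an unset --cluster-cidr no longer triggers the warnings quoted above.

```go
package main

import (
	"log"
	"net"
)

// Illustrative stand-ins for the kube-controller-manager settings discussed
// here: --allocate-node-cidrs, --cluster-cidr and --service-cluster-ip-range.
type options struct {
	AllocateNodeCIDRs bool
	ClusterCIDR       string
	ServiceCIDR       string
}

// parseCIDRs only attempts to parse the CIDR settings when
// AllocateNodeCIDRs is true, which is the kind of guard proposed in this PR.
func parseCIDRs(o options) (clusterCIDR, serviceCIDR *net.IPNet) {
	if !o.AllocateNodeCIDRs {
		// Nothing downstream needs these values, so skip parsing and
		// avoid logging "invalid CIDR address:" for empty strings.
		return nil, nil
	}
	var err error
	if _, clusterCIDR, err = net.ParseCIDR(o.ClusterCIDR); err != nil {
		log.Printf("Unsuccessful parsing of cluster CIDR %v: %v", o.ClusterCIDR, err)
	}
	if _, serviceCIDR, err = net.ParseCIDR(o.ServiceCIDR); err != nil {
		log.Printf("Unsuccessful parsing of service CIDR %v: %v", o.ServiceCIDR, err)
	}
	return clusterCIDR, serviceCIDR
}

func main() {
	// With the defaults (AllocateNodeCIDRs=false, empty CIDRs) nothing is
	// parsed and nothing is logged at startup.
	parseCIDRs(options{})
}
```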

@phsiao
Contributor

phsiao commented Nov 2, 2017

@akosiaris here is a starting point for clusterIP. It definitely should not be tied to --allocate-node-cidrs, and I believe it should be set to a default value if not specified.

We only set that flag on the apiserver, not on the controller-manager, and I don't see the error regarding the service CIDR in our controller-manager log. I think your cluster might not be configured properly.

@akosiaris
Contributor Author

@phsiao I am sorry, but I do not understand what clusterIP (that is, the field in the Service resource's spec, i.e. https://kubernetes.io/docs/api-reference/v1.8/#servicespec-v1-core) has to do with what we are discussing here. This PR is about ClusterCIDR and ServiceCIDR, which are the internal code variable names that store the kube-controller-manager parameters --cluster-cidr and --service-cluster-ip-range. This PR has nothing to do with the apiserver.

@akosiaris
Contributor Author

/assign @thockin

@dims
Member

dims commented Nov 2, 2017

/ok-to-test

@k8s-ci-robot removed the needs-ok-to-test label on Nov 2, 2017
@akosiaris
Contributor Author

Fixed some of my silly mistakes; still looking into the failing k8s.io/kubernetes/pkg/api/testing tests.

ClusterCIDR and ServiceCIDR are settings that are only used if at least
AllocateNodeCIDRs is set; the route controller additionally requires
ConfigureCloudRoutes to be true. Since AllocateNodeCIDRs is false by
default, guard the parsing of these settings so that the logs are not
spammed unnecessarily. Amend the kube-controller-manager documentation
for the two settings to point out that they also require
AllocateNodeCIDRs to be true.
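
For illustration, a sketch of what the documentation side of such a commit might look like, using pflag as kube-controller-manager does for flag registration; the struct, flag set name, and exact help wording below are assumptions, not the PR's actual text:

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

// Hypothetical subset of the controller-manager options struct; the field
// names mirror the ClusterCIDR / ServiceCIDR settings discussed in this PR.
type cidrOptions struct {
	ClusterCIDR string
	ServiceCIDR string
}

func main() {
	o := cidrOptions{}
	fs := pflag.NewFlagSet("kube-controller-manager", pflag.ExitOnError)

	// The amended help text spells out the dependency on --allocate-node-cidrs.
	fs.StringVar(&o.ClusterCIDR, "cluster-cidr", o.ClusterCIDR,
		"CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true")
	fs.StringVar(&o.ServiceCIDR, "service-cluster-ip-range", o.ServiceCIDR,
		"CIDR Range for Services in cluster. Requires --allocate-node-cidrs to be true")

	// Print the resulting usage strings, i.e. what --help would show.
	fmt.Println(fs.FlagUsages())
}
```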
@akosiaris
Contributor Author

Fixed that as well; all tests now pass.

@akosiaris
Contributor Author

/approve no-issue

@thockin
Member

thockin commented Nov 3, 2017

/lgtm
/approve no-issue

@k8s-ci-robot added the lgtm label on Nov 3, 2017
@k8s-github-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: akosiaris, thockin

Associated issue requirement bypassed by: thockin

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-github-robot added the approved label on Nov 3, 2017
@k8s-github-robot

Automatic merge from submit-queue (batch tested with PRs 54906, 54120, 54934, 54915, 54848). If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-github-robot merged commit 9ca2bda into kubernetes:master on Nov 3, 2017
@akosiaris
Contributor Author

Thanks!
