
Misleading kubectl error: the server doesn't have a resource type "po" #57198

Closed
luksa opened this issue Dec 14, 2017 · 16 comments
Labels
kind/bug · lifecycle/rotten · sig/cli · sig/release

Comments

@luksa (Contributor) commented Dec 14, 2017

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
When using kubectl 1.9.0-beta.1 or later, if you run kubectl get po while the API server is unreachable, you get the following misleading error: the server doesn't have a resource type "po".

What you expected to happen:
In versions 1.9.0-alpha.3 and earlier, you get the proper error: The connection to the server 127.0.0.1:1 was refused.

How to reproduce it (as minimally and precisely as possible):

$ curl -sL https://storage.googleapis.com/kubernetes-release/release/v1.9.0-beta.1/bin/linux/amd64/kubectl -o /tmp/kubectl \
&& chmod +x /tmp/kubectl \
&& /tmp/kubectl -s http://127.0.0.1:1 get po 
the server doesn't have a resource type "po"

If you try the same command with kubectl 1.9.0-alpha.3, you get the correct error message:

$ curl -sL https://storage.googleapis.com/kubernetes-release/release/v1.9.0-alpha.3/bin/linux/amd64/kubectl -o /tmp/kubectl \
&& chmod +x /tmp/kubectl \
&& /tmp/kubectl -s http://127.0.0.1:1 get po 
The connection to the server 127.0.0.1:1 was refused - did you specify the right host or port?

You also get the correct error message in kubectl 1.9.0-beta.1 if you use the full resource name instead of the abbreviation:

$ curl -sL https://storage.googleapis.com/kubernetes-release/release/v1.9.0-beta.1/bin/linux/amd64/kubectl -o /tmp/kubectl \
&& chmod +x /tmp/kubectl \
&& /tmp/kubectl -s http://127.0.0.1:1 get pod
The connection to the server 127.0.0.1:1 was refused - did you specify the right host or port?

Anything else we need to know?:
This is a regression introduced between 1.9.0-alpha.3 and 1.9.0-beta.1.

/sig cli

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. sig/cli Categorizes an issue or PR as relevant to SIG CLI. labels Dec 14, 2017
@luksa luksa changed the title Misleading kubectl error: Misleading kubectl error: the server doesn't have a resource type "po" Dec 14, 2017
@luksa (Contributor, Author) commented Dec 14, 2017

/sig release

#57159

@k8s-ci-robot k8s-ci-robot added the sig/release Categorizes an issue or PR as relevant to SIG Release. label Dec 14, 2017
@niuzhenguo (Member)

I can reproduce this when the cached discovery data has expired and the API server is unreachable.

@niuzhenguo (Member)

This happens because we changed the code to fall back to the hardcoded types when discovery returns an error: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/util/factory_object_mapping.go#L93-L94
I can open a PR that checks for a "connection refused" error and skips the fallback in that case.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 10, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 10, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@Ghazgkull (Contributor)

This is still an issue in kubectl v1.11.2.

@Ghazgkull (Contributor)

/reopen

@k8s-ci-robot (Contributor)

@Ghazgkull: you can't re-open an issue/PR unless you authored it or you are assigned to it.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@michidk commented Oct 18, 2018

Still an issue (v1.12.1), and very confusing!

@luksa (Contributor, Author) commented Oct 18, 2018

/reopen

@k8s-ci-robot (Contributor)

@luksa: Reopening this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Oct 18, 2018
@luksa (Contributor, Author) commented Oct 18, 2018

Hmm, @michidk are you sure? I've just tried it with v1.12.1 and can't reproduce it anymore. Can you provide steps to reproduce the error?

$ ./kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 127.0.0.1:8443 was refused - did you specify the right host or port?
$ ./kubectl get po 
The connection to the server 127.0.0.1:8443 was refused - did you specify the right host or port?

@michidk commented Oct 18, 2018

Not sure if this is a different issue, but I tried it with kubectl get pod.
After wasting time debugging (I thought kubectl was connected to the cluster but the resource simply didn't exist), I found this thread, looked into things that could prevent kubectl from connecting to the cluster, and then completely reconfigured kubectl.
Sadly, I can't reproduce it right now, because I don't know what was wrong before.

@luksa (Contributor, Author) commented Oct 18, 2018

OK, then I'll re-close this.

@luksa luksa closed this as completed Oct 18, 2018
@cbrunnkvist

I can reproduce this, and yes, the error is/was quite misleading (especially since it happened not on my machine but in a CI environment):

kubectl v1.13.5:

cbrunnkvist@hostname:~$ KUBECONFIG=~/.kube/known-problematic.config kubectl.old version
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:26:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
cbrunnkvist@hostname:~$ KUBECONFIG=~/.kube/known-problematic.config kubectl.old get pods
error: the server doesn't have a resource type "pods"

kubectl v1.16.6-beta.0:

cbrunnkvist@hostname:~$ KUBECONFIG=~/.kube/known-problematic.config kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
cbrunnkvist@hostname:~$ KUBECONFIG=~/.kube/known-problematic.config kubectl get pods
error: You must be logged in to the server (Unauthorized)
