
Fixing usage of clustered datastore name to be absolute datastore name #54438

Merged

Conversation

@pshahzeb commented Oct 23, 2017

What this PR does / why we need it:
This PR adds a step that converts a datastore name given as a folder or cluster path (for example, `dscl1/sharedVmfs-0`) into the absolute datastore name.
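
As a rough sketch of the conversion (not the exact diff in this PR), a govmomi-based helper could resolve a folder- or cluster-qualified path and return the bare datastore name; `absoluteDatastoreName` and the client/datacenter plumbing below are hypothetical stand-ins:

```
// Sketch only: resolve a folder/cluster-qualified path such as
// "dscl1/sharedVmfs-0" to the datastore object and return its bare
// name, e.g. "sharedVmfs-0".
package vsphere

import (
	"context"
	"fmt"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/find"
	"github.com/vmware/govmomi/object"
)

// absoluteDatastoreName is a hypothetical helper; the caller supplies a
// connected govmomi client and the datacenter to search in.
func absoluteDatastoreName(ctx context.Context, c *govmomi.Client, dc *object.Datacenter, dsPath string) (string, error) {
	finder := find.NewFinder(c.Client, false)
	finder.SetDatacenter(dc)

	// finder.Datastore resolves nested inventory paths, so a plain name
	// ("sharedVmfs-0") and a clustered path ("dscl1/sharedVmfs-0") both
	// resolve to the same datastore object here.
	ds, err := finder.Datastore(ctx, dsPath)
	if err != nil {
		return "", fmt.Errorf("datastore %q not found: %v", dsPath, err)
	}
	return ds.Name(), nil
}
```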

Which issue this PR fixes: fixes vmware#312

Special notes for your reviewer:
Testing: all the clustered datastore tests pass now.

root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# export CLUSTER_DATASTORE=dscl1/sharedVmfs-0
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# export VSPHERE_SPBM_POLICY_DS_CLUSTER=gold_cluster
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# go run hack/e2e.go --check-version-skew=false --v --test  --test_args='--ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build641208048/command-line-arguments/_obj/exe/e2e:
  -get
    	go get -u kubetest if old or not installed (default true)
  -old duration
    	Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/18 17:29:39 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/18 17:29:39 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/18 17:29:39 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/18 17:29:39 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/18 17:29:39 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore...
2017/10/18 17:29:39 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/18 17:29:39 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 286.302987ms
2017/10/18 17:29:39 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17675+2bfcf4a7d1d6d4-dirty", GitCommit:"2bfcf4a7d1d6d4f257fe7ee96dcb47484b8e3a5f", GitTreeState:"dirty", BuildDate:"2017-10-18T23:37:58Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17663+ba93892625a3e8", GitCommit:"ba93892625a3e8a246f3bc660d56a2635e7e7f9f", GitTreeState:"clean", BuildDate:"2017-10-18T20:57:49Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
2017/10/18 17:29:40 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 296.318449ms
2017/10/18 17:29:40 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore
Conformance test: not doing test setup.
Oct 18 17:29:41.634: INFO: Overriding default scale value of zero to 1
Oct 18 17:29:41.634: INFO: Overriding default milliseconds value of zero to 5000
I1018 17:29:41.829967   32645 e2e.go:383] Starting e2e run "98fbc41f-b464-11e7-8e40-0050569c27f6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508372981 - Will randomize all specs
Will run 3 of 714 specs

Oct 18 17:29:41.889: INFO: >>> kubeConfig: /tmp/kube171.json
Oct 18 17:29:41.894: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 18 17:29:41.928: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 18 17:29:42.057: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 18 17:29:42.057: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 18 17:29:42.063: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 18 17:29:42.063: INFO: Dumping network health container logs from all nodes...
Oct 18 17:29:42.076: INFO: Client version: v1.6.0-alpha.0.17675+2bfcf4a7d1d6d4-dirty
Oct 18 17:29:42.079: INFO: Server version: v1.6.0-alpha.0.17663+ba93892625a3e8
SSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  verify static provisioning on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:29:42.079: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify static provisioning on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
STEP: creating a test vsphere volume
W1018 17:29:43.087373   32645 datacenter.go:156] QueryVirtualDiskUuid failed for diskPath: "[sharedVmfs-0] kubevols/kube-dummyDisk.vmdk". err: ServerFaultCode: File [sharedVmfs-0] kubevols/kube-dummyDisk.vmdk was not found
STEP: Creating pod
STEP: Waiting for pod to be ready
STEP: Verifying volume is attached
STEP: Deleting pod
Oct 18 17:30:07.268: INFO: Deleting pod "vsphere-e2e-vtzbg" in namespace "e2e-tests-volume-provision-bchh4"
Oct 18 17:30:07.301: INFO: Wait up to 5m0s for pod "vsphere-e2e-vtzbg" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:30:49.429: INFO: Volume "[dscl1/sharedVmfs-0] kubevols/e2e-vmdk-e2e-tests-volume-provision-bchh4.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Deleting the vsphere volume
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:30:49.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-bchh4" for this suite.
Oct 18 17:30:58.203: INFO: namespace: e2e-tests-volume-provision-bchh4, resource: bindings, ignored listing per whitelist
Oct 18 17:30:58.225: INFO: namespace e2e-tests-volume-provision-bchh4 deletion completed in 8.444766463s

• [SLOW TEST:76.146 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify static provisioning on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  verify dynamic provision with spbm policy on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:30:58.231: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify dynamic provision with spbm policy on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
STEP: Creating Storage Class With storage policy params
STEP: Creating PVC using the Storage Class
STEP: Waiting for claim to be in bound phase
Oct 18 17:30:58.402: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-6dwrm to have phase Bound
Oct 18 17:30:58.410: INFO: PersistentVolumeClaim pvc-6dwrm found but phase is Pending instead of Bound.
Oct 18 17:31:00.416: INFO: PersistentVolumeClaim pvc-6dwrm found but phase is Pending instead of Bound.
Oct 18 17:31:02.430: INFO: PersistentVolumeClaim pvc-6dwrm found and phase=Bound (4.027818895s)
STEP: Creating pod to attach PV to the node
STEP: Verify the volume is accessible and available in the pod
Oct 18 17:31:26.666: INFO: Running '/root/shahzeb/k8s/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.160.141.5 --kubeconfig=/tmp/kube171.json exec pvc-tester-hgcj6 --namespace=e2e-tests-volume-provision-sfbcf -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 18 17:31:27.171: INFO: stderr: ""
Oct 18 17:31:27.171: INFO: stdout: ""
STEP: Deleting pod
Oct 18 17:31:27.171: INFO: Deleting pod "pvc-tester-hgcj6" in namespace "e2e-tests-volume-provision-sfbcf"
Oct 18 17:31:27.202: INFO: Wait up to 5m0s for pod "pvc-tester-hgcj6" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:32:11.324: INFO: Volume "[sharedVmfs-0] kubevols/kubernetes-dynamic-pvc-c6d8e174-b464-11e7-8b86-005056b7d4e9.vmdk" appears to have successfully detached from "kubernetes-node2".
Oct 18 17:32:11.324: INFO: Deleting PersistentVolumeClaim "pvc-6dwrm"
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:32:11.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-sfbcf" for this suite.
Oct 18 17:32:19.867: INFO: namespace: e2e-tests-volume-provision-sfbcf, resource: bindings, ignored listing per whitelist
Oct 18 17:32:19.876: INFO: namespace e2e-tests-volume-provision-sfbcf deletion completed in 8.460287539s

• [SLOW TEST:81.645 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify dynamic provision with spbm policy on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  verify dynamic provision with default parameter on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:32:19.878: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify dynamic provision with default parameter on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
STEP: Creating Storage Class With storage policy params
STEP: Creating PVC using the Storage Class
STEP: Waiting for claim to be in bound phase
Oct 18 17:32:20.024: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zrcmm to have phase Bound
Oct 18 17:32:20.034: INFO: PersistentVolumeClaim pvc-zrcmm found but phase is Pending instead of Bound.
Oct 18 17:32:22.040: INFO: PersistentVolumeClaim pvc-zrcmm found and phase=Bound (2.016093024s)
STEP: Creating pod to attach PV to the node
STEP: Verify the volume is accessible and available in the pod
Oct 18 17:32:30.266: INFO: Running '/root/shahzeb/k8s/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.160.141.5 --kubeconfig=/tmp/kube171.json exec pvc-tester-2hns4 --namespace=e2e-tests-volume-provision-llvbq -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 18 17:32:30.774: INFO: stderr: ""
Oct 18 17:32:30.774: INFO: stdout: ""
STEP: Deleting pod
Oct 18 17:32:30.774: INFO: Deleting pod "pvc-tester-2hns4" in namespace "e2e-tests-volume-provision-llvbq"
Oct 18 17:32:30.806: INFO: Wait up to 5m0s for pod "pvc-tester-2hns4" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:33:20.972: INFO: Volume "[dscl1/sharedVmfs-0] kubevols/kubernetes-dynamic-pvc-f77fba36-b464-11e7-8b86-005056b7d4e9.vmdk" appears to have successfully detached from "kubernetes-node4".
Oct 18 17:33:20.972: INFO: Deleting PersistentVolumeClaim "pvc-zrcmm"
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:33:21.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-llvbq" for this suite.
Oct 18 17:33:29.490: INFO: namespace: e2e-tests-volume-provision-llvbq, resource: bindings, ignored listing per whitelist
Oct 18 17:33:29.554: INFO: namespace e2e-tests-volume-provision-llvbq deletion completed in 8.444585266s

• [SLOW TEST:69.676 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify dynamic provision with default parameter on clustered datastore
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 18 17:33:29.559: INFO: Running AfterSuite actions on all node
Oct 18 17:33:29.559: INFO: Running AfterSuite actions on node 1

Ran 3 of 714 Specs in 227.670 seconds
SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 711 Skipped PASS

Ginkgo ran 1 suite in 3m48.395491704s
Test Suite Passed
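
Aside: the `flag provided but not defined: -check-version-skew` notice at the top of the log is explained by the shim itself; kubetest flags belong after a `--` separator. Assuming the same focus regex, an invocation like the following should suppress the notice:

```
go run hack/e2e.go -- --check-version-skew=false --v --test \
  --test_args='--ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore'
```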

Release note:

Fix clustered datastore name to be absolute.

@k8s-ci-robot added the do-not-merge/release-note-label-needed, size/XS, cncf-cla: yes, and needs-ok-to-test labels Oct 23, 2017
@k8s-ci-robot
Contributor

Hi @pshahzeb. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@pshahzeb
Author

/release-note-none

@k8s-ci-robot added the release-note-none label and removed the do-not-merge/release-note-label-needed label Oct 23, 2017
@pshahzeb
Author

/assign @abrarshivani

@abrarshivani
Contributor

/lgtm

@k8s-ci-robot added the lgtm label Oct 23, 2017
@k8s-github-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abrarshivani, pshahzeb

Associated issue: 312

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-github-robot added the approved label Oct 23, 2017
@pshahzeb
Author

/test pull-kubernetes-unit

@k8s-ci-robot
Contributor

@pshahzeb: you can't request testing unless you are a kubernetes member.

In response to this:

/test pull-kubernetes-unit

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@divyenpatel
Member

/test pull-kubernetes-unit

@k8s-ci-robot
Contributor

@divyenpatel: you can't request testing unless you are a kubernetes member.

In response to this:

/test pull-kubernetes-unit

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@krousey
Contributor

krousey commented Oct 24, 2017

/test pull-kubernetes-unit

@k8s-github-robot

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-github-robot merged commit 33e4f17 into kubernetes:master Oct 25, 2017
@k8s-ci-robot added the release-note label and removed the release-note-none label Oct 31, 2017
k8s-github-robot pushed a commit that referenced this pull request Nov 2, 2017
…-upstream-release-1.8

Automatic merge from submit-queue.

Automated cherry pick of #54438 upstream release-1.8

Cherry pick of #54438 on release-1.8
#54438: Fixing usage of clustered datastore name to be absolute datastore name

Without this fix, a clustered datastore or a datastore within a storage folder cannot be used with Kubernetes release 1.8.

@pshahzeb 

Release note:
```
Fix clustered datastore name to be absolute.
```
Labels: approved, cncf-cla: yes, lgtm, needs-ok-to-test, release-note, size/XS
Successfully merging this pull request may close these issues.

Disk is not getting detached when pv is provisioned on clustered datastore