
Some errors on testing helm charts made for helm 2 #53

Closed
gcavalcante8808 opened this issue Aug 28, 2020 · 1 comment · Fixed by #54

gcavalcante8808 commented Aug 28, 2020

Hi, I have a Helm chart that was created for Helm 2, and its unit tests were working with the lrills/helm-unittest plugin.

But when I try this version (installed through the Helm 2 `helm plugin install https://github.com/quintush/helm-unittest` command), some template-related errors occur.

Helm Version: 2.16.10

The tests:

suite: test autoscaling
templates:
  - horizontalpodautoscaler.yaml

tests:
  - it: should use GLOBAL scaling config when release autoscaling AND Global autoscaling are enabled
    set:
      infra:
        autoScaling:
          enabled: true
          minReplicas: 100
          maxReplicas: 500
      releases:
        - name: default
          environment: nimbus
          infra:
            autoScaling:
              enabled: true
              type: "hpa"
    asserts:
      - isKind:
          of: HorizontalPodAutoscaler
      - hasDocuments:
          count: 1
      - equal:
          path: spec.minReplicas
          value: 100
      - equal:
          path: spec.maxReplicas
          value: 500

  - it: should use release hpa config when Global autoscaling is disabled but release scaling is enabled.
    set:
      infra:
        autoScaling:
          enabled: false
          minReplicas: 5000
          maxReplicas: 7000
      releases:
        - name: default
          environment: nimbus
          infra:
            autoScaling:
              enabled: true
              minReplicas: 2
              maxReplicas: 2
    asserts:
      - isKind:
          of: HorizontalPodAutoscaler
      - hasDocuments:
          count: 1
      - equal:
          path: spec.minReplicas
          value: 2
      - equal:
          path: spec.maxReplicas
          value: 2

  - it: shouldn't use any autoscaling config when release autoscaling is disabled
    set:
      infra:
        autoScaling:
          enabled: true
          minReplicas: 5000
          maxReplicas: 7000
      releases:
        - name: default
          environment: nimbus
          infra:
            autoScaling:
              enabled: false
              minReplicas: 2
              maxReplicas: 2
    asserts:
      - hasDocuments:
          count: 0

Errors:

 	- should use GLOBAL scaling config when release autoscaling AND Global autoscaling are enabled
		- asserts[0] `isKind` fail
			Error:
				assertion.template must be given if testsuite.templates is empty
		- asserts[1] `hasDocuments` fail
			Error:
				assertion.template must be given if testsuite.templates is empty
		- asserts[2] `equal` fail
			Error:
				assertion.template must be given if testsuite.templates is empty
		- asserts[3] `equal` fail
			Error:
				assertion.template must be given if testsuite.templates is empty

Can you provide some way to debug these problems?

Thanks for the awesome work!

@quintush (Owner)

Hello @gcavalcante8808,

The error message should not occur, since the test suite does have testsuite.templates filled in.
It seems that the plugin cannot find the horizontalpodautoscaler.yaml file in the chart's templates folder (it is looking for the file basic/templates/horizontalpodautoscaler.yaml).

I will fix this issue as soon as I have time.

Greetings,
@quintush
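
Until a fix lands, one possible workaround suggested by the error message itself ("assertion.template must be given if testsuite.templates is empty") is to name the template explicitly on each assertion via the assertion-level `template` field, so resolution no longer depends on the suite-level `templates` list. A minimal sketch of the first test rewritten this way (the field names follow the helm-unittest assertion schema; whether this sidesteps the path-lookup bug is an assumption, not confirmed by the maintainer):

```yaml
suite: test autoscaling
templates:
  - horizontalpodautoscaler.yaml

tests:
  - it: should use GLOBAL scaling config when release autoscaling AND Global autoscaling are enabled
    asserts:
      # `template` here is a sibling key of the assertion type,
      # pinning each assertion to a specific rendered template.
      - isKind:
          of: HorizontalPodAutoscaler
        template: horizontalpodautoscaler.yaml
      - hasDocuments:
          count: 1
        template: horizontalpodautoscaler.yaml
```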
