
ingress-contour Support #143

Closed

pgold30 opened this issue Mar 13, 2024 · 10 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


pgold30 commented Mar 13, 2024

`contour projectcontour.io/ingress/ingress-contour-contour`

What would you like to be added:
We would like to migrate from the Contour-Envoy provider; it is not listed among the existing supported providers.

Why this is needed:
This will help us migrate a lot of our existing ingresses.

We are getting this error:

```
$ ...go/bin/ingress2gateway print -n backend
Error: failed to read istio resources from the cluster: failed to read resources from cluster: failed to read gateways: failed to list istio gateways: failed to get API group resources: unable to retrieve the complete list of server APIs: networking.istio.io/v1beta1: the server could not find the requested resource
Usage:
  ingress2gateway print [flags]

Flags:
  -A, --all-namespaces      If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.
  -h, --help                help for print
      --input_file string   Path to the manifest file. When set, the tool will read ingresses from the file instead of reading from the cluster. Supported files are yaml and json
  -n, --namespace string    If present, the namespace scope for this CLI request
  -o, --output string       Output format. One of: (json, yaml) (default "yaml")
      --providers strings   If present, the tool will try to convert only resources related to the specified providers, supported values are [ingress-nginx istio kong apisix] (default [kong,apisix,ingress-nginx,istio])
```

@pgold30 pgold30 added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 13, 2024
@LiorLieberman (Member)

The error you are getting is related to #138

In the meantime, you can work around it by specifying the provider(s) you need with the `--providers` flag.
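As a sketch of that workaround, the `--providers` flag listed in the usage output above can restrict conversion to providers whose CRDs actually exist in the cluster, so the istio client is never invoked. This assumes the `backend` namespace from the original command:

```shell
# Only query ingress-nginx resources, skipping the istio lookup that
# fails when the networking.istio.io CRDs are not installed.
ingress2gateway print -n backend --providers ingress-nginx

# Multiple providers can be passed as a comma-separated list.
ingress2gateway print -n backend --providers ingress-nginx,kong
```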

/cc @sunjayBhatia for contour support request


pgold30 commented Mar 20, 2024

Hi @LiorLieberman @sunjayBhatia, Contour is not in the list of supported providers, so we cannot select it. This issue is about adding that provider to the supported ones.


LiorLieberman commented Mar 27, 2024

To add Contour to the supported providers, someone needs to implement the provider-specific logic. That's the reason I cc'd @sunjayBhatia.

Regardless of Contour support, I explained above why you are getting the error.

@sunjayBhatia (Member)

Yep, I haven't had a lot of bandwidth to implement contour support in ingress2gateway, but if there is a willing contributor I would definitely help shepherd the changes!

@ipsum-0320

> Yep, I haven't had a lot of bandwidth to implement contour support in ingress2gateway, but if there is a willing contributor I would definitely help shepherd the changes!

@sunjayBhatia @LiorLieberman Hi, I want to give it a try. Can you tell me how to get started?
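There is no implementation guidance in this thread, so the following is only a hypothetical sketch of the shape a provider integration might take; the interface name, methods, and types here are assumptions for illustration and are not the actual ingress2gateway API. The real starting point would be the existing providers in the repository.

```go
package main

import "fmt"

// Hypothetical sketch only: names and signatures are assumptions,
// not the real ingress2gateway provider API.

// GatewayResources stands in for the converted Gateway API objects.
type GatewayResources struct {
	Gateways   []string
	HTTPRoutes []string
}

// Provider is what each ingress-controller integration would implement.
type Provider interface {
	// ReadResources loads the controller-specific objects (for Contour,
	// the projectcontour.io HTTPProxy CRDs) from the cluster or a file.
	ReadResources() error
	// ToGatewayResources converts what was read into Gateway API objects.
	ToGatewayResources() (GatewayResources, error)
}

// contourProvider is a stub showing where Contour-specific logic would go.
type contourProvider struct {
	httpProxies []string // placeholder for HTTPProxy object names
}

func (c *contourProvider) ReadResources() error {
	// A real implementation would list HTTPProxy resources via a client.
	c.httpProxies = []string{"backend/example-proxy"}
	return nil
}

func (c *contourProvider) ToGatewayResources() (GatewayResources, error) {
	res := GatewayResources{}
	for _, p := range c.httpProxies {
		// Each HTTPProxy would map to one or more HTTPRoutes.
		res.HTTPRoutes = append(res.HTTPRoutes, p+"-route")
	}
	return res, nil
}

func main() {
	var p Provider = &contourProvider{}
	if err := p.ReadResources(); err != nil {
		panic(err)
	}
	out, _ := p.ToGatewayResources()
	fmt.Println(out.HTTPRoutes[0])
}
```

The idea is that the CLI would register the new provider under a `contour` key so it appears in the `--providers` list alongside the existing ones.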

@ipsum-0320

/assign

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 8, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 7, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix / can't repro / duplicate / stale) Oct 7, 2024
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
>   • After 90d of inactivity, lifecycle/stale is applied
>   • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
>   • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
>
> You can:
>
>   • Reopen this issue with /reopen
>   • Mark this issue as fresh with /remove-lifecycle rotten
>   • Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.


6 participants