
generate A (instead of CNAME) records for <IP>.nip.io targets #2049

Closed

Conversation

@multi-io (Contributor) commented Apr 14, 2021

Signed-off-by: Olaf Klischat [email protected]

Description

This implements a shortcut that transforms <IP>.nip.io targets (on LB services or ingresses) into an A record for <IP> instead of a CNAME record for <IP>.nip.io (the public nip.io resolver simply resolves any <IP>.nip.io back to <IP>). We needed this because we run the OpenStack cloud controller, which now includes the patch kubernetes/cloud-provider-openstack#1449; that patch adds a cloud-config option that rewrites every service and ingress .status.loadBalancer.ingress[].hostname field containing an IP address to <IP>.nip.io. The rewrite is needed to circumvent a Kubernetes-internal optimization that would otherwise make <IP> inaccessible from within the cluster when the PROXY protocol is enabled on the LB service (via an annotation).
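
As a rough illustration of the shortcut (a minimal sketch with hypothetical names, not the actual diff): if a hostname ends in .nip.io and the remainder parses as an IP address, publish an A record for that IP; otherwise fall back to the usual CNAME target.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// resolveNipIOTarget is a hypothetical helper, not the code in this PR: if the
// hostname looks like "<IP>.nip.io", it returns the embedded IP and an A
// record type; otherwise it keeps the hostname and falls back to a CNAME.
func resolveNipIOTarget(hostname string) (target, recordType string) {
	const suffix = ".nip.io"
	if strings.HasSuffix(hostname, suffix) {
		candidate := strings.TrimSuffix(hostname, suffix)
		if ip := net.ParseIP(candidate); ip != nil {
			return ip.String(), "A"
		}
	}
	return hostname, "CNAME"
}

func main() {
	for _, h := range []string{"203.0.113.10.nip.io", "lb.example.com"} {
		target, recordType := resolveNipIOTarget(h)
		fmt.Printf("%s -> %s record for %s\n", h, recordType, target)
	}
}
```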

However, this rewriting breaks upstream external-dns: it now tries to create a CNAME record for <IP>.nip.io instead of just an A record for <IP>, and that CNAME record clashes with the TXT record of the same name. The clash can be avoided by running external-dns with --txt-prefix=someprefix, but we are a managed Kubernetes provider operating hundreds of clusters for customers on OpenStack, many of which run external-dns, and we would rather have a solution that requires as little work from the user as possible. With the --txt-prefix workaround, users would first have to add the flag to their external-dns deployment and then manually delete the old TXT record, whereas with this patch they only need to update external-dns and it does the right thing. Any LB services or ingresses with something other than <IP>.nip.io in their status are completely unaffected, so this should not break anything.

Perhaps you could consider merging this, even though it is neither really a bugfix nor strictly an enhancement.

Checklist

  • Unit tests updated
  • End user documentation updated

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Apr 14, 2021
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: multi-io
To complete the pull request process, please assign raffo after the PR has been reviewed.
You can assign the PR to them by writing /assign @raffo in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@multi-io force-pushed the nip-io-a-records-fix branch from 3b3bb46 to f10132f on April 14, 2021 at 11:01
@seanmalloy (Member) commented Apr 15, 2021

@Raffo @njuettner please let us know if you would be willing to accept this patch. Thanks!

/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 15, 2021
@Raffo (Contributor) commented May 12, 2021

@multi-io I am not against the nip.io shortcut, but is there any way we can avoid modifying the source.go file and have something that is more pluggable for other implementations? nip.io is not the only service providing such functionality.
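
For illustration, a pluggable shape of the kind suggested here might look roughly like the following sketch (purely hypothetical; the TargetRewriter interface and nipIORewriter type do not exist in external-dns):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// TargetRewriter is a hypothetical extension point: each implementation may
// turn a load-balancer status hostname into a different target and record type.
type TargetRewriter interface {
	// Rewrite returns the target and record type to publish for hostname,
	// and false if this rewriter does not apply.
	Rewrite(hostname string) (target, recordType string, ok bool)
}

// nipIORewriter handles "<IP>.nip.io" hostnames; similar rewriters could be
// registered for other wildcard-DNS services.
type nipIORewriter struct{}

func (nipIORewriter) Rewrite(hostname string) (string, string, bool) {
	candidate := strings.TrimSuffix(hostname, ".nip.io")
	if candidate == hostname {
		return "", "", false // suffix not present
	}
	if ip := net.ParseIP(candidate); ip != nil {
		return ip.String(), "A", true
	}
	return "", "", false
}

func main() {
	rewriters := []TargetRewriter{nipIORewriter{}}
	hostname := "203.0.113.10.nip.io"
	target, recordType := hostname, "CNAME" // default when no rewriter applies
	for _, r := range rewriters {
		if t, rt, ok := r.Rewrite(hostname); ok {
			target, recordType = t, rt
			break
		}
	}
	fmt.Printf("publish %s record for %s\n", recordType, target)
}
```

With a registry of rewriters like this, support for other wildcard-DNS services (sslip.io, for example) could be added without touching source.go again.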

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 10, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 9, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • kind/feature - Categorizes issue or PR as related to a new feature.
  • lifecycle/rotten - Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • size/S - Denotes a PR that changes 10-29 lines, ignoring generated files.

5 participants