Use peer's wireguard port, not our own #2200
base: master
Conversation
if n.mode == Ipv4 {
	publicEndpoint = fmt.Sprintf("%s:%d", event.Lease.Attrs.PublicIP.String(), n.dev.attrs.listenPort)
} else if n.mode == Ipv6 {
	publicEndpoint = fmt.Sprintf("[%s]:%d", event.Lease.Attrs.PublicIPv6.String(), n.dev.attrs.listenPort)
This line had an additional bug of using the ipv6 IP + ipv4 port, so I think this probably wouldn't generally work.
Before this change, the wireguard code constructed a peer endpoint via "PeerIP + n.dev.listenPort", i.e. we used the peer's IP, but our port. This works fine if every k8s node has the same ListenPort, which is admittedly the common setup... However, some people may desire to use different ports for some cases, and in that case, we should respect that different port.

I've manually tested that this works in my cluster, where one node uses a different wireguard port from all the others. After deploying this diff to each node, I get a working pod network. I also simplified some code.
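As a quick illustration of the intended endpoint format, here is a minimal, self-contained sketch; the type and field names below are placeholders for illustration, not flannel's actual lease types. The peer's advertised port is used for both address families, and IPv6 literals get the required brackets.

package main

import "fmt"

// peerAttrs stands in for the per-peer data carried in a lease
// (hypothetical names, for illustration only).
type peerAttrs struct {
	publicIP   string // peer's IPv4 address
	publicIPv6 string // peer's IPv6 address, "" if absent
	port       int    // the peer's own wireguard listen port
}

// endpoint builds the host:port string handed to wireguard, always using
// the peer's port rather than the local device's listen port.
func endpoint(p peerAttrs, useIPv6 bool) string {
	if useIPv6 {
		// IPv6 literals must be bracketed in host:port notation.
		return fmt.Sprintf("[%s]:%d", p.publicIPv6, p.port)
	}
	return fmt.Sprintf("%s:%d", p.publicIP, p.port)
}

func main() {
	fmt.Println(endpoint(peerAttrs{publicIP: "198.51.100.7", port: 51825}, false)) // 198.51.100.7:51825
	fmt.Println(endpoint(peerAttrs{publicIPv6: "2001:db8::7", port: 51825}, true)) // [2001:db8::7]:51825
}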
Force-pushed from 468e91f to 35249c2
I've pushed an update which should fix the e2e failures I was getting before. I was dereferencing a value that can be nil on nodes that aren't dual-stack; my local cluster is dual-stack (ipv4+6 on all nodes), so I didn't hit that in my own local usage, and there aren't any unit tests in the wireguard package to catch it. I think that's all sorted now! The e2e tests pass locally, I've updated my cluster to the current commit, and all my pod networking is still happy.
-	if ip4 != nil && ip6 == nil {
-		return ip4.String()
-	}
-	if ip4 == nil && ip6 != nil {
-		return fmt.Sprintf("[%s]", ip6.String())
-	}
+func (n *network) selectMode(ip4 ip.IP4, ip6 *ip.IP6) Mode {
+	if ip6 == nil {
+		return Ipv4
+	}
The ip4 == nil / != nil stuff got removed because this method is always called as selectPublicEndpoint(&PublicIP..., which of course means ipv4 can't ever be nil, so that check wasn't doing anything. Dropping it simplifies the logic a bit.
Before, we were also always using the ipv4 port, no matter the IP we pulled out, so I fixed that too.
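For reference, a small unit test along these lines would have caught the nil dereference on single-stack nodes. This is only a sketch: the package name and the import path for flannel's ip package are assumptions that may differ between flannel versions, and selectMode / Ipv4 are taken from the diff above.

package wireguard

import (
	"testing"

	"github.com/flannel-io/flannel/pkg/ip"
)

// A peer with no IPv6 address must map to the Ipv4 mode without
// dereferencing the nil IPv6 pointer.
func TestSelectModeWithNilIPv6(t *testing.T) {
	n := &network{}
	var v4 ip.IP4 // zero value is fine here: the simplified selectMode only inspects ip6
	if got := n.selectMode(v4, nil); got != Ipv4 {
		t.Fatalf("expected Ipv4 when the peer has no IPv6 address, got %v", got)
	}
}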
LGTM
Description
Before this change, the wireguard code constructed a peer endpoint via "PeerIP + n.dev.listenPort", i.e. we used the peer's IP, but our port.
This works fine if every k8s node has the same ListenPort, which is admittedly the common setup...
However, some people may desire to use different ports for some cases, and in that case, we should respect that different port.
I've manually tested that this works in my cluster, where one node uses a different wireguard port from all the others.
After deploying this diff to each node, I get a working pod network.
Note: I believe this is a bugfix which shouldn't break any properly configured setups, so I didn't add an opt-in or opt-out to it. I'm happy to add an opt-in flag if we can think of any reasonable setup this would break.
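To show the shape of the change outside flannel, here is a standalone sketch using the wgctrl library; this is not flannel's actual code path, and the interface name, key, and addresses are made up. The point is simply that the peer's endpoint is built from the peer's advertised IP and the peer's own listen port.

package main

import (
	"log"
	"net"

	"golang.zx2c4.com/wireguard/wgctrl"
	"golang.zx2c4.com/wireguard/wgctrl/wgtypes"
)

func main() {
	// The peer's advertised address and its own wireguard listen port
	// (illustrative values only).
	peerIP := net.ParseIP("198.51.100.7")
	peerPort := 51825

	peerKey, err := wgtypes.ParseKey("yAnz5TF+lXXJte14tji3zlMNq+hd2rYUIgJBgB3fBmk=")
	if err != nil {
		log.Fatal(err)
	}

	client, err := wgctrl.New()
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	cfg := wgtypes.Config{
		Peers: []wgtypes.PeerConfig{{
			PublicKey: peerKey,
			// The essence of this PR: use the peer's port here,
			// not the local device's ListenPort.
			Endpoint: &net.UDPAddr{IP: peerIP, Port: peerPort},
		}},
	}

	if err := client.ConfigureDevice("wg0", cfg); err != nil {
		log.Fatal(err)
	}
}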
Release Note