[BUG] unable to route LoadBalancer traffic to underlay pods #5112
Kube-OVN handles routing for underlay networks by allowing custom routing rules to be configured via pod annotations. In your case, it seems that traffic is not being routed to the "podnet" interface. To address this, you can configure custom routing rules so that traffic is directed to the correct interface.
By configuring routes through these supported methods, you should be able to direct the LoadBalancer traffic to the correct interface and avoid losing it.
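As a concrete illustration of the annotation-based approach, the sketch below attaches a static route to a pod. The annotation key, the JSON schema, the pod name, and the gateway address are all assumptions to verify against the Kube-OVN documentation for your release (v1.12.x here), not confirmed syntax:

```shell
# Hedged sketch: the annotation key and JSON shape are assumptions;
# confirm them in the Kube-OVN docs for your release before use.
# my-underlay-pod and 10.0.1.1 are placeholders.
kubectl annotate pod my-underlay-pod \
  'ovn.kubernetes.io/routes=[{"dst":"0.0.0.0/0","gw":"10.0.1.1"}]'
```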
Did you use the "U2O" function?
This PR may help you solve the underlay + MetalLB scenario.
Yes, we do.
Thank you! |
@patriziobassi This is a significant change affecting both the data plane and the control plane, so we will not backport it to the stable release.
@oilbeater I see. I understood the 1.13 branch was a stable one too. So can we expect this fix to land in the 1.13.x branch, or is it targeting 1.14.x? Thank you a lot.
Kube-OVN Version
v1.12.6
Kubernetes Version
v1.30
Operation-system/Kernel Version
"Ubuntu 22.04.4 LTS"
5.15.0-126-generic
Description
We have a setup where K8s workers have 3 different NICs:
When an external client connects to the LB IP, it gets correctly routed to the K8s worker announcing the traffic (externalTrafficPolicy Local/Cluster).
MetalLB configures IPVS to route to the underlay pod address correctly, but then the traffic gets lost.
With tcpdump we see the traffic like this:
17:26:58.994530 vipnet In IP .42144 > .6379: Flags [S], seq 1139930890, win 35840, options [mss 8960,sackOK,TS val 435725208 ecr 0,nop,wscale 9], length 0
17:26:58.994637 ens3 Out IP .11999 > .6379: Flags [S], seq 1139930890, win 35840, options [mss 8860,sackOK,TS val 435725208 ecr 0,nop,wscale 9], length 0
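To narrow down where the packet is rewritten, it can help to inspect the IPVS table on the announcing worker while watching all interfaces; a diagnostic sketch, assuming a placeholder LoadBalancer IP of 10.0.0.100 and the service port 6379 from the capture above (run as root on the worker):

```shell
# Show the IPVS virtual server for the LoadBalancer IP and its real
# servers (the underlay pod addresses). 10.0.0.100 is a placeholder.
ipvsadm -Ln | grep -A 5 '10.0.0.100'

# Capture on all interfaces to see on which device the DNATed packet
# leaves the node (here it shows up on ens3 instead of podnet).
tcpdump -ni any 'tcp port 6379'
```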
So the worker doesn't have a rule to route the traffic to the "podnet" interface, which is the interface Kube-OVN knows and uses to allocate IPs to pods. It therefore forwards the traffic to the ens3 interface, where the worker has its default gateway: the traffic leaves the worker instead of being processed internally, so it is lost.
When using the LoadBalancer with pods in the overlay network we do not get this problem, because the kernel routing table already has a route via the ovn0 interface:
via 100.64.0.1 dev ovn0 src
100.64.0.0/16 dev ovn0 proto kernel scope link src 100.64.0.5
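The asymmetry can be checked directly with a route lookup for each pod IP; a sketch using the overlay address from the table above, plus a placeholder underlay pod IP (10.0.1.20), subnet (10.0.1.0/24), and device name (podnet), which will differ in your environment:

```shell
# Overlay pod IP: resolves to ovn0, so IPVS-forwarded traffic stays local.
ip route get 100.64.0.5

# Underlay pod IP (placeholder): falls through to the default route on
# ens3, so the packet leaves the node and is lost.
ip route get 10.0.1.20

# The route that would fix the lookup; as noted in the issue,
# kube-ovn-cni immediately deletes manually added routes like this.
ip route add 10.0.1.0/24 dev podnet
```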
If we manually try to add a routing rule with "ip r add ....." kube-ovn-cni will immediately delete it.
How can we add a "sink" in Kube-OVN in order to catch traffic for underlay networks too?
Thank you
Steps To Reproduce
.
Current Behavior
.
Expected Behavior
.