
[BUG] When I configured BFD ECMP, the ovnext network namespace has some problems #4998

Open
inyongma1 opened this issue Feb 17, 2025 · 16 comments
Labels
bug Something isn't working subnet

Comments

@inyongma1

inyongma1 commented Feb 17, 2025

Kube-OVN Version

v1.13

Kubernetes Version

1.30.9

Operation-system/Kernel Version

"Rocky Linux 8.10 (Green Obsidian)"

Description

https://kube-ovn.readthedocs.io/zh-cn/latest/en/advance/ovn-l3-ha-based-ecmp-with-bfd/
I followed this page to configure BFD, the external and internal subnets, and the ovn-eip/ovn-fip,
then verified the ovnext netns on the gw node.

[root@pc-node-1 ~]# ip netns exec ovnext bash ip a
/usr/sbin/ip: /usr/sbin/ip: cannot execute binary file
[root@pc-node-1 ~]#
[root@pc-node-1 ~]# ip netns exec ovnext ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
1541: ovnext0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:00:00:ab:bd:87 brd ff:ff:ff:ff:ff:ff
    inet 10.5.204.108/24 brd 10.5.204.255 scope global ovnext0
       valid_lft forever preferred_lft forever
    inet6 fe80::200:ff:feab:bd87/64 scope link
       valid_lft forever preferred_lft forever
[root@pc-node-1 ~]#
[root@pc-node-1 ~]# ip netns exec ovnext route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.5.204.254    0.0.0.0         UG    0      0        0 ovnext0
10.5.204.0      0.0.0.0         255.255.255.0   U     0      0        0 ovnext0


[root@pc-node-1 ~]# ip netns exec ovnext bfdd-control status
There are 1 sessions:
Session 1
 id=1 local=10.5.204.108 (p) remote=10.5.204.122 state=Up

## This is the other end of the lrp bfd session and one of the next hops of the lrp ecmp


[root@pc-node-1 ~]# ip netns exec ovnext ping -c1 223.5.5.5
PING 223.5.5.5 (223.5.5.5) 56(84) bytes of data.
64 bytes from 223.5.5.5: icmp_seq=1 ttl=115 time=21.6 ms

# Reaching the public network works, just as the page says.
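As an aside, if you want to script this health check, the state= field can be pulled out of the bfdd-control output with a small filter. This is only a sketch, reusing the sample output captured above:

```shell
# Sample output from `ip netns exec ovnext bfdd-control status`, captured above.
out='There are 1 sessions:
Session 1
 id=1 local=10.5.204.108 (p) remote=10.5.204.122 state=Up'

# Extract the value of the state= field (Up means the BFD session is healthy).
state=$(printf '%s\n' "$out" | sed -n 's/.*state=\([A-Za-z]*\).*/\1/p')
echo "$state"
```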
================================================
But in my environment some problems and bugs came up.


[root@vnode-117-35 ~]# ip netns list
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
cni-9374e2ea-9c78-7101-cc0a-0efc8c98db02 (id: 2)
ovnext

[root@vnode-117-35 ~]# ip netns exec ovnext bash ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext bash ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext bash
setting the network namespace "ovnext" failed: Invalid argument

It looks like some peer network interface is not connected.

Steps To Reproduce

https://kube-ovn.readthedocs.io/zh-cn/latest/en/advance/ovn-eip-fip-snat/

https://kube-ovn.readthedocs.io/zh-cn/latest/en/advance/ovn-l3-ha-based-ecmp-with-bfd/

I referenced these pages.

Current Behavior

[root@vnode-117-35 ~]# ip netns exec ovnext bash ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext bash ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext bash
setting the network namespace "ovnext" failed: Invalid argument

The namespace cannot be entered.

Expected Behavior

[root@pc-node-1 ~]# ip netns exec ovnext bash ip a
/usr/sbin/ip: /usr/sbin/ip: cannot execute binary file
[root@pc-node-1 ~]#
[root@pc-node-1 ~]# ip netns exec ovnext ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
1541: ovnext0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:00:00:ab:bd:87 brd ff:ff:ff:ff:ff:ff
    inet 10.5.204.108/24 brd 10.5.204.255 scope global ovnext0
       valid_lft forever preferred_lft forever
    inet6 fe80::200:ff:feab:bd87/64 scope link
       valid_lft forever preferred_lft forever
[root@pc-node-1 ~]#
[root@pc-node-1 ~]# ip netns exec ovnext route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.5.204.254    0.0.0.0         UG    0      0        0 ovnext0
10.5.204.0      0.0.0.0         255.255.255.0   U     0      0        0 ovnext0


[root@pc-node-1 ~]# ip netns exec ovnext bfdd-control status
There are 1 sessions:
Session 1
 id=1 local=10.5.204.108 (p) remote=10.5.204.122 state=Up

## This is the other end of the lrp bfd session and one of the next hops of the lrp ecmp


[root@pc-node-1 ~]# ip netns exec ovnext ping -c1 223.5.5.5
PING 223.5.5.5 (223.5.5.5) 56(84) bytes of data.
64 bytes from 223.5.5.5: icmp_seq=1 ttl=115 time=21.6 ms

# Reaching the public network works, just as the page says.
@inyongma1 inyongma1 added the bug Something isn't working label Feb 17, 2025
@dosubot dosubot bot added the subnet label Feb 17, 2025

dosubot bot commented Feb 17, 2025

The "Invalid argument" error when executing commands in a network namespace like ovnext can be caused by several issues. Here are some steps you can take to troubleshoot and resolve the problem:

  1. Check Namespace Existence:
    Ensure that the network namespace ovnext actually exists. You can list all network namespaces using:

    ip netns list
  2. Verify Namespace Setup:
    If the namespace exists, verify that it is set up correctly. Sometimes, namespaces might not be fully initialized, leading to such errors.

  3. Binary Compatibility:
    The error message /usr/sbin/ip: cannot execute binary file suggests there might be an issue with the binary compatibility. Ensure that the ip command is correctly installed and compatible with your system architecture.

  4. Namespace Cleanup:
    If there are stale or corrupted namespaces, they might cause issues. Try deleting and recreating the namespace:

    ip netns delete ovnext
    ip netns add ovnext
  5. Check Kernel and OS Compatibility:
    Ensure that your kernel and OS version are compatible with the network namespace features you are using. Sometimes, specific kernel versions might have bugs or missing features.

  6. Review Configuration:
    Double-check your BFD and ECMP configurations to ensure they are correct. Misconfigurations can lead to unexpected behavior.

  7. Logs and Diagnostics:
    Check system logs for any additional error messages that might provide more context. Use dmesg or check /var/log/syslog for relevant entries.

If these steps do not resolve the issue, consider reviewing the specific configurations and any recent changes that might have affected the network namespace setup.


@zbb88888
Collaborator

================================================
But in my environment some problems and bugs came up.


[root@vnode-117-35 ~]# ip netns list
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
cni-9374e2ea-9c78-7101-cc0a-0efc8c98db02 (id: 2)
ovnext # it seems the ns already exists

[root@vnode-117-35 ~]# ip netns exec ovnext bash ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext bash ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext ip a
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# ip netns exec ovnext bash
setting the network namespace "ovnext" failed: Invalid argument

It seems the ns already exists, but I do not know why you cannot exec into it.

@inyongma1
Author

inyongma1 commented Feb 18, 2025

Yes, I cannot exec into the network namespace on the gw nodes.
These are the configurations I applied.

[root@vnode-117-34 ~]# k get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
vnode-117-34 Ready control-plane 7d21h v1.30.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,external0.provider-network.kubernetes.io/interface=eth0,external0.provider-network.kubernetes.io/mtu=1500,external0.provider-network.kubernetes.io/ready=true,kube-ovn/role=master,kubernetes.io/arch=amd64,kubernetes.io/hostname=vnode-117-34,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
vnode-117-35 Ready 7d21h v1.30.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,external0.provider-network.kubernetes.io/interface=eth0,external0.provider-network.kubernetes.io/mtu=1500,external0.provider-network.kubernetes.io/ready=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=vnode-117-35,kubernetes.io/os=linux,ovn.kubernetes.io/external-gw=true,ovn.kubernetes.io/node-ext-gw=true
vnode-117-36 Ready 7d21h v1.30.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,external0.provider-network.kubernetes.io/interface=eth0,external0.provider-network.kubernetes.io/mtu=1500,external0.provider-network.kubernetes.io/ready=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=vnode-117-36,kubernetes.io/os=linux,ovn.kubernetes.io/external-gw=true,ovn.kubernetes.io/node-ext-gw=true

[root@vnode-117-34 ~]# k get nodes
NAME STATUS ROLES AGE VERSION
vnode-117-34 Ready control-plane 7d21h v1.30.9
vnode-117-35 Ready 7d21h v1.30.9
vnode-117-36 Ready 7d21h v1.30.9
vnode-117-35, vnode-117-36 are gw nodes.

[root@vnode-117-34 ~]# k get provider-network
NAME DEFAULTINTERFACE READY
external0 eth0 true

[root@vnode-117-34 ~]# k get vlan
NAME ID PROVIDER
vlan0 0 external0

Listing subnets:
external0 ovn ovn-cluster vlan0 IPv4 10.9.0.0/16 false false false distributed 5 39928
vpc2-subnet1 ovn vpc2 IPv4 192.168.0.0/24 false false false distributed 3 250

external subnet.

apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: external0
spec:
  protocol: IPv4
  cidrBlock: 10.9.0.1/16
  gateway: 10.9.0.1
  vlan: vlan0
  excludeIps:
  - 10.9.0.1..10.9.100.1

Internal overlay subnet.

apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: vpc2-subnet1
spec:
  cidrBlock: 192.168.0.0/24
  default: false
  disableGatewayCheck: false
  disableInterConnection: true
  enableEcmp: true  # enable ecmp
  gatewayNode: ""
  gatewayType: distributed
  #gatewayType: centralized
  natOutgoing: false
  private: false
  protocol: IPv4
  provider: ovn
  vpc: vpc2
  namespaces:
  - vpc2
---
apiVersion: v1
data:
  enable-external-gw: "true"
  external-gw-addr: 10.9.0.1/16
  external-gw-nic: vlan0
  external-gw-nodes: vnode-117-35,vnode-117-36
  type: centralized
kind: ConfigMap
metadata:
  name: ovn-external-gw-config
  namespace: kube-system
---
kind: Vpc
apiVersion: kubeovn.io/v1
metadata:
  name: vpc2
spec:
  namespaces:
  - vpc2
  enableExternal: true
  enableBfd: true # bfd switch can be switched at will
  #enableBfd: false

router 963d12e7-f72f-4d9c-9a0e-1b7c42da7a03 (vpc2)
port vpc2-vpc2-subnet1
mac: "06:6f:e8:a9:65:41"
networks: ["192.168.0.1/24"]
port vpc2-external0
mac: "96:26:df:a8:57:4e"
networks: ["10.9.100.29/16"]
gateway chassis: [4b759ed4-789c-4198-b9a7-0895861efa07 1c6cac1a-d307-42cd-99a9-f200c5f57b8c]
nat 9815ce9a-ed0a-49bd-ad5a-b73be6c2c99e
external ip: "10.9.100.30"
logical ip: "192.168.0.2"
type: "dnat_and_snat"

[root@vnode-117-34 ~]# k ko nbctl list bfd
_uuid : 821a50f0-73d3-49ac-82a0-ec2363430dc9
detect_mult : 3
dst_ip : "10.9.0.1"
external_ids : {}
logical_port : vpc2-external0
min_rx : 100
min_tx : 100
options : {}
status : admin_down

_uuid : 2e89a14b-cf2c-4c7e-89c7-a9ad38a935df
detect_mult : 3
dst_ip : "10.9.100.28"
external_ids : {}
logical_port : vpc2-external0
min_rx : 100
min_tx : 100
options : {}
status : up

_uuid : 575551cf-7c64-420a-8d15-a10e0f104698
detect_mult : 3
dst_ip : "10.9.100.27"
external_ids : {}
logical_port : vpc2-external0
min_rx : 100
min_tx : 100
options : {}
status : up

[root@vnode-117-34 ~]# k ko nbctl find Logical_Router_Static_Route policy=src-ip options=ecmp_symmetric_reply="true"
_uuid : daf80eb4-eb48-4d49-9d4f-1f4f5a61583c
bfd : []
external_ids : {}
ip_prefix : "192.168.0.0/24"
nexthop : "10.9.100.15"
options : {ecmp_symmetric_reply="true"}
output_port : []
policy : src-ip
route_table : ""

_uuid : 3bda2bc6-4110-48b7-977d-114aac43c55e
bfd : []
external_ids : {}
ip_prefix : "192.168.0.0/24"
nexthop : "10.9.100.16"
options : {ecmp_symmetric_reply="true"}
output_port : []
policy : src-ip
route_table : ""

_uuid : fb7f394a-3d9e-43c1-9db8-1efa4533240a
bfd : 575551cf-7c64-420a-8d15-a10e0f104698
external_ids : {}
ip_prefix : "192.168.0.0/24"
nexthop : "10.9.100.27"
options : {ecmp_symmetric_reply="true"}
output_port : []
policy : src-ip
route_table : ""

_uuid : cfd0b1ee-d811-4787-9e10-f2489903ffda
bfd : 2e89a14b-cf2c-4c7e-89c7-a9ad38a935df
external_ids : {}
ip_prefix : "192.168.0.0/24"
nexthop : "10.9.100.28"
options : {ecmp_symmetric_reply="true"}
output_port : []
policy : src-ip
route_table : ""

@zbb88888
Collaborator

In your env, can you set up a netns and exec into it? Like this: https://girondi.net/post/network_namespaces/

@inyongma1
Author

[root@vnode-117-35 ~]#
[root@vnode-117-35 ~]# sudo ip netns add ovnext
Cannot create namespace file "/var/run/netns/ovnext": File exists
[root@vnode-117-35 ~]# sudo ip netns add ovnext
Cannot create namespace file "/var/run/netns/ovnext": File exists
[root@vnode-117-35 ~]# sudo ip netns exec ovnext ip link set dev eth1 up
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-35 ~]# sudo ip netns exec ovnext ip link set dev eth1 up
setting the network namespace "ovnext" failed: Invalid argument

It says the namespace already exists, and exec cannot be applied.

@zbb88888
Collaborator

Please use a different name, not ovnext; just pick a new name for your local env test.

@inyongma1
Author

[root@vnode-117-29 ~]# ip netns add helloworld
[root@vnode-117-29 ~]# sudo ip link add veth0 type veth peer name veth1
[root@vnode-117-29 ~]#
[root@vnode-117-29 ~]# sudo ip link set veth1 netns helloworld
[root@vnode-117-29 ~]# sudo ip link set veth0 up
[root@vnode-117-29 ~]#
[root@vnode-117-29 ~]#
[root@vnode-117-29 ~]# sudo ip netns exec ns1 ip link set veth1 up
Cannot open network namespace "ns1": No such file or directory
[root@vnode-117-29 ~]#
[root@vnode-117-29 ~]# sudo ip netns exec ns1 ip link set veth1 up
Cannot open network namespace "ns1": No such file or directory
[root@vnode-117-29 ~]# sudo ip netns exec server ip li^C
[root@vnode-117-29 ~]# udo ip netns exec helloworld ip link set veth1 up
-bash: udo: command not found
[root@vnode-117-29 ~]# sudo ip netns exec helloworld ip link set veth1 up
[root@vnode-117-29 ~]# sudo ip netns exec helloworld ip link set dev veth1 up
[root@vnode-117-29 ~]# sudo ip netns exec helloworld ip addr add dev veth1 192.168.99.1/24
[root@vnode-117-29 ~]# sudo ip netns exec helloworld ip addr show
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
16: veth1@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
Error: Peer netns reference is invalid.
    link/ether e6:56:e8:95:09:1b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.99.1/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::e456:e8ff:fe95:91b/64 scope link
       valid_lft forever preferred_lft forever

It works well in my environment when I configure it by hand.

@zbb88888
Collaborator

could you please test this?

  1. exec the kube-ovn-cni pod
  2. exec the ovnext ns

@inyongma1
Author

[root@vnode-117-29 ~]# ip netns exec cni-3be0d59d-e577-7099-da25-d79c9578fd8d bash
[root@vnode-117-29 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
Error: Peer netns reference is invalid.
    link/ether a6:88:4a:c3:c7:d7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.16.0.3/16 brd 10.16.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a488:4aff:fec3:c7d7/64 scope link
       valid_lft forever preferred_lft forever

[root@vnode-117-29 ~]# ip netns exec ovnext bash
setting the network namespace "ovnext" failed: Invalid argument
[root@vnode-117-29 ~]#

The CNI pod's netns can be entered, but ovnext cannot be.

@zbb88888
Collaborator

kubectl exec into the kube-ovn-cni pod, and then, inside the pod, run ip netns exec into the ovnext namespace.
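A sketch of that sequence (the app=kube-ovn-cni label selector and the node name below are assumptions; verify them in your cluster first):

```shell
# Find the kube-ovn-cni pod running on the gateway node (the label selector is
# an assumption; check with `kubectl -n kube-system get pods --show-labels`).
NODE=vnode-117-35
POD=$(kubectl -n kube-system get pods -l app=kube-ovn-cni \
  --field-selector "spec.nodeName=$NODE" -o jsonpath='{.items[0].metadata.name}')

# The ovnext netns file was created in the pod's mount namespace, so enter it
# from inside the pod rather than from the host.
kubectl -n kube-system exec -it "$POD" -- ip netns exec ovnext ip addr
```

This also explains the "Invalid argument" seen from the host: the host's /var/run/netns/ovnext reference does not resolve to the netns mounted inside the pod.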

@inyongma1
Author

Okay, it works well. Thank you for your answer.

@inyongma1
Author

But I have one more issue:
[root@vnode-117-34 ~]# k ko nbctl list bfd
_uuid : 821a50f0-73d3-49ac-82a0-ec2363430dc9
detect_mult : 3
dst_ip : "10.9.0.1"
external_ids : {}
logical_port : vpc2-external0
min_rx : 100
min_tx : 100
options : {}
status : admin_down

_uuid : 2e89a14b-cf2c-4c7e-89c7-a9ad38a935df
detect_mult : 3
dst_ip : "10.9.100.28"
external_ids : {}
logical_port : vpc2-external0
min_rx : 100
min_tx : 100
options : {}
status : up

10.9.0.1 is the physical gw of the public subnet, but its status is admin_down.
Do you know why?
These are my configurations for the external gw addr.

[root@vnode-117-34 ~]# k get cm -n kube-system ovn-external-gw-config -oyaml
apiVersion: v1
data:
  enable-external-gw: "true"
  external-gw-addr: 10.9.0.1/16
  external-gw-nic: vlan0
  external-gw-nodes: vnode-117-35,vnode-117-36
  type: centralized
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"enable-external-gw":"true","external-gw-addr":"10.9.0.1/16","external-gw-nic":"vlan0","external-gw-nodes":"vnode-117-35,vnode-117-36","type":"centralized"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"ovn-external-gw-config","namespace":"kube-system"}}
  creationTimestamp: "2025-02-14T03:58:57Z"
  name: ovn-external-gw-config
  namespace: kube-system
  resourceVersion: "1235604"
  uid: d41ab0e1-4444-4cb6-bf89-ac371a40c6a9

[root@vnode-117-34 ~]# k get subnet external0 -oyaml

apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kubeovn.io/v1","kind":"Subnet","metadata":{"annotations":{},"name":"external0"},"spec":{"cidrBlock":"10.9.0.1/16","excludeIps":["10.9.0.1..10.9.100.1"],"gateway":"10.9.0.1","protocol":"IPv4","vlan":"vlan0"}}
  creationTimestamp: "2025-02-10T04:18:43Z"
  finalizers:
  - kubeovn.io/kube-ovn-controller
  generation: 2
  name: external0
  resourceVersion: "1649482"
  uid: 5b7d6ec0-191d-486d-b483-4820eca1cb3e
spec:
  cidrBlock: 10.9.0.0/16
  default: false
  enableLb: true
  excludeIps:
  - 10.9.0.1..10.9.100.1
  gateway: 10.9.0.1
  gatewayNode: ""
  gatewayType: distributed
  natOutgoing: false
  private: false
  protocol: IPv4
  provider: ovn
  vlan: vlan0
  vpc: ovn-cluster
status:
  activateGateway: ""
  conditions:
  - lastTransitionTime: "2025-02-10T04:18:43Z"
    lastUpdateTime: "2025-02-14T03:58:58Z"
    reason: ResetLogicalSwitchAclSuccess
    status: "True"
    type: Validated
  - lastTransitionTime: "2025-02-10T04:18:44Z"
    lastUpdateTime: "2025-02-10T04:18:44Z"
    reason: ResetLogicalSwitchAclSuccess
    status: "True"
    type: Ready
  - lastTransitionTime: "2025-02-10T04:18:44Z"
    lastUpdateTime: "2025-02-10T04:18:44Z"
    message: Not Observed
    reason: Init
    status: Unknown
    type: Error
  dhcpV4OptionsUUID: ""
  dhcpV6OptionsUUID: ""
  mcastQuerierIP: ""
  mcastQuerierMAC: ""
  natOutgoingPolicyRules: []
  u2oInterconnectionIP: ""
  u2oInterconnectionMAC: ""
  u2oInterconnectionVPC: ""
  v4availableIPrange: 10.9.100.3-10.9.100.26,10.9.100.31-10.9.255.254
  v4availableIPs: 39928
  v4usingIPrange: 10.9.100.2,10.9.100.27-10.9.100.30
  v4usingIPs: 5
  v6availableIPrange: ""
  v6availableIPs: 0
  v6usingIPrange: ""
  v6usingIPs: 0

@zbb88888
Collaborator

10.9.0.1 is the physical gw of the public subnet, but its status is admin_down.
Do you know why?
These are my configurations for the external gw addr.

Maybe 10.9.0.1 is the physical gateway on the physical switch, which does not have the BFD function enabled.
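A quick way to spot such sessions is to filter the `nbctl list bfd` output for entries whose status is not up. A sketch, using sample fields from the output pasted above (spacing trimmed):

```shell
# Sample fields from `k ko nbctl list bfd`, captured above.
out='dst_ip  : "10.9.0.1"
status  : admin_down
dst_ip  : "10.9.100.28"
status  : up'

# Print the dst_ip of every session whose status is not "up".
down=$(printf '%s\n' "$out" | awk -F'"' '
  /^dst_ip/ {ip = $2}
  /^status/ && $0 !~ /: up$/ {print ip}')
echo "$down"
```

Here that prints only the physical gateway 10.9.0.1, consistent with a peer that never answers BFD control packets.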

@inyongma1
Author

Does it affect the distributed routers on the gw nodes?

@zbb88888
Collaborator

Does it affect the distributed routers on the gw nodes?

In my opinion, no.

But I wonder why you have a BFD session referring to the physical gw 10.9.0.1.

@inyongma1
Author

inyongma1 commented Feb 19, 2025

My server environment's public IP subnet is the 10.9.0.0/16 CIDR.
Do you have any suggestion on how to configure the gw IP?

enable-external-gw: "true"
external-gw-addr: 10.9.0.1/16
external-gw-nic: vlan0
external-gw-nodes: vnode-117-35,vnode-117-36
type: centralized
kind: ConfigMap

https://kubeovn.github.io/docs/v1.13.x/en/guide/eip-snat/
On this page, external-gw-addr is described as "The IP and mask of the physical network gateway."
