[BUG] When Kube-OVN is deployed in underlay mode, NetworkPolicy does not work #5043

Open
thomasnew opened this issue Feb 28, 2025 · 9 comments
Labels
bug (Something isn't working), network policy

Comments

@thomasnew

thomasnew commented Feb 28, 2025

Kube-OVN Version

v1.13.1-stable

Kubernetes Version

Client Version: v1.28.15
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.15

Operation-system/Kernel Version

ubuntu@a-master-1:~/yaml$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.5 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.5 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Description

Setup:

  1. Create two pods, pod A and pod B, with pod A exposed by a service A (ClusterIP, port 80);
  2. Label pod B with tag=client
  3. Label pod A with tag=frontend
  4. Apply the NetworkPolicy below; the policy does not allow port 80 or port 8000.

From pod B, accessing service A on port 80 and pod A on port 8000 both succeed (a sketch of the implied pod and service manifests follows the policy below).

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: client-workloads
  namespace: default
spec:
  podSelector:
    matchLabels:
      tag: client
  egress:
    - ports:
        - port: 443
          protocol: TCP
        - port: 8443
          protocol: TCP
      to:
        - podSelector:
            matchLabels:
              tag: frontend
  policyTypes:
    - Egress
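
For reference, here is a minimal sketch of the pod and service manifests implied by steps 1-3. The names (pod-a, service-a, pod-b), images, and commands are assumptions; the targetPort of 8000 is taken from the later comments noting that the svc port is 80 and the pod's own port is 8000.

---
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
  namespace: default
  labels:
    tag: frontend
spec:
  containers:
    - name: app
      # assumed server; listens on port 8000 as in the test
      image: python:3
      command: ["python", "-m", "http.server", "8000"]
      ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: service-a
  namespace: default
spec:
  type: ClusterIP
  selector:
    tag: frontend
  ports:
    - protocol: TCP
      port: 80          # service port accessed from pod B
      targetPort: 8000  # the pod's actual listening port
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-b
  namespace: default
  labels:
    tag: client
spec:
  containers:
    - name: client
      # assumed client image used only to run test connections
      image: curlimages/curl
      command: ["sleep", "infinity"]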

Steps To Reproduce

  1. Create two pods, pod A and pod B, with pod A exposed by a service A (ClusterIP, port 80);
  2. Label pod B with tag=client
  3. Label pod A with tag=frontend
  4. Apply the NetworkPolicy shown in the description; the policy does not allow port 80 or port 8000.

Current Behavior

Access succeeds.

Expected Behavior

With ports 80 and 8000 not allowed in the NetworkPolicy, the access should not succeed.

@thomasnew added the bug (Something isn't working) label on Feb 28, 2025

dosubot bot commented Feb 28, 2025

The issue you're experiencing with NetworkPolicy not working as expected in Kube-OVN underlay mode could be related to how egress rules are processed. In Kube-OVN, if no specific peers are defined in the egress rule, it defaults to allowing all addresses, which might explain why access to ports 80 and 8000 is still possible despite the policy not explicitly allowing them [1].
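
For illustration (not the case in the policy above, which does define a podSelector peer), an egress rule that lists ports but omits the to section matches traffic to any destination on those ports:

  egress:
    - ports:
        - port: 443
          protocol: TCP
      # no "to" list here, so this rule allows TCP 443 to any peer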

To address this, ensure that your NetworkPolicy explicitly defines the peers and ports you want to allow. If the policy is intended to block access to certain ports, make sure that the egress rules are correctly specified to only allow the desired traffic. Additionally, verify that there are no other NetworkPolicies or configurations that might be affecting the behavior.

If the issue persists, it might be helpful to review the Kube-OVN configuration and logs to ensure that the NetworkPolicy is being applied as expected.


@patriziobassi

I've experienced a similar issue in 1.12.x.

@oilbeater
Collaborator

I cannot reproduce it. Are there any other network policies affecting the traffic? You can use kubectl ko trace to check which network policy this traffic matches.
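
For example, a trace from pod B to pod A's IP on the pod's port could look roughly like this (the pod name and IP are hypothetical placeholders; see the kubectl-ko plugin docs for the exact syntax):

kubectl ko trace default/pod-b 10.16.0.10 tcp 8000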

@thomasnew
Author

I cannot reproduce it. Are there any other network policies affecting the traffic? You can use kubectl ko trace to check which network policy this traffic matches.

@oilbeater Hi, I tested this again today and the situation looks a bit different (the svc port is 80, the pod's own port is 8000):
1. Whether or not port 80 is added to the NetworkPolicy, access fails. The kubectl ko trace output is the same either way: no drop is shown, only recirc;
2. If port 8000 is added to the NetworkPolicy, access succeeds (see the rule sketch at the end of this comment).

This test suggests that the NetworkPolicy restricts the port the pod actually listens on, rather than the svc port, which is inconsistent with the Kubernetes spec.

underlay-networkpolicy-issue.txt

See the attachment for details, thanks.
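
For reference, the egress rule that made the connection succeed in test 2 above would look roughly like the following sketch (the policy from the description, with the pod's listening port 8000 added):

  egress:
    - ports:
        - port: 8000
          protocol: TCP
      to:
        - podSelector:
            matchLabels:
              tag: frontend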

@oilbeater
Collaborator

@changluyi Please take a look at whether our ACL apply-after-lb should only take effect when the target has a pod selector.

@thomasnew
Author

I looked at the ACL table; it is as follows:
_uuid : 98aca913-fcbc-4d79-b2eb-ffc9d168b72e
action : allow-related
direction : from-lport
external_ids : {parent=client.workloads.default}
label : 0
log : false
match : "inport == @client.workloads.default && ip && ip4.dst == $client.workloads.default.egress.allow.IPv4.0 && ip4.dst != $client.workloads.default.egress.except.IPv4.0 && tcp.dst == 443"
meter : []
name : []
options : {apply-after-lb="true"}
priority : 2001
severity : []
tier : 2

_uuid : bd0ffbd5-c6fd-44e0-8989-0abba9456b89
action : allow-related
direction : from-lport
external_ids : {parent=client.workloads.default}
label : 0
log : false
match : "inport == @client.workloads.default && ip && ip4.dst == $client.workloads.default.egress.allow.IPv4.0 && ip4.dst != $client.workloads.default.egress.except.IPv4.0 && tcp.dst == 8443"
meter : []
name : []
options : {apply-after-lb="true"}
priority : 2001
severity : []
tier : 2

@changluyi
Collaborator

The NetworkPolicy restricting the port the pod actually listens on, rather than the svc port, should be exactly what the Kubernetes spec requires.

@thomasnew
Author

thomasnew commented Mar 17, 2025

The NetworkPolicy restricting the port the pod actually listens on, rather than the svc port, should be exactly what the Kubernetes spec requires.


@changluyi @oilbeater Hi both, I re-read the official Kubernetes documentation. Based on the docs, my understanding is:
"If a NetworkPolicy sets an ingress policy, it should target the port the pod itself listens on;"
"If a NetworkPolicy sets an egress policy, it should target the service port, i.e. the destination port of the connection the pod initiates."
Could you both confirm how the spec should be understood here?

Thanks

@changluyi
Collaborator

You could try Calico; there, egress/ingress both target the pod's IP and port.
