
The DNAT and SNAT of a VPC gateway cannot use the same EIP #2729

Closed
Learntotolearn opened this issue Apr 27, 2023 · 7 comments · Fixed by #2805
Learntotolearn commented Apr 27, 2023

Expected Behavior

The DNAT (targeting a specific pod) and SNAT of the VPC gateway can use the same EIP.

Actual Behavior

The same EIP cannot be used; the following errors are reported:

E0427 08:16:31.790143       7 vpc_nat_gw_nat.go:329] error syncing 'dnat01': failed to create dnat dnat01, eip 'eips01' is used by nat snat, requeuing
E0427 08:16:32.790788       7 vpc_nat_gw_nat.go:329] error syncing 'dnat01': failed to create dnat dnat01, eip 'eips01' is used by nat snat, requeuing
E0427 08:16:34.791836       7 vpc_nat_gw_nat.go:329] error syncing 'dnat01': failed to create dnat dnat01, eip 'eips01' is used by nat snat, requeuing
E0427 08:16:38.792562       7 vpc_nat_gw_nat.go:329] error syncing 'dnat01': failed to create dnat dnat01, eip 'eips01' is used by nat snat, requeuing
E0427 08:16:46.792762       7 vpc_nat_gw_nat.go:329] error syncing 'dnat01': failed to create dnat dnat01, eip 'eips01' is used by nat snat, requeuing

Steps to Reproduce the Problem

1. Configure the VPC gateway
2. Configure SNAT

---
kind: IptablesEIP
apiVersion: kubeovn.io/v1
metadata:
  name: eips01
  namespace: ns1
spec:
  natGwDp: gw1
  v4ip: 192.168.100.32
---
kind: IptablesSnatRule
apiVersion: kubeovn.io/v1
metadata:
  name: snat01
  namespace: ns1
spec:
  eip: eips01
  internalCIDR: 10.0.1.1/32

3. Configure DNAT

---
kind: IptablesDnatRule
apiVersion: kubeovn.io/v1
metadata:
  name: dnat01
spec:
  eip: eips01 
  externalPort: '8888'
  internalIp: 10.0.1.1
  internalPort: '80'
  protocol: tcp

Additional Info

  • Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:58:30Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:51:45Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
  • kube-ovn version:
v1.11.3
  • operation-system/kernel version:
Ubuntu 20.04.1 LTS
5.4.0-147-generic
@zbb88888
Collaborator

We'll consider supporting this.

@zbb88888 zbb88888 self-assigned this Apr 27, 2023
@Learntotolearn
Author

> We'll consider supporting this.

Great!! Will you start working out the implementation soon? I'd be glad to chip in and support this out of passion for the project.

@zbb88888
Collaborator

> We'll consider supporting this.
>
> Great!! Will you start working out the implementation soon? I'd be glad to chip in and support this out of passion for the project.

Yes, we'll start soon, after the May Day holiday at the earliest. Thanks for the support.


@shane965
Contributor

Hi @bobz965,
I implemented this feature a while back; the changes are linked below. This approach works, but the implementation is not elegant: it does not handle maintenance of the IptablesEIP.status.nat field. If one EIP serves both SNAT and DNAT, the IptablesEIP.status.nat field may need a new value such as mix, and the lifecycle of that state also has to be considered.
@Learntotolearn If you'd like to help develop this feature, you can refer to the implementation below, and we can discuss the state-maintenance question together.
shane965@706f4c0
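The status bookkeeping described above could look roughly like this (a minimal sketch; the natStatus helper, its map argument, and the comma-joined status format are assumptions for illustration, not the actual kube-ovn code):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// natStatus derives a combined status string for an EIP that may back
// several NAT rule types at once (the "mix" idea discussed above).
// Hypothetical helper: the real field format would need to be agreed on.
func natStatus(ruleTypes map[string]bool) string {
	var kinds []string
	for kind, inUse := range ruleTypes {
		if inUse {
			kinds = append(kinds, kind)
		}
	}
	sort.Strings(kinds) // deterministic order so status comparisons are stable
	return strings.Join(kinds, ",")
}

func main() {
	fmt.Println(natStatus(map[string]bool{"snat": true, "dnat": true})) // dnat,snat
	fmt.Println(natStatus(map[string]bool{"fip": true}))                // fip
}
```

A combined value like this would also need a lifecycle: when the last rule of one type is deleted, the controller would have to recompute the string rather than clear it outright.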

@zbb88888
Collaborator

  1. An EIP used by an fip cannot be shared with another fip.

  2. An EIP used by an snat can be shared with any other nat.

  3. An EIP used by a dnat can be shared with any other nat.

  4. An EIP in use by any nat should be protected from deletion.

  5. Remove the EIP annotation NAT.

  6. Support binding an EIP IP-address label to eip/fip/dnat/snat to help check conflicts.
     This uses an indirect, IP-based association, which avoids labels built from over-long resource names and replaces the inefficient filter/list operations of the current annotation-based association:
     iptables-eip <--> eip v4ip <--> nat
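The sharing rules above can be sketched as a small predicate (a hypothetical helper, not the kube-ovn implementation, assuming the literal reading of rule 1 where only a second fip is rejected):

```go
package main

import "fmt"

// canShareEIP checks whether a new NAT rule may attach to an EIP that an
// existing rule already uses, following the proposed sharing rules.
// Hypothetical sketch for discussion, not the actual conflict check.
func canShareEIP(existingNat, newNat string) bool {
	// Rule 1: an EIP already used by an fip cannot take another fip.
	if existingNat == "fip" && newNat == "fip" {
		return false
	}
	// Rules 2 and 3: snat and dnat may share the EIP with any other nat.
	return true
}

func main() {
	fmt.Println(canShareEIP("snat", "dnat")) // true: the case reported in this issue
	fmt.Println(canShareEIP("fip", "fip"))   // false
}
```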

