```diff
- // Every AdminNetworkPolicy should have a unique priority value; if two (or more)
- // policies with the same priority could both match a connection, then the
- // implementation can apply any of the matching policies to the connection, and
- // there is no way for the user to reliably determine which one it will choose.
+ // If two (or more) policies with the same priority could both match a connection,
+ // then the implementation can apply any of the matching policies to the
+ // connection, and there is no way for the user to reliably determine which one it
+ // will choose. Administrators must be careful about assigning the priorities for
+ // policies with rules that will match many connections, and ensure that policies
+ // have unique priority values in cases where ambiguity would be unacceptable.
```
This removes the current ambiguity, where we tell people not to use the same priority even though we do allow them to use the same priority.
What happened:
This was discussed during KubeCon EU London 2025.
what to do about explicitly conflicting ANPs (“priority=100, drop” vs “priority=100, accept”)
[Shaun]
Could adopt Calico's tie-breaker: the policy with the lowest-sorting name wins(!) 🙂. We try to maintain the invariant that restarting Calico won't churn the dataplane (and that two nodes will make the same policy decisions), so we enforce a deterministic tie-breaker.
In practice, I think there's a lot of utility in allowing policies that share a priority and apply to non-overlapping groups of pods. We see customers doing this a lot: they break policies up into groups at priorities 100, 200, 300, … and leave room for adding new groups in between later.
I don't think this type of conflict happens very often. I think there are two models of policy:

1. Perhaps some "early" deny rules to block "bad" traffic from coming in, then lots of allow/pass policies, with a default deny policy at the end of the tier. No conflicts arise because the deny/allow/final-deny policies sit in different "bands" of priorities within the tier.
2. A policy per group of pods, where each policy has allow/deny rules that work together to express a more complex match: "allow to this k8s service and this CIDR, but drop to these sensitive sub-CIDRs".
To recap the technical challenge with detecting "real" conflicts:
Labels can change at runtime, so you might apply a policy and walk away with no conflicts. Then a new pod gets created, or the labels on one or more pods are updated, and only at that point does the conflict arise.
If we do try to block it, I think it has to be the API server rejecting duplicate priorities. Otherwise the problem won't be obvious to the user. A status that says "this policy won't be enforced because another policy with the same priority was created first" feels like a worse recipe for security holes!
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?: