We noticed that when using the Env Injector, a service deployment with 10 replicas pulling 5 secrets from an AKV generates 50 calls to the AKV during scale-up. Since every replica fetches the same 5 values, it would be great if the Env Injector could recognize that it has already pulled a value within a certain sliding window and use the cached value instead.
This would reduce the risk of AKV throttling when too many calls are made within a short time window (especially when many secrets are pulled and many replicas are spawned at once).
It seems a sliding-window cache of 10 seconds, aligned with Microsoft's own "2,000 calls per 10 seconds" throttling limit, would allow us to scale horizontally as much as we want without having to worry about socket exhaustion or throttling failures. Not to mention, the reduced number of HTTPS calls would positively impact pod startup performance.
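To make the idea concrete, here is a minimal sketch in Go of what such a cache could look like. This is not part of the current codebase: the names (`secretCache`, `newSecretCache`, `get`) are hypothetical, it uses a fixed TTL as an approximation of the sliding window, and it assumes the cache lives in a long-running shared component (e.g. the injector webhook) rather than in each pod's short-lived init process, since a per-pod cache would not deduplicate calls across replicas.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cachedSecret holds a secret value together with the time it was fetched.
type cachedSecret struct {
	value     string
	fetchedAt time.Time
}

// secretCache is a hypothetical TTL cache keyed by vault URL + secret name.
// Entries older than ttl are considered stale and refetched from AKV.
type secretCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]cachedSecret
}

func newSecretCache(ttl time.Duration) *secretCache {
	return &secretCache{ttl: ttl, entries: make(map[string]cachedSecret)}
}

// get returns the cached value if it is still inside the TTL window;
// otherwise it calls fetch (which would perform the real AKV GET) and
// stores the result. Holding the mutex across fetch also serializes
// concurrent lookups of the same key, so a burst of replicas starting
// at once produces a single AKV call per secret.
func (c *secretCache) get(key string, fetch func() (string, error)) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()

	if e, ok := c.entries[key]; ok && time.Since(e.fetchedAt) < c.ttl {
		return e.value, nil // cache hit: no AKV call
	}

	value, err := fetch() // cache miss: one AKV call
	if err != nil {
		return "", err
	}
	c.entries[key] = cachedSecret{value: value, fetchedAt: time.Now()}
	return value, nil
}

func main() {
	cache := newSecretCache(10 * time.Second)

	// Simulate 10 replicas scaling up and requesting the same secret.
	calls := 0
	for i := 0; i < 10; i++ {
		_, _ = cache.get("myvault/db-password", func() (string, error) {
			calls++ // stands in for a real AKV GET
			return "s3cret", nil
		})
	}
	fmt.Printf("AKV calls made: %d\n", calls) // prints 1, not 10
}
```

With a 10-second TTL matching the throttling window, the 10-replica / 5-secret scale-up above would drop from 50 AKV calls to 5.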
We could solve this by going the Kubernetes Secrets route with the configuration module. The downside is that a third-party MSP manages our Kubernetes cluster, so exposing the AKV secrets as Kubernetes Secrets might create compliance/security issues: we would have to manage not only access to AKV and the pods' environment, but also access to the Kubernetes Secrets themselves, which gets tricky when an MSP administers your environment.