Potential memory leak in WebMvcLinkBuilder
#1701
Comments
FYI: The headline is supposed to be "Potential memory leak in WebMvcLinkBuilder and WebFluxLinkBuilder".
@reda-alaoui – Would you mind opening a separate ticket for that problem? It looks like we could just switch from the type itself to the fully qualified type name of the class to end up in the cache key.
I'll look into the memory leak ASAP. Regarding the comments on the caching: I am happy to tweak things further or in a different direction, but only if the suggestions made are backed by improved numbers from a JMH benchmark submitted for review, as it's easy to over-optimize for a very particular use case and – by doing that – significantly impact other, likely also very common, use cases.
We now constrain the cache of controller proxy instances created via DummyInvocationUtils.methodOn(Class<?>, Object…) to 256 elements using Spring's ConcurrentLruCache. The parameters are part of the cache key and are used to expand the type-level mappings. If those vary for each call and a request creates a lot of links (>100000), the memory consumption grows significantly and, more importantly, indefinitely. Using the ThreadLocal still makes sure that the cache is local to the current request, so the proxies can actually be reused; otherwise, the method invocations used to record the mappings would interfere across concurrent requests. Also removed an obsolete generic parameter on the CacheKey type.
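The cache described in this commit can be sketched with stdlib types only. The following is a minimal, illustrative size-bounded LRU cache in the spirit of Spring's ConcurrentLruCache; the class and method names are made up for the sketch and are not the actual spring-hateoas implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative stand-in for Spring's ConcurrentLruCache: a size-bounded,
// access-ordered map that evicts the least recently used entry on overflow.
class BoundedProxyCache<K, V> {
    private final Map<K, V> cache;
    private final Function<K, V> generator;

    BoundedProxyCache(int sizeLimit, Function<K, V> generator) {
        this.generator = generator;
        // accessOrder = true turns LinkedHashMap into an LRU structure
        this.cache = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > sizeLimit;
            }
        };
    }

    synchronized V get(K key) {
        V value = cache.get(key);
        if (value == null) {
            value = generator.apply(key);
            cache.put(key, value); // may evict the eldest entry
        }
        return value;
    }

    synchronized int size() {
        return cache.size();
    }
}
```

Wrapping such a cache in a ThreadLocal, as the commit describes, keeps it local to the request-handling thread, while the 256-element bound caps its growth.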
The commits for #467 significantly degraded performance, as CachingMappingDiscoverer.getParams(Method) didn't properly cache the result of the call, which caused quite expensive, unnecessarily repeated annotation lookups for the very same method. We also avoid the creation of an Optional instance for the sole purpose of a simple null check. Introduced FormatterFactory to cache the to-String formatting functions and thus avoid repeated evaluation and Function object creation. We now also avoid the creation of ParamRequestCondition instances if no @RequestMapping(params = …) values could be found in the first place.
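The caching fix described here amounts to memoizing an expensive per-method computation. A hedged sketch of that pattern — the Memoizer name and the miss counter are illustrative, not the actual CachingMappingDiscoverer code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Illustrative memoizer in the spirit of the fix: the expensive computation
// (an annotation lookup per Method in the real code) runs once per key;
// all later calls for the same key return the cached result.
class Memoizer<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> compute;
    private final AtomicInteger misses = new AtomicInteger();

    Memoizer(Function<K, V> compute) {
        this.compute = compute;
    }

    V get(K key) {
        return cache.computeIfAbsent(key, k -> {
            misses.incrementAndGet(); // expensive path actually taken
            return compute.apply(k);
        });
    }

    int misses() {
        return misses.get();
    }
}
```

Without such a cache, every link rendered for the same controller method repeats the full lookup — exactly the regression the benchmark uncovered.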
I've limited the number of instances held in the cache using a ConcurrentLruCache. The provided sample keeps a constant memory load now. That said, it's a very atypical scenario, as requests usually do not produce tens of thousands of links per request, and the memory pressure should be resolved by the size limit. I also took the chance to run some benchmarks, and it turned out that the fix for #467 introduced a severe performance regression that I've now fixed by adding proper caching to the expensive (and superfluous) annotation lookups. Backported to 1.4.x via #1723, too.
Hi @odrotbohm,
The problem manifested as a classical out-of-memory error after days or weeks of our servers running. By analysing the heap dump I could see that each HTTPS execution thread of Tomcat (in Spring Boot) had a ThreadLocal instance of the cache hash map containing tens of thousands of proxy instances. Each cache entry accounted for about 1–2 KB of memory. In total we had 300–400 MB of heap occupied by the caches. Since our JVMs are limited to 1 GB, this slowly ate up the free memory, causing additional GCs, especially full GCs (G1 Old). Those GCs could not free up memory, as the cache was not built on weak references, so the problem did not go away. Eventually the GC CPU load reached 50% and OOMs happened.

The reason for this is quite simple. We have a customer base of a few million customers, and a few endpoints built their links using the type-level template variable of a controller like this:

```java
@RestController
@RequestMapping("/api/customers/{customerId}")
public class Controller {
    // ...
}
```

```java
Link link = linkTo(methodOn(Controller.class, customerId).getResource(null)).withSelfRel();
```

Since customer requests are not pinned to a certain HTTPS execution thread, each customer interacting with our API caused entries in more or less each thread-local cache. The same link can be built without type-level parameters instead:

```java
Link link1 = linkTo(methodOn(Controller.class).getResource(customerId)).withSelfRel();
```

But we still have at least one endpoint which uses the first variant.
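The blow-up reported here can be reproduced in miniature: with a ThreadLocal cache, every worker thread accumulates its own copy of the entries, so the total footprint is roughly threads × distinct cache keys. The demo class, key format, and numbers below are illustrative, not the actual spring-hateoas internals:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Miniature reproduction of the reported pattern: one cache map per thread
// means each worker ends up holding its own copy of the proxy entries.
class ThreadLocalCacheDemo {
    static final ThreadLocal<Map<String, Object>> CACHE =
            ThreadLocal.withInitial(HashMap::new);

    static int totalEntries(int threads, int customers) {
        AtomicInteger total = new AtomicInteger();
        List<Thread> workers = new ArrayList<>();
        for (int t = 0; t < threads; t++) {
            Thread worker = new Thread(() -> {
                // Requests are not pinned to a thread, so over time every
                // thread caches a proxy for (more or less) every customer.
                for (int customerId = 0; customerId < customers; customerId++) {
                    CACHE.get().put("Controller/" + customerId, new Object());
                }
                total.addAndGet(CACHE.get().size());
            });
            workers.add(worker);
            worker.start();
        }
        for (Thread w : workers) {
            try {
                w.join();
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
        }
        return total.get();
    }
}
```

With a realistic 200-thread pool and millions of customers, the multiplication above explains the hundreds of megabytes observed in the heap dump.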
@odrotbohm
The problem did not occur because we generated a lot of links in a single request; that is not the case, as explained in my previous post. With that in mind, the cache size could in my opinion be bigger than the 256 entries used, as users usually interact with the API for a longer time in their user sessions, e.g. in a customer self-service portal. Allowing the caching across multiple requests might be beneficial for longer user sessions.
The usage of

```java
<T> T methodOn(Class<T> controller, Object... parameters)
```

causes a memory leak when used with varying parameters.

Root cause: methodOn uses DummyInvocationUtils to generate a proxy for recording a method call. The proxy is then cached for later usage, keyed by the unique combination of the controller class and the provided parameters. According to the documentation of methodOn(), the parameters are used to define template variables on the type level.

Example:
With a controller like this it is possible to create the link to the controller method with two approaches:
A developer might be tempted to use the first variant as the template variable is defined on the type level and not on the method level. The first option causes a memory leak, the second option is totally safe. The generated link is exactly the same in both options.
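The difference between the two options boils down to cache-key cardinality. Below is a small simulation of the key shape described above, i.e. (controller class, parameters); the CacheKey record and class names are hypothetical illustrations, not the actual spring-hateoas types:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical stand-in for the proxy cache key: the unique combination of
// controller class and the parameters passed to methodOn.
record CacheKey(Class<?> controller, List<Object> parameters) {}

class KeyCardinality {
    // Counts how many distinct cache entries N customers produce.
    static int distinctKeys(int customers, boolean customerIdInParameters) {
        Set<CacheKey> cache = new HashSet<>();
        for (int customerId = 0; customerId < customers; customerId++) {
            List<Object> parameters = customerIdInParameters
                    ? List.of(customerId)   // like methodOn(Controller.class, customerId)
                    : List.of();            // like methodOn(Controller.class)
            cache.add(new CacheKey(Object.class, parameters));
        }
        return cache.size();
    }
}
```

With per-customer parameters the cache grows linearly with the customer base; without them it stays at a single entry, which is why only the first variant leaks.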
Test case:
spring-hateoas-memory-leak.zip
Workaround: never use methodOn() with parameters.

Solution: the cache should be limited in size using an LRU strategy and potentially shared between threads to increase the hit ratio and reduce the memory footprint.