Memory leak on v20231217 #429

I have a small VDS with 1 GB of memory (https://asciinema.tw1.ru). I installed asciinema-server v20231217 for personal use. The server has no real load, just a few requests per day, but roughly every two days an OOM kill occurs.

The memory usage graph looks like this:


Comments
v20240203 also has this issue: the memory usage of the beam.smp process is constantly increasing.
Thanks for the report. Great that you have the graph; this indeed looks like a leak. I'll try to reproduce it locally.
@ku1ik thanks for the feedback and for this wonderful project. If you need more information, logs, etc., I can provide them as well.
I can confirm that this indeed happens. I've been running the server for about 4 days now and I've noticed the memory usage steadily climbing. I restarted the container once to see if it was just a fluke, but the memory usage is still climbing like before. I'll probably set up a cron job to restart it at midnight every day to work around this until it's patched. Also: there's no usage at all. I have a single account on this instance, there are no recordings (apart from the welcome one), no recordings are shared so no websites are loading them, and I haven't used it for about 2 days now, but memory usage is still going up. If you're interested I can run some tests, etc., as it's a personal instance where downtime won't annoy anyone.
I just modified my view so that I can filter containers more easily, and I also found a way to capture non-visible parts of a website in Firefox (the built-in screenshot tool is still broken), so here's a better screenshot. You can also see the network usage to get an idea of how little this instance is used. Note: you'll see multiple container names in the view.
Does the memory usage grow indefinitely, or does it top out at some value, then go down and back up again?
So far it grows indefinitely, until I restart it.
Or until the OOM killer kills the process that's eating up all the memory.
Thanks. Yeah, the problem is definitely there. I'm trying to pinpoint it now.
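(An aside for anyone debugging a similar symptom: a lot of BEAM memory growth can be narrowed down from a remote IEx shell by checking which processes and ETS tables are holding the memory. A rough diagnostic sketch only; how you attach the shell depends on your deployment, e.g. something like `bin/asciinema remote` for a standard Elixir release.)

```elixir
# From an attached IEx shell on the running server.

# Overall BEAM memory breakdown (processes, ETS, binaries, ...).
:erlang.memory()

# Top 5 processes by memory usage.
Process.list()
|> Enum.map(fn pid -> {pid, Process.info(pid, :memory)} end)
|> Enum.reject(fn {_pid, info} -> is_nil(info) end)
|> Enum.sort_by(fn {_pid, {:memory, bytes}} -> -bytes end)
|> Enum.take(5)

# Top 5 ETS tables by memory (reported in words, not bytes).
:ets.all()
|> Enum.map(fn tab -> {tab, :ets.info(tab, :memory)} end)
|> Enum.reject(fn {_tab, words} -> words == :undefined end)
|> Enum.sort_by(fn {_tab, words} -> -words end)
|> Enum.take(5)
```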
I see that around the time this started, this release came out: https://github.com/asciinema/asciinema-server/releases/tag/v20231216 Maybe that could be the culprit?
I think it's the built-in prometheus endpoint.
That's likely it: beam-telemetry/telemetry_metrics_prometheus_core#52
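(A quick way to test that hypothesis from an attached IEx shell is to force a scrape of the metrics endpoint and see whether memory shrinks afterwards. A rough sketch only; it assumes the exporter is listening on its default port 9568, which is mentioned later in the thread.)

```elixir
# Compare total VM memory before and after fetching /metrics once.
{:ok, _} = Application.ensure_all_started(:inets)

before_bytes = :erlang.memory(:total)

# Force a scrape (default exporter address assumed).
{:ok, _response} = :httpc.request(:get, {~c"http://localhost:9568/metrics", []}, [], [])

# Let processes garbage-collect, then compare.
Enum.each(Process.list(), &:erlang.garbage_collect/1)
after_bytes = :erlang.memory(:total)

IO.puts("before: #{before_bytes} bytes, after scrape: #{after_bytes} bytes")
```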
I'll let this run overnight to see how the memory usage behaves and whether opening the dashboard causes the memory to be freed.
I did a quick test before I went to sleep, and after opening the dashboard the memory usage dropped. I can confirm that the admin panel feature is the cause here. Note: RSS memory usage didn't go down. This may be caused by my 2 GB of swap, which is about 80% free most of the time. From reading the upstream issue it looks like the metrics are not flushed from memory until they're loaded by the panel. Since the metrics are static data, it would make sense for Linux to move them to swap.
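(For anyone who needs a stopgap before upgrading: if the buffered metrics really are only flushed when something reads them, a small process that scrapes the endpoint itself on an interval should keep the buffer from growing. A sketch only, assuming the exporter's default localhost:9568/metrics address; `MetricsFlusher` is an invented name, and it would need to be added to the app's supervision tree.)

```elixir
defmodule MetricsFlusher do
  @moduledoc "Periodically fetches the local metrics endpoint so buffered data gets flushed (stopgap sketch)."
  use GenServer

  @interval :timer.minutes(5)
  @url ~c"http://localhost:9568/metrics"

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  @impl true
  def init(nil) do
    {:ok, _} = Application.ensure_all_started(:inets)
    schedule()
    {:ok, nil}
  end

  @impl true
  def handle_info(:flush, state) do
    # Best effort: ignore failures, the next tick will try again.
    _ = :httpc.request(:get, {@url, []}, [timeout: 5_000], [])
    schedule()
    {:noreply, state}
  end

  defp schedule, do: Process.send_after(self(), :flush, @interval)
end
```

The nightly container restart mentioned earlier in the thread works just as well; this only avoids the downtime.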
I'll release a new version with the fix soon.
Would there be a config option for it? If I were able to monitor things like the number of recordings, etc., I might integrate it into my monitoring stack.
The built-in prometheus endpoint provided the following stats: asciinema-server/lib/asciinema/telemetry.ex, lines 48 to 76 at commit b3a852d.
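(The embedded snippet from telemetry.ex isn't reproduced above. Purely as an illustration of the kind of list it refers to, here is what a typical Telemetry.Metrics definition looks like; the specific metric names are assumptions, not the actual contents of the file.)

```elixir
# Illustrative only; not the contents of lib/asciinema/telemetry.ex.
import Telemetry.Metrics

[
  # BEAM VM stats, emitted periodically by :telemetry_poller.
  last_value("vm.memory.total", unit: :byte),
  last_value("vm.total_run_queue_lengths.total"),
  last_value("vm.total_run_queue_lengths.cpu"),

  # HTTP request stats from the Phoenix endpoint.
  counter("phoenix.endpoint.stop.duration"),
  summary("phoenix.endpoint.stop.duration", unit: {:native, :millisecond}),

  # Database query timing from Ecto (event prefix depends on the repo name).
  summary("asciinema.repo.query.total_time", unit: {:native, :millisecond})
]
```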
I removed it for now to fix the leak, especially since it was undocumented and nobody used it (including me). I may re-add it in the future, with some more useful stats and with an explicit config option to enable it.
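(If it does come back behind a switch, an opt-in flag usually amounts to a conditional child in the supervision tree. A minimal sketch; the module, flag, and env var names below are invented for illustration, not a documented asciinema-server setting.)

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children =
      [
        # ...the app's usual children (Repo, Endpoint, ...)...
      ] ++ prometheus_children()

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end

  # Start the exporter only when explicitly enabled.
  defp prometheus_children do
    if System.get_env("PROMETHEUS_ENABLED") in ~w(true 1) do
      # Metrics list omitted here; see the illustration above.
      [{TelemetryMetricsPrometheus, metrics: []}]
    else
      []
    end
  end
end
```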
Yeah, it was weird for me when I saw it in the logs. And these stats are useless for me since I gather these things another way: CPU, memory, and network Rx/Tx are collected by Docker and cAdvisor, and HTTP requests are collected by Traefik.
Both the dashboard and the prometheus endpoint were added in v20231216. The admin panel is still there, and it was not the cause of the leak. It's a basic dashboard with some metrics, but it doesn't use any extra resources (not noticeably) and it doesn't hurt when it's not used. The problem was the prometheus endpoint.
How is the prometheus endpoint exposed?
@jiriks74 this one runs on its own port, 9568.
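(In other words, the exporter starts its own HTTP listener rather than mounting a route on the main Phoenix endpoint, which is why it appears on a separate port. A minimal sketch, assuming the commonly used TelemetryMetricsPrometheus wrapper, where 9568 is the default and can be overridden.)

```elixir
# The reporter runs its own HTTP server; /metrics is not part of the Phoenix router.
children = [
  # Defaults to http://localhost:9568/metrics; pass :port to change it.
  {TelemetryMetricsPrometheus, metrics: [], port: 9568}
]

Supervisor.start_link(children, strategy: :one_for_one)
```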
@jiriks74 the new release is out (docker image); I'm just wrapping up the release notes.
Thanks <3
The new release seems to be working. It's only been about 30 minutes, but I can confirm that I no longer see the memory leak trend I saw before. Note: the spike on the second graph is when I updated to the new release and started the server. Both graphs cover a 30-minute interval.
Hello, here's my last comment on this, I promise. Since memory leaks can sometimes be a pain, I kept monitoring to see whether anything would come up. I can now confirm that there hasn't been any significant memory growth over the last 24 hours. Below is a graph covering 48 hours, to make the difference between the old and new releases easier to see.
Thank you @jiriks74! I'm observing the same on asciinema.org, so we can safely assume the problem has been solved. Thanks for the assistance! 🤝