file descriptor leak #170

Open
marcbrevoort-cyberhive opened this issue Mar 5, 2021 · 2 comments · May be fixed by #387
Labels: bug (Something isn't working)


marcbrevoort-cyberhive commented Mar 5, 2021

Currently boringtun leaks file descriptors. If we add this to src/device/integration_tests/mod.rs:

    #[test]
    /// Test if wireguard leaks resources on closing
    fn test_fd_leaks() {
        let n_before = count_file_descriptors_currently_in_use();
        let wg = WGHandle::init("192.0.2.0".parse().unwrap(), "::2".parse().unwrap());
        let response = wg.wg_get();
        assert!(response.ends_with("errno=0\n\n"));
        drop(wg);  // call destructor
        let n_after = count_file_descriptors_currently_in_use();
        assert_eq!(n_before, n_after);
    }

This test will fail.
Comments:

  • count_file_descriptors_currently_in_use() would count only file descriptors in use by the current process.
  • The implementation of count_file_descriptors_currently_in_use() is platform-specific, but a naive implementation for Linux would be:
pub fn count_file_descriptors_currently_in_use() -> u16 {
    use std::process::{self, Command};
    // Every entry in /proc/<pid>/fd is one descriptor open in this process.
    let path = format!("/proc/{}/fd/", process::id());
    let output = Command::new("ls")
        .args(&["-l", &path])
        .output()
        .expect("failed to get fd info");
    let stdout = String::from_utf8_lossy(&output.stdout);
    // `ls -l` prints a "total" header line, so subtract one from the count.
    // The pipes created by Command::output() add a small constant bias,
    // which cancels out when comparing before/after counts.
    stdout.lines().count().saturating_sub(1) as u16
}
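
For comparison, here is a variant of the same idea that avoids spawning a subprocess by reading /proc/self/fd directly. The function name is just an illustrative alternative, not anything from the codebase; note that read_dir itself holds a descriptor open while iterating, which is another constant bias that cancels out in before/after comparisons:

pub fn count_file_descriptors_via_readdir() -> u16 {
    // Each entry in /proc/self/fd is one open descriptor. read_dir opens
    // a descriptor on the directory itself, biasing the count by a
    // constant, which is harmless when comparing before/after values.
    std::fs::read_dir("/proc/self/fd")
        .expect("failed to read /proc/self/fd")
        .count() as u16
}
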
vkrasnov self-assigned this Mar 8, 2021
marcbrevoort-cyberhive (Author) commented

Having investigated this a bit further, the main leak appears to be in epoll/kqueue, where file descriptors in use by the queue aren't freed when the queue is dropped.
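
A fix along those lines would close the queue's descriptors in its Drop implementation. A minimal sketch, assuming a hypothetical EventPoll struct that owns a raw epoll descriptor and that the libc crate is available (boringtun's actual types may differ):

use std::os::unix::io::RawFd;

/// Hypothetical stand-in for the epoll-based event queue.
pub struct EventPoll {
    epoll_fd: RawFd,
}

impl Drop for EventPoll {
    fn drop(&mut self) {
        // Close the epoll descriptor when the queue is dropped, so it
        // is not leaked for the remaining lifetime of the process.
        unsafe { libc::close(self.epoll_fd) };
    }
}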

Noah-Kennedy added the bug label Feb 14, 2022
thomasqueirozb linked a pull request (#387) on Jan 20, 2024 that will close this issue

marcbrevoort-cyberhive commented Mar 17, 2025

Stop leaking file descriptors #387

While this PR tidies up file descriptors, it doesn't play nicely with Rust 1.80+, which has stricter checks on file descriptor use, resulting in the dreaded message:

fatal runtime error: IO Safety violation: owned file descriptor already closed.

Since a crash has a greater impact on the end user than a few leaked descriptors, I would recommend against merging this change until the duplicate closing of descriptors has been investigated further.
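
For context, this class of error arises when two owners end up closing the same raw descriptor. A minimal sketch of the pattern, not taken from boringtun, assuming the libc crate is available:

use std::fs::File;
use std::os::unix::io::AsRawFd;

fn main() {
    let file = File::open("/dev/null").unwrap();
    // Close the descriptor behind File's back; File still "owns" it.
    unsafe { libc::close(file.as_raw_fd()) };
    // File's Drop closes the same descriptor again. On Rust 1.80+ the
    // standard library can detect the failed close (EBADF) and abort
    // with the "IO Safety violation" message quoted above.
    drop(file);
}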

When I keep tabs on the file descriptors created through epoll and how each was created, and close everything except the ones created with new_event, the double close above does not seem to occur. This somewhat alleviates the problem.

Previously my descriptor metrics were

  • Before running a test: 6 descriptors open
  • After running a test: 32 descriptors open

When clearing all descriptors except the new_event ones, the metrics improve to

  • Before running a test: 6 descriptors open
  • After running a test: 18 descriptors open

Although this alleviates the problem somewhat, it is not yet fully solved; further investigation is needed to pinpoint the cause of the double close. It seems the descriptors created through new_event could mostly be replaced with standard threads: some of the event callbacks could perhaps be reworked with std::thread instead of file descriptors, leaving the management of the file descriptors to the compiler rather than to the developer. A rough sketch of that idea follows.
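
As an illustration, a timer-style event could be driven by a plain thread rather than a timer descriptor registered with epoll. A minimal sketch, with spawn_periodic and callback as hypothetical names, not boringtun's actual API:

use std::thread;
use std::time::Duration;

/// Runs `callback` every `interval` on a dedicated thread. No timer
/// file descriptor is involved; the thread is the only resource, and
/// it is cleaned up automatically when the process exits.
fn spawn_periodic<F>(interval: Duration, callback: F) -> thread::JoinHandle<()>
where
    F: Fn() + Send + 'static,
{
    thread::spawn(move || loop {
        thread::sleep(interval);
        callback();
    })
}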
