🐞 Crash when writing >2GB memory to fd_out #88
Fixed. All branches.
The fix is not proper because
Would you mind taking a look at the code and suggesting a better fix? I will reopen. Please provide additional details as you can. Thank you.
Limit the maximum size of nmemb to 0x7ffff000; the rest of the code in write_fdout will treat it as a partial write, so it is okay.

bool write_fdout(rzip_control *control, void *buf, i64 len)
{
    uchar *offset_buf = buf;
    ssize_t ret, nmemb;

    while (len > 0) {
        nmemb = len;
        if (nmemb > 0x7ffff000)
            nmemb = 0x7ffff000;
        ret = write(control->fd_out, offset_buf, (size_t)nmemb);
        /* error if ret == -1 only. Otherwise, buffer not wholly written */
        if (unlikely(ret == -1)) /* error, not underflow */
            fatal("Failed to write %'"PRId64" bytes to fd_out in write_fdout\n", nmemb);
        len -= ret;
        offset_buf += ret;
    }
    return true;
}

According to POSIX.1, if count is greater than SSIZE_MAX, the result is implementation-defined. So in order to be absolutely safe, the maximum amount of data that can be written at once would only be 32767 bytes.
This is standard for the write function. From the manpage:
Since I don't have a Mac to test this on, please run the unmodified program with
Output:
This is consistent with what's stated in macOS's write(2) manpage:
5,726,589,567 bytes is larger than 2,147,483,647 bytes.
Thank you, Nick. I am not sure it's a Mac thing though. write is returning an error. This could be a number of things. I'll check here and see. I have some really big files too. I'll wash it through the debugger and check errno. Make sure your temp directory has enough space, or set tmpdir.
The errno appears to be 22, which is Invalid argument (EINVAL) on macOS.
From macOS:
Interesting. But that's not part of the write function. That's the fatal macro that prints an error. I appreciate the work. I'll take a look sometime this week. Thank you again.
Since the breakpoint is at the start of the fatal function, and fatal is called immediately after the failed write, errno has not been overwritten yet at that point.
I am unable to duplicate on x86_64.
As you can see, on this system it works. Can you compile and run this code?
On my system, I see this:
Depending on your output, we could put an
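The exact test program isn't reproduced in this thread, so the following is only a rough sketch of the kind of check being discussed: attempt a single write() of more than INT_MAX bytes and report the return value and errno. The /dev/null target and all names here are illustrative, not taken from the actual test.

/* Sketch only: try one write() larger than INT_MAX and report what the OS does. */
#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    size_t len = (size_t)INT_MAX + 1;   /* 2^31 bytes, one more than INT_MAX; assumes 64-bit size_t */
    void *buf = calloc(1, len);
    if (!buf) {
        perror("calloc");
        return 1;
    }

    int fd = open("/dev/null", O_WRONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    ssize_t ret = write(fd, buf, len);
    if (ret == -1)
        printf("write failed: errno=%d (%s)\n", errno, strerror(errno));
    else
        printf("write returned %zd of %zu bytes\n", ret, len);

    close(fd);
    free(buf);
    return 0;
}

On Linux this should report a partial write of 0x7ffff000 bytes; on the affected macOS systems it should fail with errno 22 (Invalid argument).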
macOS 14.2.1 (arm64)
Well, I hate kludges. And I cannot explain why the arm64 behaves differently. On my system:
The libc and all other system libraries and frameworks on macOS are linked together into files called the “dyld shared cache”. It's not a cache, despite its name.
I'll see what I can do. I can pretty much guarantee an
GitHub Actions' macOS runner is x86_64, and write() there doesn't work for sizes bigger than INT_MAX either: https://github.com/asdfugil/macos-write-test/actions/runs/7655319378/job/20861107940
I'll plug in your code, but limit it to APPLE so it won't impact other systems where this works properly. |
… 0x7fff0000. (Nick Chan). Normally, write() will write at most 2G at a time, and continue until the entire buffer is written. However, as @asdfugil pointed out, if nmemb > 0x7fff0000, write() on macOS will return error 22 (Invalid argument). This commit limits nmemb to 0x7fff0000.
Fix proposed in branch whats-next. Please review and comment.
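The commit itself isn't shown here, so the following is only a minimal sketch of the platform-gated cap it describes; MAX_WRITE_SIZE is an illustrative name, not necessarily the one used in lrzip-next.

/* Sketch only: cap each write() call on macOS, where counts above INT_MAX
 * fail with EINVAL. Elsewhere, 0x7ffff000 simply mirrors the per-call cap
 * the Linux kernel applies anyway. */
#ifdef __APPLE__
# define MAX_WRITE_SIZE 0x7fff0000
#else
# define MAX_WRITE_SIZE 0x7ffff000
#endif

    /* inside the while loop of write_fdout shown earlier */
    nmemb = len;
    if (nmemb > MAX_WRITE_SIZE)
        nmemb = MAX_WRITE_SIZE;
    ret = write(control->fd_out, offset_buf, (size_t)nmemb);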
It doesn't work because the correct macro is
Because I am an idiot. I manually changed the APPLE macro so I could test, and just forgot to undo it. Dumb; I have no idea what I was thinking. Thank you for taking a look. Pushing an update.
I can confirm that it all works on Mac now.
lrzip-next Version
0.9
lrzip-next command line
lrzip-next -t | -d file.lrz
What happened?
In function write_fdout(), large (> 2 GB) buffers fail on the first write. This should not happen. The write() function has a limit of 2 GB but returns the number of bytes written. If there is no error, write_fdout() should continue, using the ret value to reduce the number of bytes remaining to write. Lines 588 and 589 are not correct in producing a fatal outcome. Also, the use of the formatting parameter PRId32 is not correct, since ssize_t is a 64-bit int. Also, the fatal call is not correct unless an error is set; otherwise it should just report the bytes written, or nothing, and continue.

What was expected behavior?
The write() function should continue until all bytes are written. The fatal call is not correct.

Steps to reproduce
Decompress file > 2GB uncompressed.
Relevant log output