fs: enable chunked reading for large files in readFileHandle #56022
Conversation
I wonder if I should apply this sorting here as well; in addition, there are still places that use …
Codecov Report: all modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main   #56022      +/-   ##
==========================================
+ Coverage   90.22%   90.24%   +0.01%
==========================================
  Files         629      629
  Lines      184847   184844       -3
  Branches    36207    36207
==========================================
+ Hits       166780   166813      +33
+ Misses      11015    11011       -4
+ Partials     7052     7020      -32
```
I removed the limit test because the 2 GiB limit on reading large files with fs.readFile has been removed.
Force-pushed from 19dc0c4 to 6c85d68.
Force-pushed from 6c85d68 to ed82387.
I added the tsc label to discuss whether we want to allow users to read such big files into memory, or whether it would be better to point them toward streams instead.
lgtm
The error is removed from the promise version, but the same change is missing from the callback readFile implementation. Due to that, the error itself would no longer be needed and also has to be removed.
This has to be addressed before we can land this.
We discussed in the TSC meeting that it's not a good idea to read beyond that limit, although it's acceptable in some cases.
We also discussed emitting a warning when that limit is reached instead. We did not yet reach consensus, but we'll discuss it again next week to finalize the decision.
Just wondering what the next action here is, @BridgeAR.
@gireeshpunathil I believe you wanted to think about the warning again. I kept my change request since the implementation should also include the callback version next to the warning. That's currently missing :)
Thanks a lot for your comments. Unfortunately, I saw this thread late; I have now pushed a commit for the callback version.
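For illustration only, a rough sketch of how the warning could be mirrored in the callback fs.readFile() path (the function shape and names here are hypothetical, not the PR's actual diff):

```js
// Hypothetical sketch: after the stat step in the callback path,
// emit the same warning before allocating the full-size buffer.
// `callback` and `kIoMaxLength` are assumed from the surrounding code.
function onStat(err, stats) {
  if (err) return callback(err);
  if (stats.size > kIoMaxLength) {
    process.emitWarning(
      `fs.readFile() is reading a huge file into memory (${stats.size} bytes); ` +
      'consider fs.createReadStream() instead.',
    );
  }
  // ... allocate the buffer and continue with the read as before
}
```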
The original reason for the limit was, AFAIK, that some systems on some versions were not able to read that file size in one go. If that's still the case, we'd have to handle chunking on our side. I'm just not certain whether that still applies or whether it's a legacy issue.
Please also add the warning instead of the error.
I thought we were going to continue the discussion in the TSC on the necessity of the warning, as we didn't converge on that, IIRC.
Please add the warning to all occurrences and test for the warning.
lgtm
This test was successful on my local Linux machine.
```js
if (size > kIoMaxLength) {
  process.emitWarning(
    `Warning: Detected \`fs.readFile()\` to read a huge file in memory (${size} bytes). Please consider using \`fs.createReadStream()\` instead to minimize memory overhead and increase the performance.`,
  );
}
```
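For comparison, a minimal sketch of the streaming approach the warning recommends (the file path is a placeholder):

```js
const { createReadStream } = require('node:fs');

// Process the file chunk by chunk instead of buffering it whole.
const stream = createReadStream('/path/to/huge-file.bin');
stream.on('data', (chunk) => {
  // handle each chunk incrementally (hash, parse, forward, ...)
});
stream.on('end', () => {
  console.log('finished reading');
});
```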
Thanks, I tried to use the ERR_FS_FILE_TOO_LARGE error code; do you think this is correct?
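For context, a sketch of the kind of guard that error code backed before this PR (illustrative only; `kIoMaxLength` is the internal ~2 GiB cap):

```js
// Roughly the previous behavior: reject reads beyond the cap
// instead of warning and reading in chunks.
if (size > kIoMaxLength)
  throw new ERR_FS_FILE_TOO_LARGE(size);
```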
```js
function createVirtualLargeFile() {
  return Buffer.alloc(LARGE_FILE_SIZE, 'A');
}
```
This is quite a large allocation that may fail on some of our more resource-challenged builders. I believe there's a utility in test/common for skipping when there's not enough available memory? I can't recall exactly what it is, but we should likely be safer here.
Thanks, I found this method: enoughTestMem.
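A sketch of how that could be used at the top of the test, assuming the enoughTestMem flag exported by test/common:

```js
'use strict';
const common = require('../common');

// Skip on resource-constrained builders rather than failing the large
// Buffer allocation below.
if (!common.enoughTestMem)
  common.skip('insufficient memory for the large-file allocation');
```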
```js
const virtualFile = createVirtualLargeFile();

await writeFile(FILE_PATH, virtualFile);
```
It's also possible that the filesystem on test runners could completely run out of space with a file this large.
I added a check for this place too, thanks for the suggestion.
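As a sketch of such a guard (hypothetical, not the PR's exact code), the test could skip instead of failing when the write runs out of disk space:

```js
// Hypothetical guard: bail out gracefully if the runner's filesystem
// cannot hold the multi-gigabyte fixture.
try {
  await writeFile(FILE_PATH, virtualFile);
} catch (err) {
  if (err.code === 'ENOSPC')
    common.skip('not enough disk space for the large test file');
  throw err;
}
```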
Co-authored-by: Ruben Bridgewater <[email protected]>
Added chunked reading support to readFileHandle to handle files larger than 2 GiB, resolving size limitations while preserving existing functionality.
#55864
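For readers of this thread, a rough sketch of the chunked-read idea (not the PR's exact implementation; `kIoMaxLength` stands in for Node's internal per-read cap):

```js
// Read a whole file through a promises FileHandle by issuing
// multiple reads of at most kIoMaxLength bytes each.
async function readWholeFile(handle) {
  const { size } = await handle.stat();
  const buffer = Buffer.allocUnsafe(size);
  let offset = 0;
  while (offset < size) {
    const length = Math.min(size - offset, kIoMaxLength);
    const { bytesRead } = await handle.read(buffer, offset, length, offset);
    if (bytesRead === 0) break; // the file shrank while reading
    offset += bytesRead;
  }
  return offset === size ? buffer : buffer.subarray(0, offset);
}
```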