🐞 Unable to hash password due to lack of memory #165
Comments
This is not a bug. SCRYPT hashing is intended to defeat large scale cracking operations. From Wikipedia: https://en.wikipedia.org/wiki/Scrypt
Your system does not have enough RAM to process. There's probably a way to anticipate memory requirements, but I don't know it. For your case, try NOT using encryption. Thank you for the report, but I am closing because you have a system with insufficient RAM.
Closing.
I did a little research. This site explains the memory requirements for SCRYPT key derivation, which depend on the chosen cost parameters.
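For reference, scrypt's working memory is dominated by its internal V array, roughly 128 * N * r bytes (RFC 7914). A minimal sketch of that estimate, assuming r = 8 as in the figures quoted later in this thread (lrzip-next's exact parameter choices are an assumption here, not taken from its source):

#include <stdio.h>
#include <stdint.h>

/* Rough scrypt memory estimate: the V array needs about 128 * N * r bytes
 * (RFC 7914). r = 8 is assumed; lrzip-next's actual parameters may differ. */
int main(void)
{
    const uint64_t r = 8;
    for (int exp = 14; exp <= 27; exp++) {
        uint64_t N = (uint64_t)1 << exp;
        uint64_t bytes = 128 * N * r;
        printf("N = 2^%d -> ~%llu MiB\n", exp, (unsigned long long)(bytes >> 20));
    }
    return 0;
}

With r = 8, N = 2^14 works out to 16 MiB and N = 2^23 to 8 GiB, which matches the numbers discussed further down in the thread.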
Thank you for reporting this.
Reopening.
It works on both compression and decompression! I compiled and tested the whats-next branch. Fantastic job! Thank you!
We're not done yet. One more check has to be done in case the decompression is performed on a different system with more or less RAM. Available RAM affects the costfactor, so we have to be sure that the old costfactor already computed is used to decrypt. Otherwise an error will be reported. This involves storing the hash loops in bytes 6 and 7 of the header and then NOT trying to recompute the hash on decompression. What you have now will work fine on your system, but won't be portable. Thank you again for testing. Stay tuned.
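To make the header idea concrete, here is a hypothetical sketch only (the real lrzip-next magic header layout and code differ): the cost-factor exponent is written at compression time and read back on decompression instead of being rederived from the RAM of the decompressing machine.

#include <stdint.h>
#include <stdio.h>

#define COSTFACTOR_BYTE 6  /* assumed offset, per the discussion above */

/* Compression: record the scrypt cost-factor exponent in the header. */
static void store_costfactor(uint8_t *header, uint8_t exponent)
{
    header[COSTFACTOR_BYTE] = exponent;  /* e.g. 23 means N = 2^23 */
}

/* Decompression: use the stored exponent; do NOT recompute it from local RAM. */
static uint64_t load_costfactor_N(const uint8_t *header)
{
    return (uint64_t)1 << header[COSTFACTOR_BYTE];
}

int main(void)
{
    uint8_t header[8] = {0};
    store_costfactor(header, 23);
    printf("N = %llu\n", (unsigned long long)load_costfactor_N(header));
    return 0;
}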
Hello again! You were completely correct. I set up an Arch Linux virtual machine and gave it only 1 GB of RAM so as to test the portability of SCRYPTed files. Attempting to decompress a file that was compressed on the host (4 GB of RAM) did not work. For redundancy's sake I also tried increasing the guest's memory to 2 GB, and no cigar; it is exactly as you said! The file in question is the enwik8 file and it was compressed with the same options on both the host and the guest.
@AlleyPally, right on. Thank you for confirming. The bug is a little involved and has to do with the writing and then reading of the encrypted headers for each block. Interestingly, but not surprisingly, with no encryption, a system of any size can decompress a file created anywhere. I'll keep on this. Thank you again. Quick tip: see the man page.
@AlleyPally, I haven't forgotten about this. The issue appears to be that during decompression, the costfactor for SCRYPTing is recomputed based on available RAM and should not be. The costfactor should be stored in the magic header as it is derived on the host system. Stay tuned...
@AlleyPally Try the whats-next branch, please, and report. You will notice that I tested decompression with different memory sizes. See what you can find!
Greetings! In my testing, I noticed that a PC with less RAM is unable to decompress a file that was compressed by a PC with more RAM. However, the opposite isn't a problem; a PC with more RAM can decompress what was compressed by a PC with less RAM! I compressed a file and subsequently decompressed it with the -m option. With the guest given 1 GB of RAM, decompression was unsuccessful, and likewise with 2 GB to make sure; it spat out error 32854. However, compressing a file on the guest and then decompressing it on the host was successful. For redundancy's sake, I gave the guest 4 GB of (shared) RAM and it decompressed the file just fine!
@AlleyPally, the error makes sense. lrzip-next will store the cost factor in the header. In the report you submitted, the memory requirement implied by the stored cost factor exceeds what the guest system has available.
@AlleyPally, I made some minor changes, but the fact is, when trying to decompress on a system with less RAM than the system the encrypted archive was made on, it may fail. And that is expected behavior. I think I have taken this as far as I can.
I have re-done my previous tests to confirm, and it is like before: the guest was unable to decompress, but the host was able to decompress just fine. I cannot thank you enough for your dedication and care, mister Peter! You put a lot of effort into looking into this small issue of mine and have done a great job. I appreciate it! I believe there is nothing more to do in this case. Thank you!
Unfortunately, the only way to ensure compatibility would be to have a much lower cost factor. The default used for logins is typically N=16384. This would make the memory requirement 16MB (16384 * 128 * 8). Currently, memory requirements are 1GB plus. Is it overkill? A command line option to set the cost factor is also a consideration - e.g. N=14, where cost factor = 2^14 = 16384 and the memory requirement would be 16MB. See https://datatracker.ietf.org/doc/html/rfc7914. However, making it universally compatible across systems would require limiting N to the value that would apply to the smallest system! Thanks again!
Hello, I couldn't help reading this report, and I noticed that this issue seems to revolve around the amount of RAM available when decompressing on systems with little RAM. It was mentioned that the file seems to decompress fine when going from smaller RAM to bigger RAM, but I wonder if that holds in all cases. I have a recently built desktop with 128GB of RAM; would it be helpful to test the current latest code for decompression from 1GB or so to ~64GB, or would that be unnecessary because the problem you've pinpointed is different? I'm having a bit of trouble following the comment chain because I'm exhausted at the moment, but if it would help at all I'd be happy to do some testing over the next few days!
@Theelx, you are correct that the current issue revolves around the amount of RAM available. Obviously, the higher the cost factor, the harder it is to brute force the key. But is this really necessary? So what I am working on now, and this may take a little time, is making the cost factor configurable and portable across systems.
On a 16GB system currently, the cost factor is 2^23 (about 8M). But for a smaller system, this won't work in the master branch. Thanks for the offer to help, but this is a really low-probability error.
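As a rough illustration of the recompute-from-RAM behaviour described above, here is an assumed heuristic (not lrzip-next's actual code): take the largest exponent whose scrypt memory, 128 * 2^exp * r, still fits in about half of the detected RAM.

#include <stdint.h>
#include <stdio.h>

/* Assumed heuristic, not lrzip-next's actual code: pick the largest exponent
 * whose scrypt memory (128 * 2^exp * r, with r = 8) fits in half of RAM. */
static int costfactor_from_ram(uint64_t ram_bytes)
{
    const uint64_t r = 8;
    int exp = 14;  /* floor: N = 16384, about 16 MiB */
    while (exp < 30 && 128 * ((uint64_t)1 << (exp + 1)) * r <= ram_bytes / 2)
        exp++;
    return exp;
}

int main(void)
{
    /* A 16 GiB machine lands on 2^23 (about 8 GiB of scrypt memory), matching
     * the figure quoted above; a 1 GiB guest stops much lower. */
    printf("16 GiB -> 2^%d\n", costfactor_from_ram(16ULL << 30));
    printf(" 1 GiB -> 2^%d\n", costfactor_from_ram(1ULL << 30));
    return 0;
}

An archive whose stored exponent demands more memory than the decompressing machine can supply will then fail, which is the portability problem being worked on here.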
v0.14.0 is in the whats-next branch. New option: --costfactor.
@AlleyPally @Theelx, just curious how the testing is going?
Oh sorry, I misunderstood your reply and thought testing wasn't needed. I'll get on it in the next few hours!
I have encountered what seems to be a bug :(
lrzip-next --zstd --zstd-level 17 --costfactor 25 -vP -p12 -U -o ./thing_25.lrz thing.tar
The following options are in effect for this COMPRESSION.
Threading is ENABLED. Number of CPUs detected: 12
Detected 134,757,314,560 bytes ram
Nice Value: 19
Show Progress
Verbose
Output Filename Specified: ./thing_25.lrz
Temporary Directory set as: /tmp/
Compression mode is: ZSTD. LZ4 Compressibility testing enabled
Compression level 7
RZIP Compression level 7
ZSTD Compression Level: 17, ZSTD Compression Strategy: btopt
MD5 Hashing Used
Using Unlimited Window size
File size: 41,772,001,280
Will take 1 pass
Per Thread Memory Overhead is 0
Beginning rzip pre-processing phase
Total: 99% Chunk: 99%
thing.tar - Compression Ratio: 28.152. bpb: 0.284. Average Compression Speed: 145.387MB/s.
Total time: 00:04:33.68
Next, I tried it with costfactor 21 and it worked and produced the same compressed output, but it was slower (~5m30s). Then, I tried using costfactor 27 and it hung at 99%. I noticed the amount of RAM used spiked from 12.4GB to ~55GB, but it didn't even get close to being full. Why this occurs, I don't know, and I'd like help getting tooling for debugging it:
lrzip-next --zstd --zstd-level 17 --costfactor 27 -vP -p12 -U -o ./thing_27.lrz thing.tar
The following options are in effect for this COMPRESSION.
Threading is ENABLED. Number of CPUs detected: 12
Detected 134,757,314,560 bytes ram
Nice Value: 19
Show Progress
Verbose
Output Filename Specified: ./thing_27.lrz
Temporary Directory set as: /tmp/
Compression mode is: ZSTD. LZ4 Compressibility testing enabled
Compression level 7
RZIP Compression level 7
ZSTD Compression Level: 17, ZSTD Compression Strategy: btopt
MD5 Hashing Used
Using Unlimited Window size
File size: 41,772,001,280
Will take 1 pass
Per Thread Memory Overhead is 0
Beginning rzip pre-processing phase
Total: 99% Chunk: 99%
@Theelx, cost factor has no meaning with no encryption. I'm concerned that the per thread memory shows as 0. With 128GB of RAM, why use -U?
I have muscle memory for -U, but I agree it's not necessary. I recognized that cost factor has no meaning with no encryption, and when I tested it with encryption it worked fine (sorry for leaving that out); it's just that the combination of options seems to have broken something (what, I'm not sure).
@Theelx, please take a moment to try using the master branch, with the same options that caused the crash.
Apologies for the lack of reply! Decryption no longer works on encrypted files, no matter what cost factor is used. I tried decompressing files from a guest VM and vice versa; no cigar! I also tried using @Theelx's options.
@AlleyPally, there is a problem. On decryption, testing, and info operations, the costfactor value stored in the lrz file is not read. This forces it to be recomputed incorrectly. Thank you. Please stay tuned.
@Theelx @AlleyPally Please try now. Don't go crazy with big files or lots of options; even something simple will suffice. Costfactor can be anything.
You can test the storage of costfactor by examining the file header. If lrzip-next returns without error, costfactor storage and decryption work fine.
In the hexdump output, byte 6 (here 0f) is the costfactor exponent.
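A small standalone check of the stored exponent, for anyone repeating the test: the byte-6 offset is taken from the comment above and is an assumption about the header layout; a plain hexdump of the file serves the same purpose.

#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file.lrz\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    uint8_t exponent;
    if (fseek(f, 6, SEEK_SET) != 0 || fread(&exponent, 1, 1, f) != 1) {
        fprintf(stderr, "cannot read byte 6\n");
        fclose(f);
        return 1;
    }
    fclose(f);
    /* A stored value of 0x0f means N = 2^15, roughly 128 * 2^15 * 8 = 32 MiB. */
    printf("costfactor exponent: %u (N = 2^%u)\n", exponent, exponent);
    return 0;
}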
Everything is Oll Korrect! I have tested the newest version and have done the aforementioned test and hexdump check. I have also done so in a guest VM for redundancy's sake and all worked flawlessly! Thank you!
I found out that my earlier error was due to an errant compiler option that I have for custom builds, so that can be ignored. Everything works fine on my end, thanks so much for your hard work!
lrzip-next Version
0.13.2
lrzip-next command line
lrzip-next -Uvv --encrypt=goodpassword hello.txt
What happened?
SCRYPT was unable to hash the password due to a lack of memory. I'm not sure whether this is expected (I have 4 GB of RAM, so perhaps the error is genuinely not a bug?).
The file in question:
hello.txt
What was expected behavior?
To compress, encrypt, and hash a small text file.
Steps to reproduce
Relevant log output
Please provide system details
OS Distro: Arch Linux
Kernel Version (uname -a): Linux Alley 6.12.8-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 02 Jan 2025 22:52:26 +0000 x86_64 GNU/Linux
System ram (free -h): total used free shared buff/cache available
Mem: 3.6Gi 1.9Gi 387Mi 407Mi 2.0Gi 1.8Gi
Swap: 4.0Gi 123Mi 3.9Gi
Additional Context
I have tried using all the different hashing algorithms, different compressors, different memsize settings, etc., to no avail.
Normal lrzip works just fine, but its encryption scheme is not as robust as lrzip-next's. My specs are on the lower side, so this might be a me issue.
Thank you, mister Peter, for your continuous hard work and dedication!