As I wrote in a previous comment, I noticed that the performance of `BigUint::div_rem` for large numbers is worse than Python's `int.__divmod__`.

Consider the following computation, i.e. splitting `2^2^23` into two digits of base `10**1262611` (1262611 being half the size of `2^2^23` in decimal digits). It takes about 1 second to run using Python 3.13:
```
$ time python -c "divmod(1<<(1<<23), 10**1262611)"
1,04s user 0,01s system 99% cpu 1,051 total
```
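As a quick sanity check on the setup (my arithmetic, not part of the issue), the exponent 1262611 is indeed half the decimal length of `2^2^23`:

```python
import math

# Decimal digit count of 2**(1<<23) is floor((1<<23) * log10(2)) + 1.
bits = 1 << 23
digits = int(bits * math.log10(2)) + 1
print(digits, digits // 2)  # → 2525223 1262611
```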
Now consider the same computation using `num-bigint`:
```rust
use num_bigint::BigUint;
use num_integer::Integer;

fn main() {
    let x = BigUint::from(1u32) << (1u32 << 23);
    let w = 1262611; // half the size of x in decimal digits
    let _ = x.div_rem(&BigUint::from(10u32).pow(w));
}
```
It takes more than 4 seconds:
```
% time ./target/release/bigint
./target/release/bigint  4,44s user 0,00s system 99% cpu 4,443 total
```
For the record, the `int.__divmod__` implementation for large operands is written in pure Python:
https://github.com/python/cpython/blob/2904ec2273762df58645a8e245b2281884855b8c/Lib/_pylong.py#L528-L531
Apparently, the complexity of this algorithm is O(n**1.58):

> Its time complexity is O(n**1.58), where n = #bits(a) + #bits(b).
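For context, here is a condensed sketch of that recursive divide-and-conquer division, following the same 2n-by-n recipe that `_pylong.py` uses. The driver `divmod_fast` and the `_DIV_LIMIT` cutoff are my simplifications for illustration; this is not the actual CPython code:

```python
_DIV_LIMIT = 32  # base-case cutoff; CPython tunes this, 32 is arbitrary here

def _div2n1n(a, b, n):
    """Divide a < b << n by an exactly-n-bit b; return (q, r) with q < 2**n."""
    if a.bit_length() - n <= _DIV_LIMIT:
        return divmod(a, b)  # small enough: fall back to builtin division
    pad = n & 1
    if pad:  # make n even so it splits cleanly in half
        a <<= 1
        b <<= 1
        n += 1
    half_n = n >> 1
    mask = (1 << half_n) - 1
    b1, b2 = b >> half_n, b & mask
    # Two 3-halves-by-2-halves steps yield the two halves of the quotient.
    q1, r = _div3n2n(a >> n, (a >> half_n) & mask, b, b1, b2, half_n)
    q2, r = _div3n2n(r, a & mask, b, b1, b2, half_n)
    if pad:
        r >>= 1
    return q1 << half_n | q2, r

def _div3n2n(a12, a3, b, b1, b2, n):
    """Divide the 3n-bit int (a12 << n | a3) by the 2n-bit int b1 << n | b2."""
    if a12 >> n == b1:
        q, r = (1 << n) - 1, a12 - (b1 << n) + b1  # quotient estimate saturates
    else:
        q, r = _div2n1n(a12, b1, n)
    r = (r << n | a3) - q * b2
    while r < 0:  # correct the (at most slightly too large) estimate
        q -= 1
        r += b
    return q, r

def divmod_fast(a, b):
    """divmod for a >= 0, b > 0, chunking a into base-2**n digits."""
    n = b.bit_length()
    q, r = 0, 0
    # Process a's base-2**n digits from most to least significant.
    for shift in range(((a.bit_length() + n - 1) // n - 1) * n, -1, -n):
        digit = (a >> shift) & ((1 << n) - 1)
        q_digit, r = _div2n1n(r << n | digit, b, n)
        q = q << n | q_digit
    return q, r
```

The recursion replaces one big division by two half-size divisions plus multiplications, so with Karatsuba-speed multiplication the whole thing lands at the quoted O(n**1.58).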