
Performance fixes for rand #156

Merged (5 commits, Sep 24, 2022)
17 changes: 9 additions & 8 deletions src/extras/random.jl
@@ -1,8 +1,9 @@
 function rand(rng::AbstractRNG, ::Random.SamplerTrivial{Random.CloseOpen01{DoubleFloat{T}}}) where {T<:IEEEFloat}
-    hi, lo = rand(rng, T, 2)
+    hi = rand(rng, T)
+    lo = rand(rng, T)
Member:

This is unfortunate because vectorized random numbers are faster to generate. Do we need a better idiom in Julia for generating a small, fixed number of random values?


Maybe this won't be needed once the compiler gets the full power of escape analysis.
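
As a reference for this thread, a minimal sketch of the two idioms being compared, using only the standard Random API (illustrative; not the package's code):

using Random

rng = Random.default_rng()

# Vectorized draw: allocates a length-2 Vector{Float64} on every call.
v = rand(rng, Float64, 2)

# Scalar draws collected into a tuple: no heap allocation; this is
# essentially the approach the PR switches to.
t = ntuple(_ -> rand(rng, Float64), Val(2))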

Contributor Author:

I guess that StaticArrays hits the same problem with its rand functions. I tried taking a look at the code, but all the generated functions made me nauseous: https://github.com/JuliaArrays/StaticArrays.jl/blob/master/src/arraymath.jl#L38

On the other hand, maybe the only real fix would be for Julia to provide a contiguous stack-allocated array type, because I don't think tuple operations can be vectorized, since tuples don't necessarily have a contiguous layout in memory.
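
For comparison, a small sketch of a fixed-size, stack-friendly draw with StaticArrays.jl (assuming its documented rand support for static vectors; illustrative only):

using StaticArrays

# SVector{2,Float64} is an isbits, fixed-size type, so this draw does not
# need to allocate a heap array; the result can live on the stack.
v = rand(SVector{2, Float64})

hi, lo = v[1], v[2]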

Contributor Author:

I did some measurements now, and it turns out that separate rand() calls seem to be faster than rand!(array), at least for tiny lengths like 2 and 4. For length two:

using Random, BenchmarkTools

f(::Val{n}) where {n} = ntuple(i -> rand(), Val{n}())

# warm up
rand!(zeros(2))
f(Val(2))

@benchmark rand!(a) setup = (a = zeros(2);)  # Time  (median):     5.691 ns
@benchmark rand!(a) setup = (a = zeros(2); Random.seed!(1234))  # Time  (median):     5.851 ns

@benchmark f(Val(2))  # Time  (median):     4.949 ns
@benchmark f(Val(2)) setup = (Random.seed!(1234);)  # Time  (median):     4.950 ns

Contributor Author:

The above measurements are with nightly Julia.

     if hi === zero(T)
         if lo === zero(T)
-            return zero(DoubleFloat(T))
+            return DoubleFloat(zero(T))
         end
         hi, lo = lo, hi
     end
@@ -81,16 +82,16 @@ end
 # normal variates

 function randn(rng::AbstractRNG, ::Type{DoubleFloat{T}}) where {T<:IEEEFloat}
-    urand1, urand2 = rand(rng, DoubleFloat{T}, 2)
-    urand1 = urand1 + urand1 - 1
-    urand2 = urand2 + urand2 - 1
-    s = urand1*urand1 + urand2*urand2
+    urand1, urand2, s = ntuple(i -> zero(DoubleFloat{T}), Val{3}())

-    while s >= 1 || s === 0
-        urand1, urand2 = rand(rng, DoubleFloat{T}, 2)
+    while true
+        urand1 = rand(rng, DoubleFloat{T})
+        urand2 = rand(rng, DoubleFloat{T})
         urand1 = urand1 + urand1 - 1
         urand2 = urand2 + urand2 - 1
         s = urand1*urand1 + urand2*urand2
+
+        (s >= 1 | iszero(s)) || break
     end

     s = sqrt( -log(s) / s )
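
For context, the loop above is the rejection step of the Marsaglia polar method; the rest of randn is collapsed in the diff. A minimal sketch of the textbook method for plain Float64, for illustration only (the package's collapsed code may scale s differently):

using Random

# Textbook Marsaglia polar method: map two uniform draws to [-1, 1),
# reject the pair unless it lands strictly inside the unit circle
# (and is not the origin), then scale one coordinate.
function polar_randn(rng::AbstractRNG = Random.default_rng())
    while true
        u1 = 2 * rand(rng) - 1
        u2 = 2 * rand(rng) - 1
        s = u1 * u1 + u2 * u2
        if 0 < s < 1
            # Either coordinate scaled this way is a standard normal variate.
            return u1 * sqrt(-2 * log(s) / s)
        end
    end
end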