
Get tests passing in Julia 10 #103

Merged: 12 commits into master, Mar 27, 2024
Conversation

rafaelorozco (Collaborator)

Need to run tests on GPU before merging.

codecov bot commented Mar 23, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 89.28%. Comparing base (1a4cba9) to head (bbd4519).

Additional details and impacted files
@@             Coverage Diff             @@
##           master     #103       +/-   ##
===========================================
+ Coverage   64.04%   89.28%   +25.24%     
===========================================
  Files          34       34               
  Lines        2581     2595       +14     
===========================================
+ Hits         1653     2317      +664     
+ Misses        928      278      -650     


   env:
     test_suite: ${{ matrix.tests }}

   strategy:
     fail-fast: false
     matrix:
-      version: ['1.6', '1.7', '1.8']
+      version: ['1.6', '1.7', '1.8', '1.10']
Member

1.9?

Collaborator Author

Wanted to start with 1.10, the superset, and then add 1.9. In retrospect it might have been smarter to go more incrementally and do 1.9 first. Will add 1.9 in a bit.

Member

Oh no I meant adding both 😆
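(For reference, a sketch of what the matrix line would become with both versions added, as suggested:)

   version: ['1.6', '1.7', '1.8', '1.9', '1.10']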

@@ -110,7 +110,10 @@ end
 function mat_tens_i(out::AbstractVector{T}, Mat::AbstractArray{T, 2},
                     Tens::AbstractArray{T, 3}, Mat2::AbstractArray{T, 2}) where T
     # Computes sum( (Mat * tens) .* Mat2) for each element in the batch

+    isa(Mat, CUDA.CuArray) && (Mat2 = CUDA.CuArray(Mat2)) # new Julia 1.10 subarrays require this
Member

I think just Mat2 = collect(Mat2) should always work

Collaborator Author

Darn, looks like that puts Mat2 on the CPU:

Mat2 = collect(Mat2)
copyto!(out, map(i -> dot(Mat * Tens[i, :, :], Mat2) , 1:size(Tens, 1)))
  LoadError: MethodError: no method matching dot(::Int64, ::CUDA.CuPtr{Float32}, ::Int64, ::Ptr{Float32}, ::Int64)
  
  Closest candidates are:
    dot(::Integer, ::Union{Ptr{Float32}, AbstractArray{Float32}}, ::Integer, ::Union{Ptr{Float32}, AbstractArray{Float32}}, ::Integer)
     @ LinearAlgebra /net/coc-herrmann/GATechBundle/Julia/julia-1.10.0/share/julia/stdlib/v1.10/LinearAlgebra/src/blas.jl:344
    dot(::Integer, ::Union{Ptr{Float64}, AbstractArray{Float64}}, ::Integer, ::Union{Ptr{Float64}, AbstractArray{Float64}}, ::Integer)
     @ LinearAlgebra /net/coc-herrmann/GATechBundle/Julia/julia-1.10.0/share/julia/stdlib/v1.10/LinearAlgebra/src/blas.jl:344
  
  Stacktrace:
    [1] dot(x::CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, y::Matrix{Float32})
      @ LinearAlgebra.BLAS /net/coc-herrmann/GATechBundle/Julia/julia-1.10.0/share/julia/stdlib/v1.10/LinearAlgebra/src/blas.jl:395
    [2] dot(x::CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, y::Matrix{Float32})
      @ LinearAlgebra /net/coc-herrmann/GATechBundle/Julia/julia-1.10.0/share/julia/stdlib/v1.10/LinearAlgebra/src/matmul.jl:15
    [3] (::InvertibleNetworks.var"#249#250"{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, CUDA.CuArray{Float32, 3, CUDA.Mem.DeviceBuffer}, Matrix{Float32}})(i::Int64)
      @ InvertibleNetworks ~/.julia/dev/InvertibleNetworks/src/layers/invertible_layer_conv1x1.jl:116
    [4] iterate
      @ Base ./generator.jl:47 [inlined]
    [5] _collect
      @ Base ./array.jl:854 [inlined]
    [6] collect_similar
      @ Base ./array.jl:763 [inlined]
    [7] map
      @ Base ./abstractarray.jl:3282 [inlined]
    [8] mat_tens_i(out::CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, Mat::CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, Tens::CUDA.CuArray{Float32, 3, CUDA.Mem.DeviceBuffer}, Mat2::Base.ReshapedArray{Float32, 2, SubArray{Float32, 3, CUDA.CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}, Tuple{Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Tuple{}})

@@ -175,7 +175,7 @@ for logdet_bool = [true,false] #logdet is not tested in Jacobian yet.
     dX_, dθ_, _, _ = G.adjointJacobian(dY_, Y)
     a = dot(dY, dY_)
     b = dot(dX, dX_) + dot(dθ, dθ_)
-    @test isapprox(a, b; rtol=1f-3)
+    @test isapprox(a, b; rtol=1f-1)
Member

quite the change

@@ -11,15 +11,15 @@ on:
 jobs:
   test:
     name: Julia ${{ matrix.version }} - ${{ matrix.tests }} - ${{ matrix.os }}
-    runs-on: ${{ matrix.os }}
+    runs-on: self-hosted
mloubout (Member) commented Mar 26, 2024

That's overkill; you can add an extra entry to the matrix to do just one run on GPU:

   matrix:
     version: ['1.6', '1.7', '1.8', '1.10']
     tests: ["basics", "layers", "networks"]
     os: [ubuntu-latest]
     include:
       - os: self-hosted
         tests: whichever
         version: whichever

and then just needs

   runs-on: ${{ matrix.os }}

Collaborator Author

SMART thank you!
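(Putting the suggestion together, a sketch of the resulting job configuration; the version and tests values in the self-hosted entry are placeholder picks, "whichever" in the comment above, not taken from the PR:)

   jobs:
     test:
       name: Julia ${{ matrix.version }} - ${{ matrix.tests }} - ${{ matrix.os }}
       runs-on: ${{ matrix.os }}
       strategy:
         fail-fast: false
         matrix:
           version: ['1.6', '1.7', '1.8', '1.10']
           tests: ["basics", "layers", "networks"]
           os: [ubuntu-latest]
           include:
             # One GPU run on the self-hosted runner; version and tests here
             # are placeholder choices.
             - os: self-hosted
               version: '1.10'
               tests: "basics"
       env:
         test_suite: ${{ matrix.tests }}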

rafaelorozco merged commit 2f8304a into master on Mar 27, 2024
20 checks passed