Added support for CuArray via Adapt.jl #67

Open · wants to merge 6 commits into base: master

Changes from 1 commit
Project.toml (11 changes: 10 additions & 1 deletion)

@@ -3,8 +3,17 @@ uuid = "1277b4bf-5013-50f5-be3d-901d8477a67a"
 repo = "https://github.com/JuliaArrays/ShiftedArrays.jl.git"
 version = "2.0.0"

+[weakdeps]
+CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
+Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
+
+[extensions]
+CUDASupportExt = ["CUDA", "Adapt"]
+
 [compat]
-julia = "1"
+CUDA = "5.1.1"
+Adapt = "3.7.2"
+julia = "1.9"

 [extras]
 AbstractFFTs = "621f4979-c628-5d54-868e-fcf4e3e8185c"
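As background on the mechanics (not part of the diff): with the [weakdeps]/[extensions] blocks, which require Julia 1.9 (hence the compat bump), CUDASupportExt is compiled and loaded only once both weak dependencies are present. A minimal sketch of how this behaves in a session:

using ShiftedArrays
# Extension not loaded yet, since the weak dependencies are absent:
Base.get_extension(ShiftedArrays, :CUDASupportExt)  # -> nothing

using CUDA, Adapt
# Loading both weak dependencies triggers the extension:
Base.get_extension(ShiftedArrays, :CUDASupportExt)  # -> the module CUDASupportExt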
ext/CUDASupportExt.jl (26 changes: 26 additions & 0 deletions)

@@ -0,0 +1,26 @@
module CUDASupportExt
using CUDA
using Adapt
using ShiftedArrays
using Base # allow displaying such arrays without triggering the CUDA scalar-indexing error

# Move the parent array with Adapt (e.g. GPU <-> host), keeping the shifts:
Adapt.adapt_structure(to, x::CircShiftedArray{T, D, CT}) where {T,D,CT<:CuArray} = CircShiftedArray(adapt(to, parent(x)), shifts(x))
# Route broadcasting over CuArray-backed wrappers to CUDA's broadcast style:
function Base.Broadcast.BroadcastStyle(::Type{T}) where {T<:CircShiftedArray{<:Any,<:Any,<:CuArray}}
    CUDA.CuArrayStyle{ndims(T)}()
end

# Do the same for the ShiftedArray type:
Adapt.adapt_structure(to, x::ShiftedArray{T, M, N, <:CuArray}) where {T,M,N} =
    ShiftedArray(adapt(to, parent(x)), shifts(x); default=ShiftedArrays.default(x))
function Base.Broadcast.BroadcastStyle(::Type{T}) where {T<:ShiftedArray{<:Any,<:Any,<:Any,<:CuArray}}
    CUDA.CuArrayStyle{ndims(T)}()
end
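# Note: without these two BroadcastStyle methods, broadcasting over the
# wrappers would use Base's DefaultArrayStyle, which reads one element at a
# time and therefore triggers CUDA's scalar-indexing error.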

# Display via the generic AbstractArray show, temporarily allowing scalar reads:
function Base.show(io::IO, mm::MIME"text/plain", cs::CircShiftedArray)
    CUDA.@allowscalar invoke(Base.show, Tuple{IO, typeof(mm), AbstractArray}, io, mm, cs)
end

function Base.show(io::IO, mm::MIME"text/plain", cs::ShiftedArray)
    CUDA.@allowscalar invoke(Base.show, Tuple{IO, typeof(mm), AbstractArray}, io, mm, cs)
end

end # module CUDASupportExt
Review comment on the show methods (Collaborator):
This part makes me a little uncomfortable (it's type piracy in some sense); maybe it's best to remove it for the time being and accept that showing such arrays errors, and we can see how to fix it later. How do other wrapping packages do it? Should we maybe just depend on GPUArraysCore and take @allowscalar from there?

Reply (Contributor Author):
I agree. I will comment this out for the time being and test your other great suggestions.
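For reference, the GPUArraysCore route suggested above might look like the following sketch (hypothetical, not part of this PR). GPUArraysCore is the lightweight package that owns @allowscalar, so the show workaround would not need the full CUDA dependency:

using GPUArraysCore: @allowscalar  # lightweight home of @allowscalar

function Base.show(io::IO, mm::MIME"text/plain", cs::ShiftedArray)
    # Same workaround as above, but usable for any GPU backend:
    @allowscalar invoke(Base.show, Tuple{IO, typeof(mm), AbstractArray}, io, mm, cs)
end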

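To illustrate what the extension enables, a hedged usage sketch (assumes a CUDA-capable GPU; the values are illustrative):

using CUDA, Adapt, ShiftedArrays

v   = CuArray([1, 3, 5, 4])
sv  = ShiftedArrays.circshift(v, 1)  # lazy CircShiftedVector wrapping the CuArray
w   = sv .+ 1                        # dispatches to CuArrayStyle and stays on the GPU
cpu = adapt(Array, sv)               # adapt_structure rebuilds the wrapper on the host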
test/runtests.jl (22 changes: 21 additions & 1 deletion)

@@ -1,8 +1,22 @@
using ShiftedArrays, Test
using AbstractFFTs
use_cuda = false; # set this to true to test ShiftedArrays for the CuArray datatype
if (use_cuda)
using CUDA
CUDA.allowscalar(true); # needed for some of the comparisons
end
Review comment on lines +3 to +7 (Collaborator):
Maybe we could use https://github.com/JuliaGPU/GPUArrays.jl/blob/4278412a6b9b1d859c290232a9f8223eb4416d1e/lib/JLArrays/src/JLArrays.jl#L6 to test without a GPU? Also, maybe we can try and use @allowscalar to selectively allow scalar indexing where it's needed.

Reply (Contributor Author):
I did not want CUDA to be a test-time dependency; if we moved @allowscalar outside the if clause, CUDA would always have to be loaded for testing. If JLArrays.jl is a lightweight way of doing this, then it may be a good idea.
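A minimal sketch of the JLArrays idea (assuming JLArrays and GPUArraysCore as test-only dependencies): JLArray enforces the GPU scalar-indexing rules on the CPU, so the wrapper code paths can be exercised without a GPU:

using ShiftedArrays, Test
using JLArrays                       # CPU reference implementation of a GPU array
using GPUArraysCore: @allowscalar

v  = JLArray([1, 3, 5, 4])
sv = ShiftedArrays.circshift(v, 1)   # CircShiftedVector wrapping the JLArray
@allowscalar @test sv == [4, 1, 3, 5]  # scalar reads allowed only for this check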


# Move test data to the GPU when use_cuda is set; otherwise pass it through unchanged.
function opt_convert(v)
if (use_cuda)
CuArray(v)
else
v
end
end

@testset "ShiftedVector" begin
v = [1, 3, 5, 4]
v = opt_convert(v);
@test all(v .== ShiftedVector(v))
sv = ShiftedVector(v, -1)
@test isequal(sv, ShiftedVector(v, (-1,)))
@@ -28,6 +42,7 @@ end

@testset "ShiftedArray" begin
v = reshape(1:16, 4, 4)
v = opt_convert(v);
@test all(v .== ShiftedArray(v))
sv = ShiftedArray(v, (-2, 0))
@test length(sv) == 16
@@ -64,6 +79,7 @@ end

@testset "padded_tuple" begin
v = rand(2, 2)
v = opt_convert(v);
@test (1, 0) == @inferred ShiftedArrays.padded_tuple(v, 1)
@test (0, 0) == @inferred ShiftedArrays.padded_tuple(v, ())
@test (3, 0) == @inferred ShiftedArrays.padded_tuple(v, (3,))
@@ -82,11 +98,12 @@ end

@testset "CircShiftedVector" begin
v = [1, 3, 5, 4]
v = opt_convert(v);
@test all(v .== CircShiftedVector(v))
sv = CircShiftedVector(v, -1)
@test isequal(sv, CircShiftedVector(v, (-1,)))
@test length(sv) == 4
-@test all(sv .== [3, 5, 4, 1])
+@test all(sv .== opt_convert([3, 5, 4, 1]))
diff = v .- sv
@test diff == [-2, -2, 1, 3]
@test shifts(sv) == (3,)
@@ -110,6 +127,7 @@ end

@testset "CircShiftedArray" begin
v = reshape(1:16, 4, 4)
v = opt_convert(v);
@test all(v .== CircShiftedArray(v))
sv = CircShiftedArray(v, (-2, 0))
@test length(sv) == 16
@@ -130,6 +148,7 @@ end

@testset "circshift" begin
v = reshape(1:16, 4, 4)
v = opt_convert(v);
@test all(circshift(v, (1, -1)) .== ShiftedArrays.circshift(v, (1, -1)))
@test all(circshift(v, (1,)) .== ShiftedArrays.circshift(v, (1,)))
@test all(circshift(v, 3) .== ShiftedArrays.circshift(v, 3))
@@ -163,6 +182,7 @@ end

@testset "laglead" begin
v = [1, 3, 8, 12]
v = opt_convert(v);
diff = v .- ShiftedArrays.lag(v)
@test isequal(diff, [missing, 2, 5, 4])
