Added support for CuArray via Adapt.jl #67

Open · wants to merge 6 commits into master
11 changes: 10 additions & 1 deletion Project.toml
@@ -3,8 +3,17 @@ uuid = "1277b4bf-5013-50f5-be3d-901d8477a67a"
repo = "https://github.com/JuliaArrays/ShiftedArrays.jl.git"
version = "2.0.0"

[weakdeps]
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"

[extensions]
CUDASupportExt = ["CUDA", "Adapt"]

[compat]
CUDA = "5.1.1"
Adapt = "3.7.2"
julia = "1.9"   # raised from "1" (the one deletion): package extensions require Julia 1.9

[extras]
AbstractFFTs = "621f4979-c628-5d54-868e-fcf4e3e8185c"
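With these [weakdeps]/[extensions] entries, Julia 1.9+ loads CUDASupportExt automatically once CUDA and Adapt are both imported alongside ShiftedArrays. A minimal sketch of the user-facing effect (assumes a working CUDA setup; not part of this diff):

using ShiftedArrays
using CUDA, Adapt   # importing both weak dependencies activates CUDASupportExt

v = CuArray([1, 3, 5, 4])
sv = ShiftedArrays.circshift(v, 1)   # lazy CircShiftedArray wrapping a CuArray
w = sv .+ 0                          # broadcasting materializes on the GPU, yielding a CuArray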
24 changes: 24 additions & 0 deletions ext/CUDASupportExt.jl
@@ -0,0 +1,24 @@
module CUDASupportExt

using CUDA
using Adapt
using ShiftedArrays
using Base # to allow displaying such arrays without triggering the CUDA scalar-indexing error

# Adapt recurses into the wrapper: adapt the parent, then rebuild the
# CircShiftedArray with the same shifts.
Adapt.adapt_structure(to, x::CircShiftedArray{T, D}) where {T, D} = CircShiftedArray(adapt(to, parent(x)), shifts(x))

parent_type(::Type{CircShiftedArray{T, N, S}}) where {T, N, S} = S

# Delegate broadcasting to the parent's style, so broadcasts over GPU-backed
# wrappers stay on the GPU.
Base.Broadcast.BroadcastStyle(::Type{T}) where {T<:CircShiftedArray} = Base.Broadcast.BroadcastStyle(parent_type(T))

# The same for the ShiftedArray type, which additionally carries a default value.
Adapt.adapt_structure(to, x::ShiftedArray{T, M, N}) where {T, M, N} = ShiftedArray(adapt(to, parent(x)), shifts(x); default = ShiftedArrays.default(x))

function Base.Broadcast.BroadcastStyle(::Type{T}) where {T<:ShiftedArray{<:Any, <:Any, <:Any, <:CuArray}}
    CUDA.CuArrayStyle{ndims(T)}()
end

# function Base.show(io::IO, mm::MIME"text/plain", cs::CircShiftedArray)
#     CUDA.@allowscalar invoke(Base.show, Tuple{IO, typeof(mm), AbstractArray}, io, mm, cs)
# end

# function Base.show(io::IO, mm::MIME"text/plain", cs::ShiftedArray)
#     CUDA.@allowscalar invoke(Base.show, Tuple{IO, typeof(mm), AbstractArray}, io, mm, cs)
# end

end
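Because adapt_structure rebuilds the wrapper around the adapted parent, adapt can move shifted arrays between host and device without manual unwrapping. A minimal sketch (assumes a working CUDA setup; not part of this diff):

using ShiftedArrays, CUDA, Adapt

sv = ShiftedArray([1.0, 2.0, 3.0, 4.0], (1,); default = 0.0)
sv_gpu = adapt(CuArray, sv)     # ShiftedArray now wrapping a CuArray
parent(sv_gpu) isa CuArray      # true
sv_cpu = adapt(Array, sv_gpu)   # back to a CPU-backed ShiftedArray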
22 changes: 21 additions & 1 deletion test/runtests.jl
@@ -1,8 +1,22 @@
using ShiftedArrays, Test
using AbstractFFTs
use_cuda = false  # set this to true to test ShiftedArrays for the CuArray datatype
if use_cuda
    using CUDA
    CUDA.allowscalar(true)  # needed for some of the comparisons
end
Comment on lines +3 to +7

Collaborator: Maybe we could use https://github.com/JuliaGPU/GPUArrays.jl/blob/4278412a6b9b1d859c290232a9f8223eb4416d1e/lib/JLArrays/src/JLArrays.jl#L6 to test without a GPU? Also, maybe we can try to use @allowscalar to selectively allow scalar indexing where it is needed.

Contributor (author): I did not want CUDA to be a dependency for testing; if we moved @allowscalar outside the if clause, CUDA would always have to be loaded for testing. If JLArrays.jl is a lightweight way of doing this, it may be a good idea.
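A hypothetical sketch of that suggestion (not part of this PR; JLArray comes from the JLArrays.jl subpackage of GPUArrays.jl, and @allowscalar from GPUArraysCore.jl): JLArray is a CPU-backed array type that enforces the same scalar-indexing rules as CuArray, so those code paths can be exercised on CI without a GPU. Note that the extension as written is CUDA-specific, so a JLArray-based test would exercise the generic fallbacks unless an analogous extension is added.

using ShiftedArrays, Test
using JLArrays                      # CPU-backed stand-in for GPU arrays
using GPUArraysCore: @allowscalar   # selectively permit scalar indexing

v = JLArray([1, 3, 5, 4])
sv = ShiftedArrays.circshift(v, 1)
@allowscalar @test sv == JLArray([4, 1, 3, 5])  # scalar indexing confined to this test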


# Move test data to the GPU when use_cuda is set; otherwise pass it through unchanged.
function opt_convert(v)
    if use_cuda
        CuArray(v)
    else
        v
    end
end

@testset "ShiftedVector" begin
v = [1, 3, 5, 4]
v = opt_convert(v);
@test all(v .== ShiftedVector(v))
sv = ShiftedVector(v, -1)
@test isequal(sv, ShiftedVector(v, (-1,)))
@@ -28,6 +42,7 @@ end

@testset "ShiftedArray" begin
v = reshape(1:16, 4, 4)
v = opt_convert(v);
@test all(v .== ShiftedArray(v))
sv = ShiftedArray(v, (-2, 0))
@test length(sv) == 16
@@ -64,6 +79,7 @@ end

@testset "padded_tuple" begin
v = rand(2, 2)
v = opt_convert(v);
@test (1, 0) == @inferred ShiftedArrays.padded_tuple(v, 1)
@test (0, 0) == @inferred ShiftedArrays.padded_tuple(v, ())
@test (3, 0) == @inferred ShiftedArrays.padded_tuple(v, (3,))
@@ -82,11 +98,12 @@ end

@testset "CircShiftedVector" begin
v = [1, 3, 5, 4]
v = opt_convert(v);
@test all(v .== CircShiftedVector(v))
sv = CircShiftedVector(v, -1)
@test isequal(sv, CircShiftedVector(v, (-1,)))
@test length(sv) == 4
@test all(sv .== opt_convert([3, 5, 4, 1]))  # was: @test all(sv .== [3, 5, 4, 1])
diff = v .- sv
@test diff == [-2, -2, 1, 3]
@test shifts(sv) == (3,)
@@ -110,6 +127,7 @@ end

@testset "CircShiftedArray" begin
v = reshape(1:16, 4, 4)
v = opt_convert(v);
@test all(v .== CircShiftedArray(v))
sv = CircShiftedArray(v, (-2, 0))
@test length(sv) == 16
@@ -130,6 +148,7 @@ end

@testset "circshift" begin
v = reshape(1:16, 4, 4)
v = opt_convert(v);
@test all(circshift(v, (1, -1)) .== ShiftedArrays.circshift(v, (1, -1)))
@test all(circshift(v, (1,)) .== ShiftedArrays.circshift(v, (1,)))
@test all(circshift(v, 3) .== ShiftedArrays.circshift(v, 3))
@@ -163,6 +182,7 @@ end

@testset "laglead" begin
v = [1, 3, 8, 12]
v = opt_convert(v);
diff = v .- ShiftedArrays.lag(v)
@test isequal(diff, [missing, 2, 5, 4])
