Refactoring GPU extensions #1365

Merged
merged 66 commits on Apr 12, 2024
Changes from 51 commits
Commits
a36f446
Update the CUDA extension library move usings out of module name file
kmp5VT Mar 22, 2024
1962f6f
Merge commit 'b48b88168caa0bb66aee86116a932d4f71713658' into kmp5/ref…
kmp5VT Mar 27, 2024
b2a2490
format
kmp5VT Mar 27, 2024
7fe2013
Update adapt, checking on workstation
kmp5VT Mar 27, 2024
56a9209
format
kmp5VT Mar 27, 2024
d7fe3b3
Fix cuarray adaptor issue
kmp5VT Mar 27, 2024
bff4b16
Merge branch 'main' into kmp5/refactor/update_gpu_backends
kmp5VT Mar 28, 2024
00dcb54
Merge branch 'main' into kmp5/refactor/update_gpu_backends
kmp5VT Mar 28, 2024
2016ff7
Merge branch 'main' into kmp5/refactor/update_gpu_backends
kmp5VT Mar 28, 2024
4a59610
Merge branch 'main' into kmp5/refactor/update_gpu_backends
kmp5VT Mar 29, 2024
7690456
Update Fill so CUDA DMRG works
kmp5VT Mar 30, 2024
d158aef
Merge branch 'kmp5/refactor/update_gpu_backends' of github.com:kmp5VT…
kmp5VT Mar 30, 2024
7694636
Merge branch 'main' into kmp5/refactor/update_gpu_backends
kmp5VT Mar 30, 2024
a3617c7
format
kmp5VT Mar 30, 2024
bad9b37
Remove an artifact of debugging
kmp5VT Mar 30, 2024
78a306e
Create CUDA append! function.
kmp5VT Mar 30, 2024
7bac7bb
add append to module
kmp5VT Mar 30, 2024
e2a65ff
Missing a using
kmp5VT Mar 30, 2024
4730c84
Merge branch 'main' into kmp5/refactor/update_gpu_backends
kmp5VT Apr 1, 2024
e997c38
Create expose version of append! to use `@allowscalar`
kmp5VT Apr 1, 2024
d9c7d69
Merge branch 'kmp5/refactor/update_gpu_backends' of github.com:kmp5VT…
kmp5VT Apr 1, 2024
252c9c9
Don't test append! with Metal
kmp5VT Apr 1, 2024
caf124d
Streamline generic_zeros functions
kmp5VT Apr 1, 2024
cd53b01
Flatten the block_size variables
kmp5VT Apr 1, 2024
b1b0201
format
kmp5VT Apr 1, 2024
e5a7d74
Remove duplicate lines
kmp5VT Apr 1, 2024
aaf61af
cu and roc now works
kmp5VT Apr 1, 2024
fe2e8a2
Make append! for each GPU backend
kmp5VT Apr 1, 2024
6a60f90
Update comment
kmp5VT Apr 1, 2024
5f73d1e
remove append!!
kmp5VT Apr 1, 2024
5ed42b2
Allow any dimension for generic_randn instead of integer
kmp5VT Apr 1, 2024
4f7bb44
format
kmp5VT Apr 1, 2024
22532be
storage -> storagemode
kmp5VT Apr 2, 2024
a2783b7
Delete examples
kmp5VT Apr 2, 2024
790ba98
Merge branch 'main' into kmp5/refactor/update_gpu_backends
kmp5VT Apr 2, 2024
83bb90c
remove usings from NDTensorsMetalExt module file
kmp5VT Apr 3, 2024
8e568d5
Merge branch 'main' into kmp5/refactor/update_gpu_backends
kmp5VT Apr 3, 2024
5ffaf67
Update rest of Metal ext package
kmp5VT Apr 3, 2024
11200ab
Add metalarrayadaptor and cleanup
kmp5VT Apr 3, 2024
ebb3669
Merge branch 'kmp5/refactor/update_gpu_backends' of github.com:kmp5VT…
kmp5VT Apr 3, 2024
f290d79
Merge branch 'main' into kmp5/refactor/update_gpu_backends
kmp5VT Apr 3, 2024
a0b2c0a
swapping NDTensors and ITensors dev should fix git tests
kmp5VT Apr 3, 2024
f66574a
Same error with CUDA and NDTensors so activate temp environment to fix
kmp5VT Apr 3, 2024
ed9866a
Merge branch 'main' into kmp5/refactor/update_gpu_backends
kmp5VT Apr 3, 2024
eb52e6f
Add warning comments about append being slow
kmp5VT Apr 3, 2024
dab7098
Add a check for the dimensions
kmp5VT Apr 4, 2024
5710c81
Add comment about requireing Position of ndims to be defined
kmp5VT Apr 4, 2024
f00b810
Merge branch 'main' into kmp5/refactor/update_gpu_backends
kmp5VT Apr 4, 2024
d263d57
Update generic_randn for dense
kmp5VT Apr 4, 2024
ee8ba2e
format
kmp5VT Apr 4, 2024
511d434
Update fill
kmp5VT Apr 4, 2024
153295e
rename fill.jl to generic_array_constructors.jl
kmp5VT Apr 4, 2024
15dc0b2
Code review
kmp5VT Apr 5, 2024
e088096
Swap internal function
kmp5VT Apr 5, 2024
b61311d
remove functions
kmp5VT Apr 5, 2024
7eda284
move using to top
kmp5VT Apr 5, 2024
dd437d7
Bump NDTensors minor version
kmp5VT Apr 5, 2024
cd1eb24
Use ndims instead of type_parameter
kmp5VT Apr 5, 2024
4dfbbdb
update generic_randn function for dense
kmp5VT Apr 5, 2024
296f0da
Update
kmp5VT Apr 5, 2024
64f0778
format
kmp5VT Apr 5, 2024
57abe1f
Update tests to use temp project instead of monorepo for safety
kmp5VT Apr 5, 2024
1ca8c8b
Reverse the structure again
kmp5VT Apr 5, 2024
036bcb1
format
kmp5VT Apr 5, 2024
d212282
Update ITensors and NDTensors versions
kmp5VT Apr 12, 2024
a530e0b
Update function call
kmp5VT Apr 12, 2024
12 changes: 4 additions & 8 deletions .github/workflows/test_itensors_base_ubuntu.yml
@@ -33,17 +33,13 @@ jobs:
with:
version: ${{ matrix.version }}
arch: ${{ matrix.arch }}
- name: Install Julia dependencies
shell: julia --project=monorepo {0}
- name: Install Julia dependencies and run tests
shell: julia {0}
run: |
using Pkg;
Pkg.develop(path=".");
Pkg.activate(temp=true)
Pkg.develop(path="./NDTensors");
- name: Run the tests
shell: julia --project=monorepo {0}
run: |
using Pkg;
# https://github.com/JuliaLang/Pkg.jl/pull/1226
Pkg.develop(path=".");
Pkg.test("ITensors"; coverage=true, test_args=["base"])
- uses: julia-actions/julia-uploadcodecov@latest
env:
11 changes: 5 additions & 6 deletions .github/workflows/test_ndtensors.yml
@@ -30,12 +30,11 @@ jobs:
with:
version: ${{ matrix.version }}
arch: ${{ matrix.arch }}
- name: Install Julia dependencies
shell: julia --project=monorepo {0}
- name: Install Julia dependencies and run tests
shell: julia --depwarn=yes {0}
run: |
using Pkg;
Pkg.develop(path=".");
Pkg.activate(temp=true);
Pkg.develop(path="./NDTensors");
- name: Run the tests
run: |
julia --project=monorepo --depwarn=yes -e 'using Pkg; Pkg.test("NDTensors")'
Pkg.develop(path=".");
Pkg.test("NDTensors");
1 change: 1 addition & 0 deletions NDTensors/ext/NDTensorsAMDGPUExt/NDTensorsAMDGPUExt.jl
@@ -1,5 +1,6 @@
module NDTensorsAMDGPUExt

include("append.jl")
include("copyto.jl")
include("set_types.jl")
include("adapt.jl")
2 changes: 1 addition & 1 deletion NDTensors/ext/NDTensorsAMDGPUExt/adapt.jl
@@ -22,7 +22,7 @@ function Adapt.adapt_storage(adaptor::ROCArrayAdaptor, xs::AbstractArray)
end

function NDTensors.adapt_storagetype(
adaptor::ROCArrayAdaptor, xs::Type{EmptyStorage{ElT,StoreT}}
adaptor::ROCArrayAdaptor, ::Type{EmptyStorage{ElT,StoreT}}
) where {ElT,StoreT}
roctype = set_type_parameters(
ROCVector, (eltype, storagemode), (ElT, storagemode(adaptor))
8 changes: 8 additions & 0 deletions NDTensors/ext/NDTensorsAMDGPUExt/append.jl
@@ -0,0 +1,8 @@
using GPUArraysCore: @allowscalar
using AMDGPU: ROCArray
using NDTensors.Expose: Exposed, unexpose

## Warning this append function uses scalar indexing and is therefore extremely slow
function Base.append!(Ecollection::Exposed{<:ROCArray}, collections...)
return @allowscalar append!(unexpose(Ecollection), collections...)
end
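As a usage sketch (not part of the diff; assumes an AMD GPU is available), the new method is reached by wrapping the array in the Exposed type via expose; the same pattern is added for CUDA and Metal below:

using AMDGPU: ROCArray
using NDTensors.Expose: expose

v = ROCArray([1.0, 2.0])
append!(expose(v), [3.0, 4.0])  # runs under @allowscalar, so correct but slow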
12 changes: 1 addition & 11 deletions NDTensors/ext/NDTensorsCUDAExt/NDTensorsCUDAExt.jl
@@ -1,15 +1,5 @@
module NDTensorsCUDAExt

using NDTensors
using NDTensors.Expose
using Adapt
using Functors
using LinearAlgebra: LinearAlgebra, Adjoint, Transpose, mul!, svd
using CUDA
using CUDA.CUBLAS
using CUDA.CUSOLVER

include("imports.jl")
include("append.jl")
include("default_kwargs.jl")
include("copyto.jl")
include("set_types.jl")
30 changes: 16 additions & 14 deletions NDTensors/ext/NDTensorsCUDAExt/adapt.jl
@@ -1,24 +1,26 @@
using NDTensors.TypeParameterAccessors: TypeParameterAccessors
using NDTensors.GPUArraysCoreExtensions: storagemode
using Adapt: Adapt
using CUDA: CUDA, CuArray, CuVector
using Functors: fmap
using NDTensors: NDTensors, EmptyStorage, adapt_storagetype, emptytype
using NDTensors.CUDAExtensions: CUDAExtensions, CuArrayAdaptor
using NDTensors.GPUArraysCoreExtensions: storagemode
using NDTensors.TypeParameterAccessors:
TypeParameterAccessors, default_type_parameter, set_type_parameters, type_parameters

## TODO make this work for unified. This works but overwrites CUDA's adapt_storage. This fails for emptystorage...
function CUDAExtensions.cu(xs; unified::Bool=false)
return fmap(
x -> adapt(CuArrayAdaptor{unified ? Mem.UnifiedBuffer : Mem.DeviceBuffer}(), x), xs
)
function CUDAExtensions.cu(xs; storagemode=default_type_parameter(CuArray, storagemode))
return fmap(x -> adapt(CuArrayAdaptor{storagemode}(), x), xs)
end

## Could do this generically
function Adapt.adapt_storage(adaptor::CuArrayAdaptor, xs::AbstractArray)
ElT = eltype(xs)
BufT = storagemode(adaptor)
N = ndims(xs)
return isbits(xs) ? xs : adapt(CuArray{ElT,N,BufT}, xs)
params = (type_parameters(xs, (eltype, ndims))..., storagemode(adaptor))
cutype = set_type_parameters(CuArray, (eltype, ndims, storagemode), params)
return isbits(xs) ? xs : adapt(cutype, xs)
end

function NDTensors.adapt_storagetype(
adaptor::CuArrayAdaptor, xs::Type{EmptyStorage{ElT,StoreT}}
adaptor::CuArrayAdaptor, ::Type{EmptyStorage{ElT,StoreT}}
) where {ElT,StoreT}
BufT = storagemode(adaptor)
return NDTensors.emptytype(NDTensors.adapt_storagetype(CuVector{ElT,BufT}, StoreT))
cutype = set_type_parameters(CuVector, (eltype, storagemode), (ElT, storagemode(adaptor)))
return emptytype(adapt_storagetype(cutype, StoreT))
end
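A usage sketch of the reworked CUDA adaptor (not part of the diff; assumes a CUDA device and the CUDA.jl buffer types current at the time of this PR, e.g. Mem.UnifiedBuffer):

using CUDA: CuArray, Mem
using NDTensors.CUDAExtensions: cu
using NDTensors.GPUArraysCoreExtensions: storagemode
using NDTensors.TypeParameterAccessors: set_type_parameters

x = randn(Float64, 4, 4)
x_gpu = cu(x)                                  # default storage mode (device buffer)
x_uni = cu(x; storagemode=Mem.UnifiedBuffer)   # unified memory, replacing unified=true
# The adaptor assembles the target type the same way as above; this should give
# CuArray{Float64,2,Mem.UnifiedBuffer}:
set_type_parameters(CuArray, (eltype, ndims, storagemode), (Float64, 2, Mem.UnifiedBuffer))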
8 changes: 8 additions & 0 deletions NDTensors/ext/NDTensorsCUDAExt/append.jl
@@ -0,0 +1,8 @@
using GPUArraysCore: @allowscalar
using CUDA: CuArray
using NDTensors.Expose: Exposed, unexpose

## Warning this append function uses scalar indexing and is therefore extremely slow
function Base.append!(Ecollection::Exposed{<:CuArray}, collections...)
return @allowscalar append!(unexpose(Ecollection), collections...)
end
4 changes: 4 additions & 0 deletions NDTensors/ext/NDTensorsCUDAExt/copyto.jl
@@ -1,3 +1,7 @@
using CUDA: CuArray
using NDTensors.Expose: Exposed, expose, unexpose
using LinearAlgebra: Adjoint

# Same definition as `MtlArray`.
function Base.copy(src::Exposed{<:CuArray,<:Base.ReshapedArray})
return reshape(copy(parent(src)), size(unexpose(src)))
3 changes: 3 additions & 0 deletions NDTensors/ext/NDTensorsCUDAExt/default_kwargs.jl
@@ -1 +1,4 @@
using CUDA: CuArray
using NDTensors: NDTensors

NDTensors.default_svd_alg(::Type{<:CuArray}, a) = "qr_algorithm"
3 changes: 0 additions & 3 deletions NDTensors/ext/NDTensorsCUDAExt/imports.jl

This file was deleted.

9 changes: 7 additions & 2 deletions NDTensors/ext/NDTensorsCUDAExt/indexing.jl
@@ -1,9 +1,14 @@
using CUDA: CuArray
using GPUArraysCore: @allowscalar
using NDTensors: NDTensors
using NDTensors.Expose: Exposed, expose, unexpose

function Base.getindex(E::Exposed{<:CuArray})
return CUDA.@allowscalar unexpose(E)[]
return @allowscalar unexpose(E)[]
end

function Base.setindex!(E::Exposed{<:CuArray}, x::Number)
CUDA.@allowscalar unexpose(E)[] = x
@allowscalar unexpose(E)[] = x
return unexpose(E)
end

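For reference, a minimal sketch of how these methods are exercised (not part of the diff; assumes a CUDA device): scalar get/set on a zero-dimensional GPU array through the Exposed wrapper.

using CUDA: CuArray
using NDTensors.Expose: expose

a = CuArray(fill(0.0))  # zero-dimensional array
expose(a)[] = 3.0       # setindex! under @allowscalar
expose(a)[]             # getindex under @allowscalar, returns 3.0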
5 changes: 4 additions & 1 deletion NDTensors/ext/NDTensorsCUDAExt/iscu.jl
@@ -1 +1,4 @@
iscu(::Type{<:CuArray}) = true
using CUDA: CuArray
using NDTensors: NDTensors

NDTensors.iscu(::Type{<:CuArray}) = true
10 changes: 7 additions & 3 deletions NDTensors/ext/NDTensorsCUDAExt/linearalgebra.jl
@@ -1,3 +1,10 @@
using Adapt: adapt
using CUDA: CUDA, CuMatrix
using LinearAlgebra: Adjoint, svd
using NDTensors: NDTensors
using NDTensors.Expose: Expose, expose, ql, ql_positive
using NDTensors.GPUArraysCoreExtensions: cpu
using NDTensors.TypeParameterAccessors: unwrap_array_type
function NDTensors.svd_catch_error(A::CuMatrix; alg::String="jacobi_algorithm")
if alg == "jacobi_algorithm"
alg = CUDA.CUSOLVER.JacobiAlgorithm()
@@ -42,9 +49,6 @@ function NDTensors.svd_catch_error(A::CuMatrix, ::CUDA.CUSOLVER.QRAlgorithm)
return USV
end

using NDTensors.GPUArraysCoreExtensions: cpu
using NDTensors.Expose: Expose, expose, ql, ql_positive
using NDTensors.TypeParameterAccessors: unwrap_array_type
## TODO currently AMDGPU doesn't have ql so make a ql function
function Expose.ql(A::Exposed{<:CuMatrix})
Q, L = ql(expose(cpu(A)))
4 changes: 4 additions & 0 deletions NDTensors/ext/NDTensorsCUDAExt/mul.jl
@@ -1,3 +1,7 @@
using CUDA: CuArray
using LinearAlgebra: LinearAlgebra, mul!, transpose
using NDTensors.Expose: Exposed, expose, unexpose

# This was calling generic matrix multiplication.
# TODO: Raise an issue with `CUDA.jl`.
function LinearAlgebra.mul!(
3 changes: 3 additions & 0 deletions NDTensors/ext/NDTensorsCUDAExt/permutedims.jl
@@ -1,3 +1,6 @@
using CUDA: CuArray
using NDTensors.Expose: Exposed, expose, unexpose

function Base.permutedims!(
Edest::Exposed{<:CuArray,<:Base.ReshapedArray}, Esrc::Exposed{<:CuArray}, perm
)
1 change: 1 addition & 0 deletions NDTensors/ext/NDTensorsCUDAExt/set_types.jl
@@ -1,4 +1,5 @@
# TypeParameterAccessors definitions
using CUDA: CUDA, CuArray
using NDTensors.TypeParameterAccessors: TypeParameterAccessors, Position
using NDTensors.GPUArraysCoreExtensions: storagemode

9 changes: 0 additions & 9 deletions NDTensors/ext/NDTensorsMetalExt/NDTensorsMetalExt.jl
@@ -1,14 +1,5 @@
module NDTensorsMetalExt

using Adapt
using Functors
using LinearAlgebra: LinearAlgebra, Adjoint, Transpose, mul!, qr, eigen, svd
using NDTensors
using NDTensors.Expose: qr_positive, ql_positive, ql

using Metal

include("imports.jl")
include("adapt.jl")
include("set_types.jl")
include("indexing.jl")
34 changes: 23 additions & 11 deletions NDTensors/ext/NDTensorsMetalExt/adapt.jl
@@ -1,17 +1,29 @@
using NDTensors.MetalExtensions: MetalExtensions
using NDTensors.GPUArraysCoreExtensions: GPUArraysCoreExtensions, set_storagemode
using NDTensors.TypeParameterAccessors: specify_type_parameters, type_parameters
using Adapt: Adapt, adapt
using Functors: fmap
using Metal: MtlArray, MtlVector, DefaultStorageMode
using NDTensors: NDTensors, EmptyStorage, adapt_storagetype, emptytype
using NDTensors.Expose: Exposed
using NDTensors.MetalExtensions: MetalExtensions, MtlArrayAdaptor
using NDTensors.GPUArraysCoreExtensions: GPUArraysCoreExtensions
using NDTensors.TypeParameterAccessors: set_type_parameters, type_parameters

GPUArraysCoreExtensions.cpu(e::Exposed{<:MtlArray}) = adapt(Array, e)

function MetalExtensions.mtl(xs; storage=DefaultStorageMode)
return adapt(set_storagemode(MtlArray, storage), xs)
function MetalExtensions.mtl(xs; storagemode=DefaultStorageMode)
return fmap(x -> adapt(MtlArrayAdaptor{storagemode}(), x), xs)
end

# More general than the version in Metal.jl
## TODO Rewrite this using a custom `MtlArrayAdaptor` which will be written in `MetalExtensions`.
function Adapt.adapt_storage(arraytype::Type{<:MtlArray}, xs::AbstractArray)
params = type_parameters(xs)
arraytype_specified = specify_type_parameters(arraytype, params)
return isbitstype(typeof(xs)) ? xs : convert(arraytype_specified, xs)
function Adapt.adapt_storage(adaptor::MtlArrayAdaptor, xs::AbstractArray)
new_parameters = (type_parameters(xs, (eltype, ndims))..., storagemode(adaptor))
mtltype = set_type_parameters(MtlArray, (eltype, ndims, storagemode), new_parameters)
return isbits(xs) ? xs : adapt(mtltype, xs)
end

function NDTensors.adapt_storagetype(
adaptor::MtlArrayAdaptor, ::Type{EmptyStorage{ElT,StoreT}}
) where {ElT,StoreT}
mtltype = set_type_parameters(
MtlVector, (eltype, storagemode), (ElT, storagemode(adaptor))
)
return emptytype(adapt_storagetype(mtltype, StoreT))
end
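A usage sketch mirroring the CUDA backend (not part of the diff; assumes Apple-silicon hardware), showing that mtl now routes through MtlArrayAdaptor and takes a storagemode keyword:

using Metal: DefaultStorageMode
using NDTensors.MetalExtensions: mtl

x = randn(Float32, 4, 4)
x_mtl = mtl(x)                                    # uses DefaultStorageMode
x_mtl2 = mtl(x; storagemode=DefaultStorageMode)   # keyword renamed from storage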
13 changes: 9 additions & 4 deletions NDTensors/ext/NDTensorsMetalExt/append.jl
@@ -1,5 +1,10 @@
# This circumvents an issues that `MtlArray` can't call `resize!`.
# TODO: Raise an issue with Metal.jl.
function NDTensors.append!!(::Type{<:MtlArray}, collection, collections...)
return vcat(collection, collections...)
## Right now append! is broken on Metal because of a missing resize! function,
## but once that is available in a future release this definition will allow Metal to work.
using GPUArraysCore: @allowscalar
using Metal: MtlArray
using NDTensors.Expose: Exposed, unexpose

## Warning this append function uses scalar indexing and is therefore extremely slow
function Base.append!(Ecollection::Exposed{<:MtlArray}, collections...)
return @allowscalar append!(unexpose(Ecollection), collections...)
end
3 changes: 3 additions & 0 deletions NDTensors/ext/NDTensorsMetalExt/copyto.jl
@@ -1,3 +1,6 @@
using Metal: MtlArray
using NDTensors.Expose: Exposed, expose, unexpose

function Base.copy(src::Exposed{<:MtlArray,<:Base.ReshapedArray})
return reshape(copy(parent(src)), size(unexpose(src)))
end
3 changes: 0 additions & 3 deletions NDTensors/ext/NDTensorsMetalExt/imports.jl

This file was deleted.

9 changes: 7 additions & 2 deletions NDTensors/ext/NDTensorsMetalExt/indexing.jl
Original file line number Diff line number Diff line change
@@ -1,9 +1,14 @@
using Metal: MtlArray
using GPUArraysCore: @allowscalar
using LinearAlgebra: Adjoint
using NDTensors.Expose: Exposed, expose, unexpose

function Base.getindex(E::Exposed{<:MtlArray})
return Metal.@allowscalar unexpose(E)[]
return @allowscalar unexpose(E)[]
end

function Base.setindex!(E::Exposed{<:MtlArray}, x::Number)
Metal.@allowscalar unexpose(E)[] = x
@allowscalar unexpose(E)[] = x
return unexpose(E)
end

3 changes: 3 additions & 0 deletions NDTensors/ext/NDTensorsMetalExt/linearalgebra.jl
@@ -1,3 +1,6 @@
using Metal: MtlMatrix
using LinearAlgebra: LinearAlgebra, qr, eigen, svd
using NDTensors.Expose: qr_positive, ql_positive, ql
using NDTensors.TypeParameterAccessors:
set_type_parameters, type_parameters, unwrap_array_type

2 changes: 2 additions & 0 deletions NDTensors/ext/NDTensorsMetalExt/mul.jl
@@ -1,3 +1,5 @@
using Metal: MtlArray
using LinearAlgebra: LinearAlgebra, Adjoint, Transpose, mul!
# This was calling generic matrix multiplication.
# TODO: Raise an issue with `Metal.jl`.
function LinearAlgebra.mul!(
1 change: 1 addition & 0 deletions NDTensors/ext/NDTensorsMetalExt/permutedims.jl
@@ -1,3 +1,4 @@
using Metal: MtlArray
## There's an issue in Metal that `ReshapedArray`-wrapped arrays cannot be permuted using
## permutedims (failing in that Metal uses scalar indexing)
## These functions are to address the problem in different instances of permutedims
1 change: 1 addition & 0 deletions NDTensors/ext/NDTensorsMetalExt/set_types.jl
@@ -1,3 +1,4 @@
using Metal: Metal, MtlArray
# `TypeParameterAccessors.jl` definitions.

using NDTensors.TypeParameterAccessors: TypeParameterAccessors, Position, set_type_parameter