[WIP] [NDTensors] Excise libraries #1601

Draft. Wants to merge 4 commits into base branch main.
4 changes: 4 additions & 0 deletions NDTensors/Project.toml
@@ -29,6 +29,7 @@ Strided = "5e0ebb24-38b0-5f93-81fe-25c709ecae67"
StridedViews = "4db3bf67-4bd7-4b4e-b153-31dc3fb37143"
TimerOutputs = "a759f4b9-e2f1-59dc-863e-4aeb61b1ea8f"
TupleTools = "9d95972d-f1c8-5527-a6e0-b4b365fa01f6"
+TypeParameterAccessors = "7e5a90cf-f82e-492e-a09b-e3e26432c138"
VectorInterface = "409d34a3-91d5-4945-b6ec-7529ddf182d8"

[weakdeps]
@@ -55,6 +56,9 @@ NDTensorsOctavianExt = "Octavian"
NDTensorsTBLISExt = "TBLIS"
NDTensorscuTENSORExt = "cuTENSOR"

+[sources]
+TypeParameterAccessors = {url = "https://github.com/ITensor/TypeParameterAccessors.jl"}
+
[compat]
AMDGPU = "0.9, 1"
Accessors = "0.1.33"
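A note on the new [sources] table: recent versions of Pkg (Julia 1.11 and later) use it to resolve a dependency from a Git URL instead of a registry, which suggests TypeParameterAccessors.jl is not (yet) registered here, or that the development version is wanted. A minimal sketch of the manual equivalent on older Julia versions, illustrative only, with the URL taken from the diff above:

using Pkg

# Add the unregistered package directly by URL into the active environment.
# This mirrors what the `[sources]` entry declares for this project.
Pkg.add(url = "https://github.com/ITensor/TypeParameterAccessors.jl")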
5 changes: 2 additions & 3 deletions NDTensors/ext/NDTensorsGPUArraysCoreExt/blocksparsetensor.jl
@@ -1,6 +1,5 @@
using GPUArraysCore: @allowscalar, AbstractGPUArray
-using NDTensors: NDTensors, BlockSparseTensor, dense, diag, map_diag!
-using NDTensors.DiagonalArrays: diaglength
+using NDTensors: NDTensors, BlockSparseTensor, dense, diag, diaglength, map_diag!
using NDTensors.Expose: Exposed, unexpose

## TODO to circumvent issues with blocksparse and scalar indexing
@@ -11,7 +10,7 @@ function NDTensors.diag(ETensor::Exposed{<:AbstractGPUArray,<:BlockSparseTensor})
  return diag(dense(unexpose(ETensor)))
end

-## TODO scalar indexing is slow here
+## TODO scalar indexing is slow here
function NDTensors.map_diag!(
  f::Function,
  exposed_t_destination::Exposed{<:AbstractGPUArray,<:BlockSparseTensor},
10 changes: 1 addition & 9 deletions NDTensors/src/NDTensors.jl
@@ -17,6 +17,7 @@ include("abstractarray/similar.jl")
include("abstractarray/mul.jl")
include("abstractarray/permutedims.jl")
include("abstractarray/generic_array_constructors.jl")
+include("abstractarray/diaginterface.jl")
include("array/permutedims.jl")
include("array/mul.jl")
include("tupletools.jl")
@@ -91,15 +92,6 @@ include("empty/adapt.jl")
#
include("deprecated.jl")

-#####################################
-# NDTensorsNamedDimsArraysExt
-# I tried putting this inside of an
-# `NDTensorsNamedDimsArraysExt` module
-# but for some reason it kept overloading
-# `Base.similar` instead of `NDTensors.similar`.
-#
-include("NDTensorsNamedDimsArraysExt/NDTensorsNamedDimsArraysExt.jl")
-
#####################################
# A global timer used with TimerOutputs.jl
#

This file was deleted.

1 change: 0 additions & 1 deletion NDTensors/src/NDTensorsNamedDimsArraysExt/fill.jl

This file was deleted.

5 changes: 0 additions & 5 deletions NDTensors/src/NDTensorsNamedDimsArraysExt/similar.jl

This file was deleted.

29 changes: 29 additions & 0 deletions NDTensors/src/abstractarray/diaginterface.jl
@@ -0,0 +1,29 @@
# Selected interface functions from https://github.com/ITensor/DiagonalArrays.jl,
# copied here so we don't have to depend on `DiagonalArrays.jl`.

function diaglength(a::AbstractArray)
  return minimum(size(a))
end

function diagstride(a::AbstractArray)
  s = 1
  p = 1
  for i in 1:(ndims(a) - 1)
    p *= size(a, i)
    s += p
  end
  return s
end

function diagindices(a::AbstractArray)
  maxdiag = LinearIndices(a)[CartesianIndex(ntuple(Returns(diaglength(a)), ndims(a)))]
  return 1:diagstride(a):maxdiag
end

function diagindices(a::AbstractArray{<:Any,0})
  return Base.OneTo(1)
end

function diagview(a::AbstractArray)
  return @view a[diagindices(a)]
end
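For orientation, a small usage sketch of the helpers defined above (illustrative only, not part of the diff). It assumes the definitions are in scope and uses LinearAlgebra.diag purely as a reference point for the matrix case:

using LinearAlgebra: diag

a = reshape(collect(1.0:12.0), 3, 4)  # 3×4 matrix, stored column-major

diaglength(a)            # 3: length of the shortest axis
diagstride(a)            # 4: linear-index step between diagonal entries
collect(diagindices(a))  # [1, 5, 9]
diagview(a) == diag(a)   # true for matrices; the helpers also generalize to N-d arrays
diagview(a) .= 0         # the view is writable, so this zeros the diagonal in place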
2 changes: 1 addition & 1 deletion NDTensors/src/abstractarray/generic_array_constructors.jl
@@ -1,4 +1,4 @@
-using .TypeParameterAccessors:
+using TypeParameterAccessors:
  unwrap_array_type, specify_default_type_parameters, type_parameter

# Convert to Array, avoiding copying if possible
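The same one-character change (dropping the leading dot) recurs in the files below. Previously TypeParameterAccessors was vendored as an internal library inside NDTensors and loaded as a submodule; after this PR it is an ordinary package dependency declared in Project.toml. A minimal sketch of the distinction, illustrative only and using a hypothetical InternalLib module:

module A
  module InternalLib        # a "library" vendored inside the package
    f() = "internal"
  end

  using .InternalLib: f     # leading dot: load the submodule defined in `A` itself
end

A.f()  # "internal"

# Without the dot, `using SomePackage` instead loads an external package that
# must be listed under [deps] in the environment's Project.toml.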
2 changes: 1 addition & 1 deletion NDTensors/src/abstractarray/iscu.jl
@@ -1,4 +1,4 @@
-using .TypeParameterAccessors: unwrap_array_type
+using TypeParameterAccessors: unwrap_array_type
# TODO: Make `isgpu`, `ismtl`, etc.
# For `isgpu`, will require a `NDTensorsGPUArrayCoreExt`.
iscu(A::AbstractArray) = iscu(typeof(A))
2 changes: 1 addition & 1 deletion NDTensors/src/abstractarray/set_types.jl
@@ -1,4 +1,4 @@
-using .TypeParameterAccessors: TypeParameterAccessors
+using TypeParameterAccessors: TypeParameterAccessors

"""
# Do we still want to define things like this?
2 changes: 1 addition & 1 deletion NDTensors/src/abstractarray/similar.jl
@@ -1,5 +1,5 @@
using Base: DimOrInd, Dims, OneTo
-using .TypeParameterAccessors: IsWrappedArray, unwrap_array_type, set_eltype, similartype
+using TypeParameterAccessors: IsWrappedArray, unwrap_array_type, set_eltype, similartype

## Custom `NDTensors.similar` implementation.
## More extensive than `Base.similar`.
2 changes: 1 addition & 1 deletion NDTensors/src/adapt.jl
@@ -27,7 +27,7 @@ double_precision(x) = fmap(x -> adapt(double_precision(eltype(x)), x), x)
# Used to adapt `EmptyStorage` types
#

-using .TypeParameterAccessors: specify_type_parameter, specify_type_parameters
+using TypeParameterAccessors: specify_type_parameter, specify_type_parameters
function adapt_storagetype(to::Type{<:AbstractVector}, x::Type{<:TensorStorage})
  return set_datatype(x, specify_type_parameter(to, eltype, eltype(x)))
end

This file was deleted.

This file was deleted.

61 changes: 0 additions & 61 deletions NDTensors/src/backup/arraystorage/arraystorage/storage/contract.jl

This file was deleted.

This file was deleted.

This file was deleted.

31 changes: 0 additions & 31 deletions NDTensors/src/backup/arraystorage/arraystorage/tensor/contract.jl

This file was deleted.
