Replies: 2 comments 5 replies
-
For the dependencies:
-
Building on the bullet point above about the minimal interface, here is a new interface proposal:
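As one purely hypothetical illustration of what a minimal interface along these lines might look like (the `DOKArray` type and the name `getunstoredindex` are assumptions made for this sketch, not necessarily the actual proposal; `storedvals`, `storedinds`, and `isstored` follow the renames discussed in the to-do list below):

```julia
# Toy dictionary-of-keys (DOK) array implementing a sketched minimal
# sparse interface. All names here are illustrative assumptions.
struct DOKArray{T,N} <: AbstractArray{T,N}
  storage::Dict{CartesianIndex{N},T}
  dims::NTuple{N,Int}
end
DOKArray{T}(dims::Vararg{Int,N}) where {T,N} =
  DOKArray{T,N}(Dict{CartesianIndex{N},T}(), dims)

Base.size(a::DOKArray) = a.dims

# Sketched interface functions (hypothetical names):
isstored(a::DOKArray, I::CartesianIndex) = haskey(a.storage, I)
storedinds(a::DOKArray) = keys(a.storage)
storedvals(a::DOKArray) = values(a.storage)
getstoredindex(a::DOKArray, I::CartesianIndex) = a.storage[I]
getunstoredindex(a::DOKArray{T}, I::CartesianIndex) where {T} = zero(T)

# Base methods derived from the interface functions.
function Base.getindex(a::DOKArray{T,N}, I::Vararg{Int,N}) where {T,N}
  J = CartesianIndex(I)
  return isstored(a, J) ? getstoredindex(a, J) : getunstoredindex(a, J)
end
function Base.setindex!(a::DOKArray{T,N}, v, I::Vararg{Int,N}) where {T,N}
  a.storage[CartesianIndex(I)] = convert(T, v)
  return a
end
```

With just `size` plus `getindex`/`setindex!` derived from a handful of interface functions, the type behaves like a normal `AbstractArray` while still exposing its sparsity structure.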
I'm not certain that covers everything we need, so we'll have to try it out and see. Note that the names are meant to stick as close as possible to the corresponding functions in the `SparseArrays.jl` standard library.
-
To-do list for splitting off `NDTensors.SparseArraysBase` as a separate registered package `SparseArraysBase.jl`:

- Remove the `AnyAbstractSparseArray` type union in favor of an `@derive` macro, similar to `Moshi.@derive`, the Rust `derive` attribute for implementing traits, and `ArrayLayouts.@layoutmatrix` and related macros in `ArrayLayouts.jl`. This would basically automatically define `getindex`, `map!`, etc. as `sparse_getindex`, `sparse_map!`, etc. on a specified type or wrapper.
- Investigate the other sub-modules of `NDTensors.jl` that `SparseArraysBase` depends on, and either remove those dependencies or assess what we need to do to split off those libraries into packages as well, or maybe merge them into `SparseArraysBase`. For example, right now it relies on:
  - `BroadcastMapConversion`, which converts broadcast calls to map calls (heavily inspired by the broadcasting logic in `Strided.jl`). That library is also used in other sub-modules of `NDTensors.jl`, such as `BlockSparseArrays` and `NamedDimsArrays`.
  - `TypeParameterAccessors`, for generically accessing type parameters; in particular, it uses functionality for generically getting the type of the parent of a wrapper type. We've been planning to split that off for a while, though I think there are still some type instability issues and interface questions to decide on, so I'm not sure how comfortable I am doing that right now.
  - `NestedPermutedDimsArrays`. This dependency could be turned into a package extension, with the sparse array interface getting overloaded with `@derive`.
- Decide how to represent the unstored (zero) values of a sparse array `a`. For an elementwise sparse array it could be `FillArrays.Zeros(eltype(a), size(a))` (based on `FillArrays.Zeros`), but for a block sparse array it could be `MappedArrays.mappedarray(I -> zeros(eltype(a), blocksizes(a)[I]), CartesianIndices(blocksize(a)))` (based on `MappedArrays.MappedArray`).
- Settle on the interface functions `storage_index_to_index`, `index_to_storage_index`, `stored_indices`, etc. (both the names and the functionality). For example, what should be assumed about `stored_indices`? Should it list the indices in the same order that the corresponding values are stored in the `sparse_storage`? Should they be expected to have fast (in the DOK case, `O(1)`) lookup of indices?
- Assess how `SparseArraysBase` corresponds to the current or planned functions in the `SparseArrays.jl` standard library, such as `nnz`, `nonzerovals` (proposed to be changed to `storedvals`), `nonzeroinds` (proposed to be changed to `storedinds`), `isstored`, etc.
- Consider a macro `@avoid_alloc` that changes the behavior of `a[B]` for a block sparse array `a` so that if the block `B` doesn't exist in the storage, it returns an unallocated `FillArrays.Zeros` or `UnallocatedArrays.UnallocatedZeros` object of the correct size. This can help with optimizing map operations; for example, expressions like `a[B] += x` right now have an unnecessary intermediate allocation, since for the default `a::BlockSparseArray` (with the default zero constructor), `a[B]` allocates an array filled with zeros. `@avoid_alloc a[B] += x` could rewrite that expression as `BlockSparseArray(blocks(a), z)[B] += x`, where `z = mappedarray(I -> Zeros(eltype(a), blocksizes(a)[I]), CartesianIndices(blocksize(a)))`, i.e. an array defining the zero elements such that the zero element at each index is a lazy zero array.
- Consider representing zero values with types from `VectorInterface.jl`, `Zeros.jl`, `StaticNumbers.jl`, `Static.jl`, etc. Alternatively, if the elements are arrays, we could use `FillArrays.Zeros`. However, these approaches have failure modes, for example if the function being passed isn't generic enough to accept those types, or if we can't determine which types or array sizes to use based on the input array.