I am a user of the MathNet.Numerics library, extensively utilizing its linear algebra capabilities for performance-critical applications. I've noticed that the current API provides several methods for creating vectors from different data structures, such as arrays and enumerables. However, there seems to be a gap when it comes to modern memory types introduced in recent versions of .NET, namely ReadOnlyMemory<T> and ReadOnlySpan<T>.
Current Methods:
As of now, the library supports vector creation through the following methods:
Vector<T>.Build.DenseOfArray(T[])
Vector<T>.Build.DenseOfEnumerable(IEnumerable<T>)
Vector<T>.Build.DenseOfIndexed(int length, (int, T)[])
Vector<T>.Build.DenseOfVector(Vector<T>)
These methods are incredibly useful but do not accommodate the scenarios where data resides in ReadOnlyMemory<T> or ReadOnlySpan<T>, which are becoming increasingly common in high-performance .NET applications.
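For illustration, here is the kind of workaround callers currently need (the helper name is hypothetical): the span has to be materialized into a fresh array via ToArray() before the builder can consume it, which is exactly the extra allocation and copy this request aims to avoid.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

static class SpanWorkaround
{
    // Hypothetical helper: materialize the span into a new array so the existing
    // builder overload can consume it. ToArray() allocates and copies every time.
    public static Vector<double> FromSpan(ReadOnlySpan<double> values)
        => Vector<double>.Build.DenseOfArray(values.ToArray());
}
```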
Proposed Enhancements:
To bridge this gap, I propose the introduction of the following methods:
Vector<T>.Build.DenseOfMemory(ReadOnlyMemory<T>)
Vector<T>.Build.DenseOfSpan(ReadOnlySpan<T>)
These additions would not only enhance the flexibility and performance of vector creation in memory-constrained or latency-sensitive environments but also align MathNet.Numerics with modern .NET development practices. Specifically, they would allow efficient vector creation from both ReadOnlyMemory<T> and Memory<T>, as well as from ReadOnlySpan<T> and Span<T>, without the need to copy or convert the underlying data.
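To make the intended shape concrete, here is a minimal sketch of the two methods, written as extension methods on the existing VectorBuilder<T> purely for illustration. It assumes the builder's current Dense(T[]) overload; the Span overload would still copy once (a Span cannot be captured on the heap), while a fully zero-copy path would require the underlying vector storage to accept Memory<T>.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

// Illustrative sketch only, not a final API proposal.
static class ProposedDenseBuilders
{
    public static Vector<double> DenseOfSpan(this VectorBuilder<double> build, ReadOnlySpan<double> values)
    {
        // A Span cannot outlive this call, so copy once into an array the storage can own.
        var data = new double[values.Length];
        values.CopyTo(data);
        return build.Dense(data);
    }

    public static Vector<double> DenseOfMemory(this VectorBuilder<double> build, ReadOnlyMemory<double> values)
        => build.DenseOfSpan(values.Span);
}

// Usage: var v = Vector<double>.Build.DenseOfSpan(buffer.AsSpan(0, n));
```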
Motivation:
The motivation for this request stems from the increasing use of Span<T> and Memory<T> types in .NET for managing memory more efficiently. By supporting these types, MathNet.Numerics can provide developers with more flexibility in how they handle numerical data, potentially reducing overhead and improving the performance of numerical computations.
I believe that these enhancements will significantly benefit users of MathNet.Numerics who are working with large datasets or in performance-critical applications, where memory efficiency and speed are paramount.
Thank you for considering this feature request. I am looking forward to any discussion on this topic and am happy to contribute to the implementation if needed.
Hi, this sounds like a valid and useful request. Would you mind sending a PR for it so I can have a detailed look at how you envision its concrete usage?
After reviewing the source code, I've determined that significant modifications would be necessary to implement this feature effectively.
Accepting Memory or Span would only be advantageous if used for all internal mathematical operations. However, the current implementation of Vector<T> and Matrix<T> relies heavily on memory being stored in arrays. Changing this would require substantial modifications to all helper methods in ILinearAlgebraProvider, among others. Such changes would constitute a massive breaking change, as much of the public API would be altered.
To implement this feature, it would be necessary to first refactor the code to internally utilize ReadOnlySpan<T> and Span<T>, similar to what is done in the new System.Numerics.Tensors.TensorPrimitives.
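For reference, this is the span-first pattern TensorPrimitives uses: kernels take ReadOnlySpan<T> inputs and write into a caller-supplied Span<T>, so the same code path serves arrays, slices of Memory<T>, stackalloc buffers, and pooled memory. A small demo using the float overloads from the System.Numerics.Tensors package:

```csharp
using System;
using System.Numerics.Tensors;

class SpanFirstDemo
{
    static void Main()
    {
        // The buffers here are stack-allocated, but the same call works on arrays,
        // pooled buffers, or slices of Memory<float>.
        ReadOnlySpan<float> x = stackalloc float[] { 1f, 2f, 3f };
        ReadOnlySpan<float> y = stackalloc float[] { 4f, 5f, 6f };
        Span<float> sum = stackalloc float[3];

        TensorPrimitives.Add(x, y, sum);                        // element-wise add, vectorized internally
        Console.WriteLine(string.Join(", ", sum.ToArray()));    // 5, 7, 9
    }
}
```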
Subsequently, we could refactor VectorStorage<T> and similar components to work with Memory<T>. Only after these steps could we introduce new Build.DenseOf... methods.
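As a rough illustration of that direction (the type and its members below are hypothetical and do not match the library's actual VectorStorage<T> hierarchy): a dense storage holding a Memory<T> instead of a T[] could wrap caller-owned buffers without copying while still exposing spans to the internal kernels.

```csharp
using System;

// Hypothetical sketch only.
public sealed class MemoryDenseStorage<T>
{
    private readonly Memory<T> _data;

    // Keeps a reference to the caller's buffer; no copy is made.
    public MemoryDenseStorage(Memory<T> data) => _data = data;

    public int Length => _data.Length;

    public T this[int index]
    {
        get => _data.Span[index];
        set => _data.Span[index] = value;
    }

    // Span-based view for allocation-free math kernels.
    public Span<T> AsSpan() => _data.Span;
}
```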
The benefits from a performance and memory efficiency standpoint would be enormous.