Add binary IO for matrix_data #984

Merged
upsj merged 3 commits into develop from binary_io on Mar 10, 2022

Conversation

@upsj (Member) commented Mar 7, 2022

This PR adds a binary IO format for matrix_data. The implementation itself should not depend on byte order/endianness, but the resulting files are not compatible between big-endian and little-endian architectures. That could definitely be handled, since the header can be used to detect endianness, but I didn't want to put in the effort if we don't consider it necessary.

This should heavily speed up benchmarks, which currently spend a lot of time just parsing text data in the .mtx format.
If this is merged, I will probably convert our storage on LSDF to the binary format.
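As an illustration (not code from this PR), this is one way a header marker could be used to detect endianness on read; the marker value and helper names are hypothetical:

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical 64-bit marker stored in the header in the writer's native
// byte order; this is not the layout actually used by the PR.
constexpr std::uint64_t endian_marker = 0x0102030405060708ULL;

inline std::uint64_t byteswap64(std::uint64_t v)
{
    std::uint64_t r = 0;
    for (int i = 0; i < 8; i++) {
        r = (r << 8) | ((v >> (8 * i)) & 0xFFu);  // reverse the byte order
    }
    return r;
}

// Decide whether the reader has to byte-swap all following header fields and
// entries, based on how the stored marker compares to the native one.
inline bool needs_byteswap(std::uint64_t stored_marker)
{
    if (stored_marker == endian_marker) {
        return false;  // file was written with the same endianness
    }
    if (stored_marker == byteswap64(endian_marker)) {
        return true;   // opposite endianness: swap every multi-byte field
    }
    throw std::runtime_error("not a valid binary matrix file");
}

int main()
{
    // a file written on this machine needs no swapping
    return needs_byteswap(endian_marker) ? 1 : 0;
}
```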

Related to #101

@upsj upsj added the 1:ST:ready-for-review label Mar 7, 2022
@upsj upsj requested a review from a team March 7, 2022 13:00
@upsj upsj self-assigned this Mar 7, 2022
@ginkgo-bot ginkgo-bot added the mod:core, reg:benchmarking, reg:build, reg:testing, type:matrix-format, type:preconditioner, and type:solver labels Mar 7, 2022
@upsj (Member Author) commented Mar 7, 2022

format-rebase!

@ginkgo-bot (Member):

Formatting rebase introduced changes; see Artifacts here to review them

@yhmtsai (Member) left a comment

should we keep the same 1-based indexing as the text Matrix Market format?

void process(const char* input, const char* output, bool validate)
{
    std::ifstream is(input);
    std::cerr << "Reading from " << input << '\n';
Member:

using clog?

upsj (Member Author):

I think providing immediate output is more useful here for long-running conversions

Member:

or using std::endl would flush it.
Is there any reason to use cerr rather than cout here? To me, these are informational messages, not error messages.

@upsj (Member Author), Mar 10, 2022:

IMO, cout is for output that can be passed on to other tools, e.g. via pipes/redirects, while cerr is for status and error messages. As there is no useful information to be passed on, I don't use cout here.
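A minimal sketch of that convention, using a hypothetical tool rather than code from this PR: status goes to stderr so that stdout stays clean for data that may be piped or redirected.

```cpp
#include <iostream>

int main()
{
    std::cerr << "converting matrix...\n";  // status: visible on the terminal, not captured by pipes
    std::cout << "42\n";                    // data: survives `tool > out.txt` or `tool | sort`
}
```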

@@ -41,7 +41,6 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include <ginkgo/core/base/exception_helpers.hpp>
#include <ginkgo/core/base/executor.hpp>
#include <ginkgo/core/base/lin_op.hpp>
#include <ginkgo/core/base/mtx_io.hpp>
Member:

It's still needed, I think? Some functions use matrix_data

upsj (Member Author):

that is matrix_data.hpp; mtx_io.hpp only contains the read_raw and write_raw functions plus the read/write templates, which are orthogonal to this.

Comment on lines +852 to +862
result.nonzeros[i].value = static_cast<ValueType>(
    select_helper<is_complex<ValueType>()>::get(value, real(value)));
Member:

It's needed because you can have FileValueType = complex but ValueType != complex, right?
Although that case is not reachable, it still needs to be legal code.

upsj (Member Author):

exactly ;)
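For context, a sketch of how such a compile-time selection can keep the unreachable branch legal; select_helper here is an illustration under assumed semantics, not the actual implementation in mtx_io.cpp:

```cpp
#include <complex>
#include <iostream>

template <bool is_complex_target>
struct select_helper {
    // target type is complex: forward the (possibly complex) value unchanged
    template <typename T1, typename T2>
    static T1 get(T1 value, T2) { return value; }
};

template <>
struct select_helper<false> {
    // target type is real: use the precomputed real part instead, so an
    // ill-formed complex-to-real conversion never appears in this branch
    template <typename T1, typename T2>
    static T2 get(T1, T2 real_part) { return real_part; }
};

int main()
{
    std::complex<double> value{1.0, 2.0};
    // mirrors the pattern in the quoted snippet:
    auto as_real = static_cast<float>(
        select_helper<false>::get(value, value.real()));
    auto as_complex = static_cast<std::complex<float>>(
        select_helper<true>::get(value, value.real()));
    std::cout << as_real << " " << as_complex << '\n';
}
```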

Comment on lines +854 to +864
result.nonzeros[i].row = row;
result.nonzeros[i].column = column;
Member:

maybe keep 1-based indexing in the binary format, as MatrixMarket does?

upsj (Member Author):

what would be the advantage compared to 0-based indexing?

Member:

almost none, but you would have the same base across the text and binary formats

upsj (Member Author):

the reason MatrixMarket files use 1-based indexing is probably rooted in its FORTRAN history? As Ginkgo is C++-based and this is our own format, we can probably do whatever we want? 😆

Member:

it depends on the format definition: if it is Ginkgo's own binary format, 0-based is good; if it is a binary format for Matrix Market, 1-based is closer to Matrix Market

core/base/mtx_io.cpp (resolved)
core/base/mtx_io.cpp (outdated, resolved)
@codecov bot commented Mar 7, 2022

Codecov Report

Merging #984 (e752b85) into develop (cff196b) will decrease coverage by 0.81%.
The diff coverage is 97.77%.

❗ Current head e752b85 differs from pull request most recent head 092c57a. Consider uploading reports for the commit 092c57a to get more accurate results

Impacted file tree graph

@@             Coverage Diff             @@
##           develop     #984      +/-   ##
===========================================
- Coverage    93.05%   92.23%   -0.82%     
===========================================
  Files          479      479              
  Lines        39843    40069     +226     
===========================================
- Hits         37077    36959     -118     
- Misses        2766     3110     +344     
| Impacted Files | Coverage Δ |
|---|---|
| core/test/utils/assertions.hpp | 67.61% <ø> (ø) |
| include/ginkgo/core/matrix/dense.hpp | 96.32% <ø> (-0.74%) ⬇️ |
| core/base/mtx_io.cpp | 87.85% <93.90%> (+3.52%) ⬆️ |
| core/test/base/mtx_io.cpp | 98.86% <100.00%> (+1.11%) ⬆️ |
| include/ginkgo/core/base/mtx_io.hpp | 100.00% <100.00%> (ø) |
| test/matrix/matrix.cpp | 0.00% <0.00%> (-81.28%) ⬇️ |
| common/unified/matrix/ell_kernels.cpp | 28.57% <0.00%> (-35.72%) ⬇️ |
| common/unified/matrix/sellp_kernels.cpp | 8.95% <0.00%> (-32.84%) ⬇️ |
| omp/matrix/sparsity_csr_kernels.cpp | 35.06% <0.00%> (-23.38%) ⬇️ |
| common/unified/matrix/hybrid_kernels.cpp | 35.71% <0.00%> (-21.43%) ⬇️ |
| ... and 17 more | |

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cff196b...092c57a.

@pratikvn (Member) left a comment

LGTM! Nice work. The x_generic_raw looks very useful. A minor question on the header and what we should expect (or rather want) it to contain.

core/base/mtx_io.cpp (resolved)
core/base/mtx_io.cpp (resolved)
include/ginkgo/core/base/mtx_io.hpp (outdated, resolved)
include/ginkgo/core/base/mtx_io.hpp (outdated, resolved)
@MarcelKoch (Member):

Just a general comment: would it be possible to document somewhere how the binary format is defined? I think describing the header in particular would be helpful.

@MarcelKoch (Member) left a comment

Nice work. I think the new read_binary_* implementation is missing the sorting of the matrix_data. Besides that, it would be nice if you could add some short documentation to the new functions in mtx_io.cpp.

benchmark/tools/CMakeLists.txt (outdated, resolved)
benchmark/tools/mtx_to_binary.cpp (outdated, resolved)
core/base/mtx_io.cpp (outdated, resolved)
include/ginkgo/core/base/mtx_io.hpp (outdated, resolved)
@upsj upsj added the 1:ST:ready-to-merge label and removed the 1:ST:ready-for-review and 1:ST:run-full-test labels Mar 10, 2022
@yhmtsai (Member) left a comment

LGTM in general. Some questions about the reservation_size.


core/test/base/mtx_io.cpp (outdated, resolved)
include/ginkgo/core/base/mtx_io.hpp (outdated, resolved)
include/ginkgo/core/base/mtx_io.hpp (outdated, resolved)
Comment on lines +345 to +346
return 2 * num_nonzeros -
       min(2 * num_nonzeros, max(num_rows, num_cols));
Member:

Suggested change:
-    return 2 * num_nonzeros -
-        min(2 * num_nonzeros, max(num_rows, num_cols));
+    return 2 * num_nonzeros;

the reservation is 0 if 2 * num_nonzeros is smaller than max(num_rows, num_cols).
If the reservation were required to be large enough, it would have to be the full 2 * num_nonzeros (worst case: no diagonal entries)

upsj (Member Author):

It's not required to be large enough; we only want to allocate enough memory for the common cases. For common, non-hypersparse symmetric matrices, the diagonal is full, and this gives a perfect estimate.
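A quick standalone check of that estimate (not code from the PR): for a symmetric n x n tridiagonal matrix stored as its lower triangle, the heuristic is exact, while a hypersparse matrix gets a reservation of 0.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Reservation heuristic from the snippet above, written out standalone.
std::int64_t reservation(std::int64_t num_nonzeros, std::int64_t num_rows,
                         std::int64_t num_cols)
{
    return 2 * num_nonzeros -
           std::min(2 * num_nonzeros, std::max(num_rows, num_cols));
}

int main()
{
    const std::int64_t n = 1000;
    // symmetric tridiagonal n x n matrix stored as its lower triangle:
    // nnz = 2n - 1, the full matrix has 3n - 2 entries -> estimate is exact
    assert(reservation(2 * n - 1, n, n) == 3 * n - 2);
    // hypersparse case: 2 * nnz < max(rows, cols) -> reservation is 0
    assert(reservation(3, n, n) == 0);
}
```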

* Simplify binary magic setup
* Improve documentation
* Sort output of binary read

Co-authored-by: Yuhsiang Tsai <yhmtsai@gmail.com>
Co-authored-by: Marcel Koch <marcel.koch@kit.edu>
Co-authored-by: Pratik Nayak <pratik.nayak@kit.edu>
@ginkgo-bot (Member):

Note: This PR changes the Ginkgo ABI:

Functions changes summary: 0 Removed, 0 Changed, 55 Added functions
Variables changes summary: 0 Removed, 0 Changed, 0 Added variable

For details check the full ABI diff under Artifacts here

@upsj upsj merged commit 843880a into develop Mar 10, 2022
@upsj upsj deleted the binary_io branch March 10, 2022 18:34
@sonarcloud bot commented Mar 10, 2022

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 27 (rating A)

Coverage: 43.2%
Duplication: 0.0%

tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog,
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers;
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [GitHub Discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node support for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Porting the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110),  [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name ([#1149](#1149))
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
Labels
1:ST:ready-to-merge, mod:core, reg:benchmarking, reg:build, reg:testing, type:matrix-format, type:preconditioner, type:solver
5 participants