
Add Sparse BLAS benchmark #759

Merged
merged 5 commits into from
Sep 20, 2021
Conversation

upsj
Member

@upsj upsj commented May 5, 2021

To simplify #626 a bit, I pulled out some changes from it:

This PR adds benchmarks for SpGEAM, SpGEAM and transpose as well as named lambda operations to Executors.
As input parameters we use

  • SpGEAM: A + A' where A' is a distance-limited row-permutation of the input matrix
  • SpGEMM: A * A' where A' is either A (square), A^T (rectangular), or a generated random sparse or dense matrix of appropriate size.

Additionally it contains wrappers for nsparse, SpECK and KokkosKernels.
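The SpGEAM input described above (A + A' with A' a row-permutation of A) can be sketched on plain COO data. The `entry` struct and the helpers `spgeam` and `permute_rows` below are illustrative stand-ins, not the benchmark's actual implementation (which operates on Ginkgo's matrix data and Csr kernels):

```cpp
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// One COO-style (row, column, value) matrix entry.
struct entry {
    int row;
    int col;
    double val;
};

// SpGEAM on COO data: C = A + B, accumulating entries that share the
// same (row, col) coordinate. The std::map keeps the result sorted.
std::vector<entry> spgeam(const std::vector<entry>& a,
                          const std::vector<entry>& b)
{
    std::map<std::pair<int, int>, double> acc;
    for (const auto& e : a) acc[{e.row, e.col}] += e.val;
    for (const auto& e : b) acc[{e.row, e.col}] += e.val;
    std::vector<entry> c;
    for (const auto& [coord, val] : acc) {
        c.push_back({coord.first, coord.second, val});
    }
    return c;
}

// Build A' by applying a row permutation to A: entry (i, j, v) moves
// to row perm[i].
std::vector<entry> permute_rows(const std::vector<entry>& a,
                                const std::vector<int>& perm)
{
    auto result = a;
    for (auto& e : result) e.row = perm[e.row];
    return result;
}
```

With a distance-limited permutation, most rows of A' are close to their original position, so the addition mostly merges rows with similar sparsity patterns.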

@upsj upsj self-assigned this May 5, 2021
@upsj upsj added the 1:ST:ready-for-review This PR is ready for review label May 5, 2021
@ginkgo-bot ginkgo-bot added mod:core This is related to the core module. reg:benchmarking This is related to benchmarking. reg:build This is related to the build system. reg:testing This is related to testing. labels May 5, 2021
@codecov

codecov bot commented May 5, 2021

Codecov Report

Merging #759 (3bcd92a) into develop (8ccf6e4) will increase coverage by 0.00%.
The diff coverage is 75.92%.


@@           Coverage Diff            @@
##           develop     #759   +/-   ##
========================================
  Coverage    94.77%   94.78%           
========================================
  Files          429      429           
  Lines        35202    35201    -1     
========================================
+ Hits         33364    33365    +1     
+ Misses        1838     1836    -2     
Impacted Files Coverage Δ
include/ginkgo/core/base/executor.hpp 73.00% <ø> (ø)
omp/solver/multigrid_kernels.cpp 100.00% <ø> (ø)
reference/solver/multigrid_kernels.cpp 100.00% <ø> (ø)
core/solver/multigrid.cpp 93.37% <55.55%> (ø)
reference/test/solver/multigrid_kernels.cpp 95.30% <66.66%> (ø)
core/test/solver/multigrid.cpp 89.52% <82.60%> (ø)
core/multigrid/amgx_pgm.cpp 100.00% <100.00%> (ø)
include/ginkgo/core/solver/multigrid.hpp 98.75% <100.00%> (ø)
omp/test/solver/multigrid_kernels.cpp 100.00% <100.00%> (ø)
omp/reorder/rcm_kernels.cpp 98.13% <0.00%> (+0.60%) ⬆️
... and 1 more


Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8ccf6e4...3bcd92a.

pratikvn
pratikvn previously approved these changes May 5, 2021
Member

@pratikvn pratikvn left a comment

LGTM! Do you want to merge this before or after the SpGEMM PR?

benchmark/spgeam/spgeam.cpp (outdated, resolved)
benchmark/spgemm/spgemm.cpp (outdated, resolved)
core/device_hooks/cuda_hooks.cpp (outdated, resolved)
include/ginkgo/core/base/executor.hpp (outdated, resolved)
include/ginkgo/core/base/executor.hpp (outdated, resolved)
@upsj
Member Author

upsj commented May 5, 2021

@pratikvn This would be merged before SpGEMM, I want to slim down the actual PR a bit before returning to it.

@upsj upsj added this to the Ginkgo 1.4.0 milestone May 5, 2021
@upsj upsj modified the milestones: Ginkgo 1.4.0, Ginkgo 1.5.0 May 22, 2021
Member

@tcojean tcojean left a comment

In general LGTM, but now that there are BLAS benchmarks, should these be added there, or maybe to an equivalent sparse BLAS benchmark, which would also take care of changing CSR strategies, among other things? The SpGEMM is a bit more complex than the SpGEAM, but maybe with a LinOp to represent them, unification would be possible?

benchmark/spgeam/CMakeLists.txt (outdated, resolved)
benchmark/spgemm/CMakeLists.txt (outdated, resolved)
core/device_hooks/cuda_hooks.cpp (outdated, resolved)
include/ginkgo/core/base/executor.hpp (outdated, resolved)
include/ginkgo/core/base/executor.hpp (outdated, resolved)
include/ginkgo/core/base/executor.hpp (outdated, resolved)
include/ginkgo/core/base/executor.hpp (outdated, resolved)
benchmark/spgemm/spgemm.cpp (outdated, resolved)
benchmark/spgemm/spgemm.cpp (outdated, resolved)
benchmark/spgemm/spgemm.cpp (outdated, resolved)
Member

@yhmtsai yhmtsai left a comment

LGTM in general, but I have some questions in the comments.

benchmark/matrix_statistics/matrix_statistics.cpp (outdated, resolved)
benchmark/spgeam/spgeam.cpp (outdated, resolved)
auto a = start_dist(rng);
auto b = a + delta_dist(rng);
if (b >= 0 && b < data.size[0]) {
    std::swap(permutation[a], permutation[b]);
Member

Several combined swaps can produce a large shift, e.g. swap(1, 3) followed by swap(3, 5) gives 3, 5, 1, so the 1 ends up in position 5.
Is that also fine?

Member Author

Yes, that is okay, since it only happens with small probability. I only want the rows to be similar on average, so that we are not always merging completely disjoint rows.
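The construction under discussion can be read in context as roughly the following self-contained sketch. The function name `limited_permutation` and the parameters `max_distance` and `num_swaps` are illustrative, not the benchmark's actual identifiers:

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// Build a distance-limited row permutation: start from the identity
// and apply random swaps between positions at most max_distance apart.
// Chained swaps can occasionally move an index much further, but only
// with small probability, so permuted rows stay close to their origin
// on average.
std::vector<int> limited_permutation(int num_rows, int max_distance,
                                     int num_swaps, unsigned seed)
{
    std::vector<int> permutation(num_rows);
    std::iota(permutation.begin(), permutation.end(), 0);
    std::default_random_engine rng{seed};
    std::uniform_int_distribution<int> start_dist{0, num_rows - 1};
    std::uniform_int_distribution<int> delta_dist{-max_distance, max_distance};
    for (int i = 0; i < num_swaps; ++i) {
        auto a = start_dist(rng);
        auto b = a + delta_dist(rng);
        // Skip swaps whose target falls outside the matrix.
        if (b >= 0 && b < num_rows) {
            std::swap(permutation[a], permutation[b]);
        }
    }
    return permutation;
}
```

Every swap preserves the permutation property, so the result is always a valid permutation regardless of how many swaps are applied.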

benchmark/spgeam/spgeam.cpp (outdated, resolved)
benchmark/spgeam/spgeam.cpp (outdated, resolved)
core/test/base/executor.cpp (outdated, resolved)
core/test/base/executor.cpp (outdated, resolved)
include/ginkgo/core/base/executor.hpp (outdated, resolved)
benchmark/spgemm/spgemm.cpp (outdated, resolved)
benchmark/spgemm/spgemm.cpp (outdated, resolved)
@upsj upsj force-pushed the spgeamm_benchmarks branch 2 times, most recently from b3fc32c to 18e9a4b on July 8, 2021 15:27
@upsj upsj changed the title from "Add SpGE(A|M)M benchmarks" to "Add Sparse BLAS benchmark" Jul 8, 2021
@upsj upsj requested review from yhmtsai, pratikvn and tcojean July 8, 2021 16:12
@ginkgo-project ginkgo-project deleted a comment from sonarcloud bot Jul 14, 2021
Member

@yhmtsai yhmtsai left a comment

LGTM, some minor nits. The swapping-distance usage is okay with me, because we do not claim the permutation is bounded by the distance.

include/ginkgo/core/base/executor.hpp (resolved)
benchmark/sparse_blas/sparse_blas.cpp (outdated, resolved)
"against the ReferenceExecutor solution.");


std::pair<bool, double> validate_result(const Mtx *correct_mtx,
Member

Maybe add some comments here, or in the output, to describe the meaning of the error values:
false means inconsistent matrix structure, and when it is true, the double is the numerical error, right?
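Those semantics could be documented with a comment along the lines of this hedged sketch. The flattened (column, value)-per-row representation and the exact error metric are illustrative only, not the benchmark's actual `Mtx`-based code:

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Simplified sparse matrix for illustration: for each row, a sorted
// list of (column index, value) pairs.
using sparse_rows = std::vector<std::vector<std::pair<int, double>>>;

// Returns {false, 0.0} if the two matrices have inconsistent sparsity
// structure (different row counts, row sizes, or column indices).
// Otherwise returns {true, err}, where err is the accumulated
// numerical error between matching values (here: Frobenius norm of
// the difference, as one possible metric).
std::pair<bool, double> validate_result(const sparse_rows& correct,
                                        const sparse_rows& test)
{
    if (correct.size() != test.size()) return {false, 0.0};
    double error = 0.0;
    for (std::size_t row = 0; row < correct.size(); ++row) {
        if (correct[row].size() != test[row].size()) return {false, 0.0};
        for (std::size_t i = 0; i < correct[row].size(); ++i) {
            // structural mismatch: differing column index
            if (correct[row][i].first != test[row][i].first) {
                return {false, 0.0};
            }
            const auto diff = correct[row][i].second - test[row][i].second;
            error += diff * diff;
        }
    }
    return {true, std::sqrt(error)};
}
```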

benchmark/sparse_blas/sparse_blas.cpp (outdated, resolved)
Member

@pratikvn pratikvn left a comment

LGTM! Not sure, but this maybe needs to be rebased.

benchmark/sparse_blas/sparse_blas.cpp (resolved)
@upsj upsj added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Sep 19, 2021
@upsj upsj force-pushed the spgeamm_benchmarks branch 2 times, most recently from 67d9b49 to 25dcc1f on September 19, 2021 15:50
@ginkgo-project ginkgo-project deleted a comment from sonarcloud bot Sep 19, 2021
@upsj
Member Author

upsj commented Sep 19, 2021

rebase!

upsj and others added 5 commits September 19, 2021 18:36
Looks like I forgot a few files from the Multigrid PR
in my formatting PR
Co-authored-by: Yuhsiang Tsai <yhmtsai@gmail.com>
@sonarcloud

sonarcloud bot commented Sep 20, 2021

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 14 (rating A)

0.0% Coverage
3.8% Duplication

@upsj upsj merged commit 0a5f9fd into develop Sep 20, 2021
@upsj upsj deleted the spgeamm_benchmarks branch September 20, 2021 08:03
tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog,
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers;
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node support for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Porting the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110),  [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))