
CandidateVectorSearch

Searching for peptide candidates using sparse matrix + [sparse] vector/matrix multiplication. This is the computational backend for CandidateSearch - a search engine that aims to (quickly) identify peptide candidates for a given mass spectrum without any information about precursor mass or variable modifications. This is also the computational backend for the non-cleavable crosslink search in MS Annika.

Implements the following methods across two DLLs:

  • VectorSearch.dll:
    • findTopCandidates: sparse matrix - sparse vector multiplication [f32] using Eigen.
    • findTopCandidatesInt: sparse matrix - sparse vector multiplication [i32] using Eigen.
    • findTopCandidates2: sparse matrix - dense vector multiplication [f32] using Eigen.
    • findTopCandidates2Int: sparse matrix - dense vector multiplication [i32] using Eigen.
    • findTopCandidatesBatched: sparse matrix - sparse matrix multiplication [f32] using Eigen.
    • findTopCandidatesBatchedInt: sparse matrix - sparse matrix multiplication [i32] using Eigen.
    • findTopCandidatesBatched2: sparse matrix - dense matrix multiplication [f32] using Eigen.
    • findTopCandidatesBatched2Int: sparse matrix - dense matrix multiplication [i32] using Eigen.
  • VectorSearchCUDA.dll:
    • findTopCandidatesCuda: sparse matrix - dense vector multiplication [f32] using CUDA (SpMV).
    • findTopCandidatesCudaBatched: sparse matrix - sparse matrix multiplication [f32] using CUDA (SpGEMM).
    • findTopCandidatesCudaBatched2: sparse matrix - dense matrix multiplication [f32] using CUDA (SpMM).

VectorSearch.dll implements functions that run on the CPU, while VectorSearchCUDA.dll implements functions that run on an NVIDIA GPU using CUDA (version 12.2.0_536.25_windows).

Which functions should be used depends on the problem size and the available hardware. A general recommendation is to use findTopCandidates2 or findTopCandidates2Int on CPUs and findTopCandidatesCuda on GPUs.
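The sketch below illustrates, under simplified assumptions, the kind of operation behind findTopCandidates2: a sparse candidate matrix is multiplied with a dense spectrum vector using Eigen, and the highest-scoring rows are kept. The dimensions, the binning, and all names are illustrative only and do not reflect the library's actual API.

```cpp
// Minimal sketch (not the library's API): score peptide candidates against one
// spectrum via a sparse-matrix * dense-vector product with Eigen, then keep the
// top-scoring rows. Dimensions and bin indices are illustrative only.
#include <Eigen/Sparse>
#include <Eigen/Dense>
#include <algorithm>
#include <numeric>
#include <vector>
#include <iostream>

int main()
{
    const int nCandidates = 4;   // rows: peptide candidates
    const int nBins = 1000;      // cols: discretized m/z bins (illustrative)

    // Candidate matrix: one sparse row per peptide, non-zeros at its ion bins.
    std::vector<Eigen::Triplet<float>> triplets = {
        {0, 10, 1.f}, {0, 200, 1.f}, {0, 550, 1.f},
        {1, 10, 1.f}, {1, 300, 1.f},
        {2, 200, 1.f}, {2, 550, 1.f}, {2, 900, 1.f},
        {3, 700, 1.f}
    };
    Eigen::SparseMatrix<float, Eigen::RowMajor> candidates(nCandidates, nBins);
    candidates.setFromTriplets(triplets.begin(), triplets.end());

    // Spectrum as a dense vector: 1.0 at every bin that contains a peak.
    Eigen::VectorXf spectrum = Eigen::VectorXf::Zero(nBins);
    spectrum(10) = 1.f; spectrum(200) = 1.f; spectrum(550) = 1.f;

    // Sparse matrix - dense vector product: one score per candidate.
    Eigen::VectorXf scores = candidates * spectrum;

    // Rank candidates by score and report the top 2.
    std::vector<int> order(nCandidates);
    std::iota(order.begin(), order.end(), 0);
    std::partial_sort(order.begin(), order.begin() + 2, order.end(),
                      [&](int a, int b) { return scores(a) > scores(b); });
    for (int i = 0; i < 2; ++i)
        std::cout << "candidate " << order[i] << " score " << scores(order[i]) << "\n";
    return 0;
}
```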

Documentation

Functions are documented within the source code.

A better description of the input arrays is given in input.md.

Example usage, where the functions are called from a C# application, is given here (CPU) and here (GPU). A wrapper for C# is given here.

Documentation is also available at https://hgb-bin-proteomics.github.io/CandidateVectorSearch/.

Benchmarks

See benchmarks.md.

Requirements

  • .NET may be required on some systems to run the DataLoader testing suite.
  • [Optional] Using GPU-based approaches (i.e. anything implemented in VectorSearchCUDA.dll) requires a CUDA-capable GPU and CUDA version == 12.2.0 (download here). Other CUDA versions may or may not produce the desired results (see this issue); a minimal runtime version check is sketched below.
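If you are unsure which CUDA runtime is installed, a minimal check might look like the following. This is only an illustration (not part of this repository) and uses the standard cudaRuntimeGetVersion call from the CUDA runtime API.

```cpp
// Minimal sketch: verify that the installed CUDA runtime is 12.2 before using
// VectorSearchCUDA.dll. Requires the CUDA toolkit headers to compile.
#include <cuda_runtime_api.h>
#include <cstdio>

int main()
{
    int runtimeVersion = 0;
    if (cudaRuntimeGetVersion(&runtimeVersion) != cudaSuccess)
    {
        printf("No CUDA runtime available.\n");
        return 1;
    }
    // The version is encoded as 1000 * major + 10 * minor, so 12.2 -> 12020.
    printf("CUDA runtime %d.%d detected.\n",
           runtimeVersion / 1000, (runtimeVersion % 1000) / 10);
    if (runtimeVersion != 12020)
        printf("Warning: CUDA 12.2 is required; other versions may not work.\n");
    return 0;
}
```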

Downloads

Compiled DLLs are available in the dll folder or in Releases.

We supply compiled executables and DLLs for:

  • Windows 10/11 (x86, 64-bit)
  • Ubuntu 22.04 (x86, 64-bit)
  • macOS 14.4 (arm, 64-bit)

For other operating systems/architectures, please compile the source code yourself! Example compilation instructions for Linux and macOS can be found in linux.md and macos.md.

Limitations

Please be aware of the following limitations:

  • Ions/peaks are supported up to 5000 m/z; anything beyond that is discarded.
  • The encoding precision is 0.01 (m/z, Dalton).
  • Only matrices with up to 2 * 10^9 non-zero elements are supported [see this issue].
  • [Eigen][Sparse] Sparse candidate matrices support up to 100 elements per row, beyond that matrix creation might be slow due to resizing.
    • This means every peptide candidate can be encoded with up to 100 ions.
  • [Eigen][Sparse] Sparse spectrum matrices support up to 1000 elements per row, beyond that matrix creation might be slow due to resizing.
    • This means spectra with more than 1000 peaks should be deisotoped, deconvoluted or peak picked to decrease the number of peaks.
    • This does not affect dense spectrum matrices.
  • [Eigen][i32] The rounding precision when converting floats to integers is 0.001; the exact rounding for a float val is (int) round(val * 1000.0f) (see the sketch after this list).
  • [Eigen][i32] Integer based methods do not allow tolerances below 0.01 because they might cause overflows.
  • [CUDA] Sparse matrix - sparse matrix multiplication tends to be very slow and very memory-hungry, most likely due to memory overhead and the fact that the output matrix is not sparse.
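As a rough illustration of the encoding-related limitations above, the following sketch shows how m/z values might map to matrix columns at 0.01 precision with the 5000 m/z cutoff, and the float-to-integer conversion quoted for the [Eigen][i32] methods. The binning function and its name are assumptions for illustration; only the rounding expression is taken verbatim from the list above.

```cpp
// Minimal sketch (assumptions, not the library's code) of the encoding rules
// described in the limitations: m/z values binned at 0.01 precision and capped
// at 5000 m/z; floats converted to ints as (int) round(val * 1000.0f).
#include <cmath>
#include <cstdio>
#include <optional>

// Hypothetical bin index for an m/z value at 0.01 precision; peaks beyond
// 5000 m/z are discarded (returns no value).
std::optional<int> mzToBin(float mz)
{
    if (mz > 5000.0f)
        return std::nullopt;                                // discarded
    return static_cast<int>(std::round(mz * 100.0f));       // 0.01 m/z per bin
}

// Float-to-integer conversion as stated for the [Eigen][i32] methods.
int toFixedPoint(float val)
{
    return static_cast<int>(std::round(val * 1000.0f));     // 0.001 precision
}

int main()
{
    printf("bin(1234.567 m/z)  = %d\n", *mzToBin(1234.567f));          // -> 123457
    printf("bin(5000.01 m/z) discarded: %s\n", mzToBin(5000.01f) ? "no" : "yes");
    printf("toFixedPoint(0.02) = %d\n", toFixedPoint(0.02f));          // -> 20
    // A tolerance of 0.01 maps to 10; smaller tolerances are disallowed for the
    // integer-based methods because they might cause overflows.
    return 0;
}
```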

Implementing your own matrix products

If you want to implement your own (and hopefully faster) computation for matrix products, we offer a template repository that walks you through that: CandidateVectorSearch_template

Acknowledgements

Citing

If you are using [parts of] CandidateVectorSearch, please cite:

MS Annika 3.0 (publication work in progress)

License

Contact

micha.birklbauer@fh-hagenberg.at