Matrix Multiplication using CUDA
Updated Jan 30, 2023 - C++
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
Terminal programs applying the parallel programming model with the CUDA architecture
CUDA code for DE+PSO for Gene Regulatory Network inference
Simple image processing filters for both CPU and NVIDIA GPUs
Real-time CUDA simulation using GPU resources to simulate an army.
This CMake-based project contains wrappers around the CUDA functions I use frequently. The wrappers are mainly concerned with throwing exceptions with meaningful error messages in case of errors, and with ensuring that the GPU is always shut down properly and all allocated resources are released. Some utility functions are also available.
Small Scale Parallel Programming, Sparse Matrix multiplication with CUDA
Repository for parallel programming course.
Exercises from the parallel programming class
The CUDA code of the fluid solver kspaceFirstOrder3D. This version is intended for shared-memory computers and is extended with on-the-fly compression and calculation of time-averaged acoustic intensity in time-domain ultrasound simulations.
Final project for the "GPU Architectures and Computing" course @ TU Wien
Created by NVIDIA
Released June 23, 2007