Cekirdekler is a C# multi-device GPGPU (OpenCL) compute API with iterative inter-device load balancing and multiple pipelining of read/write/compute operations, for developers' custom OpenCL kernels. The main idea is to treat N devices as a single device where possible, taking advantage of the entire platform easily through a shared-distributed memory model under the hood.
64-bit only: set "Project Settings -> Build -> Platform Target -> x64", and make sure the Configuration Manager targets x64 for all projects as well.
Needs an extra C++ DLL built for 64-bit (x86_64) from https://github.com/tugrul512bit/CekirdeklerCPP, which must be named KutuphaneCL.dll.
The other required DLL is Microsoft's System.Threading.dll (and its XML helper) for .NET 2.0 - or you can adjust the "using" directives and target .NET 3.5+ in your own project, in which case System.Threading.dll is not needed.
In total, Cekirdekler.dll, KutuphaneCL.dll and .NET 3.5 should be enough.
Usage: add only Cekirdekler.dll and System.Threading.dll as references to your C# projects. The other files need to exist in the same folder as Cekirdekler.dll or the executable of the main project.
This project is being enhanced using ZenHub.
- Implicit multi-device control: from CPUs to any number of GPUs and accelerators. Device handling is explicit on the library side for compatibility and performance, but implicit for the client coder, who can concentrate on OpenCL kernel code. Devices can be selected implicitly (for ease of setup) or explicitly (for detailed device queries); computing on the devices is always implicit. Explicitly chosen devices can be combined with a simple + operator.
- Iterative load balancing between devices: done uniquely for each different compute (explicit control with a user-given compute id). Work loads across multiple devices become fairer with each repeat until the work-distribution ratio converges. Partitioning the workload completes a kernel with less latency, which suits hot-spot loops and simple embarrassingly-parallel algorithms, and works even better for streaming data with the pipelining option enabled.
- Pipelining for reads, computes and writes (host-device link): either by the mercy of device drivers or by explicit event-based queue management. Hides the latency of the least time-consuming part (such as writes) behind the most time-consuming part (such as compute). GPUs can run buffer copies and OpenCL kernels concurrently.
- Pipelining between devices (device - host - device): concurrently runs multiple stages so they overlap in the timeline, gaining the advantage of multiple GPUs (and FPGAs, CPUs) even for kernels of a time-consuming pipeline that are non-separable (because of atomics and low-level optimizations). Each device runs a different kernel, but at the same time as the other devices, and uses double buffers to overlap even the data movements between pipeline stages.
- Batch computing using task pools and device pools: uses every async pipeline of every GPU in the system for a pool of non-separable kernels (as tasks to compute later). A greedy scheduling algorithm keeps all GPUs busy.
- Working with different numeric arrays: either C# arrays like float[], int[], byte[], ... or C++ array wrappers like ClFloatArray, ClArray&lt;float&gt;, ClByteArray, ClArray&lt;byte&gt;.
- Automatic buffer copy optimizations for devices: if a device shares RAM with the CPU, map/unmap commands are used instead of read/write to reduce the number of array copies. If that device is also given a C++ wrapper array (such as ClArray&lt;float&gt;), the CL_MEM_USE_HOST_PTR flag is used on the buffer for zero-copy access, a.k.a. "streaming". By default, all devices have their own buffers.
- Two different usage types: the first lets the developer configure all kernel parameters as arrays explicitly, for a more readable execution; the second achieves the same thing with a much shorter definition, in fewer lines of code, changing only the necessary flags instead of all of them.
- Automatic resource disposal: when C++ array wrappers are finalized (out of scope, garbage collected), they release their resources. The dispose method can also be called explicitly by the developer.
- Uses OpenCL 1.2: C++ bindings from Khronos.org as its base. Developers are expected to know C99 and its OpenCL kernel constraints to write their own genuine GPGPU kernels. The CekirdeklerCPP project produces the OpenCL 1.2 backend DLL.
- Uses OpenCL 2.0: C++ bindings from Khronos.org as its base. Developers are expected to know C99 and its OpenCL kernel constraints to write their own genuine GPGPU kernels. The CekirdeklerCPP2 project produces the OpenCL 2.0 backend DLL. (It needs to be renamed to KutuphaneCL.dll.)
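As a sketch of the iterative load balancing described above: calling compute() repeatedly with the same compute id lets the balancer converge on a fair device distribution. The API names (ClNumberCruncher, ClArray, compute) are taken from the example later in this README; the kernel body and array sizes are illustrative assumptions.

```csharp
// Sketch, not a verified sample: repeated compute() calls with the same
// compute id (1) let the load balancer converge iteration by iteration.
Cekirdekler.ClNumberCruncher cruncher = new Cekirdekler.ClNumberCruncher(
    Cekirdekler.AcceleratorType.GPU, @"
    __kernel void vmul(__global float * a)
    {
        int i = get_global_id(0);
        a[i] = a[i] * 2.0f;   // illustrative embarrassingly-parallel work
    }
");
Cekirdekler.ClArrays.ClArray<float> a = new Cekirdekler.ClArrays.ClArray<float>(1024);

for (int i = 0; i < 100; i++)
{
    // same compute id (1) every time -> per-device work shares
    // get fairer with each repeat until the ratio converges
    a.compute(cruncher, 1, "vmul", 1024, 64);
}
```

Using a different compute id for an unrelated kernel keeps its load-balancing history separate, since balancing is tracked per compute id.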
Details and a tutorial can be found in the Cekirdekler wiki.
- For C++ array wrappers like ClArray&lt;float&gt; there is no out-of-bounds check; don't cross array boundaries when indexing.
- Don't use C++ array wrappers after they are disposed. (These safety checks are omitted to keep array indexing fast.)
- Don't use ClNumberCruncher or Core instances after they are disposed.
- Pay attention to the "number of array elements used" per workitem in the kernel and how they are given as parameters to the API compute() method.
- Pay attention to the "partial read"/"read"/"write" array copy modifiers, depending on whether your kernel alters (or reads) the whole array or just a part of it.
- There is no performance output on the first iteration. The load balancer needs at least several iterations to distribute work fairly, and the performance report needs at least 2 iterations for console output.
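A hedged sketch of the copy modifiers mentioned above. The flag names (read, partialRead, write) are assumptions based on the library's wiki convention and should be verified against the actual ClArray API before use:

```csharp
// Assumed ClArray copy-modifier flags (verify names in the Cekirdekler wiki):
//   read        - copy the host array to the device before the kernel runs
//   partialRead - copy only each device's own partition, not the whole array
//   write       - copy results back to the host after the kernel runs
Cekirdekler.ClArrays.ClArray<float> data = new Cekirdekler.ClArrays.ClArray<float>(4096);
data.read = true;         // the kernel reads this array
data.partialRead = true;  // each device only needs its own region
data.write = true;        // the kernel writes results that must come back

// "cruncher" is an already-created ClNumberCruncher; "myKernel" is a
// placeholder kernel name, and 2 is an arbitrary compute id.
data.compute(cruncher, 2, "myKernel", 4096, 64);
```

Matching these flags to what the kernel actually touches avoids needless whole-array copies on every iteration.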
Example that computes 1000 workitems across all GPUs in a PC: GPU1 computes the global id range from 0 to M, GPU2 from M+1 to K, and GPU_N the global id range from Y to Z:
Cekirdekler.ClNumberCruncher cr = new Cekirdekler.ClNumberCruncher(
Cekirdekler.AcceleratorType.GPU, @"
__kernel void hello(__global char * arr)
{
int threadId=get_global_id(0);
printf(""hello world"");
}
");
Cekirdekler.ClArrays.ClArray<byte> array = new Cekirdekler.ClArrays.ClArray<byte>(1000);
// Cekirdekler.ClArrays.ClArray<byte> array = new byte[1000]; // host arrays are usable too!
array.compute(cr, 1, "hello", 1000, 100);
// local work size is 100 here, so this example spawns 10 workgroups and all GPUs share them, e.g. GPU1 computes 2 groups,
// GPU2 computes 5 groups and another GPU computes 3 groups. Global id values are continuous across all global workitems,
// local id values are also safe to use.
// faster GPUs get a bigger work share over iterations; the balancer is performance-aware over repetitions of a work.
// no need to dispose anything at the end; resources release themselves when objects go out of scope and are garbage collected.
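If deterministic cleanup is preferred over waiting for the garbage collector, the explicit dispose mentioned in the feature list can be called on the objects from the example above (method name as described earlier in this README; verify the exact casing against the library):

```csharp
// Optional explicit cleanup instead of relying on finalizers/GC.
// After these calls, neither object may be used again (see the cautions above).
array.dispose(); // releases the C++-side buffer immediately
cr.dispose();    // releases devices, queues and compiled kernels
```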