API Documentation

Method Calls (os_methods.h)

RegisterOnGLFWErrorMethod(void*)

void RegisterOnGLFWErrorMethod(void* Method);

Registers a method callback that will be invoked when a GLFW error occurs.

Params:

  • [in] Method - The method pointer that you pass in to register the callback for GLFW errors. The callback signature is:
callback_method(const char* ErrorName, const char* FunctionName, const char* ErrorDesc);

RegisterOnProgramRunErrorMethod(void*)

void RegisterOnProgramRunErrorMethod(void* Method);

Registers a method callback that will be invoked when a Compute Program Runtime error occurs.

Params:

  • [in] Method - The method pointer that you pass in to register the callback for Compute Program Runtime errors. The callback signature is:
callback_method(const char* ErrorDesc);

RegisterOnShaderCompileErrorMethod(void*)

void RegisterOnShaderCompileErrorMethod(void* Method);

Registers a method callback that will be invoked when a Compute Program Compilation error occurs.

Params:

  • [in] Method - The method pointer that you pass in to register the callback for Compute Program Compilation errors. The callback signature is:
callback_method(const char* ErrorDesc);
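
Putting the three registration calls together, a minimal sketch of wiring up error handling might look like this (the callback return types aren't documented above, so void is assumed here):

#include <stdio.h>
#include "os_methods.h"

/* Return types are assumed to be void; only the parameter lists are documented above. */
static void OnGLFWError(const char* ErrorName, const char* FunctionName, const char* ErrorDesc)
{
    fprintf(stderr, "[GLFW] %s in %s: %s\n", ErrorName, FunctionName, ErrorDesc);
}

static void OnRunError(const char* ErrorDesc)
{
    fprintf(stderr, "[Program run] %s\n", ErrorDesc);
}

static void OnCompileError(const char* ErrorDesc)
{
    fprintf(stderr, "[Shader compile] %s\n", ErrorDesc);
}

static void RegisterErrorCallbacks(void)
{
    RegisterOnGLFWErrorMethod((void*)OnGLFWError);
    RegisterOnProgramRunErrorMethod((void*)OnRunError);
    RegisterOnShaderCompileErrorMethod((void*)OnCompileError);
}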

GetCurrentTimestamp(void)

long long GetCurrentTimestamp(void);

Gets the current tick count; in simpler terms, a timestamp value.

Returns:

  • The current timestamp, for use with the GetTimestamp* methods below.

GetTimestampSeconds(long long)

long long GetTimestampSeconds(long long TimeStamp);

Gets the number of passed (elapsed) seconds from a TimeStamp.

Params:

  • [in] TimeStamp - The input timestamp from which to calculate passed seconds.

Returns:

  • Calculated seconds from given timestamp.

GetTimestampMilliseconds(long long)

long long GetTimestampMilliseconds(long long TimeStamp);

Gets the number of passed (elapsed) milliseconds from a TimeStamp.

Params:

  • [in] TimeStamp - The input timestamp from which to calculate passed milliseconds.

Returns:

  • Calculated milliseconds from given timestamp.

GetTimestampMicroseconds(long long)

long long GetTimestampMicroseconds(long long TimeStamp);

Gets the number of passed (elapsed) microseconds from a TimeStamp.

Params:

  • [in] TimeStamp - The input timestamp from which to calculate passed microseconds.

Returns:

  • Calculated microseconds from given timestamp.

GetTimestampNanoseconds(long long)

long long GetTimestampNanoseconds(long long TimeStamp);

Gets the number of passed (elapsed) nanoseconds from a TimeStamp.

Params:

  • [in] TimeStamp - The input timestamp from which to calculate passed nanoseconds.

Returns:

  • Calculated nanoseconds from given timestamp.
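
Taken together, these calls can be used as a simple stopwatch. The sketch below assumes, based on the wording above, that the GetTimestamp* methods return the time that has passed since the given timestamp was taken:

#include <stdio.h>
#include "os_methods.h"

static void TimeSomeWork(void)
{
    long long Start = GetCurrentTimestamp();

    /* ... do the work you want to measure here ... */

    /* Assumed semantics: time elapsed since Start, in the given unit. */
    long long Millis = GetTimestampMilliseconds(Start);
    long long Micros = GetTimestampMicroseconds(Start);
    printf("Work took %lld ms (%lld us)\n", Millis, Micros);
}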




Declarations (gpu_methods.h)

struct GPUDevice

typedef struct GPUDevice {
    void* GPUContext;
    void* GPUMonitor;
    char* DisplayName;
    char* MonitorName;
    int GPUDeviceLimits[3];
} GPUDevice;



Fields:

  • GPUContext - A pointer to the created GPU context, if any.
  • GPUMonitor - A pointer to the monitor of the created GPU context, if any.
  • DisplayName - A human readable display name of the GPU device.
  • MonitorName - A human readable name of the monitor associated with the GPU device.
  • GPUDeviceLimits - GPU device processing limitations.




Method Calls (gpu_methods.h)

Initialize(void)

void Initialize(void);

Initializes the Cocaine library. This method MUST be called BEFORE using any other methods from the library.


RefreshGPUList(void)

void RefreshGPUList(void);

Refreshes the internal list of detected GPU devices. Initialize() already does this once.
WARNING: Make sure ALL GPU contexts are disposed before calling this, otherwise the behavior is undefined.


ReleaseResources(void)

void ReleaseResources(void);

Releases all resources used by the library. Note: This will also close all contexts.


CreateGPUContext(GPUDevice*)

bool CreateGPUContext(GPUDevice* Device);

Creates a GPU context for the specified GPU device. WARNING: Must only be called from the MAIN THREAD.
NOTE: It doesn't matter which thread you called Initialize() from; this STILL has to be called from the application's main thread.
This is a limitation I cannot remove because of GLFW.
The GPUDevice struct declaration can be found in the gpu_methods.h header file.

Params:

  • [in] Device - GPU device from which to create the GPU context.

Returns:

  • A 1(true) if it was successful or a 0(false) if it wasn't.

DisposeGPUContext(GPUDevice*)

void DisposeGPUContext(GPUDevice* Device);

Disposes a previously created GPU context. WARNING: Must only be called from the MAIN THREAD.
NOTE: It doesn't matter which thread you called Initialize() from; this STILL has to be called from the application's main thread.
This is a limitation I cannot remove because of GLFW.
The GPUDevice struct declaration can be found in the gpu_methods.h header file.

Params:

  • [in] Device - GPU device whose context needs disposing.

GetRawGPUDevices(GPUDevice**)

int GetRawGPUDevices(GPUDevice** Devices);

Gets the internal RAW pointer to the currently detected GPU devices.
WARNING: This is intended for reading only; any modifications on your end are your own problem.
The GPUDevice struct declaration can be found in the gpu_methods.h header file.

Params:

  • [out] Devices - A NULL GPU device pointer in which to store the detected GPU devices.

Returns:

  • The detected GPU device count.
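
A minimal sketch of the usual start-up flow, combining the calls above (run from the main thread, with error handling kept to a bare minimum):

#include <stdio.h>
#include "gpu_methods.h"

int main(void)
{
    Initialize();                          /* must come before any other library call */

    GPUDevice* Devices = NULL;             /* pass a NULL pointer, the library fills it in */
    int Count = GetRawGPUDevices(&Devices);

    for (int i = 0; i < Count; i++)
        printf("GPU %d: %s\n", i, Devices[i].DisplayName);

    if (Count > 0 && CreateGPUContext(&Devices[0]))   /* main thread only */
    {
        /* ... allocate buffers, compile and run programs here ... */
        DisposeGPUContext(&Devices[0]);                /* main thread only */
    }

    ReleaseResources();
    return 0;
}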




Declarations (api_methods.h)

struct GPUBuffer

typedef struct GPUBuffer {
    unsigned int Buffer;
    int BufferID;
} GPUBuffer;



Fields:

  • Buffer - A pointer/ID to the created GPU buffer.
  • BufferID - The buffer ID used when the buffer was created.

enum GPUBufferTypes

typedef enum GPUBufferTypes { NoReadWrite, Read, FastRead, ReadWrite } GPUBufferTypes;



Values:

  • NoReadWrite - Buffer is intended to be neither read from nor written to outside of the shader program.
  • Read - Buffer is read-only; reads come directly from GPU memory.
  • FastRead - Buffer is read-only and its contents are mirrored in a copy buffer in RAM, making reads fast but consuming an extra 'ByteCount' bytes of RAM on creation.
  • ReadWrite - Buffer is in read-write mode; you can read and write at leisure, but note that it reads and writes GPU memory directly, so it might be slower.




Method Calls (api_methods.h)

AllocateGPUBuffer(GPUBufferTypes, int, void*, long long);

GPUBuffer AllocateGPUBuffer(GPUBufferTypes BufferType, int BufferID, void* ByteBuffer, long long ByteCount);

Allocate a byte buffer in GPU VRAM on the currently active GPU context on the calling thread.
WARNING: Allocate only as much memory as your GPU device has. Allocating more than there is can result in undefined* behavior.
*Trying this myself showed that it started consuming SHARED (RAM) memory as well. I don't know what will happen on other platforms.

NOTE: GPUBufferTypes enum and GPUBuffer struct declarations can be found in the api_methods.h header file.

Params:

  • [in] BufferType - An enum representing what you're planning to do with said buffer.
  • [in] BufferID - An ID you pass to enumerate/represent your buffer later in the GLSL compute shader.
  • [in] ByteBuffer - A byte buffer you can pass in that will be copied into the GPU buffer alongside its creation.
    NOTE: You can pass NULL to skip the copying and just create a zeroed byte buffer. ByteCount still has to be > 0.
  • [in] ByteCount - The amount of bytes to copy to the GPU buffer from the ByteBuffer buffer.

Returns:

  • The allocated GPU buffer struct, containing a pointer to the GPU buffer and the BufferID you passed in.
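
As a sketch (assuming a GPU context is already active on the calling thread), allocating one input buffer initialized from RAM and one zeroed output buffer might look like this; the BufferID values 0 and 1 are arbitrary examples:

#include "api_methods.h"

static void AllocateExampleBuffers(void)
{
    /* Assumes CreateGPUContext()/SetActiveGPUContext() has already been done on this thread. */
    static float Input[1024];   /* zero-initialized example data */

    /* Copied into VRAM at creation; NoReadWrite since the CPU won't touch it afterwards. */
    GPUBuffer InBuf  = AllocateGPUBuffer(NoReadWrite, 0, Input, sizeof(Input));

    /* Pass NULL to get a zeroed buffer of the given size; Read lets the CPU read results back. */
    GPUBuffer OutBuf = AllocateGPUBuffer(Read, 1, NULL, 1024 * sizeof(float));

    /* ... run a compute program against the buffers ... */

    DeallocateGPUBuffer(&InBuf);
    DeallocateGPUBuffer(&OutBuf);
}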

DeallocateGPUBuffer(GPUBuffer* GPUBuffer);

void DeallocateGPUBuffer(GPUBuffer* GPUBuffer);

Deallocate a byte buffer in GPU VRAM on the currently active GPU context on the calling thread.
NOTE: The GPUBuffer struct declaration can be found in the api_methods.h header file.

Params:

  • [in] GPUBuffer - The GPU buffer to deallocate.

ReadFromGPUBuffer(GPUBuffer*, void*, long long, long long);

void ReadFromGPUBuffer(GPUBuffer* GPUBuffer, void* Buffer, long long GPUBufferOffset, long long Count);

Read from the GPU buffer(VRAM) into a specified byte buffer(RAM) on the currently active GPU context on the calling thread.
NOTE: GPUBuffer struct declaration can be found in the api_methods.h header file.

Params:

  • [in] GPUBuffer - A GPU buffer to read from.
  • [in] Buffer - A byte buffer to read the data into.
  • [in] GPUBufferOffset - The byte based offset to read from on the GPU buffer(VRAM).
  • [in] Count - The amount of bytes to read from the GPU buffer(VRAM).

WriteToGPUBuffer(GPUBuffer*, void*, long long, long long);

void WriteToGPUBuffer(GPUBuffer* GPUBuffer, void* Buffer, long long GPUBufferOffset, long long Count);

Write to the GPU buffer(VRAM) from a specified byte buffer(RAM) on the currently active GPU context on the calling thread.
NOTE: GPUBuffer struct declaration can be found in the api_methods.h header file.

Params:

  • [in] GPUBuffer - A GPU buffer to write to.
  • [in] Buffer - A byte buffer to read the data from.
  • [in] GPUBufferOffset - The byte based offset to write to on the GPU buffer(VRAM).
  • [in] Count - The amount of bytes to write to the GPU buffer(VRAM).
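
A small round-trip sketch using both calls (assuming Buf was created with a ReadWrite type and a GPU context is active on this thread):

#include <string.h>
#include "api_methods.h"

static void RoundTrip(GPUBuffer* Buf)
{
    float Data[256];
    float Check[256];

    memset(Data, 0, sizeof(Data));
    Data[0] = 42.0f;

    /* Upload the whole array to offset 0 of the GPU buffer ... */
    WriteToGPUBuffer(Buf, Data, 0, sizeof(Data));

    /* ... and read it straight back into a separate array. */
    ReadFromGPUBuffer(Buf, Check, 0, sizeof(Check));
}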

CompileProgram(char*, unsigned int*);

bool CompileProgram(char* ShaderCode, unsigned int* OutProgram);

Compile a string/char based GLSL compute shader program on the currently active GPU context on the calling thread.
NOTE: Use the RegisterOnShaderCompileErrorMethod method for error checking. For how to actually use previously allocated buffers, check the example usage.

WARNING: This method might fail on some GPUs, because:
a) The GPU is too old or limited to support compiling compute shaders. (Check your error callbacks for a specific error message.)
b) Some GPUs (Intel, for instance) accept slightly different GLSL shader syntax. Follow the shader error callbacks to see what they complain about and fix it.

WE DO NOT PROVIDE GLSL SHADER COURSES SINCE WE NEITHER DESIGNED NOR MADE IT; LOOK FOR OTHER SOURCES ON THAT.
ANY ISSUES RELATING TO GLSL SHADERS WILL BE AUTOMATICALLY CLOSED AND IGNORED.
YOU HAVE BEEN WARNED!

Params:

  • [in] ShaderCode - Compute Shader program text to pass in.
  • [out] OutProgram - A pointer in which to store the compiled compute shader program.

Returns:

  • A 1(true) if compilation was successful or a 0(false) if it wasn't. Again, use the RegisterOnShaderCompileErrorMethod method for error checking.
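
A minimal compilation sketch. The shader itself is only an illustration: the std430 binding-number convention used to reference a buffer by its BufferID is an assumption, not something documented on this page.

#include <stdio.h>
#include "api_methods.h"

static unsigned int BuildDoublerProgram(void)
{
    /* Assumed convention: the SSBO binding number matches the BufferID used at allocation. */
    const char* ShaderCode =
        "#version 430\n"
        "layout(local_size_x = 64) in;\n"
        "layout(std430, binding = 0) buffer InOut { float Values[]; };\n"
        "void main() {\n"
        "    Values[gl_GlobalInvocationID.x] *= 2.0;\n"
        "}\n";

    unsigned int Program = 0;
    if (!CompileProgram((char*)ShaderCode, &Program))
        fprintf(stderr, "Compilation failed, see the shader compile error callback.\n");

    return Program;
}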

LoadComputeProgram(unsigned char*, int, unsigned int*);

bool LoadComputeProgram(unsigned char* Buffer, int Count, unsigned int* Program);

Loads a pre-compiled program binary from memory. Only works if called on a thread that has an active GPU context!

Params:

  • [in] Buffer - A memory buffer where the binary program is stored.
  • [in] Count - Byte count of the binary program.
  • [out] Program - A pointer in which to store the loaded program.

Returns:

  • A 1(true) if successful or a 0(false) if it wasn't.

SaveComputeProgram(unsigned int, unsigned char*);

int SaveComputeProgram(unsigned int Program, unsigned char* Buffer);

Saves a compiled program binary to memory. Only works if called on a thread that has an active GPU context!

Params:

  • [in] Program - A program object where the compiled program is stored.
  • [out] Buffer - A memory buffer in which to store the program binary.

Returns:

  • The byte count written to Buffer.
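
Saving and reloading a binary could be sketched as below. The required size of Buffer isn't documented on this page, so the fixed-size scratch buffer here is an assumption:

#include <stdio.h>
#include "api_methods.h"

static void SaveAndReload(unsigned int Program)
{
    /* Assumption: 1 MiB of scratch space is enough for the program binary. */
    static unsigned char Binary[1024 * 1024];

    int Written = SaveComputeProgram(Program, Binary);

    unsigned int Reloaded = 0;
    if (!LoadComputeProgram(Binary, Written, &Reloaded))
        fprintf(stderr, "Reloading the program binary failed.\n");
}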

SetActiveGPUContext(GPUDevice*);

void SetActiveGPUContext(GPUDevice* Device);

Sets the currently active GPU context on the calling thread.

Params:

  • [in] Device - A GPU device to make active on the current thread.
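
A sketch of how this is typically used on a worker thread (the context itself must already have been created with CreateGPUContext() on the main thread):

#include "gpu_methods.h"
#include "api_methods.h"

/* Hypothetical worker-thread entry point. */
static void WorkerThreadBody(GPUDevice* Device)
{
    SetActiveGPUContext(Device);

    /* AllocateGPUBuffer / CompileProgram / RunComputeProgram may now be used
       on this thread, since they operate on the thread's active context. */
}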

RunComputeProgram(unsigned int, int*, long long, bool);

bool RunComputeProgram(unsigned int Program, int* GPUDeviceLimits, long long ProcessCount, bool PreciseCycleCount);

Runs a specific Compute Shader Program on the currently active GPU context on the calling thread.

Params:

  • [in] Program - A program object where the compiled program is stored.
  • [in] GPUDeviceLimits - GPU device limits which you get from your GPUDevice object. These are used to optimize program execution on the GPU.
  • [in] ProcessCount - How many times to run the specified program. Think of it as 'for' loop iterations.
  • [in] PreciseCycleCount - Because of complicated math, ProcessCount can sometimes be exceeded in favor of speed. From my testing, this doesn't affect program inputs/outputs or stability at all. But just in case, I added a bool (1 (true) or 0 (false)) so you can control this.

Returns:

  • A 1(true) if the program run was successful or a 0(false) if it wasn't.
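
Tying the pieces together, a single run could be sketched like this (Device already has an active context on this thread, and Program came from CompileProgram() or LoadComputeProgram()):

#include <stdbool.h>
#include <stdio.h>
#include "gpu_methods.h"
#include "api_methods.h"

static void RunOnce(GPUDevice* Device, unsigned int Program)
{
    long long ProcessCount = 1024;   /* e.g. one iteration per element to process */

    /* Assumed meaning of PreciseCycleCount: true = run exactly ProcessCount times
       rather than possibly exceeding it for speed. */
    if (!RunComputeProgram(Program, Device->GPUDeviceLimits, ProcessCount, true))
        fprintf(stderr, "Program run failed, see the run error callback.\n");
}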