ArrayFire

ArrayFire is a general-purpose tensor library that simplifies the process of software development for the parallel architectures found in CPUs, GPUs, and other hardware acceleration devices. The library serves users in every technical computing market.

Several of ArrayFire's benefits include:

  • Hundreds of accelerated tensor computing functions, in the following areas:
    • Array handling
    • Computer vision
    • Image processing
    • Linear algebra
    • Machine learning
    • Standard math
    • Signal processing
    • Statistics
    • Vector algorithms
  • Easy to use, stable, well-documented API
  • Rigorous benchmarks and tests ensuring top performance and numerical accuracy
  • Cross-platform compatibility with support for CUDA, OpenCL, and native CPU backends on Windows, Mac, and Linux (see the backend-selection sketch after this list)
  • Built-in visualization functions through Forge
  • Commercially friendly open-source licensing
  • Enterprise support from ArrayFire
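
As an illustration of the cross-platform support mentioned above, the minimal sketch below selects a backend at runtime through ArrayFire's unified API. The sizes and the choice of AF_BACKEND_CUDA are purely illustrative; AF_BACKEND_OPENCL or AF_BACKEND_CPU can be substituted on machines without CUDA.

#include <arrayfire.h>

int main() {
    // Choose a backend at runtime. This requires linking against the unified
    // library (af) rather than a backend-specific one such as afcuda.
    // AF_BACKEND_CUDA is just an example; AF_BACKEND_OPENCL and
    // AF_BACKEND_CPU are selected the same way.
    af::setBackend(AF_BACKEND_CUDA);
    af::info();                              // Print information about the active device

    // The same array code runs unchanged on whichever backend is active
    af::array A = af::randu(512, 512);
    af::array B = af::matmul(A, A.T());
    af::array C = B(af::seq(3), af::seq(3)); // 3x3 corner of the result
    af_print(C);
    return 0;
}

When a program is instead linked directly against a single backend (afcpu, afcuda, or afopencl), that backend is used implicitly and no runtime selection is needed.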

ArrayFire provides software developers with a high-level abstraction of data that resides on the accelerator, the af::array object. Developers write code that performs operations on ArrayFire arrays, which, in turn, are automatically translated into near-optimal kernels that execute on the computational device.
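
For instance, here is a minimal sketch (array sizes and names are chosen purely for illustration) of working with af::array. The element-wise expression is recorded lazily and fused by ArrayFire's JIT into a single device kernel, which runs once evaluation is forced.

#include <arrayfire.h>

int main() {
    // Data lives on the accelerator as af::array objects
    af::array x = af::randu(1000000);         // one million random floats (f32)
    af::array y = af::sin(x) * af::sin(x)     // element-wise expressions are
                + af::cos(x) * af::cos(x);    // fused into a single kernel

    y.eval();                                 // force the lazy expression to run
    af_print(af::mean(y));                    // should print a value close to 1
    return 0;
}

The same source compiles and runs unchanged against the CPU, CUDA, and OpenCL backends.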

ArrayFire runs on devices ranging from low-power mobile phones to high-power GPU-enabled supercomputers. ArrayFire runs on CPUs from all major vendors (Intel, AMD, ARM), GPUs from the prominent manufacturers (NVIDIA, AMD, and Qualcomm), as well as a variety of other accelerator devices on Windows, Mac, and Linux.

Getting ArrayFire

Instructions to install or to build ArrayFire from source can be found on the wiki.

Conway's Game of Life Using ArrayFire

Visit the Wikipedia page for a description of Conway's Game of Life.


static const float h_kernel[] = { 1, 1, 1, 1, 0, 1, 1, 1, 1 };
static const array kernel(3, 3, h_kernel, afHost);

array state = (randu(128, 128, f32) > 0.5).as(f32); // Init state
Window myWindow(256, 256);
while(!myWindow.close()) {
    array nHood = convolve(state, kernel); // Obtain neighbors
    array C0 = (nHood == 2);  // Generate conditions for life
    array C1 = (nHood == 3);
    state = state * C0 + C1;  // Update state
    myWindow.image(state);    // Display
}

The complete source code can be found here.

Perceptron


array predict(const array &X, const array &W) {
    return sigmoid(matmul(X, W));
}

array train(const array &X, const array &Y,
        double alpha = 0.1, double maxerr = 0.05,
        int maxiter = 1000, bool verbose = false) {
    array Weights = constant(0, X.dims(1), Y.dims(1));

    for (int i = 0; i < maxiter; i++) {
        array P   = predict(X, Weights);
        array err = Y - P;
        if (mean<float>(abs(err)) < maxerr) break;
        Weights += alpha * matmulTN(X, err);
    }
    return Weights;
}
...

array Weights = train(train_feats, train_targets);
array test_outputs  = predict(test_feats, Weights);
display_results<true>(test_images, test_outputs,
                      test_targets, 20);

The complete source code can be found here.

For more code examples, visit the examples/ directory.

Documentation

You can find the complete documentation here.


Language support

ArrayFire has several official and community-maintained language APIs:

  • C++
  • Python
  • Rust
  • Julia
  • Nim

In-progress wrappers:

  • .NET
  • Fortran
  • Go
  • Java
  • Lua
  • NodeJS
  • R
  • Ruby

Contributing

The ArrayFire developer community invites you to build with us if you are interested in writing top-performing tensor functions. Together we can fulfill the ArrayFire mission: fast scientific computing for all.

Contributions of any kind are welcome! Please refer to the wiki and our Code of Conduct to learn more about how you can get involved with the ArrayFire Community through Sponsorship, Developer Commits, or Governance.

Citations and Acknowledgements

If you redistribute ArrayFire, please follow the terms established in the license. If you wish to cite ArrayFire in an academic publication, please use the citation document.

ArrayFire development is funded by AccelerEyes LLC and several third parties; please see the list of acknowledgements for an expression of our gratitude.

Support and Contact Info

Trademark Policy

The literal mark "ArrayFire" and the ArrayFire logos are trademarks of AccelerEyes LLC (dba ArrayFire). If you wish to use either of these marks in your own project, please consult ArrayFire's Trademark Policy.
