Update README.md #596

Merged
22 changes: 9 additions & 13 deletions README.md
LLAMA – Low-Level Abstraction of Memory Access

![LLAMA](docs/images/logo_400x169.png)

LLAMA is a cross-platform C++17 header-only template library for the abstraction of memory
access patterns. It decouples the algorithm's view of the memory from the real
layout in the background. This enables performance portability across multicore,
many-core, and GPU applications with the very same code.
striding or any other run-time or compile-time access pattern.
To achieve this goal, LLAMA is split into mostly independent, orthogonal parts
written entirely in modern C++17, so it runs on as many architectures and with as
many compilers as possible while still supporting the extensions needed, e.g., to run
on GPU or other many-core hardware.

Documentation
-------------

The user documentation is available on [Read the Docs](https://llama-doc.rtfd.io).
It includes:

* Installation instructions
* Motivation and goals
* Overview of concepts and ideas
* Descriptions of LLAMA's constructs


API documentation is generated by Doxygen from the C++ sources and published via [GitHub Pages](https://alpaka-group.github.io/llama/).
We published a paper on LLAMA in the [Wiley Online Library](https://doi.org/10.1002/spe.3077).
We gave a talk on LLAMA at CERN's Compute Accelerator Forum on 2021-05-12.
The video recording (starting at 40:00) and slides are available on [CERN's Indico](https://indico.cern.ch/event/975010/).

Single header
-------------
Attribution
-----------

If you use LLAMA for scientific work, please consider citing this project.
We upload all releases to [Zenodo](https://zenodo.org/record/4911494),
where you can export a citation in your preferred format.
We provide a DOI for each release of LLAMA.
Additionally, consider citing the [LLAMA paper](https://doi.org/10.1002/spe.3077).

License
-------