Environment

hoshi-hiyouga edited this page Dec 24, 2023 · 5 revisions

System platform

Linux

Windows

macOS

Device

NVIDIA GPU

For NVIDIA GPUs, we recommend using CUDA >= 11.8.

Install torch with CUDA support and verify it with:

```python
import torch
assert torch.cuda.is_available()
```

FlashAttention-2 requires sm >= 8.0, while LLM.int8 (8-bit quantization) requires sm >= 7.5.

To match compute capabilities (sm) to NVIDIA cards, see https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
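As a sketch, the sm thresholds above can be checked programmatically. The thresholds come from the notes above; the helper name `meets_sm` is our own:

```python
def meets_sm(capability, minimum):
    # Compare (major, minor) compute-capability tuples, e.g. (8, 6) >= (8, 0).
    return tuple(capability) >= tuple(minimum)

try:
    import torch
    if torch.cuda.is_available():
        cap = torch.cuda.get_device_capability(0)  # (major, minor) of device 0
        print("FlashAttention-2 supported:", meets_sm(cap, (8, 0)))
        print("LLM.int8 supported:", meets_sm(cap, (7, 5)))
except ImportError:
    print("torch is not installed")
```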

Ascend NPU

Ascend NPUs require the torch adapter; follow the instructions at https://github.com/Ascend/pytorch to install it.

If you are using DeepSpeed, you are also required to install the DeepSpeed adapter for NPU: https://github.com/Ascend/DeepSpeed
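A minimal sketch to confirm the adapter is usable (this assumes `torch_npu` from the repository above is installed and that it registers the `torch.npu` namespace; the helper name is ours):

```python
def npu_ready():
    # Returns True only when torch and the torch_npu adapter both import
    # cleanly and an NPU device is visible; False otherwise.
    try:
        import torch
        import torch_npu  # Ascend adapter; assumed to register torch.npu
        return bool(torch.npu.is_available())
    except (ImportError, AttributeError):
        return False

print("NPU ready:", npu_ready())
```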

AMD GPU

We support the ROCm build of torch.
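One way to verify you have a ROCm build: `torch.version.hip` holds a HIP version string in ROCm builds and is `None` in CUDA/CPU builds. The helper name below is our own:

```python
def is_rocm_build():
    # ROCm builds of torch expose a HIP version string; other builds set it to None.
    try:
        import torch
        return torch.version.hip is not None
    except ImportError:
        return False

print("ROCm build:", is_rocm_build())
```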

Apple M2

Install torch and test the MPS backend with:

```python
import torch
assert torch.backends.mps.is_available()
```
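Putting the per-device checks together, a device-selection helper might look like the sketch below. This is our own illustration, not part of torch or this project; it prefers CUDA, then Apple MPS, then CPU:

```python
def pick_device(cuda_ok, mps_ok):
    # Prefer CUDA, then Apple MPS, falling back to CPU.
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"

try:
    import torch
    device = pick_device(torch.cuda.is_available(),
                         torch.backends.mps.is_available())
    print("Selected device:", device)
except (ImportError, AttributeError):
    print("torch (>= 1.12 for MPS) is not installed")
```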

Python packages
