
[kernel] Add ICX compiler. #228

Merged · 1 commit · Feb 21, 2024

Conversation

@changqi1 (Contributor) commented Feb 20, 2024

How to build xFT with the Intel compiler (ICX/ICPX)

Step 1) Install the DPC++ compiler:

# Download the Intel DPC++/C++ compiler offline installer, then run it
sh l_dpcpp-cpp-compiler_p_2024.0.0.49524_offline.sh

Step 2) Set up a conda-forge (Miniforge) environment:

mkdir -p ~/miniforge3
wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh" -O ~/miniforge3/miniforge.sh
bash ~/miniforge3/miniforge.sh -b -u -p ~/miniforge3
rm -rf ~/miniforge3/miniforge.sh
~/miniforge3/bin/conda init bash
~/miniforge3/bin/conda init zsh

# !!! Open a new terminal
conda --version
conda create --name xft python=3.8
conda activate xft

Step 3) Install oneCCL and dependencies:

git clone https://github.com/intel/xFasterTransformer.git
cd xFasterTransformer/3rdparty/
sh prepare_oneccl.sh
pip install -r ../requirements.txt

Step 4) Initialize the environment and build:

cd xFasterTransformer/
mkdir build && cd build

conda activate xft
source /opt/intel/oneapi/setvars.sh # Or source ~/intel/oneapi/setvars.sh
source ../3rdparty/oneccl/build/_install/env/setvars.sh

export CC=icx
export CXX=icpx
cmake .. -DWITH_PIPELINE_PARALLEL=ON -DWITH_GPU=ON # -DBUILD_WITH_SHARED_LIBS=ON
make -j

@changqi1 (Contributor, Author) commented Feb 20, 2024

@a3213105 Please check chatglm2&3 for the deleted NormT param.
@pujiang2018 Added WITH_GPU to build with ICX.

@changqi1 changqi1 changed the title [kernel] Add GPU compiler. [kernel] Add ICX compiler. Feb 21, 2024
ChatGLM2<WeiT, NormT>::ChatGLM2(const std::string &modelPath, const std::string &modelType)
: CommonDecoder<Attention<WeiT, ChatGLM2RotaryEmbedding, NormT, float, float, float, true>,
ChatGLM2MLP<WeiT, float, float, float, NormT, true>>(modelPath, modelType) {
template <typename WeiT>
A reviewer (Contributor):
The previous code had two template parameters:
"template <typename WeiT, typename NormT>"
So here the normalization operator will always be RmsNorm?

@changqi1 (Contributor, Author):

Yes, synced with Gui Sheng; he confirmed that chatglm2/3 currently use rms_norm.
By the way, ICX can compile a defaulted template parameter.

@changqi1 changqi1 merged commit ead8aec into intel:main Feb 21, 2024
1 check passed
2 participants