update out-of-date URL for Intel optimization guide
jingxu10 committed Nov 7, 2023
1 parent 4eb4483 commit 36a8907
Showing 1 changed file with 5 additions and 2 deletions.
7 changes: 5 additions & 2 deletions recipes_source/recipes/tuning_guide.py
@@ -193,12 +193,15 @@ def fused_gelu(x):
#
# numactl --cpunodebind=N --membind=N python <pytorch_script>

###############################################################################
# More detailed descriptions can be found `here <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance_tuning/tuning_guide.html>`_.
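# ``numactl`` binds both CPU and memory placement from outside the process. As a
# minimal, Linux-only sketch (an illustration, not part of this tutorial), the CPU
# half of that binding can also be done from inside Python with the standard library;
# note this pins cores only and does not constrain memory to a NUMA node the way
# ``--membind`` does:

```python
import os

# Linux-only: query the logical CPUs this process may currently run on.
available = sorted(os.sched_getaffinity(0))

# Pin the process (pid 0 = self) to the first half of those CPUs,
# keeping at least one. Unlike numactl --membind, this does NOT
# constrain memory allocation to a NUMA node.
pinned = set(available[: max(1, len(available) // 2)])
os.sched_setaffinity(0, pinned)

print(sorted(os.sched_getaffinity(0)))
```

# ``os.sched_setaffinity`` applies immediately to the calling process and is
# inherited by threads it spawns afterwards.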

###############################################################################
# Utilize OpenMP
# ~~~~~~~~~~~~~~
# OpenMP is utilized to improve the performance of parallel computation tasks.
# ``OMP_NUM_THREADS`` is the easiest switch to accelerate computations: it determines the number of threads used for OpenMP computations.
# CPU affinity settings control how workloads are distributed over multiple cores. They affect communication overhead, cache line invalidation overhead, and page thrashing; thus, proper CPU affinity settings bring performance benefits. ``GOMP_CPU_AFFINITY`` or ``KMP_AFFINITY`` determines how OpenMP* threads are bound to physical processing units. Detailed information can be found `here <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance_tuning/tuning_guide.html>`_.
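# The OpenMP runtime reads these environment variables once, at startup, so they
# must be set before the library that initializes OpenMP is imported. A minimal
# launcher-style sketch follows; the values (4 threads, cores 0-3) are illustrative
# assumptions, not recommendations:

```python
import os

# Set OpenMP knobs BEFORE importing a library that initializes
# OpenMP (e.g. PyTorch, or NumPy built with MKL) -- they are read
# only once, when the runtime starts.
os.environ["OMP_NUM_THREADS"] = "4"       # number of OpenMP threads
os.environ["GOMP_CPU_AFFINITY"] = "0-3"   # GNU OpenMP: bind threads to cores 0-3
# Intel OpenMP uses KMP_AFFINITY instead of GOMP_CPU_AFFINITY.
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

print(os.environ["OMP_NUM_THREADS"])
```

# Only the variable matching the OpenMP runtime actually in use takes effect;
# the other is ignored, so setting both is harmless.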

###############################################################################
# With the following command, PyTorch runs the task on N OpenMP threads.
@@ -283,7 +286,7 @@ def fused_gelu(x):
traced_model(*sample_input)

###############################################################################
# While the JIT fuser for oneDNN Graph also supports inference with ``BFloat16`` datatype,
# performance benefit with oneDNN Graph is only exhibited by machines with AVX512_BF16
# instruction set architecture (ISA).
# The following code snippet serves as an example of using ``BFloat16`` datatype for inference with oneDNN Graph:
