This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Upgrade MKL-DNN dependency to v1.0 (#16555)
* [mkldnn-v1.0] Initiate the transition to MKL-DNN v1.0 (#15706)

* update mkldnn to 1.0.1 release

* change makefile

* change cmake

* update ci build and pip package build

* fix typo in mkldnn.mk

* fix build for USE_BLAS=mkl & bump MKL version

* skip mkldnn unit tests

* remove iomp5 from mx_mkldnn_lib

* ci: skip test_mkldnn_install

* retrigger ci

* retrigger ci

* retrigger ci

* [mkldnn-v1.0] Update MKL-DNN to v1.0.2 (#16012)

* bump mkldnn to v1.0.2

* skip quantization unit test

* add useless build flag

* Fixes openblas installation for static build

* empty commit

* [mkldnn-v1.0] Enable base code with new APIs. (#16064)

* fix comments (#8)

* add base code for mkldnn 1.0

* fix comments

* Update mkldnn.mk

* add base code for mkldnn 1.0

* fix build

* fix lint

* fix lint

* [mkldnn-v1.0] Add MKL-DNN Convolution (#16141)

* add mkldnn conv

* revert unnecessary change

* fix testcase fail for cpu: test_convolution_independent_gradients

* fix failed testcase: test_reshape_transpose_6d&&test_weight_async_reorder

* fix comments

* change variable name from weights to weight in mkldnn_conv

* [mkldnn-v1.0] Add MKL-DNN activation (#16195)

* add mkldnn act; pass lint; pass mnist training

* make bwd as private member

* [mkldnn-v1.0] Add MKL-DNN BN (#16199)

* add mkldnn bn

* add static_cast to transform data type

* change mkldnn_args_map_t

* retrigger CI

* add mkldnn lrn (#16223)

* [mkldnn-v1.0] Add MKL-DNN Transpose (#16250)

* add mkldnn transpose

* using mkldnn_args_map_t instead of std::unordered_map<int, mkldnn::memory>

* [mkldnn-v1.0] Add MKL-DNN softmax (#16246)

* add mkldnn softmax

* trigger CI

* [mkldnn-v1.0] Add MKL-DNN FC (#16221)

* add mkldnn fc; pass lint; pass mnist training

* add TODO info for future debug

* [mkldnn-v1.0] Add MKL-DNN  deconv (#16259)

* add mkldnn deconv

* coding style

* trigger CI

* add mkldnn softmax_output (#16222)

* [mkldnn-v1.0] Add MKL-DNN Pooling (#16272)

* add mkldnn pooling

* add workaround for mkldnn v1.0 pooling fwd && bwd workspace mismatch

* code clean

* fix lint error

* trigger CI

* trigger CI

* add extra work_space check and fix some typos

* trigger CI

* [mkldnn-v1.0] Add MKL-DNN reshape&flatten&expand_dims (#16258)

* Add mkldnn 1.0 support for reshape/flatten/expanddims ops

* improve log & modify definition location of args_map_

* fix comments

* rebase code

* trigger CI

* trigger CI

* trigger CI

* trigger CI

* [mkldnn-v1.0] Add MKL-DNN int8 activation&pooling&flatten (#16425)

* Add mkldnn quantized activation/pooling/flatten

* int8 flatten

* [mkldnn-1.0] int8 conv quantize dequantize requantize (#16283)

* int8 conv quantize dequantize requantize

Change-Id: Ibd9df97288a95c61d6d85ec3831fd18b626ca283

* Fix lint

* Fix clang build

Change-Id: I9468774d014c852901e4cc3bffabd8a3d8004519

* add mkldnn sum concat (#16263)

* [mkldnn-1.0] mkldnn int8 elemwise_add (#16454)

* add mkldnn int8 elemwise_add

* add workaround to fix the format-any issue

* code clean

* upgrade int8 bn to MKLDNN1.0 (#16458)

* [mkldnn-v1.0] Fused RNN Op (#16420)

* [mkldnn-v1.0] Add MKL-DNN int8 fc (#16457)

* Add mkldnn_v1.0 int8 fc

* trigger CI

* trigger CI

* [mkldnn-v1.0] Update enabling flag for MKL dropout (#16433)

* use MSHADOW_USE_MKL to determine whether to use MKL-optimized dropout

* rebase code

* [mkldnn-1.0] upgrade int8 concat to MKLDNN1.0 (#16466)

* [mkldnn-1.0] upgrade int8 concat to MKLDNN1.0

* fix lint

* use mkldnn_args_map_t

* update dict usage style

* retrigger CI

* retrigger CI again

* retrigger CI again 2

* [mkldnn-v1.0] Add MKL-DNN slice (#16484)

* change slice to mkldnn v1.0

* fix lint

* [mkldnn-1.0] add mkldnn subgraph fc (#16468)

* add mkldnn subgraph fc

* code clean

* trigger CI

* [mkldnn-v1.0]enable mkldnn concat (#16507)

* enable mkldnn concat

* trigger CI

* trigger CI

* [mkldnn-v1.0] Enable mkldnn cpp-test, copy op, concat op (#16503)

* [mkldnn-v1.0] Enable mkldnn test, copy op, concat op

Exclude gpu topology via MXNET_USE_CUDA

nit

default format

Remove whitespace

* Unix-GPU Tensor-RT build timeout, re-trigger CI

* [mkldnn-1.0] add skipped case for mkldnn_v1.0 (#16470)

* add skipped case for mkldnn_v1.0

* enable mkl quantized testcase

* enable skipped testcase

* trigger CI

* trigger CI

* trigger CI

* trigger CI

* [mkldnn-1.0]enable mkldnn elemwise_sum (#16521)

* enable mkldnn elemwise_sum

* trigger CI

* trigger CI

* trigger CI

* [mkldnn-v1.0] Enable more checks for MXNET_USE_MKLDNN (#16520)

* open USE_MKLDNN check

* trigger ci

* ci

* [mkldnn-v1.0]Minor fix for leakyrelu compile flag (#16519)

* change to MXNET_USE_MKLDNN == 100

* trigger

* remove MKL license (#16534)

* change MXNET_USE_MKLDNN from 100 to 1 (#16551)

* re-enable unit tests (#16565)

* [mkldnn-v1.0] Skip flaky test for unidirectional rnn_relu (#16545)

Skip `test_rnnrelu_sym`, and add some issue tracking message

Add return

Revert test_rnnrelu_sym to origin

* Add some annotations and log strings, rename mem_desc variables (#16609)

* [mkldnn-v1.0]set fc weight layout as mkldnn v0.2x did (#16593)

* set fc weight layout as mkldnn v0.2x did

* fix lint

* [mkldnn-v1.0] Upgrade to MKL-DNN v1.0.4 patch release (#16592)

* upgrade to mkldnn v1.0.3 patch release

* retrigger ci

* mkldnn v1.0.4 patch release

* [mkldnn-1.0]Rebase to master (#16648)

* fixed broken links across multiple files (#16581)

* fix missing docs due to git add issues (#16496)

* Create SECURITY.md (#16573)

* Create SECURITY.md

* Update SECURITY.md

* [Numpy] Support N_D(N>=3) batch_dot (#16586)

* Support N_D(N>=3) batch_dot

* use 1E-4

* fix lint

* remove unnecessary comment

* Update test_numpy_op.py

* Large Vector tests for DGL Ops Part 2 (#16497)

* add hyperbolic, logical, sign and regression tests for large vector

* changed hyperbolic functions into existing trigonometric functions

* fix trigo and simple bind needs shape as tuple

* fix logical ops, add with_seed

* fix arcosh in largearray, remove regression from largevector

* [Numpy] Loading numpy-incompatible NDArray in numpy-compatible mode (#16597)

* Make MXIsNumpyShape return enum

* address the comment

* Suppress subgraph log in CI (#16607)

Change-Id: Ia2ed6fdbb1d2cb5cc607a8856ca13ee338e27eac

* Fix dequantize memory corruption (#16606)

Change-Id: I51b62a32987bdbcf96f04b1bc6617e66796f648b

* [MKLDNN]Fix reorder2default (#16602)

* Fix reorder2default

Change-Id: I74c87af9535f6264e6d1ea7eaed089a6480a3358

* fix

Change-Id: I6d07b43b520a47e7c78bd4b4b6390f5fb95e6957

* Fix

Change-Id: Id72f25c34291be4711f55569c6d61467edd6113d

* Fix CI

Change-Id: I8c33a82555d5ace2d0b682c1e3eefa13f3a44768

* Run CI

Change-Id: Ie8a6dab80ef91c0337cafbae4e3db277e0c7ebf7

* second round of fixing broken links in multiple files (#16598)

* Python Docstring Convention (#16550)

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention

* Revert removing new line

* Remove white space

* [MXNET-1434] Fix a broken link for basic C++ tutorial (#16461)

* Fix for wrong reqs set after switching from training to inference (#16553)

* Debugging reqs

* Move literal strings to const static members

* Fix lint

* julia/docs: more DRY on page rendering (#16396)

* [mkldnn-v1.0]rebase with master (#16649)

* fixed broken links across multiple files (#16581)

* fix missing docs due to git add issues (#16496)

* Create SECURITY.md (#16573)

* Create SECURITY.md

* Update SECURITY.md

* [Numpy] Support N_D(N>=3) batch_dot (#16586)

* Support N_D(N>=3) batch_dot

* use 1E-4

* fix lint

* remove unnecessary comment

* Update test_numpy_op.py

* Large Vector tests for DGL Ops Part 2 (#16497)

* add hyperbolic, logical, sign and regression tests for large vector

* changed hyperbolic functions into existing trigonometric functions

* fix trigo and simple bind needs shape as tuple

* fix logical ops, add with_seed

* fix arcosh in largearray, remove regression from largevector

* [Numpy] Loading numpy-incompatible NDArray in numpy-compatible mode (#16597)

* Make MXIsNumpyShape return enum

* address the comment

* Suppress subgraph log in CI (#16607)

Change-Id: Ia2ed6fdbb1d2cb5cc607a8856ca13ee338e27eac

* Fix dequantize memory corruption (#16606)

Change-Id: I51b62a32987bdbcf96f04b1bc6617e66796f648b

* [MKLDNN]Fix reorder2default (#16602)

* Fix reorder2default

Change-Id: I74c87af9535f6264e6d1ea7eaed089a6480a3358

* fix

Change-Id: I6d07b43b520a47e7c78bd4b4b6390f5fb95e6957

* Fix

Change-Id: Id72f25c34291be4711f55569c6d61467edd6113d

* Fix CI

Change-Id: I8c33a82555d5ace2d0b682c1e3eefa13f3a44768

* Run CI

Change-Id: Ie8a6dab80ef91c0337cafbae4e3db277e0c7ebf7

* second round of fixing broken links in multiple files (#16598)

* Python Docstring Convention (#16550)

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention for

* Docstring convention

* Revert removing new line

* Remove white space

* [MXNET-1434] Fix a broken link for basic C++ tutorial (#16461)

* Fix for wrong reqs set after switching from training to inference (#16553)

* Debugging reqs

* Move literal strings to const static members

* Fix lint

* julia/docs: more DRY on page rendering (#16396)

* Disables test_bulking_operator_gpu due to flakiness (#16611)

* C API for simplebind, fix comment for trigo ops, add atol to assert (#16585)

* C API for simplebind, fix comment for trigo ops, add atol to assert

* fix build issues

* fix lint and add regression test

* fix indent

* api doc and function name change

* fix lint and add infer shape test

* Imagenet inference to nightly fix (#16599)

* split to cd and shell

* comment

* lots of prints

* copy binary at correct location

* remove comments

* add mkl lib

* update docker run build function

* set nvidia docker true to run imagenet inference on GPU

* Revert "set nvidia docker true to run imagenet inference on GPU"

This reverts commit 98f8eef.
As we don't need GPU for compilation.

* Fix python doc build issue (#16630)

* pin the pip versions

* remove nbconvert comment

* Faster general take (#16615)

* Sped up perf of take op when axis != 0

* Formatting and syntax fixes

* Rename Take to specify axis

* Fix line length lint errors

* [Gluon] Don't serialize shared parameters twice (#16582)

Add deduplicate argument (default of False) to save_parameters.

* Fix index overflow bug in einsum (#16589)

* fix index overflow

* check index overflow

* fix index overflow in einsum path

* fix indent

* reduce NPY_MAXARGS

* safe accumulate

* Move some subgraph verbose to MXNET_SUBGRAPH_VERBOSE=2 (#16622)

* Move subgraph pass log to verbose=2

* Run CI

* add npx reshape (#16640)

* RNNOp only call cuda/cudnn if GPU ctx is requested (#16632)

* fix bad encode (#16641)

* [Perl] - ndarray to native array conversion fix (#16635)

* fixing broken links in multiple files - round 3 (#16634)

* add type switch to weight tensor (#16543)

* numpy doc enhancement (#16637)

* Change NDArray to ndarray for npx ops

Add nonzero

boolean mask supports boolean ndarray

Add argmin op and interoperability test for nonzero

Fix vdot, inner, outer docs

Add nonzero to mx.nd.np

Add docs

Fix

* Fix lint

* Fix

* Fix

* Fix get_constant

* Disable float16 test (#16643)

* Fix GetMKLDNNData for delay alloc (#16618)

* Fix GetMKLDNNData for delay alloc

* Run CI

* Run CI

* Run CI

* Run CI

* Run CI

Change-Id: I7ac2796e0ee8439c92fd2bd7a70a23a359b76b12

* Revert "[mkldnn-1.0]Rebase to master (#16648)"

This reverts commit dea3dd2.

* [mkldnn-v1.0] Minor fix of mkldnn-v1.0 transition (#16644)

mk and rm directory in mkldnn.mk

ndarray.cc redundant whitespace

mkldnn_act rename variables of bwd primitives

mkldnn_rnn.cc iterator -> const_iterator

Use != instead of < for iterator in for-loop

Add a code comment explaining why the last layer is excluded

* [mkldnn-v1.0]rm int8 sum workaround (#16623)

* rm int8 sum workaround due to mkldnn lib update

* simple dims assignments in mkldnn_quantized_elemwise_add.cc

* make MKLDNN macro simple for imperative_utils.h (#16652)

* fix ci jenkins step groovy (#16659)

* Adopt autograd.record() context to RNNOp (#16657)

* Use memcopy instead of set_handle when num_layer=0, direction=1 (#16663)

* fallback mkldnn fc bwd in imperative mode (#16672)

* disable MKLDNN FC backward

* [mkldnn-v1.0] Must reorder and emplace weights for inference primitives (#16682)

* add default parameter for mkldnn rnn
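
A minimal smoke-test sketch for the upgraded backend, assuming a finished build with the Python bindings installed. MXNET_SUBGRAPH_VERBOSE is the variable referenced in the subgraph-log commit above; the feature query relies on the mxnet.runtime helper and is an assumption, not part of this commit:

    # Hypothetical post-build check: confirm MKLDNN is compiled in and raise
    # subgraph pass logging to the new verbose level for a tiny CPU workload.
    export MXNET_SUBGRAPH_VERBOSE=2
    python -c "from mxnet.runtime import Features; print(Features().is_enabled('MKLDNN'))"
    python -c "import mxnet as mx; print((mx.nd.ones((64, 64)) * 2).sum())"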
TaoLv authored and pengzhao-intel committed Oct 31, 2019
1 parent b5d07e3 commit aa1074d
Showing 93 changed files with 4,205 additions and 4,505 deletions.
2 changes: 1 addition & 1 deletion 3rdparty/mkldnn
18 changes: 7 additions & 11 deletions CMakeLists.txt
@@ -29,8 +29,7 @@ mxnet_option(USE_SSE "Build with x86 SSE instruction support" ON IF
mxnet_option(USE_F16C "Build with x86 F16C instruction support" ON) # autodetects support if ON
mxnet_option(USE_LAPACK "Build with lapack support" ON)
mxnet_option(USE_MKL_IF_AVAILABLE "Use MKL if found" ON)
mxnet_option(USE_MKLML_MKL "Use MKLDNN variant of MKL (if MKL found)" ON IF USE_MKL_IF_AVAILABLE AND (NOT APPLE) AND (NOT MSVC) )
mxnet_option(USE_MKLDNN "Use MKLDNN variant of MKL (if MKL found)" ON IF USE_MKL_IF_AVAILABLE AND (NOT APPLE) AND (NOT MSVC) AND (CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "x86_64") AND (NOT CMAKE_CROSSCOMPILING))
mxnet_option(USE_MKLDNN "Build with MKL-DNN support" ON IF USE_MKL_IF_AVAILABLE AND (NOT APPLE) AND (NOT MSVC) AND (CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "x86_64") AND (NOT CMAKE_CROSSCOMPILING))
mxnet_option(USE_OPERATOR_TUNING "Enable auto-tuning of operators" ON IF NOT MSVC)
mxnet_option(USE_GPERFTOOLS "Build with GPerfTools support" OFF)
mxnet_option(USE_JEMALLOC "Build with Jemalloc support" ON)
@@ -257,25 +256,22 @@ if(ENABLE_TESTCOVERAGE)
endif()

if(USE_MKLDNN)
include(cmake/DownloadMKLML.cmake)
# CPU architecture (e.g., C5) can't run on another architecture (e.g., g3).
if(NOT MSVC)
set(ARCH_OPT_FLAGS "-mtune=generic")
else()
if(MSVC)
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /EHsc")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /EHsc /Gy")
endif()

set(WITH_TEST OFF CACHE INTERNAL "" FORCE)
set(WITH_EXAMPLE OFF CACHE INTERNAL "" FORCE)
set(ARCH_OPT_FLAGS "" CACHE INTERNAL "" FORCE)
set(MKLDNN_BUILD_TESTS OFF CACHE INTERNAL "" FORCE)
set(MKLDNN_BUILD_EXAMPLES OFF CACHE INTERNAL "" FORCE)
set(MKLDNN_ARCH_OPT_FLAGS "" CACHE INTERNAL "" FORCE)
set(MKLDNN_USE_MKL NONE CACHE INTERNAL "" FORCE)
set(MKLDNN_ENABLE_JIT_PROFILING OFF CACHE INTERNAL "" FORCE)

add_subdirectory(3rdparty/mkldnn)

include_directories(3rdparty/mkldnn/include)
include_directories(${PROJECT_BINARY_DIR}/3rdparty/mkldnn/include)
add_definitions(-DUSE_MKL=1)
add_definitions(-DCUB_MKL=1)
add_definitions(-DMXNET_USE_MKLDNN=1)
list(APPEND mxnet_LINKER_LIBS mkldnn)
endif()
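
With the block above, MKL-DNN is now built from the 3rdparty/mkldnn submodule instead of being fetched through DownloadMKLML.cmake. A minimal configure sketch, assuming a checkout with the submodule initialized and a recent CMake; the exact flag set is illustrative, not taken from this commit:

    # Hypothetical out-of-source CMake configure with the bundled MKL-DNN v1.0.
    git submodule update --init --recursive 3rdparty/mkldnn
    mkdir -p build && cd build
    cmake -DUSE_MKLDNN=ON -DUSE_MKL_IF_AVAILABLE=OFF ..
    cmake --build . --parallel $(nproc)
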
51 changes: 8 additions & 43 deletions LICENSE
@@ -651,43 +651,8 @@
<none yet>

=======================================================================================

13. MKL BLAS
For details, see, [Intel® Simplified license](https://software.intel.com/en-us/license/intel-simplified-software-license) and MKLDNN_README.md

Copyright (c) 2018 Intel Corporation.

Use and Redistribution. You may use and redistribute the software (the “Software”), without modification, provided the following conditions are met:

* Redistributions must reproduce the above copyright notice and the following terms of use in the Software and in the documentation and/or other materials provided with the distribution.

* Neither the name of Intel nor the names of its suppliers may be used to endorse or promote products derived from this Software without specific prior written permission.

* No reverse engineering, decompilation, or disassembly of this Software is permitted.

Limited patent license. Intel grants you a world-wide, royalty-free, non-exclusive license under patents it now or hereafter owns or controls to make, have made, use, import, offer to sell and sell (“Utilize”) this Software, but solely to the extent that any such patent is necessary to Utilize the Software alone. The patent license shall not apply to any combinations which include this software. No hardware per se is licensed hereunder.

Third party and other Intel programs. “Third Party Programs” are the files listed in the “third-party-programs.txt” text file that is included with the Software and may include Intel programs under separate license terms. Third Party Programs, even if included with the distribution of the Materials, are governed by separate license terms and those license terms solely govern your use of those programs.

DISCLAIMER. THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT ARE DISCLAIMED. THIS SOFTWARE IS NOT INTENDED FOR USE IN SYSTEMS OR APPLICATIONS WHERE FAILURE OF THE SOFTWARE MAY CAUSE PERSONAL INJURY OR DEATH AND YOU AGREE THAT YOU ARE FULLY RESPONSIBLE FOR ANY CLAIMS, COSTS, DAMAGES, EXPENSES, AND ATTORNEYS’ FEES ARISING OUT OF ANY SUCH USE, EVEN IF ANY CLAIM ALLEGES THAT INTEL WAS NEGLIGENT REGARDING THE DESIGN OR MANUFACTURE OF THE MATERIALS.

LIMITATION OF LIABILITY. IN NO EVENT WILL INTEL BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. YOU AGREE TO INDEMNIFY AND HOLD INTEL HARMLESS AGAINST ANY CLAIMS AND EXPENSES RESULTING FROM YOUR USE OR UNAUTHORIZED USE OF THE SOFTWARE.

No support. Intel may make changes to the Software, at any time without notice, and is not obligated to support, update or provide training for the Software.

Termination. Intel may terminate your right to use the Software in the event of your breach of this Agreement and you fail to cure the breach within a reasonable period of time.

Feedback. Should you provide Intel with comments, modifications, corrections, enhancements or other input (“Feedback”) related to the Software Intel will be free to use, disclose, reproduce, license or otherwise distribute or exploit the Feedback in its sole discretion without any obligations or restrictions of any kind, including without limitation, intellectual property rights or licensing obligations.

Compliance with laws. You agree to comply with all relevant laws and regulations governing your use, transfer, import or export (or prohibition thereof) of the Software.

Governing law. All disputes will be governed by the laws of the United States of America and the State of Delaware without reference to conflict of law principles and subject to the exclusive jurisdiction of the state or federal courts sitting in the State of Delaware, and each party agrees that it submits to the personal jurisdiction and venue of those courts and waives any objections. The United Nations Convention on Contracts for the International Sale of Goods (1980) is specifically excluded and will not apply to the Software.

*Other names and brands may be claimed as the property of others.

=======================================================================================

14. FindJeMalloc.cmake

13. FindJeMalloc.cmake
For details, see cmake/Modules/FindJeMalloc.cmake

This file is based on https://github.com/STEllAR-GROUP/hpx/blob/master/cmake/FindJemalloc.cmake
@@ -778,7 +743,7 @@

=======================================================================================

15. FindPythonLibsNew.cmake
14. FindPythonLibsNew.cmake

For details, see 3rdparty/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/FindPythonLibsNew.cmake

@@ -817,7 +782,7 @@

=======================================================================================

16. erfinv-inl.h
15. erfinv-inl.h

For details, see /src/operator/contrib/erfinv-inl.h

@@ -860,7 +825,7 @@

=======================================================================================

17. mersenne.h
16. mersenne.h

For details, see /3rdparty/nvidia_cub/test/mersenne.h

@@ -909,7 +874,7 @@

=======================================================================================

18. FindEigen3.cmake
17. FindEigen3.cmake

For details, see /3rdparty/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/FindEigen3.cmake

@@ -920,7 +885,7 @@

=======================================================================================

19. protoc-gen-mypy.py
18. protoc-gen-mypy.py

For details, see /3rdparty/onnx-tensorrt/third_party/onnx/tools/protoc-gen-mypy.py

@@ -936,7 +901,7 @@

=======================================================================================

20. rang
19. rang

For details, see /3rdparty/tvm/3rdparty/rang/LICENSE

19 changes: 4 additions & 15 deletions Makefile
@@ -84,8 +84,6 @@ endif

ifeq ($(USE_MKLDNN), 1)
MKLDNNROOT = $(ROOTDIR)/3rdparty/mkldnn/build/install
MKLROOT = $(ROOTDIR)/3rdparty/mkldnn/build/install
export USE_MKLML = 1
endif

include $(TPARTYDIR)/mshadow/make/mshadow.mk
@@ -151,14 +149,9 @@ endif

ifeq ($(USE_MKLDNN), 1)
CFLAGS += -DMXNET_USE_MKLDNN=1
CFLAGS += -DUSE_MKL=1
CFLAGS += -I$(ROOTDIR)/src/operator/nn/mkldnn/
ifneq ($(MKLDNNROOT), $(MKLROOT))
CFLAGS += -I$(MKLROOT)/include
LDFLAGS += -L$(MKLROOT)/lib
endif
CFLAGS += -I$(MKLDNNROOT)/include
LDFLAGS += -L$(MKLDNNROOT)/lib -lmkldnn -Wl,-rpath,'$${ORIGIN}'
LDFLAGS += -L$(MKLDNNROOT)/lib -L$(MKLDNNROOT)/lib64 -lmkldnn -Wl,-rpath,'$${ORIGIN}'
endif

# setup opencv
@@ -604,9 +597,7 @@ lib/libmxnet.so: $(ALLX_DEP)
-Wl,${WHOLE_ARCH} $(filter %libnnvm.a, $^) -Wl,${NO_WHOLE_ARCH}
ifeq ($(USE_MKLDNN), 1)
ifeq ($(UNAME_S), Darwin)
install_name_tool -change '@rpath/libmklml.dylib' '@loader_path/libmklml.dylib' $@
install_name_tool -change '@rpath/libiomp5.dylib' '@loader_path/libiomp5.dylib' $@
install_name_tool -change '@rpath/libmkldnn.0.dylib' '@loader_path/libmkldnn.0.dylib' $@
install_name_tool -change '@rpath/libmkldnn.1.dylib' '@loader_path/libmkldnn.1.dylib' $@
endif
endif

@@ -698,10 +689,8 @@ rpkg:
cp src/io/image_recordio.h R-package/src
cp -rf lib/libmxnet.so R-package/inst/libs

if [ -e "lib/libmkldnn.so.0" ]; then \
cp -rf lib/libmkldnn.so.0 R-package/inst/libs; \
cp -rf lib/libiomp5.so R-package/inst/libs; \
cp -rf lib/libmklml_intel.so R-package/inst/libs; \
if [ -e "lib/libmkldnn.so.1" ]; then \
cp -rf lib/libmkldnn.so.1 R-package/inst/libs; \
fi

if [ -e "lib/libtvm_runtime.so" ]; then \
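
The Makefile path now links libmkldnn.so.1 directly and no longer ships the MKLML/iomp5 libraries. A hedged build-and-check sketch, assuming a Linux host; the BLAS choice is an assumption and not part of this diff:

    # Hypothetical Make-based build, then verify libmxnet picks up the v1.x soname.
    make USE_MKLDNN=1 USE_BLAS=openblas -j$(nproc)
    ldd lib/libmxnet.so | grep -i mkldnn    # expect libmkldnn.so.1
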
2 changes: 0 additions & 2 deletions ci/docker/Dockerfile.build.centos7_cpu
@@ -30,8 +30,6 @@ COPY install/centos7_python.sh /work/
RUN /work/centos7_python.sh
COPY install/centos7_scala.sh /work/
RUN /work/centos7_scala.sh
COPY install/ubuntu_mklml.sh /work/
RUN /work/ubuntu_mklml.sh

ARG USER_ID=0
COPY install/centos7_adduser.sh /work/
2 changes: 0 additions & 2 deletions ci/docker/Dockerfile.build.ubuntu_build_cuda
@@ -42,8 +42,6 @@ COPY install/ubuntu_perl.sh /work/
RUN /work/ubuntu_perl.sh
COPY install/ubuntu_clang.sh /work/
RUN /work/ubuntu_clang.sh
COPY install/ubuntu_mklml.sh /work/
RUN /work/ubuntu_mklml.sh
COPY install/ubuntu_ar.sh /work/
RUN /work/ubuntu_ar.sh

3 changes: 0 additions & 3 deletions ci/docker/Dockerfile.build.ubuntu_cpu
@@ -58,9 +58,6 @@ RUN /work/ubuntu_gcc8.sh
COPY install/ubuntu_mkl.sh /work/
RUN /work/ubuntu_mkl.sh

COPY install/ubuntu_mklml.sh /work/
RUN /work/ubuntu_mklml.sh

COPY install/ubuntu_caffe.sh /work/
RUN /work/ubuntu_caffe.sh

5 changes: 1 addition & 4 deletions ci/docker/Dockerfile.build.ubuntu_cpu_julia
@@ -58,9 +58,6 @@ RUN /work/ubuntu_gcc8.sh
COPY install/ubuntu_mkl.sh /work/
RUN /work/ubuntu_mkl.sh

COPY install/ubuntu_mklml.sh /work/
RUN /work/ubuntu_mklml.sh

COPY install/ubuntu_caffe.sh /work/
RUN /work/ubuntu_caffe.sh

@@ -78,4 +75,4 @@ RUN /work/ubuntu_adduser.sh

COPY runtime_functions.sh /work/

WORKDIR /work/mxnet
WORKDIR /work/mxnet
3 changes: 0 additions & 3 deletions ci/docker/Dockerfile.build.ubuntu_gpu_cu100
@@ -46,9 +46,6 @@ RUN /work/ubuntu_perl.sh
COPY install/ubuntu_clang.sh /work/
RUN /work/ubuntu_clang.sh

COPY install/ubuntu_mklml.sh /work/
RUN /work/ubuntu_mklml.sh

COPY install/ubuntu_tvm.sh /work/
RUN /work/ubuntu_tvm.sh

3 changes: 0 additions & 3 deletions ci/docker/Dockerfile.build.ubuntu_gpu_cu101
@@ -46,9 +46,6 @@ RUN /work/ubuntu_perl.sh
COPY install/ubuntu_clang.sh /work/
RUN /work/ubuntu_clang.sh

COPY install/ubuntu_mklml.sh /work/
RUN /work/ubuntu_mklml.sh

COPY install/ubuntu_tvm.sh /work/
RUN /work/ubuntu_tvm.sh

3 changes: 0 additions & 3 deletions ci/docker/Dockerfile.build.ubuntu_gpu_cu80
@@ -46,9 +46,6 @@ RUN /work/ubuntu_perl.sh
COPY install/ubuntu_clang.sh /work/
RUN /work/ubuntu_clang.sh

COPY install/ubuntu_mklml.sh /work/
RUN /work/ubuntu_mklml.sh

COPY install/ubuntu_tvm.sh /work/
RUN /work/ubuntu_tvm.sh

3 changes: 0 additions & 3 deletions ci/docker/Dockerfile.build.ubuntu_gpu_cu90
@@ -46,9 +46,6 @@ RUN /work/ubuntu_perl.sh
COPY install/ubuntu_clang.sh /work/
RUN /work/ubuntu_clang.sh

COPY install/ubuntu_mklml.sh /work/
RUN /work/ubuntu_mklml.sh

COPY install/ubuntu_tvm.sh /work/
RUN /work/ubuntu_tvm.sh

3 changes: 0 additions & 3 deletions ci/docker/Dockerfile.build.ubuntu_gpu_cu92
@@ -46,9 +46,6 @@ RUN /work/ubuntu_perl.sh
COPY install/ubuntu_clang.sh /work/
RUN /work/ubuntu_clang.sh

COPY install/ubuntu_mklml.sh /work/
RUN /work/ubuntu_mklml.sh

COPY install/ubuntu_tvm.sh /work/
RUN /work/ubuntu_tvm.sh

3 changes: 0 additions & 3 deletions ci/docker/Dockerfile.build.ubuntu_nightly_cpu
@@ -46,9 +46,6 @@ RUN /work/ubuntu_perl.sh
COPY install/ubuntu_clang.sh /work/
RUN /work/ubuntu_clang.sh

COPY install/ubuntu_mklml.sh /work/
RUN /work/ubuntu_mklml.sh

COPY install/ubuntu_caffe.sh /work/
RUN /work/ubuntu_caffe.sh

3 changes: 0 additions & 3 deletions ci/docker/Dockerfile.build.ubuntu_nightly_gpu
@@ -46,9 +46,6 @@ RUN /work/ubuntu_perl.sh
COPY install/ubuntu_clang.sh /work/
RUN /work/ubuntu_clang.sh

COPY install/ubuntu_mklml.sh /work/
RUN /work/ubuntu_mklml.sh

COPY install/ubuntu_tvm.sh /work/
RUN /work/ubuntu_tvm.sh

25 changes: 0 additions & 25 deletions ci/docker/install/ubuntu_mklml.sh

This file was deleted.

7 changes: 4 additions & 3 deletions ci/docker/runtime_functions.sh
@@ -692,6 +692,7 @@ build_ubuntu_cpu_mkldnn_mkl() {
USE_TVM_OP=1 \
USE_BLAS=mkl \
USE_SIGNAL_HANDLER=1 \
USE_INTEL_PATH=/opt/intel/ \
-j$(nproc)
}

@@ -877,9 +878,9 @@ build_ubuntu_gpu_cmake_mkldnn() {
/work/mxnet

ninja -v
# libmkldnn.so.0 is a link file. We need an actual binary file named libmkldnn.so.0.
cp 3rdparty/mkldnn/src/libmkldnn.so.0 3rdparty/mkldnn/src/libmkldnn.so.0.tmp
mv 3rdparty/mkldnn/src/libmkldnn.so.0.tmp 3rdparty/mkldnn/src/libmkldnn.so.0
# libmkldnn.so.1 is a link file. We need an actual binary file named libmkldnn.so.1.
cp 3rdparty/mkldnn/src/libmkldnn.so.1 3rdparty/mkldnn/src/libmkldnn.so.1.tmp
mv 3rdparty/mkldnn/src/libmkldnn.so.1.tmp 3rdparty/mkldnn/src/libmkldnn.so.1
}

build_ubuntu_gpu_cmake() {
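
The CI functions edited above are normally run inside the project's Docker images; a sketch of invoking the MKL-DNN + MKL routine locally, assuming the ci/build.py wrapper behaves as in mainline MXNet (the wrapper path and flags are assumptions, not shown in this diff):

    # Hypothetical local reproduction of the CI build step edited above.
    ci/build.py --platform ubuntu_cpu /work/runtime_functions.sh build_ubuntu_cpu_mkldnn_mkl
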
4 changes: 2 additions & 2 deletions ci/jenkins/Jenkins_steps.groovy
@@ -35,8 +35,8 @@ mx_cmake_lib_no_tvm_op = 'build/libmxnet.so, build/libmxnet.a, build/libsample_l
mx_cmake_lib_cython = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/tvmop.conf, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
// mxnet cmake libraries, in cmake builds we do not produce a libnvvm static library by default.
mx_cmake_lib_debug = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/tvmop.conf, build/libsample_lib.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests'
mx_cmake_mkldnn_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/tvmop.conf, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so, build/3rdparty/mkldnn/src/libmkldnn.so.0'
mx_mkldnn_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, lib/tvmop.conf, libsample_lib.so, lib/libiomp5.so, lib/libmkldnn.so.0, lib/libmklml_intel.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
mx_cmake_mkldnn_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/tvmop.conf, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so, build/3rdparty/mkldnn/src/libmkldnn.so.1'
mx_mkldnn_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, lib/tvmop.conf, libsample_lib.so, lib/libmkldnn.so.1, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
mx_tensorrt_lib = 'build/libmxnet.so, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/tvmop.conf, lib/libnvonnxparser_runtime.so.0, lib/libnvonnxparser.so.0, lib/libonnx_proto.so, lib/libonnx.so'
mx_lib_cpp_examples = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, lib/tvmop.conf, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/*, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
mx_lib_cpp_examples_no_tvm_op = 'lib/libmxnet.so, lib/libmxnet.a, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/*, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
(Diff for the remaining changed files is not shown.)
