This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Test clang10 -WError build on CI #17830

Merged: leezu merged 20 commits into apache:master from the clangwerror branch on Mar 18, 2020

Conversation

leezu (Contributor) commented on Mar 14, 2020

Description

Extension of #17752 for clang

leezu force-pushed the clangwerror branch 17 times, most recently from 51d1117 to ea7c3da on March 16, 2020 17:43
leezu requested a review from nswamy as a code owner on March 16, 2020 20:16
leezu force-pushed the clangwerror branch 2 times, most recently from b6fc96e to 9ed41bb on March 17, 2020 06:47
leezu added 13 commits on March 17, 2020 19:52, including:

* Disable Wpass-failed=transform-warning

  warning: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering

* Fix Wimplicit-int-float-conversion

  'float' changes value from 2147483647 to 2147483648

* cuda: --expt-relaxed-constexpr

  warning: calling a constexpr __host__ function from a __host__ __device__ function is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.

* Fix -WError with mkldnn

  -Wliteral-conversion, -Wabsolute-value, -Wunused-private-field, -Wimplicit-int-float-conversion
leezu added the pr-awaiting-review (PR is waiting for code review) label on Mar 18, 2020
leezu merged commit ab4f7f6 into apache:master on Mar 18, 2020
leezu deleted the clangwerror branch on March 18, 2020 04:36
anirudh2290 added a commit to anirudh2290/mxnet that referenced this pull request Mar 27, 2020
* 'master' of https://github.com/apache/incubator-mxnet: (192 commits)
  * impl - FFI for np einsum (apache#17869)
  [Numpy] FFI for diag/diagonal/diag_indices_from (apache#17789)
  [Numpy] Kron operator (apache#17323)
  cmake: Set DMLC_LOG_FATAL_THROW only for building mxnet and not for tvm (apache#17878)
  Add simplified HybridBlock.forward without F (apache#17530)
  Use FP32 copy of weights for norm (multitensor LAMB optimizer) (apache#17700)
  Use multi-tensor sumSQ in clip_global_norm (apache#17652)
  [Numpy] Add op fmax, fmin, fmod (apache#17567)
  Adding sparse support to MXTensor for custom operators (apache#17569)
  Update 3rdparty/mkldnn to v1.2.2 (apache#17313)
  Dynamic subgraph compile support (apache#17623)
  Refactor cpp-package CMakeLists.txt & add missing inference/imagenet_inference (apache#17835)
  staticbuild: Fix potential user-assisted execution of arbitrary code  (apache#17860)
  * FFI for np.argmax and np.argmin (apache#17843)
  ffi for roll/rot90 (apache#17861)
  Skip test_multi_worker_dataloader_release_pool on OS X (apache#17797)
  add ffi for full_like, binary (apache#17811)
  HybridBlock.export() to return created filenames (apache#17758)
  Fix SoftReLU fused operator numerical stability (apache#17849)
  CI: Test clang10 cpu & gpu builds with -WError (apache#17830)
  ...
MoisesHer pushed a commit to MoisesHer/incubator-mxnet that referenced this pull request Apr 10, 2020
* Fix Wunused-variable

* Fix Wreturn-std-move
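
  A minimal sketch of what -Wreturn-std-move catches (hypothetical code, not from this PR): under pre-C++20 rules, returning a local by name does not trigger implicit move when its type differs from the function's return type, so the copy constructor is selected and clang suggests an explicit std::move.

  ```cpp
  #include <string>
  #include <utility>

  struct Message {
    Message(const std::string& s) : text(s) {}        // copies its argument
    Message(std::string&& s) : text(std::move(s)) {}  // moves its argument
    std::string text;
  };

  Message make_message() {
    std::string buffer = "payload";
    // 'buffer' has a different type than the return type, so pre-C++20 the
    // copying constructor is chosen. clang 10 warns: "local variable 'buffer'
    // will be copied despite being returned by name" (-Wreturn-std-move) and
    // suggests writing: return std::move(buffer);
    return buffer;
  }
  ```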

* Fix Wunused-const-variable

* Fix Winconsistent-missing-override
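
  A minimal sketch (hypothetical class names): this warning fires when a class marks some overriding members with override but not others.

  ```cpp
  struct Operator {
    virtual ~Operator() = default;
    virtual void Forward() {}
    virtual void Backward() {}
  };

  struct AddOperator : Operator {
    void Forward() override {}
    // clang 10: "'Backward' overrides a member function but is not marked
    // 'override'" (-Winconsistent-missing-override), because Forward() is
    // marked 'override' while Backward() is not.
    void Backward() {}
  };
  ```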

* Fix Wdelete-non-abstract-non-virtual-dtor
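
  A minimal sketch of this diagnostic (hypothetical types, not from the PR): deleting through a base pointer whose class has virtual functions but a non-virtual destructor silently skips derived destructors.

  ```cpp
  struct Node {
    virtual void Print() {}  // the class is polymorphic...
    ~Node() = default;       // ...but its destructor is not virtual
  };

  struct TypedNode : Node {
    void Print() override {}
  };

  void release(Node* n) {
    // If 'n' points to a TypedNode, ~TypedNode() is never run. clang 10
    // warns: "delete called on non-final 'Node' that has virtual functions
    // but non-virtual destructor" (-Wdelete-non-abstract-non-virtual-dtor).
    delete n;  // fix: declare 'virtual ~Node() = default;' in Node
  }
  ```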

* Fix Wrange-loop-construct

* Disable Wpass-failed=transform-warning

warning: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
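
  One common trigger, sketched below with hypothetical code: a '#pragma unroll' request that the optimizer cannot honor, for example in a build where the unrolling pass does not run.

  ```cpp
  // clang++ -O0 -Wall -c unroll.cc
  // When the unrolling pass cannot satisfy the pragma (e.g. it is disabled
  // at -O0), clang emits: "loop not unrolled: the optimizer was unable to
  // perform the requested transformation..."
  // (-Wpass-failed=transform-warning).
  float sum(const float* data, int n) {
    float acc = 0.0f;
  #pragma unroll 4
    for (int i = 0; i < n; ++i) {
      acc += data[i];
    }
    return acc;
  }
  ```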

* Fix Wimplicit-int-float-conversion

'float' changes value from 2147483647 to 2147483648
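
  A sketch of the underlying issue (hypothetical code): float has a 24-bit significand, so INT_MAX is not exactly representable and rounds up.

  ```cpp
  int main() {
    // clang 10: "implicit conversion from 'int' to 'float' changes value
    // from 2147483647 to 2147483648" (-Wimplicit-int-float-conversion).
    float limit = 2147483647;
    // Fix: make the rounding explicit.
    float explicit_limit = static_cast<float>(2147483647);
    return limit == explicit_limit ? 0 : 1;
  }
  ```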

* Fix Wunused-lambda-capture
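
  A minimal sketch (hypothetical variables): a capture that the lambda body never reads.

  ```cpp
  #include <cstdio>

  int main() {
    int rows = 4;
    int cols = 8;
    // 'cols' is captured but never used in the body; clang 10 warns:
    // "lambda capture 'cols' is not used" (-Wunused-lambda-capture).
    auto count = [rows, cols]() { return rows * 2; };
    std::printf("%d\n", count());
    return 0;
  }
  ```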

* Fix Wundefined-var-template

* cuda: --expt-relaxed-constexpr

warning: calling a constexpr __host__ function from a __host__ __device__
function is not allowed. The experimental flag '--expt-relaxed-constexpr' can be
used to allow this.
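
  A CUDA C++ sketch of the situation (hypothetical kernel, not from the PR): std::max is constexpr in C++14, so calling it from device code produces the diagnostic quoted above unless nvcc is given --expt-relaxed-constexpr, the flag this commit adds.

  ```cuda
  // nvcc --expt-relaxed-constexpr -c clip.cu
  #include <algorithm>

  // std::max is a constexpr host function; calling it from a
  // __host__ __device__ function is what produces the warning above
  // when --expt-relaxed-constexpr is not passed.
  __host__ __device__ float clip_below(float x, float lo) {
    return std::max(x, lo);
  }

  __global__ void clip_kernel(float* data, int n, float lo) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = clip_below(data[i], lo);
  }
  ```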

* Fix Wrange-loop-construct avoiding extra copies
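
  A sketch of the kind of copy this refers to (hypothetical code): a range-for variable whose declared type does not match the container's value_type binds to a temporary, copying every element.

  ```cpp
  #include <map>
  #include <string>

  int count_nonzero(const std::map<std::string, int>& hist) {
    int n = 0;
    // value_type is std::pair<const std::string, int>, so this reference
    // binds to a temporary copy of each element; clang's range-loop
    // diagnostics flag the hidden copy.
    for (const std::pair<std::string, int>& kv : hist) {
      if (kv.second != 0) ++n;
    }
    // Fix avoiding the extra copies: for (const auto& kv : hist) { ... }
    return n;
  }
  ```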

* Fix Wunused-private-field

* Fix Wwritable-strings
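
  A minimal sketch: string literals are const char arrays, and the deprecated conversion to a writable char* is what -Wwritable-strings rejects.

  ```cpp
  #include <cstdio>

  int main() {
    char* name = "mxnet";         // clang 10 (-Wwritable-strings): conversion
                                  // from string literal to 'char *' is
                                  // deprecated
    const char* fixed = "mxnet";  // fix: keep the pointee const
    std::printf("%s %s\n", name, fixed);
    return 0;
  }
  ```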

* Enable Clang10 -WError checking on CI

* Fix -WError with mkldnn

-Wliteral-conversion, -Wabsolute-value, -Wunused-private-field,
-Wimplicit-int-float-conversion
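
  Sketches of two of these diagnostics (hypothetical code, not the mkldnn sources): -Wabsolute-value flags an absolute-value function of the wrong type, and -Wliteral-conversion flags a literal that changes value on conversion.

  ```cpp
  #include <cmath>

  double examples(double x) {
    // -Wabsolute-value: 'fabsf' takes a float, so a double argument is
    // silently truncated; clang suggests 'fabs' (or std::abs) instead.
    double bad = fabsf(x);
    double good = std::fabs(x);

    // -Wliteral-conversion: "implicit conversion from 'double' to 'int'
    // changes value from 0.5 to 0".
    int truncated = 0.5;

    return bad + good + truncated;
  }
  ```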

* Fix shuffle_op.cc

* Fix use of old binutils

* Print traceback on exception in OpWrapperGenerator.py

* USE_CPP_PACKAGE=OFF for gpu clang10 werror build
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request May 29, 2020