This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
Conversation
leezu requested review from aaronmarkham, anirudh2290, marcoabreu and szha as code owners on March 14, 2020 04:08
leezu force-pushed the clangwerror branch 17 times, most recently from 51d1117 to ea7c3da on March 16, 2020 17:43
leezu force-pushed the clangwerror branch 2 times, most recently from b6fc96e to 9ed41bb on March 17, 2020 06:47
Warnings addressed by this pull request include:
- warning: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering (-Wpass-failed=transform-warning, disabled)
- 'float' changes value from 2147483647 to 2147483648 (-Wimplicit-int-float-conversion)
- warning: calling a constexpr __host__ function from a __host__ __device__ function is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.
- -Wliteral-conversion, -Wabsolute-value, -Wunused-private-field, -Wimplicit-int-float-conversion (from mkldnn)
szha approved these changes on Mar 18, 2020
anirudh2290 added a commit to anirudh2290/mxnet that referenced this pull request on Mar 27, 2020
* 'master' of https://github.com/apache/incubator-mxnet: (192 commits)
  - impl - FFI for np einsum (apache#17869)
  - [Numpy] FFI for diag/diagonal/diag_indices_from (apache#17789)
  - [Numpy] Kron operator (apache#17323)
  - cmake: Set DMLC_LOG_FATAL_THROW only for building mxnet and not for tvm (apache#17878)
  - Add simplified HybridBlock.forward without F (apache#17530)
  - Use FP32 copy of weights for norm (multitensor LAMB optimizer) (apache#17700)
  - Use multi-tensor sumSQ in clip_global_norm (apache#17652)
  - [Numpy] Add op fmax, fmin, fmod (apache#17567)
  - Adding sparse support to MXTensor for custom operators (apache#17569)
  - Update 3rdparty/mkldnn to v1.2.2 (apache#17313)
  - Dynamic subgraph compile support (apache#17623)
  - Refactor cpp-package CMakeLists.txt & add missing inference/imagenet_inference (apache#17835)
  - staticbuild: Fix potential user-assisted execution of arbitrary code (apache#17860)
  - FFI for np.argmax and np.argmin (apache#17843)
  - ffi for roll/rot90 (apache#17861)
  - Skip test_multi_worker_dataloader_release_pool on OS X (apache#17797)
  - add ffi for full_like, binary (apache#17811)
  - HybridBlock.export() to return created filenames (apache#17758)
  - Fix SoftReLU fused operator numerical stability (apache#17849)
  - CI: Test clang10 cpu & gpu builds with -WError (apache#17830)
  - ...
MoisesHer pushed a commit to MoisesHer/incubator-mxnet that referenced this pull request on Apr 10, 2020
* Fix Wunused-variable
* Fix Wreturn-std-move
* Fix Wunused-const-variable
* Fix Winconsistent-missing-override
* Fix Wdelete-non-abstract-non-virtual-dtor
* Fix Wrange-loop-construct
* Disable Wpass-failed=transform-warning
  warning: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
* Fix Wimplicit-int-float-conversion
  'float' changes value from 2147483647 to 2147483648
* Fix Wunused-lambda-capture
* Fix Wundefined-var-template
* cuda: --expt-relaxed-constexpr
  warning: calling a constexpr __host__ function from a __host__ __device__ function is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.
* Fix Wrange-loop-construct avoiding extra copies
* Fix Wunused-private-field
* Fix Wwritable-strings
* Enable Clang10 -WError checking on CI
* Fix -WError with mkldnn
  -Wliteral-conversion, -Wabsolute-value, -Wunused-private-field, -Wimplicit-int-float-conversion
* Fix shuffle_op.cc
* Fix use of old binutils
* Print traceback on exception in OpWrapperGenerator.py
* USE_CPP_PACKAGE=OFF for gpu clang10 werror build
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request on May 29, 2020
Description
Extension of #17752 for clang