Commits on Feb 26, 2020
- 0aba131
- fefb370
Commits on Feb 27, 2020
- cmake: remove -mf16c flag for android build (apache#17523) [55e6987]
  Co-authored-by: Leonard Lausen <leonard@lausen.nl>
Commits on Feb 28, 2020
- 0e6ab21
- 1af06d9
- Fix compiler warnings in new FFI (apache#17718) [4f5cd92]
  Introduced in apache#17510
- b6002fd
- CI: Switch to cmake builds for majority of tests (apache#17645) [319e6c1]
  The following Makefile-based builds are preserved:
  1) staticbuild scripts
  2) Docs builds. Language-binding-specific build logic requires further changes.
  3) Jetson build. The Jetpack 3.3 toolchain, based on Cuda 9.0, causes 'Internal Compiler Error (codegen): "there was an error in verifying the lgenfe output!"' errors with cmake. This seems to be a known issue in Cuda 9.0, and the Jetpack toolchain needs to be updated to work around it.
  4) MKL builds. Waiting for a fix of apache#17641.
  All Makefile-based builds are marked with a "Makefile" postfix in the title.
  Improvements to the CMake build:
  - Enable -Werror for the RelWithDebInfo build, in analogy to the "make DEV=1" build
  - Add USE_LIBJPEG_TURBO to the CMake build
  - Improve finding the Python 3 executable
  Changes to the CI setup:
  - Install protobuf and zmq where missing
  - Install an up-to-date CMake on CentOS 7
  - Don't use RelWithDebInfo on Android builds, as gcc 4.9 throws -Wdelete-non-virtual-dtor
  Code changes:
  - Disable warnings introduced by GCC 7 via #pragma GCC diagnostic
Commits on Feb 29, 2020
- f55fd06
- [Large Tensor] Fix cumsum op (apache#17677) [2527553]
  * Implemented fix and nightly test for cumsum
  * Changed IType to index_t
  * Also changed in backward
  * Reverting to IType
  * Added type assertion on first element to force evaluation of output NDArray
  * Reverted to IType in relevant places
  * Last reversion
  * Changed type assertion to value check
- [Large Tensor] Implemented LT flag for OpPerf testing (apache#17449) [95c5189]
  * Passing large_tensor parameter down
  * Adding large tensor testing functionality for convolutional operators
  * Added large tensor test functionality for conv ops
  * Fixing sizing for conv ops
  * Added gemm large tensor, print on conv
  * Updated input for gemm ops and print statements
  * Fixed deconv large tensor test
  * Added bias for deconv
  * Added test functionality for nn_activation and nn_basic ops
  * Fixed deconv bias, implemented large tensor test logic for general ops, added default data for large tensor test
  * Dropped unnecessary print statements
  * Fixed lint errors
  * Added large_tensor parameter to existing function descriptions, added descriptions for functions missing descriptions
  * Adding docs, changed large_tensor to int64_tensor for clarity
  * Added warmup/runs to gemm ops, debugging process failure
  * Resolved merge conflicts, added default params and input switching functionality
  * Dynamic input handling for default inputs, additional custom data for int64
  * Fixed RPD issue
  * Everything through reduction ops working
  * Random sampling & loss ops working
  * Added indices, depth, ravel_data in default_params
  * Added indexing ops - waiting for merge on ravel
  * Added optimizer ops
  * All misc ops working
  * All NN Basic ops working
  * Fixed LT input for ROIPooling
  * Refactored NN Conv tests
  * Added test for inline optimizer ops
  * Dropping extra tests to decrease execution time
  * Switching to inline tests for RNN to support additional modes
  * Added state_cell as NDArray param, removed linalg testing for int64 tensor
  * Cleaned up styling
  * Fixed conv and deconv tests
  * Retrigger CI for continuous build
  * Cleaned up GEMM op inputs
  * Dropped unused param from default_params
- [MKLDNN] apply MKLDNNRun to quantized_act/transpose (apache#17689) [88b3051]
  * apply MKLDNNRun to quantized_act/transpose ops
  * run CI
- [MXNET-16167] Refactor Optimizer (apache#17400) [f70c7b7]
  * refactor optimizer
  * fix svrg test
  * fix rmsprop param naming
  * fix signum test
  * fix pylint and perl test
  * fix perl test and signsgd test
  * fix
  * retrigger ci
  * reduce ci overheads
- Fix get_started pip instructions (apache#17725) [10a12d5]
  The class attribute in the respective .md files should match one of the versions listed in the version dropdown specified in get_started.html.
Commits on Mar 2, 2020
-
Support projection feature for LSTM on CPU (Only Inference) (apache#1…
…7702) * Support projection feature for LSTM on CPU * test solution for -Werror=maybe-uninitialized * Check device type when create state * Document the projection feature of LSTM for RNN operator * Minor fix * Re-run CI
Configuration menu - View commit details
-
Copy full SHA for ac77974 - Browse repository at this point
Copy the full SHA ac77974View commit details -
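For context on the projection feature: in a projected LSTM the hidden state is mapped through an extra weight matrix before being fed back into the recurrence, shrinking the recurrent dimension. A common formulation (the symbol names below are illustrative, not taken from the commit):

```latex
h_t = o_t \odot \tanh(c_t), \qquad r_t = W_{hr}\, h_t
```

where the projected state $r_t$, rather than $h_t$, enters the gate computations at step $t+1$.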
- 2931013