This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

bump up 1.x branch to 1.7 #17740

Closed
wants to merge 16 commits into from

Commits on Feb 26, 2020

  1. flatnonzero (apache#17690)

    Yiyan66 authored Feb 26, 2020
    Commit 0aba131
  2. Commit fefb370

Commits on Feb 27, 2020

  1. cmake: remove -mf16c flag for android build (apache#17523)

    Co-authored-by: Leonard Lausen <leonard@lausen.nl>
    michiboo and leezu authored Feb 27, 2020
    Commit 55e6987

Commits on Feb 28, 2020

  1. Commit 0e6ab21
  2. Commit 1af06d9
  3. Commit 4f5cd92
  4. Commit b6002fd
  5. CI: Switch to cmake builds for majority of tests (apache#17645)

    The following Makefile based builds are preserved
    1) staticbuild scripts
    2) Docs builds. Language binding specific build logic requires further changes
    3) Jetson build. Jetpack 3.3 toolchain based on Cuda 9.0 causes 'Internal
       Compiler Error (codegen): "there was an error in verifying the lgenfe
       output!"' errors with cmake. This seems to be a known issue in Cuda 9.0 and
       we need to update Jetpack toolchain to work around it.
    4) MKL builds. Waiting for fix of apache#17641
    
    All Makefile based builds are marked with a "Makefile" postfix in the title.
    
    Improvements to CMake build
    - Enable -Werror for the RelWithDebInfo build, in analogy to the "make DEV=1" build
    - Add USE_LIBJPEG_TURBO to CMake build
    - Improve finding Python 3 executable
    
    Changes to CI setup
    - Install protobuf and zmq where missing
    - Install up-to-date CMake on Centos 7
    - Don't use RelWithDebInfo on Android builds, as gcc 4.9 throws
      -Wdelete-non-virtual-dtor
    
    Code changes
    - Disable warnings introduced by GCC 7 via #pragma GCC diagnostic
    leezu authored Feb 28, 2020
    Commit 319e6c1

Commits on Feb 29, 2020

  1. Commit f55fd06
  2. [Large Tensor] Fix cumsum op (apache#17677)

    * Implemented fix and nightly test for cumsum
    
    * Changed IType to index_t
    
    * Also changed in backward
    
    * Reverting to IType
    
    * Added type assertion on first element to force evaluation of output NDArray
    
    * Reverted to IType in relevant places
    
    * Last reversion
    
    * Changed type assertion to value check
    connorgoggins authored Feb 29, 2020
    Commit 2527553
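    The core of the fix is using a 64-bit index type (index_t) for offsets so elements past 2**31 stay addressable. A minimal numpy sketch of the overflow the narrow index type would cause (the helper name is illustrative, not MXNet code):

    ```python
    import numpy as np

    INT32_MAX = 2**31 - 1

    def last_offset(n_elements, index_dtype):
        """Compute the flat offset of the last element, as cumsum's
        inner loop must; a 32-bit index wraps past 2**31 - 1."""
        return np.array(n_elements - 1, dtype=np.int64).astype(index_dtype)

    n = INT32_MAX + 11                 # a "large tensor" axis length
    ok = last_offset(n, np.int64)      # index_t-style 64-bit index
    bad = last_offset(n, np.int32)     # 32-bit index silently wraps

    assert ok == n - 1
    assert bad != n - 1 and bad < 0    # wrapped to a negative offset
    ```

    The value check mentioned above plays a similar role in the nightly test: reading an element near the end of the output forces evaluation and would surface a wrapped offset.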
  3. [Large Tensor] Implemented LT flag for OpPerf testing (apache#17449)

    * Passing large_tensor parameter down
    
    * Adding large tensor testing functionality for convolutional operators
    
    * Added large tensor test functionality for conv ops
    
    * Fixing sizing for conv ops
    
    * Added gemm large tensor, print on conv
    
    * Updated input for gemm ops and print statements
    
    * Fixed deconv large tensor test
    
    * Added bias for deconv
    
    * Added test functionality for nn_activation and nn_basic ops
    
    * Fixed deconv bias, implemented large tensor test logic for general ops, added default data for large tensor test
    
    * Dropped unnecessary print statements
    
    * Fixed lint errors
    
    * Added large_tensor parameter to existing function descriptions, added descriptions for functions missing descriptions
    
    * Adding docs, changed large_tensor to int64_tensor for clarity
    
    * Added warmup/runs to gemm ops, debugging process failure
    
    * Resolved merge conflicts, added default params and input switching functionality
    
    * Dynamic input handling for default inputs, additional custom data for int64
    
    * Fixed RPD issue
    
    * Everything through reduction ops working
    
    * Random sampling & loss ops working
    
    * Added indices, depth, ravel_data in default_params
    
    * Added indexing ops - waiting for merge on ravel
    
    * Added optimizer ops
    
    * All misc ops working
    
    * All NN Basic ops working
    
    * Fixed LT input for ROIPooling
    
    * Refactored NN Conv tests
    
    * Added test for inline optimizer ops
    
    * Dropping extra tests to decrease execution time
    
    * Switching to inline tests for RNN to support additional modes
    
    * Added state_cell as NDArray param, removed linalg testing for int64 tensor
    
    * Cleaned up styling
    
    * Fixed conv and deconv tests
    
    * Retrigger CI for continuous build
    
    * Cleaned up GEMM op inputs
    
    * Dropped unused param from default_params
    connorgoggins authored Feb 29, 2020
    Commit 95c5189
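    The int64_tensor flag works by switching each operator's default benchmark inputs to large-tensor shapes. A hypothetical sketch of that input-switching pattern (the names DEFAULT_INPUTS, INT64_INPUTS, and get_inputs are illustrative, not the OpPerf API, and the shapes are made up):

    ```python
    # Hypothetical input tables keyed by operator name; the int64 variants
    # push one dimension past 2**32 elements to exercise 64-bit indexing.
    DEFAULT_INPUTS = {
        "add": {"lhs": (1024, 1024), "rhs": (1024, 1024)},
    }
    INT64_INPUTS = {
        "add": {"lhs": (2**32 + 1, 1), "rhs": (2**32 + 1, 1)},
    }

    def get_inputs(op_name, int64_tensor=False):
        """Return benchmark inputs for op_name, preferring the
        large-tensor variants when the int64_tensor flag is set."""
        table = INT64_INPUTS if int64_tensor else DEFAULT_INPUTS
        return table[op_name]

    assert get_inputs("add")["lhs"] == (1024, 1024)
    assert get_inputs("add", int64_tensor=True)["lhs"][0] > 2**32
    ```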
  4. [MKLDNN] apply MKLDNNRun to quantized_act/transpose (apache#17689)

    * apply MKLDNNRun to quantized_act/transpose ops
    
    * run CI
    wuxun-zhang authored Feb 29, 2020
    Commit 88b3051
  5. [MXNET-apache#16167] Refactor Optimizer (apache#17400)

    * refactor optimizer
    
    * refactor optimizer
    
    * fix svrg test
    
    * fix rmsprop param naming
    
    * fix signum test
    
    * fix pylint and perl test
    
    * fix perl test and signsgd test
    
    * fix
    
    * retrigger ci
    
    * reduce ci overheads
    szhengac authored Feb 29, 2020
    Commit f70c7b7
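    A refactor like this typically centralizes shared bookkeeping in the Optimizer base class so each algorithm (sgd, rmsprop, signum, ...) only supplies its state and update rule. A minimal numpy sketch under that assumption (an illustration of the interface shape, not the actual MXNet code):

    ```python
    import numpy as np

    class Optimizer:
        """Base class: owns shared hyperparameters; subclasses supply
        per-parameter state creation and the update rule."""
        def __init__(self, learning_rate=0.01):
            self.learning_rate = learning_rate

        def create_state(self, index, weight):
            return None                   # stateless by default

        def update(self, index, weight, grad, state):
            raise NotImplementedError

    class SGD(Optimizer):
        def update(self, index, weight, grad, state):
            weight -= self.learning_rate * grad

    class Signum(Optimizer):              # sign-of-gradient update
        def update(self, index, weight, grad, state):
            weight -= self.learning_rate * np.sign(grad)

    w = np.ones(4)
    SGD(learning_rate=0.1).update(0, w, np.full(4, 2.0), None)
    assert np.allclose(w, 0.8)
    ```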
  6. Fix get_started pip instructions (apache#17725)

    The class attribute in the respective .md files should match one of the versions
    listed in the version dropdown specified in get_started.html
    leezu authored Feb 29, 2020
    Commit 10a12d5

Commits on Mar 2, 2020

  1. Support projection feature for LSTM on CPU (Only Inference) (apache#17702)
    
    * Support projection feature for LSTM on CPU
    
    * test solution for -Werror=maybe-uninitialized
    
    * Check device type when create state
    
    * Document the projection feature of LSTM for RNN operator
    
    * Minor fix
    
    * Re-run CI
    zixuanweeei authored Mar 2, 2020
    Commit ac77974
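    With projection (LSTMP), the LSTM's full hidden output is multiplied by a projection matrix so the recurrent state carried between steps is smaller than the cell. A numpy sketch of one projected step (a hand-rolled illustration of the math, not MXNet's implementation):

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstmp_step(x, h, c, Wx, Wh, b, Wr):
        """One LSTM step with projection: the projected state h has
        proj_size dims, while the cell c keeps hidden_size dims."""
        H = c.shape[1]
        z = x @ Wx + h @ Wh + b           # (batch, 4*hidden)
        i, f, g, o = (sigmoid(z[:, :H]), sigmoid(z[:, H:2*H]),
                      np.tanh(z[:, 2*H:3*H]), sigmoid(z[:, 3*H:]))
        c_new = f * c + i * g
        h_full = o * np.tanh(c_new)       # (batch, hidden)
        return h_full @ Wr, c_new         # project hidden -> proj

    batch, in_dim, hidden, proj = 2, 8, 16, 4
    rng = np.random.default_rng(0)
    x = rng.standard_normal((batch, in_dim))
    h = np.zeros((batch, proj)); c = np.zeros((batch, hidden))
    h, c = lstmp_step(x, h, c,
                      rng.standard_normal((in_dim, 4 * hidden)),
                      rng.standard_normal((proj, 4 * hidden)),
                      np.zeros(4 * hidden),
                      rng.standard_normal((hidden, proj)))
    assert h.shape == (batch, proj) and c.shape == (batch, hidden)
    ```

    The smaller recurrent matrix Wh (proj × 4·hidden instead of hidden × 4·hidden) is what makes projection attractive for inference-heavy workloads.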
  2. bump up 1.x branch to 1.7

    szha committed Mar 2, 2020
    Commit 2931013