This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

bump up 1.x branch to 1.7 #17740

Closed
wants to merge 16 commits into from

Conversation

@szha (Member) commented Mar 2, 2020

Description

bump up 1.x branch to 1.7

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • Changes are complete (i.e. I finished coding on this PR)
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • bump up 1.x branch to 1.7

Comments

Yiyan66 and others added 16 commits February 26, 2020 13:44
Co-authored-by: Leonard Lausen <leonard@lausen.nl>
The following Makefile based builds are preserved
1) staticbuild scripts
2) Docs builds. Language binding specific build logic requires further changes
3) Jetson build. Jetpack 3.3 toolchain based on Cuda 9.0 causes 'Internal
   Compiler Error (codegen): "there was an error in verifying the lgenfe
   output!"' errors with cmake. This seems to be a known issue in Cuda 9.0 and
   we need to update Jetpack toolchain to work around it.
4) MKL builds. Waiting for fix of apache#17641

All Makefile based builds are marked with a "Makefile" postfix in the title.

Improvements to CMake build
- Enable -Werror for RelWithDebInfo build in analogy to "make DEV=1" build
- Add USE_LIBJPEG_TURBO to CMake build
- Improve finding Python 3 executable

Changes to CI setup
- Install protobuf and zmq where missing
- Install up-to-date CMake on CentOS 7
- Don't use RelWithDebInfo on Android builds, as gcc 4.9 throws
  -Wdelete-non-virtual-dtor

Code changes
- Disable warnings introduced by GCC 7 via #pragma GCC diagnostic
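One of the CMake improvements above is better discovery of the Python 3 executable. A rough stdlib-Python analogue of such a lookup (the candidate names and fallback order are illustrative assumptions, not MXNet's or CMake's actual logic):

```python
import shutil
import sys

def find_python3():
    """Return a path to a Python 3 interpreter, trying common names in order.

    The candidate list and the fallback to sys.executable are assumptions
    for illustration; CMake's own lookup (find_package(Python3)) differs.
    """
    for name in ("python3", "python"):
        path = shutil.which(name)
        if path:
            return path
    # Fall back to the interpreter running this script.
    return sys.executable

print(find_python3())
```

In CMake itself this corresponds to the FindPython3 module rather than a hand-rolled search.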
* Implemented fix and nightly test for cumsum

* Changed IType to index_t

* Also changed in backward

* Reverting to IType

* Added type assertion on first element to force evaluation of output NDArray

* Reverted to IType in relevant places

* Last reversion

* Changed type assertion to value check
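Several of the cumsum commits above revolve around forcing evaluation of the output NDArray (first via a type assertion, finally via a value check), since MXNet executes operators asynchronously. A toy stdlib sketch of the idea, using a hypothetical LazyArray rather than MXNet's NDArray:

```python
class LazyArray:
    """Toy stand-in (not MXNet's NDArray) for an array whose computation
    is deferred until a concrete value is read."""
    def __init__(self, compute):
        self._compute = compute   # zero-arg function producing a list
        self._data = None

    @property
    def evaluated(self):
        return self._data is not None

    def __getitem__(self, i):
        # Reading any element forces the pending computation to run,
        # surfacing errors at a predictable point -- the idea behind
        # checking the first output value in the cumsum test.
        if self._data is None:
            self._data = self._compute()
        return self._data[i]

# Deferred cumulative sum over a small input.
src = [1, 2, 3]
out = LazyArray(lambda: [sum(src[:k + 1]) for k in range(len(src))])
assert not out.evaluated
first = out[0]            # forces evaluation
assert out.evaluated and first == 1
```

A value check (reading `out[0]`) forces the computation just like a type assertion on the first element did, but also validates the result.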
* Passing large_tensor parameter down

* Adding large tensor testing functionality for convolutional operators

* Added large tensor test functionality for conv ops

* Fixing sizing for conv ops

* Added gemm large tensor, print on conv

* Updated input for gemm ops and print statements

* Fixed deconv large tensor test

* Added bias for deconv

* Added test functionality for nn_activation and nn_basic ops

* Fixed deconv bias, implemented large tensor test logic for general ops, added default data for large tensor test

* Dropped unnecessary print statements

* Fixed lint errors

* Added large_tensor parameter to existing function descriptions, added descriptions for functions missing descriptions

* Adding docs, changed large_tensor to int64_tensor for clarity

* Added warmup/runs to gemm ops, debugging process failure

* Resolved merge conflicts, added default params and input switching functionality

* Dynamic input handling for default inputs, additional custom data for int64

* Fixed RPD issue

* Everything through reduction ops working


* Random sampling & loss ops working

* Added indices, depth, ravel_data in default_params

* Added indexing ops - waiting for merge on ravel

* Added optimizer ops

* All misc ops working

* All NN Basic ops working

* Fixed LT input for ROIPooling

* Refactored NN Conv tests

* Added test for inline optimizer ops

* Dropping extra tests to decrease execution time

* Switching to inline tests for RNN to support additional modes

* Added state_cell as NDArray param, removed linalg testing for int64 tensor

* Cleaned up styling

* Fixed conv and deconv tests

* Retrigger CI for continuous build

* Cleaned up GEMM op inputs

* Dropped unused param from default_params
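The long run of commits above threads a large_tensor (later renamed int64_tensor) flag through the operator benchmark utilities so each op can be exercised with inputs whose element count exceeds the int32 range. A minimal sketch of the input-switching idea (the dictionaries and function here are hypothetical, not opperf's real API):

```python
# Hypothetical default benchmark inputs; the real opperf utility keeps
# its own per-operator tables.
DEFAULT_INPUTS = {"data": (32, 3, 256, 256)}
INT64_INPUTS = {"data": (2**32 + 1,)}   # element count exceeds int32 range

def inputs_for(op_name, int64_tensor=False):
    """Pick benchmark input shapes for op_name, switching to large-tensor
    data when int64_tensor is set -- a sketch of the parameter threaded
    through the commits above, not MXNet's actual implementation."""
    table = INT64_INPUTS if int64_tensor else DEFAULT_INPUTS
    return dict(table)

print(inputs_for("conv2d"))
print(inputs_for("conv2d", int64_tensor=True))
```

The point of the switch is that int64-indexed tensors only matter once a single input crosses 2^31 elements, so the default inputs stay small and fast.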
* apply MKLDNNRun to quantized_act/transpose ops

* run CI
* refactor optimizer

* refactor optimizer

* fix svrg test

* fix rmsprop param naming

* fix signum test

* fix pylint and perl test

* fix perl test and signsgd test

* fix

* retrigger ci

* reduce ci overheads
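The signum fixes above concern the Signum optimizer, whose update applies only the sign of the momentum to the weights. A scalar sketch of one step (MXNet's operator is vectorized and also supports weight decay; the parameter names here are illustrative):

```python
def signum_update(weight, grad, mom, lr=0.01, beta=0.9):
    """One Signum step: momentum accumulation followed by a sign-based
    weight update. Scalar sketch for illustration only."""
    mom = beta * mom + (1 - beta) * grad
    step = lr if mom > 0 else (-lr if mom < 0 else 0.0)
    weight -= step
    return weight, mom

w, m = 1.0, 0.0
w, m = signum_update(w, grad=0.5, mom=m)
# momentum is positive, so the weight moves down by exactly lr
```

Because only the sign of the momentum is used, the step size is constant at lr regardless of gradient magnitude.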
The class attribute in the respective .md files should match one of the versions
listed in the version dropdown specified in get_started.html
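The commit above ties each page's class attribute to the versions offered in the get_started.html dropdown. A sketch of the consistency check it implies (the front-matter pattern and the version list are assumptions for illustration, not the docs build's real format):

```python
import re

# Versions as they might appear in the get_started.html dropdown
# (hypothetical list for illustration).
DROPDOWN_VERSIONS = {"1.5.0", "1.6.0", "1.7.0"}

def page_version(md_text):
    """Extract the version from a page's class attribute, assuming a
    front-matter line like 'class: v1.7.0' (the pattern is a guess)."""
    m = re.search(r"class:\s*v([\d.]+)", md_text)
    return m.group(1) if m else None

v = page_version("---\nclass: v1.7.0\n---\n# Install")
assert v in DROPDOWN_VERSIONS
```

A page whose class names a version missing from the dropdown would render with no matching entry selected, which is the mismatch the commit guards against.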
…7702)

* Support projection feature for LSTM on CPU

* test solution for -Werror=maybe-uninitialized

* Check device type when create state

* Document the projection feature of LSTM for RNN operator

* Minor fix

* Re-run CI
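The LSTM projection (LSTMP) feature documented above inserts a learned linear map after each cell step, so the recurrent state carried forward has projection_size rather than num_hidden dimensions. A minimal pure-Python sketch of that shape logic (the weights are illustrative, not trained values):

```python
def project(hidden, proj_weight):
    """Apply the LSTMP projection: multiply the num_hidden-sized hidden
    state by a (projection_size x num_hidden) matrix, shrinking the state
    carried to the next time step. Pure-Python sketch of the shape logic."""
    return [sum(w * h for w, h in zip(row, hidden)) for row in proj_weight]

num_hidden, projection_size = 4, 2
hidden = [1.0, 2.0, 3.0, 4.0]
proj_weight = [[0.5, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, 0.25]]
out = project(hidden, proj_weight)
assert len(out) == projection_size   # state shrinks from 4 to 2
```

Shrinking the recurrent state this way cuts the size of the recurrent weight matrices, which is the usual motivation for LSTMP.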
@szha szha changed the base branch from master to v1.x March 2, 2020 21:21
@szha szha closed this Mar 2, 2020