This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
Conversation
@stu1130 Can you look into the CI failures on this one?
@mxnet-label-bot Add [Numpy, pr-awaiting-review, Operator]
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Jul 30, 2019:
* numpy amax
* weird cu file diff
* fix the unit test error
* fix gpu bug
* minor fix
* fix lint
* remove scalar value check
* fix the bug on unit test
* fix the case () that breaks the kernel launch
* add zero dimension unit test
* revert the tuple change
* use mshadow maximum
* remove test zero
* change the macro for now
* change the cuda to use mashadow op
* fix the broadcast_reduce_op_value.cu wrong kernel
* add more logic in shape to detect the invalid situation
* change back to type swtich
* change to as_nd_ndarray
* add missing @npx.use_np_shape
* retrigger CI
* address the comment
* undo algorithm import
* remove the numeric gradient check
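Several entries above concern scalar and zero-size inputs (the shape-`()` case that broke the kernel launch, and the zero-dimension unit test). As a reference for the semantics such an operator is expected to reproduce, here is a minimal sketch of those edge cases in plain NumPy; it exercises stock `numpy.max`, not the new MXNet operator itself:

```python
import numpy as np

# Ordinary reductions over all elements and along an axis.
a = np.arange(6).reshape(2, 3)
print(np.max(a))             # 5
print(np.max(a, axis=1))     # [2 5]

# A 0-d array, i.e. shape (): the max of a scalar array is the scalar.
s = np.array(3.0)
print(np.max(s))             # 3.0

# Zero-size arrays: reducing along the non-empty axis yields an empty
# result, while reducing over the zero-length axis has no identity and
# raises a ValueError.
z = np.zeros((0, 4))
print(np.max(z, axis=1).shape)   # (0,)
try:
    np.max(z, axis=0)
except ValueError as err:
    print("zero-length reduction rejected:", err)
```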
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Jul 31, 2019.
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Jul 31, 2019.
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Jul 31, 2019.
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Jul 31, 2019.
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Jul 31, 2019.
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 1, 2019.
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 1, 2019.
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 1, 2019.
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 1, 2019.
reminisce pushed a commit to reminisce/mxnet that referenced this pull request on Aug 1, 2019.
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 2, 2019.
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 2, 2019.
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 2, 2019.
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 2, 2019.
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 4, 2019.
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 4, 2019.
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 4, 2019.
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 4, 2019.
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 5, 2019.
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 5, 2019.
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 6, 2019.
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 6, 2019.
haojin2 pushed a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 7, 2019.
haojin2 added a commit to haojin2/incubator-mxnet that referenced this pull request on Aug 7, 2019.
haojin2 added a commit that referenced this pull request on Aug 8, 2019:
* [Do not review] [Do not merge] New numpy-compatible sum (#14739) * Add numpy namespace and initial impl of np.sum (not complete) * Clean up * Fix import error * numpy sum * add test and backward data type support * add license to test_numpy_op.py * improve test to reduce flakiness * fix sanity build * extra numeric test and imperative test * add error message for initial argument * [numpy] Infra for supporting numpy ops in imperative mode and Gluon APIs (#14758) * Infra of new ndarray and symbol types for numpy operators * Rename * Fix import problem * Refactor * Remove redundant code * Add docstring * More on numpy ndarray and symbol * Override unimplemented methdos for ndarray and _NumpySymbol * Fix built-in methods of ndarray and _NumpySymbol * Fix test and sanity check * Fix pylint * Address cr comments * Add unit tests for ndarray and _NumpySymbol * Add _true_divide * Fix gpu build * Add future import division * More correct way of checking if an output is from a np compat op * Fix gpu build * Fix output ndarray/symbol types with at least one new ndarray/symbol * Modify true_divide doc * Fix flaky copying zero-size arrays via gpus * Fix zero size in gluon hybridize and zeros/ones symbol not creating new symbol type * Fix doc * Enable np op compat check with name prefix (#14897) * [numpy] Numpy dot (#14831) * Numpy Dot case 1-4 + case 3.5 forward and 0.5 backward * Backward computation and test coverage * numpy-compatible mean (#14859) * [numpy] Some np ops for d2l (#14924) * Add np transpose More ops and namespaces for submodules Add relu and sigmoid Add reshape Fix symbolic name mismatch Add maximum and minimum * Add convenience fluent method * Add ndarray.item() * Fix CI * Fix lint * Fix lint * Fix reshape gpu * Add example * Remove python notebook outputs * Remove notebook output * Add one more example * [numpy] Refactor np modules (#14989) * Refactor * Initial refactoring * Fix notebook * Move numpy op check from backend to frontend * Add homogeneous ndarray check * Fix grouping inhomogeneous types of symbols * Improve error handling of different types of symbols as outputs * Fix test * Fix numpy test * Fix ci * Try to fix gpu ci failure * [numpy] Refactor np module (example runs through) (#15055) * Refactor notebook * notebook working with hybrid block * More refactoring * Remove unnecessary use_np_compat * Use class decorator to initialize numpy ndarrays in parameter.py * Clear notebook outputs * Improve np decorator * Remove npe op from optimizer * Fix CI * Fix functools.wraps issue in Python2 * Fix ci * Change np_compat to np_shape * Temporarily disable test_amp * Numpy-compatible stack (#15027) * numpy stack * migrate to use_np_shape * Numpy Unary Ops (#15010) * Unary Ops * new version of unit tests * [numpy] Fix np branch after rebase (#15086) * Add np_array semantics for Gluon Fix notebook Fix sanity Fix gluon deferred infer shape Add np.random.uniform Add random normal Add boolean comparison ops Add np.ndarray indexing Reformat test ndarray indexing Fix unit tests Add one more test of indexing Fix sanity Enable amp test Add np.arange Revert cython unit test to ctypes Delete unnecessary use_np_shape decorator from test Rebase with numpy branch support range as index Fix python2 range type check Add argmax Disable clojure test * Fix ci * Add np.linalg.norm for ord='fro' * Fix pylint * numpy concatenate (#15104) * [WIP][numpy] Fix for D2L Chapters 2/3/4 (#15139) * Fix * Fix linear regression gluon * More fix * Fix pylint * Fix for chapter 4 * Add np.add mul div mod pow sub 
and shuffle * Fix model selection, underfitting, overfitting * Fix weight decay * Fix dropout * Fix * Fix chapter 4 * [numpy] Fix d2l performance regression (#15173) * Add np array adapter decorator for layers * Fix performance regression caused by too many conversions between nd.NDArray and np.ndarray * Fix pylint * Fix test backward compatibility issue * Fix test_lambda * Fix (#15188) * fix for chapter6 conv nn (#15224) * [numpy] Fix d2l chapter8 (#15237) * Add np op doc * Fix several issues * Add a N-D dot b 2D support * Simplify array creation api * Add swapaxes * Fix rnn gluon * More fix * Fix pylint * Delete * Fix mp windows * fix for ch11 (#15244) * Numpy-compatible split (#15049) * numpy split * numpy split * unit test * unit test * [numpy] [DO NOT MERGE] Fix d2l chapters 9 and 13 (#15246) * Add npx batch_dot and topk * Text embedding uses numpy * Fix SoftmaxCrossEntropyLoss with np * Fix sentiment cnn * Fix pylint * Fix dot attention * Fix seq2seq attention * Add np.tile * Fix transformer * Fix ci * Fix ci and rebase * [numpy] Fix d2l chapter 5 (#15264) * Fix parameter initializer * Add np.save and np.load * Fix read-write * Fix lint * Numpy compatible max (#15161) * numpy amax * weird cu file diff * fix the unit test error * fix gpu bug * minor fix * fix lint * remove scalar value check * fix the bug on unit test * fix the case () that breaks the kernel launch * add zero dimension unit test * revert the tuple change * use mshadow maximum * remove test zero * change the macro for now * change the cuda to use mashadow op * fix the broadcast_reduce_op_value.cu wrong kernel * add more logic in shape to detect the invalid situation * change back to type swtich * change to as_nd_ndarray * add missing @npx.use_np_shape * retrigger CI * address the comment * undo algorithm import * remove the numeric gradient check * Numpy compatible multinomial (#15219) * draft of multinomial * rename to more concise name * finish shape * complete the forward function * complete forward without handle 0 dimension & scalar * handle 0 dimension * add new line * fix lint * fix the build error * fix lint * finish unit test * change the registration * make multinomial support pvals as mx.ndarray * delete newline * fix lint error * support input as list, mx.ndarray, np.ndarray & unit test * fix lint * fix the include error * fix lint * refactor & pass the tensor instead of tuple to kernel * fix lint * updata the doc * address the comment * Numpy compatible linspace (#15256) * draft * finish linspace implementation * finish linspace * delete newline * fix pylint * add more unit test * address comment * add more test case * disable too-many-arguments * resolve confliction * add ctx * numpy-compatible cumsum (#15309) * [numpy] Misc fix for other chapters (#15332) * Add np.prod * Fix ndarray.reshape accepting positional integers as arguments * Rebase * Fix rebase error * Add np.ndarray.flatten * Fix * Add broadcast_to * Add meshgrid and broadcast_arrays * Fix sin, cos, sinh, cosh not supporting scalars * Add more unary ops supporting python scalars * Fix * Fix * Fix ci * Fix sanity * [numpy] Change d2l chapters cv and gan to use numpy (#15368) * Change op name style to lower case underscore * Add ops under image to npx * Add image submodule to npx * Fix split_and_load use np * Fix fine tuning * Fix bbox and anchor * Fix odd * Fix ssd and rcnn * Remove restriction on binary element-wise scalar * Fix gan * Fix sanity * Try to fix website build failure * Add npx.random.seed * Fix doc * add doc for multinomial, dot, 
cumsum, clip, abs, exp, arctan (#15386) * [numpy] Fix several places in numpy (#15398) * Fix * More fix * [numpy] fix cython (#15418) * add cython support for numpy * stay with original API for backward compatibility * fix after rebase * get rid of coverage in clang60 mkldnn * fix lint issues * fix flaky test and get rid of extra print * remove numpy examples * revert #15309 #15256 #15219 #15161 * remove numpy docs * remove changes to contrib/text/embedding.py * remove numpy changes to gluon peripherals * Revert "remove numpy docs" This reverts commit c104695. * get rid of most operators * Revert "get rid of coverage in clang60 mkldnn" This reverts commit 77dc905. * remove np-compatible from mxnet.image mxnet.initializer * address comments
anirudhacharya pushed a commit to anirudhacharya/mxnet that referenced this pull request on Aug 20, 2019.
access2rohit pushed a commit to access2rohit/incubator-mxnet that referenced this pull request on Sep 25, 2019.
access2rohit pushed a commit to access2rohit/incubator-mxnet that referenced this pull request on Sep 25, 2019.
access2rohit pushed a commit to access2rohit/incubator-mxnet that referenced this pull request on Sep 25, 2019.
Description
Adds a numpy-compatible `amax` (max reduction) operator.
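For context, here is a minimal usage sketch of the operator through MXNet's numpy-compatible front end. The namespace and signature shown (`mxnet.numpy.max` with `axis`/`keepdims`, enabled via `npx.set_np()`) are assumptions based on the numpy-branch work referenced in this conversation and may differ from the final API:

```python
from mxnet import np, npx

npx.set_np()  # switch to numpy-compatible shape/array semantics (assumed entry point)

x = np.array([[1.0, 5.0, 2.0],
              [4.0, 3.0, 6.0]])

# Max reduction over all elements, along an axis, and with keepdims,
# intended to mirror numpy.max / numpy.amax.
print(np.max(x))                         # 6.0
print(np.max(x, axis=0))                 # [4. 5. 6.]
print(np.max(x, axis=1, keepdims=True))  # [[5.] [6.]]
```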
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
Changes
Comments
@haojin2 @reminisce