
[1.7] MXNet Extension PRs (#17623, #17569, #17762) #18063

Merged 3 commits on Apr 16, 2020

Commits on Apr 15, 2020

  1. Dynamic subgraph compile support (apache#17623)

    This PR adds support for passing the NDArrays from the existing optimize_for API down to the reviewSubgraph function in an external library. It also adds a new HybridBlock API, optimize_for, that can partition the model without running a forward pass (see the usage sketch after the Docs list below).
    
    Feature changes
    
        Adds a new HybridBlock API, optimize_for, that partitions the model without calling the cachedOp
        Modifies the subgraph library example to optionally require that args are provided
        Annotates subgraph inputs with the name of the original param so that inputs can be mapped back, and passes these annotations to the input nodes of subgraphs
        Adds support for tensors in MKLDNN format by calling Reorder2Default
    
    New tests
    
        Adds a new test to partition operators that directly consume params
        Adds a new test model in which the ops to be partitioned take args/params
    
    Bug Fixes
    
        Fixes a bug where the ids vector was passed by value instead of by reference
        Fixes a bug where copies of attributes were passed instead of references
        Fixes a bug where _cached_graph was not updated after partitioning
        Fixes a memory leak where user-specified attributes on subgraph ops were not freed if the subgraph was rejected
        Fixes incorrect indexing into the shape/dtype maps when annotating the graph
    
    Docs
    
        Updates the README doc with the latest changes described above
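
    As a usage illustration, here is a minimal sketch of the new API. The library filename and the 'myProp' backend name follow the subgraph library example and are assumptions here, not fixed names:

    import mxnet as mx
    from mxnet.gluon import nn

    # Load a compiled partitioner library (hypothetical filename).
    mx.library.load('libsubgraph_lib.so')

    net = nn.HybridSequential()
    net.add(nn.Dense(2))
    net.initialize()

    x = mx.nd.ones((1, 3))
    # Partition for the external backend without running a forward pass;
    # args/params are passed down to the library's reviewSubgraph.
    net.optimize_for(x, backend='myProp')

    out = net(x)  # the first forward now runs the partitioned graph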
    samskalicky committed Apr 15, 2020 (commit 6b950bc)
  2. Adding sparse support to MXTensor for custom operators (apache#17569)

    * Added an enum for sparse storage types
    * Added structures for dense and sparse data
    * Redesigned the data structure for MXSparse
    * Pulled the aux data out of the sparse NDArray
    * Added more sparse arguments to the API interface
    * Passed sparse data from c_api to lib_api.h and set it in MXTensor
    * Fixed indentation
    * Fixed a segfault
    * Fixed NDArray-to-MXTensor conversion errors
    * Added a sample sparse (CSR) transpose
    * Made CSR transpose work temporarily by hardcoding
    * Fixed and refined the sparse output size
    * Added tests for symbolic and stateful ops
    * Added a sample row-sparse transpose
    * Added a real row-sparse transpose
    * Fixed the output size issue by adding a lambda for CheckAndAlloc()
    * Fixed a mixed-storage-formats error
    * Added an infer storage type function
    * Set inferSType as an optional function
    * Added error messages
    * Resolved review comments
    * Verified transpose op results
    * Fixed sanity check
    * Updated MX_LIBRARY_VERSION to 5
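
    As a usage sketch, a custom sparse operator can then be called on a CSR input. The library filename and the my_transposecsr operator name follow the PR's sample code and are assumptions here:

    import mxnet as mx

    # Load the compiled custom-op library (hypothetical filename).
    mx.library.load('libcustomop_lib.so')

    # Build a small CSR input from (data, indices, indptr).
    a = mx.nd.sparse.csr_matrix(([1.0, 2.0], [0, 2], [0, 1, 2]), shape=(2, 3))

    # Custom ops are exposed under mx.nd once the library is loaded;
    # 'my_transposecsr' is the sample CSR transpose used for illustration.
    b = mx.nd.my_transposecsr(a)
    print(b.asnumpy())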
    guanxinq authored and samskalicky committed Apr 15, 2020 (commit 9af935f)
  3. Custom Operator Random Number Generator Support (apache#17762)

    Adds random number generator support for custom operator libraries.
    
    Design: MXNet passes the initialized and seeded RNG states, located on CPU and GPU, to the custom library, so that users can generate deterministic values from a given seed passed to MXNet. The basic workflow:
    
    import mxnet as mx
    
    data = mx.nd.ones((2, 2))  # example input
    
    mx.random.seed(128)
    r1 = mx.nd.some_custom_random_op(data)  # placeholder custom op name
    mx.random.seed(128)
    r2 = mx.nd.some_custom_random_op(data)
    assert (r1.asnumpy() == r2.asnumpy()).all()
    
    Note that this PR does not make the custom library generate exactly the same sequence of random numbers as MXNet itself.
    
    This is a continuation of the custom operator project apache#15921 and apache#17270.
    rondogency authored and samskalicky committed Apr 15, 2020 (commit a39936f)