batchnorm specify channel axis and performance optimizations for batchnorm #6411
Conversation
src/operator/batch_norm-inl.h
Outdated
/*! \brief Parameters for BatchNorm operator */
struct BatchNormParam : public dmlc::Parameter<BatchNormParam> {
  float eps;
  float momentum;
  bool fix_gamma;
  bool use_global_stats;
  bool output_mean_var;
  int channel_axis;
channel_axis -> axis
I don't understand this comment
rename the argument to axis
done
src/operator/batch_norm-inl.h
Outdated
@@ -32,13 +32,16 @@ enum BatchNormOpOutputs {kOut, kMean, kVar};  // req, out_data
enum BatchNormOpAuxiliary {kMovingMean, kMovingVar};  // aux_states
}  // namespace batchnorm

constexpr int DEFAULT_CHANNEL_AXIS = 1;
This should go into the batchnorm namespace.
ok
src/operator/batch_norm-inl.h
Outdated
@@ -207,21 +212,26 @@ class BatchNormProp : public OperatorProperty {
CHECK_EQ(in_shape->size(), 3U) << "Input:[data, gamma, beta]";
const TShape &dshape = in_shape->at(0);

CHECK_GE(param_.channel_axis, -1) << "Invalid channel axis: " << param_.channel_axis;
use proper parsing: axis = axis + ndim()
I don't understand this comment
axis = -1 means the first dim from the right, axis = -2 means the second dim from the right. You are only special-casing -1 here. We need to support all of them.
oh, I see. I wasn't aware of this requirement. I was under the impression it was like some other system I was looking at (Torch, maybe?), where there's only a "first" flag and a "last" flag: 0 would be first, -1 would be last. Ok, I will change.
done
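The reviewer's suggestion can be sketched as a small helper that shifts a negative axis by the tensor rank, so every negative index is handled rather than only -1. This is a minimal sketch; `NormalizeAxis` is a hypothetical name for illustration, not an actual mxnet function.

```cpp
#include <stdexcept>

// Hypothetical helper (illustrative name, not mxnet's actual code):
// normalize a possibly-negative axis against the tensor rank, so that
// -1 means the last dimension, -2 the second from the right, and so on,
// instead of special-casing -1 only.
inline int NormalizeAxis(int axis, const int ndim) {
  if (axis < 0) {
    axis += ndim;  // e.g. axis = -1 with ndim = 4 becomes axis = 3
  }
  if (axis < 0 || axis >= ndim) {
    throw std::out_of_range("axis out of range for the given ndim");
  }
  return axis;
}
```

With this in place, a shape check like the `CHECK_GE` above can validate the normalized value once instead of branching on -1.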
src/operator/batch_norm-inl.h
Outdated
#endif

template<typename DType>
class BNTensor3 {
Why define this instead of using mshadow::Tensor<xpu, 3>?
You told me not to use mshadow::Tensor, so I've never inspected it
Did I? What was the reason? I don't remember
I also don't see how mshadow::Tensor<xpu, 3> knows about channel axis, and along with my member functions, mshadow::Tensor doesn't offer any added value that I can see
I think I was referring to not using mshadow's template expression-evaluation features, like Tensor x; x = 1;
Only using the data structure is fine.
But you can leave it like this for now.
ok
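For context, the role a channel-aware 3-D view plays can be illustrated with a shape-collapsing sketch: any N-D input folds to (outer, channels, inner) around the channel axis, so the batch-norm loops only ever iterate over three logical dimensions regardless of layout. The struct and function names below are hypothetical, not the actual BNTensor3 API.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch (not the actual BNTensor3): collapse an N-D shape
// to (outer, channels, inner) around a given channel axis. For NCHW with
// axis 1 this yields (N, C, H*W); for NHWC with axis 3, (N*H*W, C, 1).
struct Shape3 {
  std::size_t outer, channels, inner;
};

inline Shape3 CollapseAroundAxis(const std::vector<std::size_t> &shape,
                                 std::size_t axis) {
  Shape3 s{1, shape[axis], 1};
  for (std::size_t i = 0; i < axis; ++i) s.outer *= shape[i];
  for (std::size_t i = axis + 1; i < shape.size(); ++i) s.inner *= shape[i];
  return s;
}
```

This is also why a plain mshadow::Tensor<xpu, 3> could hold the same data: the added value of a dedicated class is the axis-aware member functions, not the storage.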
src/operator/batch_norm-inl.h
Outdated
#if defined(__CUDACC__)
#define __bn_hostonly__ __host__
#define __bn_hostdevinl__ __host__ __device__ __forceinline__
These macros already exist in mshadow; for example, this one is MSHADOW_XINLINE.
ok
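The macro pattern under discussion can be sketched as follows. BN_XINLINE is an illustrative name, and the exact mshadow definition of MSHADOW_XINLINE differs in detail; the point is that one macro expands to host+device qualifiers under nvcc and to a plain inline otherwise.

```cpp
// Sketch of the pattern MSHADOW_XINLINE follows (the exact mshadow
// definition differs in detail): a single macro that expands to
// host+device qualifiers when compiled by nvcc, and to plain inline
// otherwise, so an ad-hoc __bn_hostdevinl__ macro is unnecessary.
#if defined(__CUDACC__)
#define BN_XINLINE __host__ __device__ __forceinline__
#else
#define BN_XINLINE inline
#endif

// A tiny function usable from both host and device code under this scheme.
BN_XINLINE float bn_normalize(float x, float mean, float invstd) {
  return (x - mean) * invstd;
}
```

Compiled by a host compiler the macro is just `inline`, so the same header works in CPU-only builds.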
src/operator/batch_norm-inl.h
Outdated
@@ -39,6 +43,7 @@ struct BatchNormParam : public dmlc::Parameter<BatchNormParam> {
bool fix_gamma;
bool use_global_stats;
bool output_mean_var;
int channel_axis;
rename to axis
ok
* Fix Download Button * Small fix
Most of the fixes should be self-evident. For the tutorial on pre-trained models, one of the images doesn't exist anymore, so I selected a new one. Long-term, we should put such images on the web-data repo, but alas, some other day. For the handwritten-digit tutorial, we are missing a couple of imports in the recently created test_utils.py. Note that for the pre-trained model tutorial, we get a softmax_label warning and the probability scores are not really probabilities; will deal with that issue in another PR. Testing: I've tried to test all the notebooks with this change and things look fine.
* Formatting fixes * lint fixed * fix
* doc bash for pack, unpack, pack_img and unpack_img * Add comments for labels could be 1d list * Update recordio.py * Update recordio.py * Update recordio.py fixing text * Update recordio.py fixing text * remove empty line
* CSVIter example correction * fix
…#6113) * Update documentation for MXNetDataIter in io.py (apache#6000) * [DOC] Respond to feedback (apache#6113)
1. In the notes section for ndarray, references did not seem clear enough to be referring to mxnet.ndarray or numpy.ndarray. Added the package names as prefixes to make it more obvious. 2. "share the same C++ operator source codes" => "share the same code", since we don't really need to throw in more details than required. 3. Other relatively minor language changes which will be obvious from the diff. Note that I'm not sure about the need for 1, since it makes things more verbose. Let me know if it's unnecessary and I'll remove it.
* Update documentation for mxnet.ndarray.GridGenerator. Thanks @Lyken17 for apache#6147 * Fix lint error. * Minor fix. * Remove the example.
* Update documentation for deconvolution operation. * Add examples. * Remove the example.
* residual unroll * unroll for residual cell * merge_outputs fix
* Fixing a link in the linear regression tutorial. The link was initally going to mxnet-test.readthedocs.io. Changed it to mxnet.io/api. * More appropriate language.
* bump up version number for release * update version for scala/R/backend
changed PR name
@nswamy wants to merge it immediately, so i'm going to do it now. I also changed the PR title.
Updated Line 107 of 'im2rec.py'. Read an image as binary.
* Change Interface of NDArray & TBlob for DLPack Compatible Fix for cudnn operator Fix cpp tests * Update nnvm * Fix for MKL mem * Fix for windows macro * Bump up version number to 0.10.1 * Update NDArray Save&Load * trigger update * Add test for legacy data load * Use LegacyTShapeLoad * trigger update * Update tensor_blob.h
Unrelated problem: libdc1394 error: Failed to initialize libdc1394 (opencv problem with camera on build machine?)
…hnorm (apache#6411)

* Add channel_axis to batch norm, performance improvements
* rearrange tests a bit
* rearrange tests a bit
* CR changes
* cpp package link issue
* Fix: MSVC wants all parallel omp to be int
* CR comments, expand legal negative axes
* lint
* lint
* Fix download link (apache#6431)
* Add release note (apache#6434)
* Fixing tutorials. (apache#6436)
* Formatting fixes (apache#6433)
* doc bash 2-5, for pack, unpack, pack_img and unpack_img (apache#6140)
* Improve style (apache#6445)
* Correction (apache#6444)
* Update documentation for MXNetDataIter in io.py (apache#6000) (apache#6113)
* Fix minor issues with api pages. (apache#6410)
* fixing the early stop for maximize = T (apache#5915); close apache#4587
* Update documentation for mxnet.ndarray.GridGenerator. (apache#6430)
* Update documentation for deconvolution operation. (apache#6184)
* skip lines that have %matplotlib (apache#6451)
* Fixing some more broken links before v0.10 release (apache#6449)
* close apache#4838 (apache#6452)
* Fix linear regression (apache#6432)
* Pre-trained model tutorial fixes. (apache#6453): before the change, running the tutorial for the first time gave "UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])". It also showed probability of >>1 due to incorrect usage of np.argsort().
* Nightly test tutorial (apache#6447): add tutorial test, fix pylint
* fix member variable name: make them end with underline (apache#6438)
* [R] captcha example (apache#6443)
* skip lines that have %matplotlib (apache#6459)
* Fix cudnn_deconv not guarding no_bias (apache#6456)
* Fixing up issues in install guide (apache#6463)
* Fixing copy code functionality for bash command (apache#6465)
* Residual unroll (apache#6397)
* Linear regression Tutorial link (apache#6468)
* bump up version number for release (apache#6462)
* [R][DOC] update R installation guide (apache#6457)
* Use sphinx==1.3.5 in Dockerfile.doc (apache#6470)
* Add 0.10 release info to README.md and NEWS.md (apache#6471)
* fix batchNorm cpp example (apache#6454)
* Update im2rec.py (apache#6473)
* Change Interface of NDArray & TBlob for DLPack Compatible (apache#6345)
* change 'channel_axis' parameter to 'axis'
* Change DEFAULT_CHANNEL_AXIS to DEFAULT_AXIS
* wait for dlpack PR to go through
* Trigger build
@piiswrong
@madjam
Batch norm now takes a channel_axis parameter.
Optimizations: CPU batchnorm performance (without SIMD) approaches MKL performance.
BatchNormV1Prop 2D: Timing [Forward] 1972.71 ms, avg: 3.94541 ms X 500 passes
BatchNormV1Prop 2D: Timing [Backward] 13659.1 ms, avg: 27.3182 ms X 500 passes
MKL BatchNormProp 2D: Timing [Forward] 99.892 ms, avg: 0.199784 ms X 500 passes
MKL BatchNormProp 2D: Timing [Backward] 121.685 ms, avg: 0.24337 ms X 500 passes
BatchNormProp 2D: Timing [Forward] 174.854 ms, avg: 0.319708 ms X 500 passes
BatchNormProp 2D: Timing [Backward] 166.815 ms, avg: 0.33363 ms X 500 passes
BatchNormV1Prop 2D: Timing [Forward] 5.058 ms, avg: 0.010116 ms X 500 passes
BatchNormV1Prop 2D: Timing [Backward] 14.812 ms, avg: 0.029624 ms X 500 passes
BatchNormProp 2D: Timing [Forward] 1.711 ms, avg: 0.003422 ms X 500 passes
BatchNormProp 2D: Timing [Backward] 1.752 ms, avg: 0.003504 ms X 500 passes