
Faster R-CNN (work from Ross Girshick) #4163

Open
wants to merge 3 commits into base: master

Conversation

Austriker
Contributor

@Austriker commented May 17, 2016

Hi,

I have been working on py-faster-rcnn. I would like to port the layers from the fork of caffe to avoid having two versions of caffe to maintain.

I have created a fork of py-faster-rcnn that adds support for Python 3 and links to an updated caffe. There is still some work to be done.

The PR adds (the loss itself is sketched below):

  • smooth l1 loss layer
  • roi pooling layer
  • expose phase in pycaffe (already implemented)
  • dropout scaling at test time (needed for MSRA-trained ZF network)
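For reference, the layer being ported implements the σ-parameterized smooth L1 loss from Fast/Faster R-CNN, applied elementwise to the (optionally weighted) differences and then summed; this matches the `sigma2_` handling in the CPU port quoted later in this thread:

$$
\mathrm{smooth}_{L_1}(x) =
\begin{cases}
0.5\,(\sigma x)^2 & \text{if } |x| < 1/\sigma^2 \\
|x| - 0.5/\sigma^2 & \text{otherwise.}
\end{cases}
$$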

```cpp
TYPED_TEST_CASE(ROIPoolingLayerTest, TestDtypes);

TYPED_TEST(ROIPoolingLayerTest, TestGradient) {
  typedef typename TypeParam::Dtype Dtype;
```

@happyharrycn happyharrycn May 19, 2016


This line is where the compiler is complaining. TypeParam here is the actual data type, since you specified GPUDeviceTest. Remove this line, and change Dtype to TypeParam on lines 87 and 88.
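A sketch of the distinction, using the fixture conventions from caffe's test_caffe_main.hpp (the `Some...Test` names are illustrative; compare the two smooth L1 test variants quoted later in this thread):

```cpp
// 1) Multi-device fixtures: TYPED_TEST_CASE(..., TestDtypesAndDevices)
//    instantiates the test over device wrappers such as CPUDevice<float>,
//    so the scalar type has to be pulled out of the wrapper.
template <typename TypeParam>
class SomeMultiDeviceTest : public MultiDeviceTest<TypeParam> {
  typedef typename TypeParam::Dtype Dtype;  // TypeParam = CPUDevice<float>, ...
};

// 2) Single-device fixtures: TYPED_TEST_CASE(..., TestDtypes) instantiates
//    the test over the scalar types themselves, so TypeParam already *is*
//    float or double, TypeParam::Dtype does not exist, and the typedef
//    above is exactly the line the compiler rejects.
template <typename Dtype>
class SomeGPUDeviceTest : public GPUDeviceTest<Dtype> {};
```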

@seanbell

seanbell commented May 19, 2016

If you want to merge in code from another project, please handle attribution properly, honoring the license from py-faster-rcnn. The BVLC caffe license allows you to list the various authors using the commit history: https://github.com/BVLC/caffe/blob/master/LICENSE#L11-L16

From what I can tell, the current attribution is not sufficient and doesn't satisfy the original license, which requires that it appear in any copies (https://github.com/rbgirshick/py-faster-rcnn/blob/master/LICENSE#L14-15).

@Austriker
Contributor Author

@happyharrycn Thank you!
@seanbell Sorry about that, I will update the commit message with the proper copyright!

@Austriker changed the title from "[WIP] Faster R-CNN (work from Ross Girshick)" to "Faster R-CNN (work from Ross Girshick)" on May 19, 2016
@Austriker
Contributor Author

Austriker commented May 19, 2016

@seanbell Updated the commit message with the proper LICENSE.
This PR should be good to go!

@seanbell

@rbgirshick any comment on trying to merge your code into caffe?

@yossibiton

yossibiton commented May 22, 2016

@rbgirshick and caffe developers out there:
why exactly does faster-rcnn have a different caffe fork?
Isn't it possible to extend caffe by writing independent layers, without having to build a custom version of caffe? I'm asking about faster-rcnn, but my question is general as well.

@Austriker
Contributor Author

@yossibiton I don't really understand your question. But if you want to add new layers, you have to do it in your own fork of caffe before being able to merge them into the main caffe repo.

@yossibiton

The question is why fork caffe in order to write one layer, instead of opening a light project extending caffe which links to the caffe master branch (via git submodule).


@seanbell

seanbell commented May 22, 2016

@yossibiton there is no way to make a "light project" extending caffe. Caffe doesn't have a system for adding in your own layers, other than forking the entire project, adding in the layers, and editing the central net schema. It would be great to have such a system, but currently one does not exist.
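For readers new to Caffe, here is a minimal hypothetical sketch of what adding a layer entails; `MyIdentityLayer` is made up for illustration, while the base class and macros are Caffe's real ones. Even a layer this trivial has to be compiled into libcaffe and registered, and any new parameters would additionally need a field in the central caffe.proto, which is why projects like py-faster-rcnn carry a full fork:

```cpp
#include <vector>

#include "caffe/layer.hpp"
#include "caffe/layer_factory.hpp"

namespace caffe {

// Hypothetical pass-through layer, for illustration only.
template <typename Dtype>
class MyIdentityLayer : public Layer<Dtype> {
 public:
  explicit MyIdentityLayer(const LayerParameter& param)
      : Layer<Dtype>(param) {}
  virtual const char* type() const { return "MyIdentity"; }
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    top[0]->ReshapeLike(*bottom[0]);
  }

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    top[0]->ShareData(*bottom[0]);  // identity: pass the data through
  }
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down,
      const vector<Blob<Dtype>*>& bottom) {
    if (propagate_down[0]) {
      bottom[0]->ShareDiff(*top[0]);  // pass the gradient straight back
    }
  }
};

INSTANTIATE_CLASS(MyIdentityLayer);
REGISTER_LAYER_CLASS(MyIdentity);  // maps the string "MyIdentity" to the class

}  // namespace caffe
```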

@yossibiton

Thanks for the clarification. I'm new to caffe (worked with torch before), so that is where my questions come from.


@bhack
Contributor

bhack commented May 22, 2016

@yossibiton

Just a comment:
@Austriker: I used your caffe version and tried to run the demo from the official faster_rcnn repo (it's implemented in MATLAB), and it fails:

```
No appropriate method, property, or field 'reshape_as_input' for class 'caffe.Net'.

Error in proposal_im_detect (line 22)
caffe_net.reshape_as_input(net_inputs);

Error in script_faster_rcnn_demo (line 54)
[boxes, scores] = proposal_im_detect(proposal_detection_model.conf_proposal, rpn_net, im);
```

@Austriker
Contributor Author

Austriker commented May 24, 2016

@yossibiton I haven't tested it on faster_rcnn yet. I am actually working on the python version.

In the PR I didn't include all the changes on the python layer, since some of them were already implemented, and I took the commit used by the fork of py-faster-rcnn. I didn't check whether it is the same as faster_rcnn.

There is some refactoring to do on the python version. In _caffe.cpp:

```diff
-    self_.attr("param_str") = bp::str(
+    self_.attr("param_str_") = bp::str(
```

So in Python you have to change param_str to param_str_.

I had a quick look: the fork used by faster_rcnn has some modifications for the MATLAB implementation. I didn't include them.

@rbgirshick
Contributor

@seanbell I'm happy with merging, though it comes with the caveat that I might not be very active in terms of helping to maintain the code.

@Austriker
Contributor Author

@rbgirshick Great news.

I have tested the PR with an updated version of py-faster-rcnn which works with Python 3.4 and the code of this PR.
This PR works with the python version of faster-rcnn.

@yossibiton I'm sorry, I don't have MATLAB, so I won't be able to work on the MATLAB part of fast-rcnn.

Is it possible to relaunch the Travis build? It failed at the beginning!

@Austriker
Contributor Author

Hi,
Following PR #2670, I have added ROI pooling backward CPU support; the idea is sketched below.
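For context, the backward pass of ROI pooling is max-pooling-style gradient routing. A minimal CPU sketch follows (names such as `argmax_data` follow the Fast R-CNN layer's conventions, but this is an illustration, not the PR's actual diff):

```cpp
// During the forward pass, each pooled output cell records in argmax_data
// the flat index of the bottom element that won the max for that cell.
// Backward scatters each top gradient back to exactly that element;
// elements that never won a max receive zero gradient.
caffe_set(bottom[0]->count(), Dtype(0), bottom_diff);  // clear accumulator
for (int i = 0; i < top[0]->count(); ++i) {
  const int bottom_index = static_cast<int>(argmax_data[i]);
  if (bottom_index >= 0) {  // -1 marks pooled cells whose ROI bin was empty
    bottom_diff[bottom_index] += top_diff[i];
  }
}
```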

@saiprabhakar

saiprabhakar commented Jun 2, 2016

Hi,
Is anyone working on implementing a CPU version of the smooth L1 loss layer?

@Austriker
Contributor Author

@saiprabhakar Hi, I am not working on the CPU implementation of the smooth L1 loss layer. But if you do, I would be glad to add it to the PR.

@mariolew

mariolew commented Jun 8, 2016

@Austriker Hi, I tried to use smooth L1 loss to train a regression network, but I always get iteration 0, loss = 0, even though the train net output loss looks okay. When I use euclidean loss instead, it works just fine. I wonder how to use smooth L1 loss to train a regression task, and what does iteration loss = 0 mean?

@koenvandesande

Shouldn't you update the LICENSE file itself, instead of putting the license in the commit message?
I.e. modify https://github.com/BVLC/caffe/blob/master/LICENSE ?

@Austriker
Contributor Author

@koenvandesande I followed the guideline: "The BVLC caffe license allows you to list the various authors using the commit history" (https://github.com/BVLC/caffe/blob/master/LICENSE#L11-L16).

@mariolew I never tried it. I have been using the nets included in py-faster-rcnn: https://github.com/Austriker/py-faster-rcnn/blob/master/models/pascal_voc/VGG16/faster_rcnn_end2end/train.prototxt

```diff
@@ -393,8 +393,10 @@ message LayerParameter {
   optional ReductionParameter reduction_param = 136;
   optional ReLUParameter relu_param = 123;
   optional ReshapeParameter reshape_param = 133;
+  optional ROIPoolingParameter roi_pooling_param = 146;
```


146 is now taken by recurrent_param in the latest master. Also, be sure to update the "last added parameter" comment at the top of the message.


@Austriker Thanks for the faster_rcnn_end2end/train.prototxt. Can you give me more information, such as the format of the training set? It would be better to give some examples.

@Austriker
Contributor Author

@koenvandesande Thanks for the comment! I have updated the PR!

@zhouphd

zhouphd commented Jun 22, 2016

@Austriker

I tried to implement the CPU mode, but it still cannot pass the runtest. Could you please give some suggestions? :)

```cpp
// ------------------------------------------------------------------
// Fast R-CNN
// Copyright (c) 2015 Microsoft
// Licensed under The MIT License [see fast-rcnn/LICENSE for details]
// Written by Ross Girshick
// ------------------------------------------------------------------

#include "caffe/fast_rcnn_layers.hpp"

namespace caffe {

template <typename Dtype>
void SmoothL1LossLayer<Dtype>::LayerSetUp(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  SmoothL1LossParameter loss_param = this->layer_param_.smooth_l1_loss_param();
  sigma2_ = loss_param.sigma() * loss_param.sigma();
  has_weights_ = (bottom.size() >= 3);
  if (has_weights_) {
    CHECK_EQ(bottom.size(), 4) << "If weights are used, must specify both "
        "inside and outside weights";
  }
}

template <typename Dtype>
void SmoothL1LossLayer<Dtype>::Reshape(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  LossLayer<Dtype>::Reshape(bottom, top);
  CHECK_EQ(bottom[0]->channels(), bottom[1]->channels());
  CHECK_EQ(bottom[0]->height(), bottom[1]->height());
  CHECK_EQ(bottom[0]->width(), bottom[1]->width());
  if (has_weights_) {
    CHECK_EQ(bottom[0]->channels(), bottom[2]->channels());
    CHECK_EQ(bottom[0]->height(), bottom[2]->height());
    CHECK_EQ(bottom[0]->width(), bottom[2]->width());
    CHECK_EQ(bottom[0]->channels(), bottom[3]->channels());
    CHECK_EQ(bottom[0]->height(), bottom[3]->height());
    CHECK_EQ(bottom[0]->width(), bottom[3]->width());
  }
  diff_.Reshape(bottom[0]->num(), bottom[0]->channels(),
      bottom[0]->height(), bottom[0]->width());
  errors_.Reshape(bottom[0]->num(), bottom[0]->channels(),
      bottom[0]->height(), bottom[0]->width());
  // vector of ones used to sum
  ones_.Reshape(bottom[0]->num(), bottom[0]->channels(),
      bottom[0]->height(), bottom[0]->width());
  for (int i = 0; i < bottom[0]->count(); ++i) {
    ones_.mutable_cpu_data()[i] = Dtype(1);
  }
}

template <typename Dtype>
void SmoothL1LossLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  // NOT_IMPLEMENTED;
  int count = bottom[0]->count();
  //int num = bottom[0]->num();
  const Dtype* in = diff_.cpu_data();
  Dtype* out = errors_.mutable_cpu_data();
  caffe_set(errors_.count(), Dtype(0), out);

  caffe_sub(
      count,
      bottom[0]->cpu_data(),
      bottom[1]->cpu_data(),
      diff_.mutable_cpu_data());    // d := b0 - b1
  if (has_weights_) {
    // apply "inside" weights
    caffe_mul(
        count,
        bottom[2]->cpu_data(),
        diff_.cpu_data(),
        diff_.mutable_cpu_data());  // d := w_in * (b0 - b1)
  }

  for (int index = 0; index < count; ++index) {
    Dtype val = in[index];
    Dtype abs_val = abs(val);
    if (abs_val < 1.0 / sigma2_) {
      out[index] = 0.5 * val * val * sigma2_;
    } else {
      out[index] = abs_val - 0.5 / sigma2_;
    }
  }

  if (has_weights_) {
    // apply "outside" weights
    caffe_mul(
        count,
        bottom[3]->cpu_data(),
        errors_.cpu_data(),
        errors_.mutable_cpu_data());  // d := w_out * SmoothL1(w_in * (b0 - b1))
  }

  Dtype loss = caffe_cpu_dot(count, ones_.cpu_data(), errors_.cpu_data());
  top[0]->mutable_cpu_data()[0] = loss / bottom[0]->num();
}

template <typename Dtype>
void SmoothL1LossLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  // NOT_IMPLEMENTED;
  int count = diff_.count();
  //int num = diff_.num();
  const Dtype* in = diff_.cpu_data();
  Dtype* out = errors_.mutable_cpu_data();
  caffe_set(errors_.count(), Dtype(0), out);

  for (int index = 0; index < count; ++index) {
    Dtype val = in[index];
    Dtype abs_val = abs(val);
    if (abs_val < 1.0 / sigma2_) {
      out[index] = sigma2_ * val;
    } else {
      out[index] = (Dtype(0) < val) - (val < Dtype(0));
    }
  }

  for (int i = 0; i < 2; ++i) {
    if (propagate_down[i]) {
      const Dtype sign = (i == 0) ? 1 : -1;
      const Dtype alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num();
      caffe_cpu_axpby(
          count,                           // count
          alpha,                           // alpha
          diff_.cpu_data(),                // x
          Dtype(0),                        // beta
          bottom[i]->mutable_cpu_diff());  // y
      if (has_weights_) {
        // Scale by "inside" weight
        caffe_mul(
            count,
            bottom[2]->cpu_data(),
            bottom[i]->cpu_diff(),
            bottom[i]->mutable_cpu_diff());
        // Scale by "outside" weight
        caffe_mul(
            count,
            bottom[3]->cpu_data(),
            bottom[i]->cpu_diff(),
            bottom[i]->mutable_cpu_diff());
      }
    }
  }
}

#ifdef CPU_ONLY
STUB_GPU(SmoothL1LossLayer);
#endif

INSTANTIATE_CLASS(SmoothL1LossLayer);
REGISTER_LAYER_CLASS(SmoothL1Loss);

}  // namespace caffe
```
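One plausible cause of the failing gradient check, assuming the CPU path is meant to mirror the GPU kernel from the fork: the loop in Backward_cpu stores the smooth L1 derivative in errors_ (via out), but the caffe_cpu_axpby call then scales the raw element-wise differences from diff_, so the transformed derivative never reaches the bottom diffs. A sketch of the corrected scaling step (note also that the unqualified abs(val) can resolve to the integer overload; fabs(val) is safer for floating-point Dtype):

```cpp
// Hypothetical fix: feed the transformed derivative (stored in errors_ by
// the loop above) into the axpby, instead of the raw differences in diff_.
caffe_cpu_axpby(
    count,                           // count
    alpha,                           // alpha
    errors_.cpu_data(),              // x: d/dx SmoothL1, not the raw diff
    Dtype(0),                        // beta
    bottom[i]->mutable_cpu_diff());  // y
```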

@Austriker
Contributor Author

Austriker commented Jun 22, 2016

@zhouphd which error do you get at test time?

The tests for the smooth L1 layer are only written for GPU mode. Could you also share the test code?

```cpp

#ifndef CPU_ONLY
template <typename Dtype>
class SmoothL1LossLayerTest : public GPUDeviceTest<Dtype> {
 protected:
  SmoothL1LossLayerTest()
      : blob_bottom_data_(new Blob<Dtype>(10, 5, 1, 1)),
        blob_bottom_label_(new Blob<Dtype>(10, 5, 1, 1)),
        blob_bottom_inside_weights_(new Blob<Dtype>(10, 5, 1, 1)),
        blob_bottom_outside_weights_(new Blob<Dtype>(10, 5, 1, 1)),
        blob_top_loss_(new Blob<Dtype>()) {
    // fill the values
    FillerParameter const_filler_param;
    const_filler_param.set_value(-1.);
    ConstantFiller<Dtype> const_filler(const_filler_param);
    FillerParameter filler_param;
    GaussianFiller<Dtype> filler(filler_param);

    filler.Fill(this->blob_bottom_data_);
    blob_bottom_vec_.push_back(blob_bottom_data_);

    filler.Fill(this->blob_bottom_label_);
    blob_bottom_vec_.push_back(blob_bottom_label_);

    filler.Fill(this->blob_bottom_inside_weights_);
    blob_bottom_vec_.push_back(blob_bottom_inside_weights_);

    filler.Fill(this->blob_bottom_outside_weights_);
    blob_bottom_vec_.push_back(blob_bottom_outside_weights_);

    blob_top_vec_.push_back(blob_top_loss_);
  }
  virtual ~SmoothL1LossLayerTest() {
    delete blob_bottom_data_;
    delete blob_bottom_label_;
    delete blob_bottom_inside_weights_;
    delete blob_bottom_outside_weights_;
    delete blob_top_loss_;
  }

  Blob<Dtype>* const blob_bottom_data_;
  Blob<Dtype>* const blob_bottom_label_;
  Blob<Dtype>* const blob_bottom_inside_weights_;
  Blob<Dtype>* const blob_bottom_outside_weights_;
  Blob<Dtype>* const blob_top_loss_;
  vector<Blob<Dtype>*> blob_bottom_vec_;
  vector<Blob<Dtype>*> blob_top_vec_;
};

TYPED_TEST_CASE(SmoothL1LossLayerTest, TestDtypes);

TYPED_TEST(SmoothL1LossLayerTest, TestGradient) {
  LayerParameter layer_param;
  SmoothL1LossParameter* loss_param =
      layer_param.mutable_smooth_l1_loss_param();
  loss_param->set_sigma(2.4);

  const TypeParam kLossWeight = 3.7;
  layer_param.add_loss_weight(kLossWeight);
  SmoothL1LossLayer<TypeParam> layer(layer_param);
  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);
  GradientChecker<TypeParam> checker(1e-2, 1e-2, 1701);
  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,
      this->blob_top_vec_, 0);
  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,
      this->blob_top_vec_, 1);
}
#endif
```

@zhouphd

zhouphd commented Jun 22, 2016

@Austriker
Thanks.
Actually I didn't change the test file; it is copied from py-faster-rcnn as below, or from this link: caffe-fast-rcnn/src/caffe/test/test_smooth_L1_loss_layer.cpp

The runtest output:

```
./include/caffe/test/test_gradient_check_util.hpp:175: Failure
The difference between computed_gradient and estimated_gradient is 1.797230923939489, which exceeds threshold_ * scale, where
computed_gradient evaluates to -0.37756950368081582,
estimated_gradient evaluates to -2.1748004276203048, and
threshold_ * scale evaluates to 0.021748004276203048.
debug: (top_id, top_data_id, blob_id, feat_id)=0,0,0,49; feat = 0.023657563288965969; objective+ = 2.8459695033561729; objective- = 2.889465511908579
[ FAILED ] SmoothL1LossLayerTest/1.TestGradient, where TypeParam = caffe::CPUDevice<double> (20 ms)
...
[ FAILED ] 2 tests, listed below:
[ FAILED ] SmoothL1LossLayerTest/0.TestGradient, where TypeParam = caffe::CPUDevice<float>
[ FAILED ] SmoothL1LossLayerTest/1.TestGradient, where TypeParam = caffe::CPUDevice<double>
```

The test file:

```cpp
#include <cmath>
#include <cstdlib>
#include <cstring>
#include <vector>

#include "gtest/gtest.h"

#include "caffe/blob.hpp"
#include "caffe/common.hpp"
#include "caffe/filler.hpp"
//#include "caffe/vision_layers.hpp"
#include "caffe/fast_rcnn_layers.hpp"

#include "caffe/test/test_caffe_main.hpp"
#include "caffe/test/test_gradient_check_util.hpp"

namespace caffe {

//typedef ::testing::Types<GPUDevice<float>, GPUDevice<double> > TestDtypesGPU;

template <typename TypeParam>
class SmoothL1LossLayerTest : public MultiDeviceTest<TypeParam> {
  typedef typename TypeParam::Dtype Dtype;

 protected:
  SmoothL1LossLayerTest()
      : blob_bottom_data_(new Blob<Dtype>(10, 5, 1, 1)),
        blob_bottom_label_(new Blob<Dtype>(10, 5, 1, 1)),
        blob_bottom_inside_weights_(new Blob<Dtype>(10, 5, 1, 1)),
        blob_bottom_outside_weights_(new Blob<Dtype>(10, 5, 1, 1)),
        blob_top_loss_(new Blob<Dtype>()) {
    // fill the values
    FillerParameter const_filler_param;
    const_filler_param.set_value(-1.);
    ConstantFiller<Dtype> const_filler(const_filler_param);
    FillerParameter filler_param;
    GaussianFiller<Dtype> filler(filler_param);

    filler.Fill(this->blob_bottom_data_);
    blob_bottom_vec_.push_back(blob_bottom_data_);
    filler.Fill(this->blob_bottom_label_);
    blob_bottom_vec_.push_back(blob_bottom_label_);

    //const_filler.Fill(this->blob_bottom_inside_weights_);
    filler.Fill(this->blob_bottom_inside_weights_);
    blob_bottom_vec_.push_back(blob_bottom_inside_weights_);
    //const_filler.Fill(this->blob_bottom_outside_weights_);
    filler.Fill(this->blob_bottom_outside_weights_);
    blob_bottom_vec_.push_back(blob_bottom_outside_weights_);

    blob_top_vec_.push_back(blob_top_loss_);
  }
  virtual ~SmoothL1LossLayerTest() {
    delete blob_bottom_data_;
    delete blob_bottom_label_;
    delete blob_bottom_inside_weights_;
    delete blob_bottom_outside_weights_;
    delete blob_top_loss_;
  }

  Blob<Dtype>* const blob_bottom_data_;
  Blob<Dtype>* const blob_bottom_label_;
  Blob<Dtype>* const blob_bottom_inside_weights_;
  Blob<Dtype>* const blob_bottom_outside_weights_;
  Blob<Dtype>* const blob_top_loss_;
  vector<Blob<Dtype>*> blob_bottom_vec_;
  vector<Blob<Dtype>*> blob_top_vec_;
};

//TYPED_TEST_CASE(SmoothL1LossLayerTest, TestDtypesGPU);
TYPED_TEST_CASE(SmoothL1LossLayerTest, TestDtypesAndDevices);

TYPED_TEST(SmoothL1LossLayerTest, TestGradient) {
  typedef typename TypeParam::Dtype Dtype;
  LayerParameter layer_param;
  SmoothL1LossParameter* loss_param =
      layer_param.mutable_smooth_l1_loss_param();
  loss_param->set_sigma(2.4);

  const Dtype kLossWeight = 3.7;
  layer_param.add_loss_weight(kLossWeight);
  SmoothL1LossLayer<Dtype> layer(layer_param);
  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);
  GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);
  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,
      this->blob_top_vec_, 0);
  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,
      this->blob_top_vec_, 1);
}

}  // namespace caffe
```

@hgaiser
Contributor

hgaiser commented Jul 25, 2016

Might I ask what the progress is on this PR?

@Austriker
Contributor Author

@hgaiser it's waiting to be merged.

@saiprabhakar

@Austriker I implemented a CPU version of smooth_l1_loss and made a pull request to you. You can add it to this PR.

@yossibiton

Hope this PR gets merged soon; it is very important to many researchers using the faster-rcnn framework.

@hgaiser
Contributor

hgaiser commented Feb 20, 2017

As an alternative to maintaining multiple versions of Caffe for Faster-RCNN, modules can be used with #5294. This branch shows an example for Faster-RCNN.

@thundo

thundo commented May 2, 2017

Any update on this?

@abhigoku10

@rbgirshick @Austriker Hi, I want to train on my dataset on CPU. Can you let me know where I can find the implemented files of smooth_L1_loss_layer and roi_pooling_layer for CPU mode? I am getting confused by the above conversation; can you let me know how to train on CPU?

@nnop

nnop commented Aug 5, 2017

Would you rebase this PR onto the master branch? I found this PR doesn't support NCCL.

@cmpute

cmpute commented Dec 19, 2017

Any update?

This commit is a port from the following [fork](https://github.com/rbgirshick/caffe-fast-rcnn/tree/0dcd397b29507b8314e252e850518c5695efbb83).

It adds:
 - smooth l1 loss layer
 - roi pooling layer
 - dropout scaling at test time (needed for MSRA-trained ZF network)

LICENSE:
Faster R-CNN

The MIT License (MIT)

Copyright (c) 2015 Microsoft Corporation

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
Added tests for ROI Pooling Layer

Author: Ronghang Hu
@Austriker
Contributor Author

@Noiredd I have rebased the PR against the latest master.

@jchook

jchook commented May 30, 2019

Any reason why this was never merged?
