Here is what I currently do; the full code can be found in this repo: linefit point cloud ground segmentation:

```cpp
namespace nb = nanobind;
using namespace nb::literals;

NB_MODULE(linefit, m) {
  nb::class_<GroundSegmentation>(m, "ground_seg")
      .def(nb::init<>(), "linefit ground segmentation constructor, param: TODO")
      .def(nb::init<const std::string &>(), "linefit ground segmentation constructor, with toml file as param file input.")
      // .def("run", nb::overload_cast<std::vector<Eigen::Vector3d> &>(&GroundSegmentation::segment), "points"_a);
      .def("run", &GroundSegmentation::segment, "points"_a, nanobind::rv_policy::reference);
}
```
Part of `segment()`:

```cpp
std::vector<bool> GroundSegmentation::segment(const std::vector<std::vector<float>> points) {
  // TODO: Maybe there is a better way to convert the points to Eigen::Vector3d
  std::vector<Eigen::Vector3d> cloud;
  for (auto point : points) {
    cloud.push_back(Eigen::Vector3d(point[0], point[1], point[2]));
  }
  // ...
}
```

On the Python side:

```python
pc_data = np.load(f"{BASE_DIR}/assets/data/kitti0.npy")
groundseg.run(pc_data[:, :3].tolist())
```

As you can see, every call converts the NumPy array to a Python list and then, point by point, to `Eigen::Vector3d`.
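As a rough way to confirm where the time goes, one could time the list conversion and the call separately. This is only a sketch: it reuses `BASE_DIR` and the `linefit.ground_seg` usage from the snippets above, and the actual numbers will vary by machine and point count.

```python
import time

import numpy as np
import linefit

groundseg = linefit.ground_seg()

pc_data = np.load(f"{BASE_DIR}/assets/data/kitti0.npy")  # same KITTI scan as above
xyz = pc_data[:, :3]

t0 = time.perf_counter()
points_list = xyz.tolist()           # NumPy array -> nested Python lists
t1 = time.perf_counter()
labels = groundseg.run(points_list)  # lists -> std::vector<std::vector<float>> -> Eigen
t2 = time.perf_counter()

print(f"tolist(): {t1 - t0:.4f}s, run(): {t2 - t1:.4f}s for {xyz.shape[0]} points")
```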
When N (the number of points) is large, this conversion step can get slow. In pybind11, the popular Open3D library also handles this case with an explicit converter:

```cpp
// - This function is used by Pybind for std::vector<SomeEigenType> constructor.
//   This optional constructor is added to avoid too many Python <-> C++ API
//   calls when the vector size is large using the default binding method.
//   Pybind matches np.float64 array to py::array_t<double> buffer.
// - Directly using templates for the py::array_t<double> and py::array_t<int>
//   and etc. doesn't work. The current solution is to explicitly implement
//   bindings for each py array types.
template <typename EigenVector>
std::vector<EigenVector> py_array_to_vectors_double(
    py::array_t<double, py::array::c_style | py::array::forcecast> array) {
  int64_t eigen_vector_size = EigenVector::SizeAtCompileTime;
  if (array.ndim() != 2 || array.shape(1) != eigen_vector_size) {
    throw py::cast_error();
  }
  std::vector<EigenVector> eigen_vectors(array.shape(0));
  auto array_unchecked = array.mutable_unchecked<2>();
  for (auto i = 0; i < array_unchecked.shape(0); ++i) {
    // The EigenVector here must be a double-typed eigen vector, since only
    // open3d::Vector3dVector binds to py_array_to_vectors_double.
    // Therefore, we can use the memory map directly.
    eigen_vectors[i] = Eigen::Map<EigenVector>(&array_unchecked(i, 0));
  }
  return eigen_vectors;
}
```

I'm wondering whether nanobind has a better way of handling this, or whether I need to rewrite something like `py_array_to_vectors_double` for nanobind as well? Thanks in advance. @wjakob 😊
Replies: 2 comments
-
Why not encode this information using some kind of nd-array with shape (N, 3)?
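For illustration, a binding that encodes the expected shape directly in the ndarray type could look roughly like this. It is only a sketch with made-up module and function names, and it assumes a recent nanobind where `nb::shape<-1, 3>`, `nb::c_contig`, and `nb::device::cpu` are available (older releases spell the wildcard dimension `nb::any` instead of `-1`):

```cpp
#include <cstddef>
#include <nanobind/nanobind.h>
#include <nanobind/ndarray.h>

namespace nb = nanobind;
using namespace nb::literals;

// Accept only a C-contiguous float64 array of shape (N, 3) on the CPU;
// nanobind rejects anything else before the function body runs.
using PointsArray = nb::ndarray<double, nb::shape<-1, 3>, nb::c_contig, nb::device::cpu>;

NB_MODULE(example_typed_ndarray, m) {
  m.def("count_points", [](const PointsArray &points) -> size_t {
    // points.data() is a contiguous N*3 double buffer; shape(0) is N.
    return points.shape(0);
  }, "points"_a);
}
```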
-
Because inside the C++ code there are lots of functions built on `std::vector`, I still need that representation internally. But I looked at this problem again recently and found a way to do something similar, following the nanobind documentation on Eigen convenience type aliases. It did speed up the run, from around 0.1s to around 0.01s.

```cpp
NB_MODULE(linefit, m) {
  nb::class_<GroundSegmentation>(m, "ground_seg")
      .def(nb::init<>(), "linefit ground segmentation constructor, param: ")
      .def(nb::init<const std::string &>(), "linefit ground segmentation constructor, with toml file as param file input.")
      .def("run", [](GroundSegmentation& self, const nb::ndarray<double>& array) {
        if (array.ndim() != 2 || array.shape(1) != 3) {
          throw std::runtime_error("Input array must have shape (N, 3)");
        }
        // Copy the (N, 3) float64 buffer straight into the std::vector<Eigen::Vector3d>;
        // each Vector3d holds 3 contiguous doubles, so the layouts match
        // (this assumes a C-contiguous input array).
        std::vector<Eigen::Vector3d> points_vec(array.shape(0));
        std::memcpy(points_vec.data(), array.data(), array.size() * sizeof(double));
        return self.segment(points_vec);
      }, "points"_a, nb::rv_policy::reference);
}
```

Thanks again for your reply. Hope this could help people afterwards. Some old related issues from pybind11 I attached here:
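With a binding like the one above, the Python side can pass the NumPy array directly instead of going through `.tolist()`. A small usage sketch (it assumes the same `BASE_DIR`/KITTI file as in the question, and makes the input C-contiguous float64 before the call, since the lambda memcpy's the raw buffer):

```python
import numpy as np
import linefit

groundseg = linefit.ground_seg()

pc_data = np.load(f"{BASE_DIR}/assets/data/kitti0.npy")
# Ensure a C-contiguous float64 (N, 3) array, matching what the binding expects.
xyz = np.ascontiguousarray(pc_data[:, :3], dtype=np.float64)

ground_labels = groundseg.run(xyz)  # per-point ground flags from segment(); no .tolist() round trip
```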