
Multiple Orbbec Astra Cameras #10

Open
Joachim-oYo opened this issue Jul 13, 2018 · 4 comments

@Joachim-oYo

Hi! I'm trying to use the depth output from two Orbbec Astra cameras connected to my computer. Is there any particular reason my fps would be as slow as it is? With the example-multicam app, I get around 1.5 fps with two cameras, which is definitely too slow for my use case. If you have any suggestions for speeding things up, they'd be most appreciated. Thank you!

@mattfelsen
Owner

I haven't tested multiple cameras personally - that code was contributed via a pull request. I imagine the slowest part of the code is the depth rescaling, where the raw depth values are converted from their 16-bit (0-65535) range to 8-bit (0-255) values based on your depth settings. You can disable that by calling `cam.enableDepthImage(false)` and try passing the raw data into a blob finder, or do the conversion via some faster method, e.g. in a shader, with OpenCV, etc.
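The per-pixel rescale being disabled here can be sketched in plain C++ (no openFrameworks dependencies); the `nearClip`/`farClip` parameter names are illustrative, not the addon's actual API:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of the depth rescale described above: map raw 16-bit depth
// samples within [nearClip, farClip] onto the 8-bit 0-255 range.
// Values outside the clip range are clamped to the nearest edge.
std::vector<uint8_t> rescaleDepth(const std::vector<uint16_t>& raw,
                                  uint16_t nearClip, uint16_t farClip) {
    std::vector<uint8_t> out(raw.size());
    const float range = static_cast<float>(farClip - nearClip);
    for (size_t i = 0; i < raw.size(); ++i) {
        uint16_t d = std::min(std::max(raw[i], nearClip), farClip);
        out[i] = static_cast<uint8_t>(255.0f * (d - nearClip) / range);
    }
    return out;
}
```

Doing this per pixel on the CPU for every frame of every camera is exactly the kind of work that adds up, which is why skipping it (or moving it to a shader) helps.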

@mattfelsen
Owner

I also meant to mention that 1.5 fps seems extremely slow. It's possible there's something else going on in your code that is slow. Have you profiled it? There are a few nice OF addons (ofxTimeMeasurements and ofxProfiler come to mind), or you can use the tools built into Xcode, Visual Studio, etc.

@Joachim-oYo
Author

Mm, gotcha @mattfelsen, I'll profile my code to see if anything else is slowing it down. It's true that I don't actually need the 8-bit converted depth, so `enableDepthImage(false)` is a great help!

Right now what I'm actually doing is setting an array of grayscale ofPixels the size of the camera frame to either 255 or 0 brightness, based on whether the raw depth values are above or below a given threshold. I then pass those normalized pixels into `ofxCv::ContourFinder.findContours()`. I'm doing this mainly because when I tried to create a `cv::Mat(camheight, camwidth, CV_8UC1, astra.getRawDepth())` and pass it into the ContourFinder, it would either show me nothing or throw errors, depending on the cv depth data type I set (here `CV_8UC1`). Is that the proper data type for the raw depth from the Astra?

Thanks a ton! Your fast responses are so incredibly helpful.

@mattfelsen
Owner

Your problem with the cv::Mat approach is probably due to using the wrong format. Have you tried `CV_16UC1` or whatever its equivalent would be? The raw values are 16-bit shorts (0-65535), but you're creating a cv::Mat with 8-bit values (0-255).
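To illustrate why the 8-bit type misreads the buffer: each 16-bit sample occupies two bytes, so an 8-bit view splits every depth value into a low byte and a high byte, neither of which is the real distance (little-endian assumed in this sketch):

```cpp
#include <cstdint>

// The two bytes an 8-bit view would see for one 16-bit depth sample
// on a little-endian machine, instead of the sample itself.
struct BytePair { uint8_t lo, hi; };

BytePair asBytes(uint16_t depth) {
    return { static_cast<uint8_t>(depth & 0xFF),
             static_cast<uint8_t>(depth >> 8) };
}
```

For example, a depth reading of 1000 mm (0x03E8) shows up as the byte pair 232 and 3, which is why the 8-bit Mat looks like noise or nothing at all.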
