Unit vector data from camera #14
**Unit Vector description**

The unit vector matrix contains 3 values [ex, ey, ez] for each pixel. The data layout is [ex_1, ey_1, ez_1, ..., ex_N, ey_N, ez_N], where N is the number of pixels. Multiplying a distance measurement by the appropriate components yields the corresponding Cartesian coordinates: [X_i, Y_i, Z_i] = D_i * [ex_i, ey_i, ez_i]. The rotational component of the extrinsic calibration, specified by the user, is already applied to the unit vectors. The image resolution of the unit vectors depends on the selected binning mode of the camera.

**Chunk Type**

The chunk type used for the unit vector matrix is: 223

**Chunk Header**

The Unit Vector data is prefixed with a header. This header is called the chunk header.
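Before the camera-side assembly code below, here is a minimal client-side sketch of the conversion just described: each Cartesian triple is the radial distance scaled by the pixel's unit vector. The function name and the flat `std::vector` layout are illustrative choices, not part of the camera interface. Note also that, per the description above, only the rotational part of the extrinsics is baked into the unit vectors, so a translational offset would still have to be added separately.

```cpp
#include <cstddef>
#include <vector>

// Sketch: [X_i, Y_i, Z_i] = D_i * [ex_i, ey_i, ez_i].
// `unit_vectors` is assumed to hold three interleaved floats per pixel
// (PF_FORMAT_32F3 layout: [ex_1, ey_1, ez_1, ..., ex_N, ey_N, ez_N]).
std::vector<float>
ToCartesian(const std::vector<float>& distance,      // N distance values
            const std::vector<float>& unit_vectors)  // 3*N components
{
  std::vector<float> xyz(unit_vectors.size());
  for (std::size_t i = 0; i < distance.size(); ++i)
  {
    xyz[3 * i + 0] = distance[i] * unit_vectors[3 * i + 0]; // X_i
    xyz[3 * i + 1] = distance[i] * unit_vectors[3 * i + 1]; // Y_i
    xyz[3 * i + 2] = distance[i] * unit_vectors[3 * i + 2]; // Z_i
  }
  return xyz;
}
```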
**Example Code**

This is how the data is assembled on the camera.

Currently supported Chunk Types:

```c
typedef enum ChunkType {
    CT_RADIAL_DISTANCE_IMAGE = 100,
    CT_NORM_AMPLITUDE_IMAGE = 101,
    CT_AMPLITUDE_IMAGE = 103,
    CT_CARTESIAN_X_COMPONENT = 200,
    CT_CARTESIAN_Y_COMPONENT = 201,
    CT_CARTESIAN_Z_COMPONENT = 202,
    CT_CARTESIAN_ALL = 203,
    CT_UNIT_VECTOR_ALL = 223,
    CT_CONFIDENCE_IMAGE = 300,
    CT_DIAGNOSTIC = 302,
    CT_EXTRINSIC_CALIBRATION = 400,
    CT_JSON_MODEL = 500,
    CT_SNAPSHOT_IMAGE = 600,
    CT_MAX
} ChunkType_t;
```

Available pixel formats:

```c
typedef enum PixelFormat {
    PF_FORMAT_8U = 0,
    PF_FORMAT_8S = 1,
    PF_FORMAT_16U = 2,
    PF_FORMAT_16S = 3,
    PF_FORMAT_32U = 4,
    PF_FORMAT_32S = 5,
    PF_FORMAT_32F = 6,
    PF_FORMAT_64U = 7,
    PF_FORMAT_64F = 8,
    PF_FORMAT_16U2 = 9,
    PF_FORMAT_32F3 = 10,
    PF_MAX
} PixelFormat_t;
```

Filling the chunk header for the unit vector matrix:

```cpp
chunkHeader.chunkType     = ifm::CT_UNIT_VECTOR_ALL;
chunkHeader.chunkSize     = sizeof(ifm::ChunkHeader)
                            + result->width * result->height * 3 * sizeof(float);
chunkHeader.headerSize    = sizeof(chunkHeader);
chunkHeader.headerVersion = 1;
chunkHeader.imageWidth    = result->width;
chunkHeader.imageHeight   = result->height;
chunkHeader.pixelFormat   = ifm::PF_FORMAT_32F3; // [eX, eY, eZ] triple
chunkHeader.timestamp     = result->timestamp;
chunkHeader.frameCount    = getFrameCounter();
```
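On the receiving side, a client has to locate the CT_UNIT_VECTOR_ALL chunk in the byte stream before it can use the payload. The following is a minimal client-side sketch, assuming the header consists of nine consecutive 32-bit little-endian fields in the order shown above; the struct here is a hypothetical mirror for illustration, not the actual ifm::ChunkHeader definition.

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical mirror of the ifm::ChunkHeader fields used above. The
// field order and widths are an assumption of this sketch; consult the
// ifm PCIC documentation for the authoritative layout.
struct ChunkHeader
{
  std::uint32_t chunkType;
  std::uint32_t chunkSize;
  std::uint32_t headerSize;
  std::uint32_t headerVersion;
  std::uint32_t imageWidth;
  std::uint32_t imageHeight;
  std::uint32_t pixelFormat;
  std::uint32_t timestamp;
  std::uint32_t frameCount;
};

// Returns a pointer to the unit vector payload if `buf` starts with a
// CT_UNIT_VECTOR_ALL chunk, or nullptr otherwise.
const float* ParseUnitVectorChunk(const std::uint8_t* buf, ChunkHeader& hdr)
{
  std::memcpy(&hdr, buf, sizeof(hdr)); // assumes a little-endian host
  if (hdr.chunkType != 223 /* CT_UNIT_VECTOR_ALL */ ||
      hdr.pixelFormat != 10 /* PF_FORMAT_32F3 */)
  {
    return nullptr;
  }
  // Payload: imageWidth * imageHeight * 3 floats, per chunkSize above.
  return reinterpret_cast<const float*>(buf + hdr.headerSize);
}
```

Using `headerSize` for the payload offset, rather than `sizeof(ChunkHeader)`, keeps the sketch tolerant of future header versions that append fields.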
**How to request the Unit Vector data**

You can request the data either by the
The unit vectors provided by the latest official ifm firmware have a bug that makes them useless. This will be fixed in firmware version 1.2.x.
Great info. Thanks @graugans. I think we need to think about the best way to deal with this in the library. While I completely appreciate the concerns of @semsitivity, there is certainly a whole host of use cases that will continue to want the Cartesian data constructed as a PCL point cloud, as it is done now (i.e., not manually converting the depth image when needed). That said, I could imagine a few scenarios:
Regardless of what we do, I think we should grab some performance metrics to ensure it is worth the effort. Given that the FrameGrabber is running in a separate thread from the user's algo code, the speed-ups (if any) may not be meaningful. I'm speculating and could only speak definitively to this once we capture some performance data. We should probably let the O3D303's on-board iMX6 be the ground-truth architecture for which we grab this performance data, as there seems to be consensus among many of us that, ideally, our algo code runs on the camera. Related to this, I am currently planning (unless there are serious objections) on moving the

@graugans Quick question since you mention the extrinsics in your note above... Does the O3D303 apply a rotation and translation to the image data based on the extrinsics configured via the JSON interface? I could figure this out empirically, but since you are here, I thought I'd be lazy and just ask :) FWIW, I like having the ability to store the extrinsics on the camera; however, it would be nice to have a flag or switch to tell the camera whether or not you actually want it to perform the transform. For example, we have a current use case where we want to do all algo processing in the camera frame, and then once we localize our object of interest, we will simply transform the object pose based on the extrinsic calibration.
@tpanzarella You can change the extrinsic calibration by an xml-rpc call to the following object:
The unit of the rotation is in degrees (°), not in radians. There is no flag as far as I know, but you can store multiple configurations on the camera and switch between them. We call them applications.
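To illustrate the unit pitfall just mentioned: the configured angles are in degrees and have to be converted to radians before building a rotation matrix. Here is a small sketch, with the ZYX composition order chosen purely for illustration (the actual order used by the O3D303 firmware is not stated in this thread).

```cpp
#include <cmath>

// Applies the extrinsic rotation (given in degrees, per the comment
// above) to one unit vector, in place. R = Rz(c) * Ry(b) * Rx(a) is an
// assumed composition order for this sketch.
void RotateUnitVector(double rotX_deg, double rotY_deg, double rotZ_deg,
                      double e[3])
{
  const double kPi = 3.14159265358979323846;
  const double d2r = kPi / 180.0; // degrees -> radians
  const double a = rotX_deg * d2r, b = rotY_deg * d2r, c = rotZ_deg * d2r;

  const double R[3][3] = {
    { std::cos(c)*std::cos(b),
      std::cos(c)*std::sin(b)*std::sin(a) - std::sin(c)*std::cos(a),
      std::cos(c)*std::sin(b)*std::cos(a) + std::sin(c)*std::sin(a) },
    { std::sin(c)*std::cos(b),
      std::sin(c)*std::sin(b)*std::sin(a) + std::cos(c)*std::cos(a),
      std::sin(c)*std::sin(b)*std::cos(a) - std::cos(c)*std::sin(a) },
    { -std::sin(b),
      std::cos(b)*std::sin(a),
      std::cos(b)*std::cos(a) }
  };

  const double v[3] = { e[0], e[1], e[2] };
  for (int i = 0; i < 3; ++i)
    e[i] = R[i][0]*v[0] + R[i][1]*v[1] + R[i][2]*v[2];
}
```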
@graugans Right. I guess the question is, does this simply store the data, or does the O3D303 firmware apply this transformation prior to returning the data over the socket interface? (Sorry, I think I was not clear in my original question. I said "JSON interface" when I meant "xml-rpc interface". I'm just so used to looking at the JSON serialization of the parameters as a result of running
@tpanzarella The extrinsic calibration is applied during the calculation of the x, y, z data. BTW, this pluggable FrameGrabber approach sounds interesting.
@graugans Thanks for the clarification of the extrinsics. As you note in an earlier comment, the effect I am looking for (i.e., storing the extrinsics on the camera while operating in the camera frame) can be achieved by having multiple applications loaded on the camera with different extrinsic calibration values (i.e., one with the real extrinsics and one with all zeros for the rotation and translation). Many thanks again, Christian! This is good information to know. I'll keep thinking about the FrameGrabber solution we have been discussing above. Do you have a current target date for the 1.2.x firmware? It sounds like we have until (at least) then to come up with a direction we'd like to go in to support the use case of @semsitivity.
@tpanzarella Last week we did a feature freeze for the 1.2.x branch, but there is still some testing needed. Frankly, I am not involved in the official release process; my team and I provide release candidates, and when those go online is decided by the sales team. I guess 1.2.x will be released by the end of October. Maybe I can provide the next release candidate for a beta test.
@tpanzarella, just a comment about the xyzi_image.
@semsitivity Your point on keeping the

I will work this change into the next release of the code. Tentatively, I'm shooting for next week. I am bogged down with lots of other stuff this week and early next week, but I'll have time at the end of next week to update both

Thanks for the comment and the technical rationale for this request.
@semsitivity The camera is using hardware floating point, but we do use the softfp ABI. We are planning to switch to hardfp by the end of the year. The main purpose of using unsigned 16-bit integers for the data is to save Ethernet bandwidth. Internally, everything is calculated in float precision. We started with softfp because there was no hardfp GPU userland. We ran some benchmarks, and the performance gain of hardfp vs. softfp was ~3-4%.
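As an illustration of that bandwidth trade-off: a 16-bit distance image is half the size of a float image on the wire and can be widened to float on the client. The millimeter scale factor below is an assumption made for this sketch, not a documented property of the 16U format.

```cpp
#include <cstddef>
#include <cstdint>

// Widens a raw PF_FORMAT_16U distance image to float. The 0.001 scale
// (raw value in millimeters -> meters) is assumed for illustration;
// check the camera documentation for the actual unit of the 16U data.
void DistanceToMeters(const std::uint16_t* raw, float* meters, std::size_t n)
{
  for (std::size_t i = 0; i < n; ++i)
  {
    meters[i] = static_cast<float>(raw[i]) * 0.001f; // mm -> m (assumed)
  }
}
```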
@tpanzarella After digging deeper into the whole extrinsic and unit vector business, I realized that I made a big mistake when I claimed the extrinsic calibration is an application-specific parameter. It is not!

The whole idea of extrinsic calibration was for a fixed mounting of the camera. Anyway, I started some internal discussion about whether it makes sense to have such a flag. If there is more benefit to such a flag, let me know. Sorry for my misleading assumption about the extrinsic calibration.
No worries, Christian. In fact, I am a bit embarrassed that I did not catch that. It is clear that the extrinsics are configured at the device level, as indicated by the JSON serialization. For example this. Thanks for the explicit clarification anyway. In terms of the flag to either apply or not apply the extrinsic transformation, my request is motivated by three concrete concerns:
In summary, I hope the above three examples help further motivate the real-world concerns for: 1) keeping the ability to store the extrinsics on the camera, but 2) providing a boolean flag to indicate whether or not you actually want the transformation applied by the O3D firmware. Let me know if anything is unclear.
This is not an answer, just a remark for @graugans and the information he gave about the extrinsics.
@semsitivity, @tpanzarella Yes, at the moment the values are in m and rad, but this is more a bug than an intended feature. We are discussing this internally; because the general camera interface uses mm and degrees, we may fix this in a next release. We are also preparing some sort of application note on how to deal with the extrinsic calibration and the unit vectors; there are some points to pay attention to. I hope you can wait a couple of days until we can provide a pre-release of the application note to you.
@tpanzarella, my target was to be able to adapt my algorithm to the device by the end of the month. Today, I have this sequence working great:

I'm happy because, previously, I was applying all the transformations myself. Now I obtain what I was expecting: less CPU consumption plus a simplified algorithm, and that's great.
@graugans Thanks. Take the time you need to sort this out. Speaking for LPR, we can wait until you are ready.
The o3d3xx PCIC interface can return unit vector matrices related to the camera calibration data, depending on the IR frequency used.
Those vectors already include the intrinsic and extrinsic transformations, and can be applied to the distance image to obtain calibrated and, optionally, customized Cartesian 3D coordinates.
It would be important to have access to this vector matrix for processing done on distance data only, and to be able to compute the 3D coordinates either fully or partially.