
Problem building pointcloud data from RGB-D depth images using equations provided in dataset  #9

@fernandaroeg

Description

Hello!
I am trying to build PointCloud2 data in ROS Noetic from the depth images provided in the dataset. To do so, I am using the equations provided in the dataset documentation:
[Screenshot: depth-to-3D projection equations from the dataset documentation]
However, I am not able to build a point cloud with realistic dimensions. I really don't understand why the pixel value has to be divided by 6553.5. What does this scaling factor mean, and why is it necessary?
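
In case the screenshot does not render, this is my transcription of the equations as I am applying them (assuming the standard pinhole back-projection, with $d(u, v)$ the raw 16-bit pixel value at column $u$, row $v$):

$$
z = \frac{d(u, v)}{6553.5}, \qquad
x = \frac{(u - c_x)\,z}{f_x}, \qquad
y = \frac{(v - c_y)\,z}{f_y}
$$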

If I take out the scaling factor and use the raw pixel value as depth, I get an enormous point cloud, much bigger than the map dimensions. If I leave the scaling factor in the equations, the resulting point cloud is extremely small, less than a meter across.

I have been experimenting with other scaling values, and the equations that give me a more or less realistically sized point cloud are:
[Screenshot: modified equations with an experimentally chosen scaling factor]

I am implementing a ROS node in Python to do this. The camera intrinsic parameters I am using are the ones provided in the documentation:
cx = 157.3245865
cy = 120.0802295
fx = 286.441384
fy = 271.36999
Any help with the correct implementation of these equations will be greatly appreciated!!!!!
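
For completeness, this is roughly what my node does right now (a minimal sketch, assuming the raw 16-bit value divided by 6553.5 gives metres; the file name and `frame_id` are placeholders):

```python
#!/usr/bin/env python3
# Minimal sketch (not the dataset's reference code): back-project one 16-bit
# depth image into a PointCloud2 using the intrinsics from the documentation
# and the 1/6553.5 depth scaling.
import cv2
import numpy as np
import rospy
from std_msgs.msg import Header
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

CX, CY = 157.3245865, 120.0802295
FX, FY = 286.441384, 271.36999
DEPTH_SCALE = 1.0 / 6553.5   # assumption: raw 16-bit value -> metres

def depth_to_points(depth_png_path):
    # Read the depth image unchanged so the 16-bit values are preserved.
    raw = cv2.imread(depth_png_path, cv2.IMREAD_UNCHANGED).astype(np.float32)
    z = raw * DEPTH_SCALE                      # depth in metres (assumed)
    v, u = np.indices(raw.shape)               # pixel row (v) and column (u)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    # Note: this is the optical convention (x right, y down, z forward);
    # a static transform may be needed to view it correctly in RViz.
    pts = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                  # drop pixels with no depth

def publish(points):
    pub = rospy.Publisher("cloud", PointCloud2, queue_size=1, latch=True)
    header = Header(stamp=rospy.Time.now(), frame_id="camera_link")
    pub.publish(point_cloud2.create_cloud_xyz32(header, points))

if __name__ == "__main__":
    rospy.init_node("depth_to_cloud")
    publish(depth_to_points("depth_0001.png"))  # placeholder file name
    rospy.spin()
```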
