
Intermapping between multiple perspective coordinate systems of different cameras and one rectangular coordinate system

In most previous posts, I simply forgot to consider multiple cameras (or multiple sensors of the same kind) on one observer (like a car or robot). Since the single-camera case is only suitable for video-generative applications, here are some simple ideas about the case of multiple cameras (or multiple sensors of the same kind).

Each camera's frame, or physics pixel frame, is based on its own perspective coordinate system, so each frame needs a camera ID attached to its start or end to identify which camera it came from. Then it's all fine, and the rest can be left to the neural network of the AI model: no matter how the multiple cameras are configured, whether facing the same direction, different directions, or omnidirectionally, after training on labeled and unlabeled frame flows the AI model can learn the correlation between cameras by itself. Haha, this is the real magic that makes AI so great.
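As a rough illustration of this frame-tagging idea (the structure and field names here are my own assumption, not something defined in this post), a camera-ID-tagged frame could look like this in Python:

```python
# Minimal sketch: wrap each camera frame with its camera ID so the model can
# tell which perspective coordinate system the physics pixels belong to.
from dataclasses import dataclass
import numpy as np

@dataclass
class TaggedFrame:
    camera_id: int       # identifies which camera, and thus which perspective coordinate system
    timestamp: float     # sync point shared across all cameras of the same observer
    pixels: np.ndarray   # H x W x C physics pixel frame

def tag_frame(camera_id: int, timestamp: float, pixels: np.ndarray) -> TaggedFrame:
    """Attach the camera ID to the frame before feeding it to the AI model."""
    return TaggedFrame(camera_id=camera_id, timestamp=timestamp, pixels=pixels)

# Example: two synced frames from a two-camera observer
frames = [
    tag_frame(0, 0.033, np.zeros((480, 640, 3), dtype=np.uint8)),
    tag_frame(1, 0.033, np.zeros((480, 640, 3), dtype=np.uint8)),
]
```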

There shall be only one single rectangular coordinate system corresponding to all the different perspective coordinate systems of all cameras of one observer. The AI model can learn the intermapping relation between the different perspective coordinate systems and the single rectangular coordinate system through proper training.
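For intuition only: the classical, non-learned form of this intermapping is the pinhole projection between a point in the single rectangular (world) coordinate system and one camera's perspective coordinate system. The post's point is that the model learns an equivalent mapping from data; the per-camera parameters K, R, t below are assumptions used just to show the closed form.

```python
# Hedged reference sketch of the classical world -> camera intermapping.
import numpy as np

def project_to_camera(point_world: np.ndarray,
                      K: np.ndarray,   # 3x3 camera intrinsics
                      R: np.ndarray,   # 3x3 rotation, world -> camera
                      t: np.ndarray    # 3-vector translation, world -> camera
                      ) -> np.ndarray:
    """Map a 3D point in the single rectangular system to pixel coordinates (u, v)."""
    point_cam = R @ point_world + t   # into the camera's own coordinate system
    uvw = K @ point_cam               # perspective projection
    return uvw[:2] / uvw[2]           # normalize to pixel coordinates
```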

Theoretically, no two cameras can have exactly the same position and direction, so no two pixels or physics pixels can occupy exactly the same position at the same time; each pixel or physics pixel must be at a different position from all other pixels or physics pixels at any given moment.

So a simple idea is: in the single rectangular coordinate system, map each physics pixel of each camera's frame to a 3D point whose position differs from those of all other mapped 3D points of all other physics pixels in the synced frames of all cameras. In training, map all the different physics pixels to differently positioned 3D points of a point cloud in the single rectangular coordinate system, so that the model learns to map them the same way at inference time; the difference between the positions of the 3D points can be nominal or minimal, just enough to keep them distinct.
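A minimal sketch of this per-pixel mapping, under assumptions not stated in the post (a depth value per physics pixel and known per-camera K, R, t), with a nominal per-camera offset so points from different cameras never coincide exactly:

```python
# Map every physics pixel of one synced frame to its own 3D point in the
# single rectangular coordinate system.
import numpy as np

def pixels_to_point_cloud(depth: np.ndarray,   # H x W depth per physics pixel (assumed given)
                          K: np.ndarray,       # 3x3 intrinsics
                          R: np.ndarray,       # 3x3 rotation, world -> camera
                          t: np.ndarray,       # 3-vector translation, world -> camera
                          camera_id: int,
                          epsilon: float = 1e-6) -> np.ndarray:
    """Return an (H*W, 3) array of 3D points, one per physics pixel."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    ones = np.ones_like(u)
    rays = np.linalg.inv(K) @ np.stack([u, v, ones]).reshape(3, -1)  # pixel -> camera rays
    points_cam = rays * depth.reshape(1, -1)                         # scale rays by depth
    points_world = R.T @ (points_cam - t.reshape(3, 1))              # camera -> world
    # Nominal offset per camera so no two cameras' points are at exactly the same position.
    points_world = points_world + camera_id * epsilon
    return points_world.T
```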

For fusion of different kinds of sensor signals, a simple idea is: just map the secondary signal, like mmWave radar, to every physics pixel frame or perspective coordinate system of the basic/primary signal, like the visual cameras, or map the secondary signal directly to the single rectangular coordinate system of the same observer.
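A rough sketch of the two fusion routes just mentioned (all names and parameters here are assumptions for illustration, not part of the post): route (a) projects radar points onto one camera's physics pixel frame, route (b) places radar points directly into the single rectangular coordinate system.

```python
import numpy as np

def radar_to_pixel_frame(radar_points_world: np.ndarray,  # (N, 3) radar detections in world coords
                         K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Route (a): map radar points onto one camera's perspective coordinate system."""
    pts_cam = (R @ radar_points_world.T) + t.reshape(3, 1)
    uvw = K @ pts_cam
    return (uvw[:2] / uvw[2]).T                            # (N, 2) pixel coordinates

def radar_to_world(radar_points_sensor: np.ndarray,       # (N, 3) in the radar's own frame
                   R_radar: np.ndarray, t_radar: np.ndarray) -> np.ndarray:
    """Route (b): map radar points straight into the single rectangular system."""
    return (R_radar @ radar_points_sensor.T + t_radar.reshape(3, 1)).T
```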
