Although the prime model idea can work on both the physics pixel's perspective view and the world model, I still much prefer the physics pixel's perspective view.
Multiple physics pixel frames can be generated from multiple cameras pointing in different directions around one observer. If those cameras or frames together cover the view in all directions around the observer, they form an omnidirectional perspective view with the observer at its center, which amounts to a perspective view of the world around the observer.
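To make this concrete, here is a minimal sketch in Python of how such an observer-centered view might be kept as a set of perspective frames, one per camera direction. The class names, the fixed per-camera field of view, and the rough coverage check are my own illustration and assumptions, not part of any existing implementation.

```python
# A sketch of an omnidirectional view stored as per-camera perspective frames.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PerspectiveFrame:
    yaw_deg: float        # camera direction around the observer
    pitch_deg: float
    fov_deg: float        # horizontal field of view of this camera
    pixels: np.ndarray    # H x W x C frame captured by that camera

@dataclass
class OmnidirectionalView:
    """Observer-centered collection of perspective frames."""
    frames: list[PerspectiveFrame] = field(default_factory=list)

    def add_frame(self, frame: PerspectiveFrame) -> None:
        self.frames.append(frame)

    def horizontal_coverage_deg(self) -> float:
        # Rough check that the cameras together span the full horizon.
        # (Overlap between adjacent cameras is ignored for simplicity.)
        return min(360.0, sum(f.fov_deg for f in self.frames))

# Example: six 60-degree cameras facing outward cover the full 360 degrees.
view = OmnidirectionalView()
for i in range(6):
    view.add_frame(PerspectiveFrame(yaw_deg=60.0 * i, pitch_deg=0.0, fov_deg=60.0,
                                    pixels=np.zeros((480, 640, 3), dtype=np.uint8)))
print(view.horizontal_coverage_deg())  # 360.0
```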
As I said previously, all senses and signals of humans or sensors are perspective views in nature, and a perspective view has a natural focus hierarchy: the nearer, the higher the resolution; the farther, the lower the resolution. So the physics pixel model can be an omnidirectional 3D perspective view of the world around the observer, with more attention naturally focused on what is nearer.
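The falloff of resolution with distance follows directly from perspective projection. A small sketch, assuming a simple pinhole camera model (the focal length value below is just an example, not from anything above): an object of a given physical size at depth d covers roughly f / d times its size in pixels, so the detail spent on it shrinks as it moves away.

```python
# Pinhole approximation: image-plane extent of an object falls off as 1/depth.
def pixels_on_object(size_m: float, depth_m: float, focal_px: float = 1000.0) -> float:
    """Approximate number of pixels a `size_m`-wide object spans at depth `depth_m`."""
    return focal_px * size_m / depth_m

if __name__ == "__main__":
    for depth in (1.0, 5.0, 20.0, 100.0):
        print(f"1 m object at {depth:>5.1f} m -> ~{pixels_on_object(1.0, depth):7.1f} px")
```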
After all, a traditional real-time 3D world model is in fact generated from perspective signals in the first place, so the perspective view of the physics pixel must be the most accurate, responsive, and efficient representation for real-time applications.
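As an illustration of that generation step, here is a rough sketch of how a conventional 3D model is typically built from perspective sensor data: back-projecting a perspective depth frame into 3D points. The intrinsics values and function name are assumptions for the example, not a description of any particular pipeline.

```python
# Back-project an H x W depth image (metres) into camera-space 3D points.
import numpy as np

def backproject_depth(depth: np.ndarray, fx: float, fy: float,
                      cx: float, cy: float) -> np.ndarray:
    """Return an (H*W, 3) array of 3D points for a pinhole camera with the given intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Example: a flat surface 2 m in front of a 640x480 depth camera.
depth = np.full((480, 640), 2.0)
points = backproject_depth(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(points.shape)  # (307200, 3)
```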
In many applications, a 3D perspective view of physics pixels is enough, because details far away are either not needed or not available at all. Having the AI model work directly on the perspective frames (3D) is the most efficient and accurate approach in these applications, because the perspective view comes straight from the sensors.
By the way, a 3D world model (the traditional one, not the perspective one) can work as a map or navigation application, like those in cars and planes today, and the driver or pilot can be the prime/physics pixel model, which drives or pilots by using that map application for navigation.