
regular photogrammetry usually means searching for common features in a bunch of photos. if you find the same feature in 3 photos you can triangulate its location in 3d space.
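a minimal sketch of that triangulation step (two views are enough once the camera poses are known; extra views add redundancy), using OpenCV's cv2.triangulatePoints -- the intrinsics, projection matrices and pixel coordinates below are made-up placeholders:

    import numpy as np
    import cv2

    # shared camera intrinsics (focal length 800 px, principal point at image center) -- placeholder values
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # 3x4 projection matrices P = K [R|t]: camera 1 at the origin, camera 2 shifted along x
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

    # pixel coordinates of the same feature as seen in each photo (2 x N arrays)
    pts1 = np.array([[320.0], [240.0]])
    pts2 = np.array([[160.0], [240.0]])

    Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4 x N homogeneous points
    X = (Xh[:3] / Xh[3]).ravel()                     # -> roughly (0, 0, 5) for these inputs
    print(X)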

the output of this process is a point cloud which you can then process into a triangle mesh. (google structure from motion).
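for the point-cloud-to-mesh step, a hedged sketch using Open3D's Poisson surface reconstruction (one common choice among several; "cloud.ply" and "mesh.ply" are placeholder filenames):

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("cloud.ply")    # point cloud from the SfM step
    pcd.estimate_normals()                        # Poisson reconstruction needs per-point normals
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
    o3d.io.write_triangle_mesh("mesh.ply", mesh)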

this OTOH is differentiable voxel rendering: basically optimizing the colors of a bunch of cubes to make them look like the pictures, using backpropagation just like you would for a neural network.
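a minimal sketch of that idea in PyTorch (not the linked project's actual code): an orthographic camera looks straight down the z axis, so each pixel alpha-composites one column of voxels, and the per-voxel colors/densities are optimized against a made-up target image:

    import torch

    N = 16                                              # voxel grid is N x N x N
    target = torch.rand(N, N, 3)                        # placeholder "photo" to match

    # learnable per-voxel colors and density logits
    colors = torch.rand(N, N, N, 3, requires_grad=True)
    logits = torch.zeros(N, N, N, requires_grad=True)

    opt = torch.optim.Adam([colors, logits], lr=0.05)

    def render():
        # alpha-composite each z column of voxels front to back
        alpha = torch.sigmoid(logits)                    # opacity per voxel in (0, 1)
        trans = torch.cumprod(1.0 - alpha, dim=2)        # transmittance after each voxel
        trans = torch.cat([torch.ones(N, N, 1), trans[:, :, :-1]], dim=2)  # before each voxel
        weight = (trans * alpha).unsqueeze(-1)           # contribution of each voxel to its pixel
        return (weight * torch.sigmoid(colors)).sum(dim=2)  # H x W x 3 image

    for step in range(200):
        opt.zero_grad()
        loss = ((render() - target) ** 2).mean()         # pixel-wise photo loss
        loss.backward()                                  # backprop through the renderer
        opt.step()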


