The in-browser demo is very cool! It's not clear from the linked page, but the GitHub repo[0] includes links to sample tile datasets that can be used for the demo.
To the original authors: the demo really needs a default tileset, or at least a link to the sample files. It also needs a progress/upload bar for .zip uploads; I tried one and it appeared to stall for minutes with no response.
It looks great, but I think the static nature of Gaussian splats is what's holding them back. You can't easily animate them or properly relight them the way you can with polygonal rendering, so it's hard to combine two differently lit things. If someone came up with some kind of neural relighting for them, it would be huge.
This is neat and I don't want to be too snarky, but I had to chuckle at the specificity of "However, extending 3DGS to synthesize large-scale or infinite terrains from a single captured exemplar—remains an open challenge.".
There's no link to the paper, so I can only infer, but if I understand correctly this is a very simple idea: take a single Gaussian-splat "tile", place two copies near each other so they overlap, and use dynamic programming to find a cut through the overlap region (varying the size of the overlap and where the cut goes). Generate a variety of cuts to break up the uniform tiling (the Wang tiles part), and now you have different tiles with different nearest-neighbor constraints that you can use to tile the plane.
There are probably a lot of details to work out in how to stitch Gaussian splats together, but I imagine it's pretty doable.
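For what it's worth, here is a minimal sketch of that dynamic-programming cut, in the spirit of the min-error boundary cut from image quilting. It assumes you've rasterized the overlap strip of the two splat tiles into a per-cell mismatch cost grid; that rasterization step, and every name below, is my own assumption rather than anything from the paper:

    import numpy as np

    def min_cost_cut(cost):
        # cost[r, c]: mismatch between the two tiles at cell (r, c) of the
        # overlap strip (rows run along the seam, columns run across it).
        rows, cols = cost.shape
        dp = cost.astype(float).copy()
        for r in range(1, rows):
            for c in range(cols):
                lo, hi = max(0, c - 1), min(cols, c + 2)
                dp[r, c] += dp[r - 1, lo:hi].min()
        # Backtrack the cheapest connected path from the last row to the first.
        cut = np.empty(rows, dtype=int)
        cut[-1] = int(dp[-1].argmin())
        for r in range(rows - 2, -1, -1):
            c = cut[r + 1]
            lo, hi = max(0, c - 1), min(cols, c + 2)
            cut[r] = lo + int(dp[r, lo:hi].argmin())
        return cut  # per-row column where the seam switches from tile A to tile B

Splats on one side of the returned seam would come from the first tile and splats on the other side from the second; varying the overlap size just means changing the width of the cost grid and re-running the search.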
I think one of the problems with Gaussian splatting is generating content. You can take a static picture of something but it's hard to know how to use it for interaction. This is a way to generate 3d textured sheets, like sunflower fields, walls, caves, etc.
Most implementations of 3D Gaussian splats are static. They are based on point clouds rather than polygons, and since they are captured from images and generated directly from them, the process has no semantic understanding of the content.
There is no technical way to rig each flower or move vertices the way you would in traditional 3D animation.
It is essentially a point cloud with no segmentation.
But there are projects working on the semantic part, which could open a way to animate detected objects individually in the future.
where (x_hat, y_hat) are your basis vectors in the plane, z_l is the local z coordinate (after subtracting the terrain modifier used to move tiles up/down), and z_h is the height of a flower.
Or, if you want to be more advanced, generate some curl noise and use it as a prefactor instead of x, y inside the sin(), and include the corresponding up-down motion, since the stalks have constant length.
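The formula this "where" clause refers to isn't quoted above, so the exact form is a guess, but a minimal sketch of that kind of per-splat sway, assuming a displacement of amplitude * (z_l / z_h) * sin(k·(x, y) - omega*t) along the wind direction (all names and parameters below are hypothetical):

    import numpy as np

    def sway_offsets(pos, terrain_offset, flower_height, t,
                     wind_dir=(1.0, 0.0), amplitude=0.05, k=2.0, omega=1.5):
        # pos:            (N, 3) splat centers
        # terrain_offset: (N,) terrain modifier used to move tiles up/down
        # flower_height:  typical flower height z_h
        x, y = pos[:, 0], pos[:, 1]
        z_l = pos[:, 2] - terrain_offset                  # local z above the tile
        phase = k * (x * wind_dir[0] + y * wind_dir[1]) - omega * t
        sway = amplitude * (z_l / flower_height) * np.sin(phase)
        dx = sway * wind_dir[0]
        dy = sway * wind_dir[1]
        # Constant stalk length: the splat dips slightly as it leans sideways
        # (small-angle approximation, drop ~ s^2 / (2 * z_l)).
        dz = -(sway ** 2) / (2.0 * np.maximum(z_l, 1e-6))
        return np.stack([dx, dy, dz], axis=1)

Swapping the plane-wave phase for curl noise, as suggested above, would just mean replacing phase with a noise lookup at (x, y, t).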
Gaussian splatting never ceases to amaze me. I wonder if it would be feasible to procedurally (not via an LLM) generate natural worlds for video games with this...
Procedurally generated game worlds have been a thing since video games started; some of them have even garnered popular appeal, like a Lego-looking one about crafting mines or something.
Microsoft Research had some good publications on generating infinite non-repeating textures by (IIRC) Markov-filling an aperiodic tile set, including creating video textures. I tried to find the examples a few months ago, but the old URLs I'd bookmarked no longer worked.
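A minimal sketch of that sequential filling idea with Wang tiles: scan the grid and, for each cell, pick a random tile whose north and west edge colors match the already-placed neighbors. The toy tile set below is illustrative only, not taken from any particular paper:

    import random

    # (north, east, south, west) edge colors; two tiles per (north, west)
    # combination so the scanline fill always has at least one valid choice.
    TILES = [
        ("r", "r", "g", "r"), ("r", "g", "r", "r"),
        ("r", "r", "r", "g"), ("r", "g", "g", "g"),
        ("g", "r", "r", "r"), ("g", "g", "g", "r"),
        ("g", "r", "g", "g"), ("g", "g", "r", "g"),
    ]

    def tile_plane(rows, cols, rng=random):
        grid = [[None] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                north = grid[r - 1][c][2] if r > 0 else None  # south edge of the tile above
                west = grid[r][c - 1][1] if c > 0 else None   # east edge of the tile to the left
                options = [t for t in TILES
                           if (north is None or t[0] == north)
                           and (west is None or t[3] == west)]
                grid[r][c] = rng.choice(options)
        return grid

In the splat-tiling setting, each entry of the resulting grid would correspond to one of the pre-cut tiles, so the plane fills without obvious repetition.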
But seriously, I didn't realize I wanted this. I was hoping to experiment with just repeating the same tile. This gives me hope that other people will make these techniques approachable.
[1] https://github.com/mxgmn/WaveFunctionCollapse
[2] https://nyh-dolphin.github.io/en/research/n_wfc/