I could be wrong, but being able to skip the step of estimating the camera positions would save a large amount of time. You're still going to need to train on the images to create the splats, though.
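For context, the camera-position step referred to here is the structure-from-motion preprocessing (e.g. COLMAP) that recovers each photo's pose before splat training begins. A minimal sketch of the pinhole projection that such a pose feeds into; every pose and intrinsic value below is illustrative, not from any real pipeline:

```python
import numpy as np

def project(point_world, R, t, f, cx, cy):
    """Project a 3D world point to pixel coordinates through a pinhole
    camera with rotation R, translation t, focal length f, and principal
    point (cx, cy). SfM's job is to recover R and t for every photo."""
    p_cam = R @ point_world + t          # world frame -> camera frame
    x, y, z = p_cam
    return np.array([f * x / z + cx, f * y / z + cy])

# Identity pose: camera at the origin looking down +z (made-up values).
R = np.eye(3)
t = np.zeros(3)
pixel = project(np.array([0.0, 0.0, 2.0]), R, t, f=500.0, cx=320.0, cy=240.0)
print(pixel)  # a point on the optical axis lands at the principal point
```

Splat training then optimizes the Gaussians so their rendered projections match the photos at these recovered poses; if the poses came for free, that whole preprocessing stage disappears.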
That looks great! I've been playing around with A-Frame and OSM building footprints, but this looks so much better. Will have a look at aframe-loader-3dtiles-component.
It can be run in real time. It might be at 640x480 or 20 fps, but many algorithms out there could never be run in real time on a $10k graphics card, or even on a computing cluster.
I mean, A100s were cutting edge a year or so ago, and now we're at H200s and B200s (or is it 300s?). It may take a year or two more, but A100-level speed will trickle down to the average consumer as well.
And, from the other end, research demonstrations tend to have a lot of low-hanging fruit w.r.t. optimization, which gets picked if the result is interesting enough.
There are no models or textures; it's just a point cloud of color blobs. You can convert it to a mesh, but in the process you'd lose the quality and realism that make it interesting.
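One concrete reason a mesh conversion loses realism: each Gaussian's color is view-dependent, stored as spherical-harmonics (SH) coefficients, while a baked mesh texture keeps only a single diffuse color per point. A minimal sketch under one common real-SH sign convention (degree 1 only); all coefficient values here are invented for illustration:

```python
import numpy as np

# Real spherical-harmonics basis constants for degrees 0 and 1.
C0 = 0.28209479177387814   # 1 / (2 * sqrt(pi))
C1 = 0.4886025119029199    # sqrt(3 / (4 * pi))

def sh_color(dc, lin, view_dir):
    """Evaluate a degree-1 SH color for one Gaussian. dc is the
    view-independent RGB term; lin is a (3, 3) array of linear SH
    coefficients (one RGB triple per basis function); view_dir is a
    unit vector from the camera toward the Gaussian."""
    x, y, z = view_dir
    return C0 * dc + C1 * (-y * lin[0] + z * lin[1] - x * lin[2])

dc = np.array([1.0, 0.5, 0.2])                       # made-up base color
lin = np.random.default_rng(0).normal(size=(3, 3)) * 0.1

front = sh_color(dc, lin, np.array([0.0, 0.0, 1.0]))  # seen head-on
side = sh_color(dc, lin, np.array([1.0, 0.0, 0.0]))   # seen from the side
baked = C0 * dc  # the only part a diffuse mesh texture would keep
print(front - baked, side - baked)  # the view-dependent part a bake discards
```

The two residuals differ, which is exactly the glossy/iridescent appearance that vanishes once you flatten the splats into a textured mesh.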
This doesn't look significantly better than e.g. Cities: Skylines, especially since they neither zoomed in nor out, always showing only a very limited frame. Am I missing something?
Quick question for anyone who may have more technical insight: is Gaussian splatting the technology Unreal Engine has been using for the jaw-dropping demos of its new releases?
It's not an order of magnitude slower. You can easily get 200-400 fps in Unreal or Unity at the moment. 100+ fps in the browser? https://current-exhibition.com/laboratorio31/ 900 fps? https://m-niemeyer.github.io/radsplat/

We have three decades' worth of R&D in traditional engines, so it'll take a while for this to catch up in terms of tooling and optimization. But when you look at where the papers come from (many from Apple and Meta), you see that this is the technology destined to power the Metaverse/spatial-computing era both companies are pushing towards. The ability to move content into 3D environments at incredibly low production cost (an iPhone video) is going to murder a lot of R&D invested in traditional methods.
I don't understand the connection you're making between SfM (Structure from Motion) and surface shading. I might be misunderstanding what you're trying to say. Could you elaborate?
Right? I'm surprised I don't hear this connection more often. Is it perhaps because photogrammetry algorithms require sharp edges, which the splats don't offer?
Mesh-based photogrammetry is a dead end. GS and radiance-field representations are just getting started, not just for rendering but potentially as a highly compact way to store large 3D scenes.
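A back-of-envelope sketch of why storage is the interesting question here. The per-Gaussian attribute counts below follow the original 3DGS layout (position, scale, rotation quaternion, opacity, degree-3 SH color), but the scene size is a made-up illustration; raw splats are bulky, which is why compact and compressed radiance-field formats are an active research direction:

```python
# Uncompressed float32 storage for one 3D Gaussian:
#   position (3) + scale (3) + rotation quaternion (4) + opacity (1)
#   + degree-3 SH color (16 basis functions x RGB = 48 coefficients)
FLOATS_PER_GAUSSIAN = 3 + 3 + 4 + 1 + 48        # = 59 floats
BYTES_PER_GAUSSIAN = FLOATS_PER_GAUSSIAN * 4    # float32 = 4 bytes each

n_gaussians = 1_000_000                          # a mid-sized scene (illustrative)
size_mb = n_gaussians * BYTES_PER_GAUSSIAN / 1e6
print(f"{BYTES_PER_GAUSSIAN} B/Gaussian, {size_mb:.0f} MB per million splats")
# -> 236 B/Gaussian, 236 MB per million splats
```

Quantizing or pruning those SH coefficients, which dominate the 59 floats, is where most of the compression headroom lives.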
Nothing comes close to this for realism; it's like looking at a photo. Traditional photogrammetry really struggles with complicated scenes and with reflective or transparent surfaces.