(comments)

原始链接: https://news.ycombinator.com/item?id=40035552

This comment thread praises Physically Based Rendering (pbrt) for its comprehensive treatment of path tracers, comparing the book's role to that of Gravity in general relativity. Commenters express enthusiasm for the alias sampling technique, appreciating its elegance and its applicability beyond graphics. They reflect on the convergence of various engine components toward physically based rendering and speculate about future developments. Commenters recall journeys from writing simple ray tracers decades ago to recently exploring tiny but intricate ray tracers such as Spongy and SDF raymarchers. They note that ray-traced images can seem intimidating, yet the underlying algorithms and techniques turn out to be quite simple once grasped. They discuss the challenge of implementing spectral rendering in real time and ask about existing resources or attempts. Spectral rendering improves color and material accuracy at the cost of increased computation time; depending on implementation choices, the performance impact can be significant. Commenters also question how much tracking polarization and phase matters for human perception. Finally, they express interest in spectral rendering for other frequency ranges, such as radio waves.


Original


Highly recommend Physically Based Rendering. As a book, Pbrt is to the study of path tracers what Gravity is to the field of general relativity. Wholly encompassing and rigorous.


I can't take time to fiddle with raytracing (however much I'd want to!) but I skimmed the first half of that book and alias sampling is a very elegant and nice technique. Wish I had known about it earlier! It is useful in a far broader context than graphics.
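
For the curious, here is a minimal Python sketch of the alias method (Vose's variant); the function names are mine, not pbrt's:

    import random

    def build_alias_table(weights):
        """Precompute an alias table so each draw costs O(1) (Vose's method)."""
        n = len(weights)
        total = sum(weights)
        prob = [w * n / total for w in weights]   # scale so the average weight is 1
        alias = [0] * n
        small = [i for i, p in enumerate(prob) if p < 1.0]
        large = [i for i, p in enumerate(prob) if p >= 1.0]
        while small and large:
            s, l = small.pop(), large.pop()
            alias[s] = l                          # l's excess tops up s's bucket
            prob[l] -= 1.0 - prob[s]
            (small if prob[l] < 1.0 else large).append(l)
        return prob, alias

    def sample_alias(prob, alias):
        """Draw an index distributed proportionally to the original weights."""
        i = random.randrange(len(prob))
        return i if random.random() < prob[i] else alias[i]

    # e.g. picking a light source proportionally to its emitted power
    prob, alias = build_alias_table([5.0, 1.0, 3.0, 1.0])
    counts = [0, 0, 0, 0]
    for _ in range(100_000):
        counts[sample_alias(prob, alias)] += 1
    print(counts)   # roughly proportional to 5:1:3:1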


Author here. Waking up to seeing this on the front page with all the wonderful comments made my day! Thank you for sharing and reading


Spectral rendering imho is a good example of how ray tracing in itself is not the end-game for rendering; it's more of a starting point. Occasionally I see the sentiment that with real-time ray tracing, rendering is a solved problem, but imho that's far from the truth.

Afaik most spectral rendering systems do not do (thin-film) interference or other wave-based effects, so that is another frontier. Reality has a surprising amount of detail.



The closer rendering comes to the underlying physical principles, the more game engines will become world simulation engines. Various engine parts commonly seen today will converge towards a common point, where, for example, we'll observe less distinction between the physics and rendering layers. I wonder if this trend can be traced to some degree even today. Several orders of compute growth later, we'll look upon current abstractions the same way we now look at the 30-year-old state of the art, shaped by the technical limitations of yesteryear and obvious in hindsight. Love the perspective this puts things into.


I disagree. The goal for most games is not to simulate the real world accurately, but to be a piece of entertainment (or artwork). That sets different requirements than just "world simulation", both from a mechanics point of view and from a graphics point of view. So engines will for a long time be able to differentiate on how they facilitate such things; expressivity is a tough nut to crack and real-world physics only gets you so far.

Even photorealism is a shifting target, as it turns out that photography itself diverges from reality; there is this trend of having games and movies look "cinematic" in a way that is not exactly realistic, or at least not how things appear to the human eye. But how scenes appear to human eyes is also a tricky question, as humans are not just simple mechanical cameras.



Gameplay is directly influenced by the "feel" of the world. I am not strictly talking about photo-realism, but (1) how the world reacts to input and (2) how consistent it is.

Physics is not about real-world accuracy, but about how consistently stuff interacts (and its side effects, like illumination) in the virtual world. There will be a time in the future when the physics engine becomes the rendering engine, just because there are infinite gameplay possibilities in such a craft.



It's like a fractal; the closer you look, the more details you notice affecting what you see. It's like we're creeping to 100% physically accurate rendering, but we'll probably never get to 100%, instead we'll just keep adding fractions.


If you want to play with a ray tracing implementation, it's surprisingly easy to write one yourself. There's a great free book (https://raytracing.github.io/books/RayTracingInOneWeekend.ht...) or, if you know a bit of Unity, a very nice GPU-based tutorial (https://medium.com/@jcowles/gpu-ray-tracing-in-one-weekend-3...). The Unity version is easier to tinker with, because you have a scene preview and other GUI that makes moving the camera around so much easier. There are many implementations based on these sources if you don't want to write one from scratch, although doing so is definitely worth it.

I spent some great time playing with the base implementation: making the rays act as particles* that bend their paths toward or away from objects, making them "remember" the last angle of bounce and use it on the next material hit, etc. Most of the results looked bad, but I still got some intuition about what I was looking at. Moving the camera by a notch was also very helpful.

A lot of fun, great for a small recreational programming project.

* Unless there's an intersection with an object, set the maximum length of the ray to some small amount, then shoot many rays out from that point and for each hit apply something similar to the gravity equation. Of course this is slow and just an approximation, but it's easy, and you can implement a "black hole" type of object that bends light in the scene.
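
A loose Python sketch of how I read that footnote (my interpretation, not the commenter's actual code; scene.intersect and scene.objects are hypothetical stand-ins for whatever a toy ray tracer already provides):

    import numpy as np

    def trace_bending_ray(origin, direction, scene, step=0.1, strength=0.05, max_steps=500):
        """March the ray in short segments and, at each step, pull its direction
        toward every object with an inverse-square 'gravity' term."""
        pos = np.asarray(origin, dtype=float)
        vel = np.asarray(direction, dtype=float)
        vel /= np.linalg.norm(vel)
        for _ in range(max_steps):
            hit = scene.intersect(pos, vel, max_dist=step)  # only look a short distance ahead
            if hit is not None:
                return hit                                  # shade from here as usual
            for obj in scene.objects:                       # the "gravity equation" part
                to_obj = obj.center - pos
                r2 = float(np.dot(to_obj, to_obj))
                vel += strength * to_obj / (r2 * np.sqrt(r2))  # ~1/r^2 pull toward the object
            vel /= np.linalg.norm(vel)                      # keep it a unit direction
            pos = pos + vel * step                          # advance one short segment
        return None                                         # ray escaped the scene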



when i wrote my very first ray tracer it didn't take me an entire weekend; it's about four pages of c that i wrote in one night

http://canonical.org/~kragen/sw/aspmisc/my-very-first-raytra...

since then i've written raytracers in clojure and lua and a raymarcher in js; they can be very small and simple

last night i was looking at Spongy by mentor/TBC https://www.pouet.net/prod.php?which=53871 which is a fractal animation raytracer with fog in 65 machine instructions. the ms-dos executable is 128 bytes

i think it's easy to get overwhelmed by how stunning raytraced images look and decide that the algorithms and data structures to generate them must be very difficult, but actually they're very simple, at least if you already know about three-dimensional vectors. i feel like sdf raymarching is even simpler than the traditional whitted-style raytracer, because it replaces most of the hairy math needed to solve for precise intersections with scene geometry with very simple successive approximation algorithms

the very smallest raytracers like spongy and Oscar Toledo G.'s bootsector raytracer https://github.com/nanochess/RayTracer are often a bit harder to understand than slightly bigger ones, because you have to use a lot of tricks to get that small, and the tricks are harder to understand than a dumber piece of code would be
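
To make the point about successive approximation concrete, here is a minimal sphere-tracing loop in Python (my own sketch, not taken from any of the demos linked above):

    import numpy as np

    def scene_sdf(p):
        """Signed distance to a unit sphere at the origin and a floor plane at y = -1."""
        sphere = np.linalg.norm(p) - 1.0
        floor = p[1] + 1.0
        return min(sphere, floor)

    def raymarch(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
        """Sphere tracing: step forward by the distance the SDF guarantees is free
        of geometry, until we get close enough to call it a hit."""
        t = 0.0
        for _ in range(max_steps):
            p = origin + t * direction
            d = scene_sdf(p)
            if d < eps:
                return t          # hit: distance along the ray
            t += d                # safe step: no surface is closer than d
            if t > max_dist:
                break
        return None               # miss

    # march a single ray from the camera straight down the -z axis
    hit = raymarch(np.array([0.0, 0.0, 3.0]), np.array([0.0, 0.0, -1.0]))
    print(hit)                    # 2.0, the front face of the unit sphere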



> when i wrote my very first ray tracer it didn't take me an entire weekend

It’s just a catchy title. You can implement the book in an hour or two, if you’re uncurious, or a month if you like reading the research first. Also maybe there are meaningful differences in the feature set such that it’s better not to try to compare the time taken? The Ray Tracing in One Weekend book does start the reader off with a pretty strong footing in physically based rendering, and includes global illumination, dielectric materials, and depth of field. It also spends a lot of time building an extensible and robust foundation that can scale to a more serious renderer.



The beauty of mathematics and physics in action. I wonder if some of the tweaks made for the sake of beauty could be useful in other kinds of visualization.

It also reminds me of a time that I was copying code from a book to make polyphonic music on an Apple II. I got something wrong for sure when I ran it, but instead of harsh noise, I ended up with an eerily beautiful pattern of tones. Whatever happy accident I made fascinated me.



Perhaps create hyperspectral (>>3 channels) images? I was exploring using them to teach color to kids better by emphasizing spectra: for example, showing the spectrum of the pixel under the mouse cursor on an image[1], to reinforce the association between colors and their spectra. But hyperspectral images are rare, and their cameras are traditionally[2] expensive. So how about synthetic hyperspectral images?

Perhaps a very-low-res in-browser renderer might be fast enough for interactively playing with lighting and materials? And perhaps do POV for anomalous color vision, "cataract lens removed - can see UV" humans, dichromat non-primate mammals (mice/dogs), and perhaps tetrachromat zebra fish.

[1] http://www.ok.sc.e.titech.ac.jp/res/MSI/MSIdata31.html [2] an inexpensive multispectral camera using time-multiplexed narrow-band illumination: https://ubicomplab.cs.washington.edu/publications/hypercam/



It's possible to implement this efficiently using light tracing - the final value in the image is the (possibly transformed) contribution from each light source, and since you have the spectrum of the light source you can have the spectrum of the pixel.

Until you encounter significant dispersion or thin film effects, that is, then you need to sample wavelengths for each path, so it becomes (even more of) an approximation.



This won't work, because intermediary objects filter the spectrum of the source light. Also, in some scenes so many lights contribute to a single pixel that it's cheaper to store the entire spectrum for each pixel. Consider how the sky is a huge light that you can't store as a single light source, because different areas of the sky contribute differently; effectively, one sky-light is the equivalent of millions of point lights.


I think the term “distribution ray tracing” was a bit of a mid-point on the timeline of evolution from Whitted ray tracing to today’s path tracing? IIRC distribution ray tracing came from one of Rob Cook’s Siggraph papers. It’s probably worth moving toward path tracing as a more general and unified concept, and also because when googling & researching, it’ll be easier to find tips & tricks.

Yes when combining spectral rendering with refraction, you’ll need to pick a frequency by sampling the distribution. This can get tricky in general, good to build it in incremental steps. True of reflections as well, but up to you whether you want to have frequency-dependent materials in both cases. There are still reasons to use spectral even if you choose to use simplified materials.
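
As a concrete illustration of picking a wavelength by sampling a distribution, here is a hedged Python sketch: sample one wavelength per path, look up a wavelength-dependent index of refraction with Cauchy's equation, and refract with Snell's law. The constants are roughly BK7-like and purely illustrative:

    import math, random

    def sample_wavelength(lo=380.0, hi=730.0):
        """Sample a wavelength (nm) uniformly over the visible range; the Monte Carlo
        estimate is then divided by the pdf, which here is 1 / (hi - lo)."""
        lam = random.uniform(lo, hi)
        pdf = 1.0 / (hi - lo)
        return lam, pdf

    def cauchy_ior(lam_nm, A=1.5046, B=4200.0):
        """Cauchy's equation n(lambda) = A + B / lambda^2 (lambda in nm)."""
        return A + B / (lam_nm * lam_nm)

    def refract_cos(cos_i, n1, n2):
        """Snell's law: cosine of the refracted angle, or None on total internal reflection."""
        sin_t2 = (n1 / n2) ** 2 * (1.0 - cos_i * cos_i)
        if sin_t2 > 1.0:
            return None
        return math.sqrt(1.0 - sin_t2)

    lam, pdf = sample_wavelength()
    n = cauchy_ior(lam)
    cos_t = refract_cos(math.cos(math.radians(45.0)), 1.0, n)
    print(f"lambda = {lam:.0f} nm, n = {n:.4f}, cos(theta_t) at 45 deg = {cos_t:.4f}")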



I’d love to see more about the artworks the author shares at the end. The idea of creating renders of realities where light works differently from ours is fascinating.


Lovely. Not sure if the author would agree... There was much to love and hate about the nascent "new aesthetic" movement, but this demonstrates the best of that genre.


Does anyone know if someone has attempted real time spectral rendering? I've tried finding information before but have never had any luck.


Real-time is unfortunately a sort of vague term.

If you mean raster rendering pipelines, then I don't believe it's possible, because the nature of GPU pipelines precludes it. You'd likely need to make use of compute shaders, at which point you've just written a pathtracer anyway.

If you mean a pathtracer, then real-time becomes wholly dependent on what your parameters are. With a small enough resolution, Mitsuba with Dr.JIT could theoretically start rendering frames after the first one quickly enough to be considered real-time.

However, the reality is that even in film, with offline rendering, very few studios find the gains of spectral rendering to be worth the effort. Outside of Wētā with Manuka, nobody else really uses spectral rendering. Animal Logic did for The LEGO Movie, but solely for lens flares.

The workflow changes needed to make things work with a spectral renderer, and the very subtle differences, are just not worth the large increase in render time.



There’s more than one way to implement spectral rendering, and thus multiple different trade-offs you can make. Spectral rendering in general trades compute time for higher color and material accuracy.

All else being equal, if you carry a fixed-size power spectrum of more than 3 elements along with each ray, instead of an RGB triple, then you really might see up to an n/3 performance hit. For example, using 6 wavelength channels can be up to twice as slow as an RGB renderer. Whether you actually experience an n/3 slowdown depends on how much time you spend shading versus tracing the ray, i.e., traversing the BVH. Shading will be slowed down by spectral rendering, but scene traversal won’t, so check Amdahl’s Law.
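
A rough back-of-the-envelope in Python, assuming, purely hypothetically, that shading is 40% of frame time and that 6 spectral channels double only the shading cost:

    # Amdahl-style estimate: only the shading fraction slows down; traversal does not.
    shading_fraction = 0.40        # hypothetical share of frame time spent shading
    traversal_fraction = 1.0 - shading_fraction
    shading_slowdown = 2.0         # 6 spectral channels vs. RGB, the worst-case n/3 = 2

    new_time = traversal_fraction + shading_fraction * shading_slowdown
    print(f"overall slowdown: {new_time:.2f}x")   # 1.40x, far less than the naive 2x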

Problem is, all else is never equal. Spectral math comes with spectral materials that are more compute-intensive, and fancier color sampling and integration utilities, plus often additional color conversions for input and output.

Another way to implement spectral rendering is to carry only a single wavelength per ray path, like a photon does, and ensure the lights’ wavelengths are sampled adequately. This makes a single ray faster than an RGB ray, but it adds a new dimension to your integral, which means new/extra noise, and so it takes longer to converge, probably more than 3x longer.



A great spectral ray tracing engine is LuxRender: https://luxcorerender.org/ (the older one, that is; the newer LuxCore renderer does not have full spectral support)

Beyond the effects shown here, there are other benefits to spectral rendering - if done using light tracing, it allows you to change color, spectrum and intensity of light sources after the fact. It also makes indirect lighting much more accurate in many scenes.



> I’ve been curious what happens when some of the laws dictating how light moves are deliberately broken, building cameras out of code in a universe just a little unlike our own. Working with the richness of the full spectrum of light, spectral ray tracing has allowed me to break the rules governing light transport in otherworldly ways.

This reminds me of diagnosing bugs while writing my own raytracer, and attempting to map the buggy output to weird/contrived/silly alternative physics



If you want to go all the way, you have to track not only the wavelength of each ray, but also its polarization and phase. The situations in which these properties actually matter for human perception are rare (e.g., thin films and diffraction gratings), but they exist.


Are there any good resources for spectral ray tracing for other frequencies of light, e.g. radio frequencies?


It's the same thing! What are you trying to do with it?

One thing you'll run into is that there isn't a clear frequency response curve for non-visible light, so you need to invent your own frequency-to-RGB function (false color).
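
For example, a false-color function might just normalize the band of interest onto an RGB ramp; a minimal Python sketch with arbitrary band edges:

    def false_color(freq_hz, band_lo=2.4e9, band_hi=2.5e9):
        """Map a frequency in an arbitrary band (here roughly 2.4 GHz) onto an RGB
        ramp from blue (low end) to red (high end). Purely a visualization choice."""
        t = (freq_hz - band_lo) / (band_hi - band_lo)
        t = max(0.0, min(1.0, t))
        return (t, 0.2, 1.0 - t)   # simple blue-to-red ramp; any colormap would do

    print(false_color(2.45e9))     # mid-band: (0.5, 0.2, 0.5)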

Another thing is that radio waves have much longer wavelengths than visible, so diffractive effects tend to be a lot more important, and ray tracing (spectral or otherwise) doesn't do this well. Modeling diffraction is typically done using something like FDTD.

https://en.wikipedia.org/wiki/Finite-difference_time-domain_...



What are the applications you have in mind?

I'm no RF guy, but I imagine you will quickly have to care about regimes where the wavelike properties of EM radiation dominate, in which case ray tracing is not the right tool for the job.
