Add a supersampling option to LightmapGI #100765
Conversation
Force-pushed from 6fbb50c to 35db729.
Thanks a lot for testing, @SpockBauru! Hmm, I'm unable to reproduce this issue on my machine. Can you share the error output? Which OS are you on? Could you provide an MRP? The default property bug has been fixed.
Sure, my system info: The MRP was provided in my last comment. Here is the error: supersampling_error.mp4
Haha, I was just about to open a PR for this as well! :) I'm wondering how necessary the bool is, since it's effectively the same as having the supersampling factor set to 1.0, and there are no other settings tied to or hidden by this bool. This ultimately just results in an extra inspector property listed when it is enabled. I personally also think it could make more sense to have this setting placed underneath texel scale. (Though the general order of the LightmapGI properties doesn't make much sense as it is now anyway.) There is an issue with supersampling higher than 2x:
Force-pushed from 35db729 to b22e6fd.
@@ -39,10 +39,10 @@
 If [code]true[/code], bakes lightmaps to contain directional information as spherical harmonics. This results in more realistic lighting appearance, especially with normal mapped materials and for lights that have their direct light baked ([member Light3D.light_bake_mode] set to [constant Light3D.BAKE_STATIC] and with [member Light3D.editor_only] set to [code]false[/code]). The directional information is also used to provide rough reflections for static and dynamic objects. This has a small run-time performance cost as the shader has to perform more work to interpret the direction information from the lightmap. Directional lightmaps also take longer to bake and result in larger file sizes.
 [b]Note:[/b] The property's name has no relationship with [DirectionalLight3D]. [member directional] works with all light types.
 </member>
-<member name="environment_custom_color" type="Color" setter="set_environment_custom_color" getter="get_environment_custom_color">
+<member name="environment_custom_color" type="Color" setter="set_environment_custom_color" getter="get_environment_custom_color" default="Color(1, 1, 1, 1)">
Looks like they are unrelated to this PR, but these got modified since I changed all property usages to PROPERTY_USAGE_NO_EDITOR in _validate_property, since PROPERTY_USAGE_NONE causes them to have no default value. I think it also makes sense to have the
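The `_validate_property` pattern discussed here can be sketched roughly as follows. This is a standalone model with stand-in types: `PropertyInfo`, the flag values, and the `validate_property` free function are simplified assumptions for illustration, not Godot's actual headers, but the key relationship holds in Godot: `PROPERTY_USAGE_NO_EDITOR` keeps the storage bit while `PROPERTY_USAGE_NONE` clears everything, so NONE-flagged properties lose their extractable default value.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Stand-ins for Godot's property usage flags (simplified; not the real headers).
enum : uint32_t {
    PROPERTY_USAGE_NONE = 0,
    PROPERTY_USAGE_STORAGE = 1 << 1,
    PROPERTY_USAGE_EDITOR = 1 << 2,
    // NO_EDITOR keeps the storage bit: the property is serialized (and its
    // documented default survives) but is hidden from the inspector.
    PROPERTY_USAGE_NO_EDITOR = PROPERTY_USAGE_STORAGE,
};

struct PropertyInfo {
    std::string name;
    uint32_t usage = PROPERTY_USAGE_STORAGE | PROPERTY_USAGE_EDITOR;
};

// Hide the dependent property in the inspector when the feature toggle is
// off, without dropping storage (so its default value still serializes).
void validate_property(PropertyInfo &p_property, bool supersampling_enabled) {
    if (p_property.name == "supersampling_factor" && !supersampling_enabled) {
        p_property.usage = PROPERTY_USAGE_NO_EDITOR;
    }
}
```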
Updated the MRP to be more readable and focused on the issue. Just bake the lightmaps and the bug should trigger: supersampling_mrp.zip
Force-pushed from b22e6fd to e974f0c.
The bug with directional lightmaps should now be fixed.
Tested the Windows artifact on the Forward+, Mobile and Compatibility renderers. The directional lightmaps bug is fixed 😄 I didn't find any visual issues related to this PR. (There are some error messages on Compatibility, but they are also present on the master branch, so they're not related to this PR.)
I'm curious how beneficial this would be. The cost of dilation and denoising isn't very big in the context of a full bake, so the difference in the end with a more complex implementation wouldn't be very significant. Unless I'm mistaken, the time savings here would be at most a matter of a few seconds on bake times that are on the order of (many) minutes to hours, at the cost of a much more complex implementation. I would suggest we go for a simple implementation now, still targeting 4.4, and explore a more sophisticated approach later if it makes sense to do so. On the user side nothing should change between the implementations, so there shouldn't be any compatibility issues with that, afaik. I messaged you on RocketChat sharing the dilation code I wrote for my branch, which you're free to use however you'd like.
Force-pushed from a270b1f to c7707d7.
Alright, I added the dilation algorithm @lander-vr sent to me (slightly modified). Thanks! Edges no longer have dark seams. In addition, the texel size used for anti-aliasing in the lightmapping compute shader is now scaled by the supersampling factor as well, so there's only a slight increase in aliasing proportional to the supersampling factor (caused by the denoiser, but only noticeable at a very high supersampling factor). Ideally, downsampling should take place before denoising/dilating; that's one downside of this approach.
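The dilation step described above can be sketched on the CPU like this. This is a simplified single-channel model of the general technique (filling texels the baker marked invalid with the average of their valid neighbours), not the actual code exchanged in this PR:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal single-channel lightmap: `valid` marks texels the baker covered.
struct Lightmap {
    int w, h;
    std::vector<float> value;
    std::vector<uint8_t> valid;
    float &at(int x, int y) { return value[y * w + x]; }
};

// One dilation pass: each invalid texel takes the average of its valid
// 8-neighbours, removing dark seams along UV island edges when the map is
// later bilinearly sampled. Repeated passes grow the dilated border.
void dilate_pass(Lightmap &lm) {
    Lightmap src = lm;  // read from a copy so the pass is order-independent
    for (int y = 0; y < lm.h; y++) {
        for (int x = 0; x < lm.w; x++) {
            if (src.valid[y * lm.w + x]) {
                continue;
            }
            float sum = 0.0f;
            int count = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= lm.w || ny >= lm.h) continue;
                    if (!src.valid[ny * lm.w + nx]) continue;
                    sum += src.at(nx, ny);
                    count++;
                }
            }
            if (count > 0) {
                lm.at(x, y) = sum / count;
                lm.valid[y * lm.w + x] = 1;  // usable by subsequent passes
            }
        }
    }
}
```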
I tested this one multiple times, but results are results: on the tree MRP, the new build (c7707d7) is worse than the old version (e974f0c) in both image clarity and baking times. Here is a comparison between both builds for supersampling factors 1.0 (disabled), 2.0 and 4.0. Everything else is at the default settings. The image is blurrier at all supersampling factors, even disabled:
Also, the shadows at the base of the hill seem to be misaligned from the curvature in the new build. As expected, baking times increased, mostly because of the denoiser:
Tested without the denoiser; it seems like the image degradation is happening before this step:
@SpockBauru Thanks for testing again! To be honest, I don't see any misalignment; it looks correct to me. The denoiser range is now also multiplied by the supersampling factor, as suggested by @lander-vr, so the denoiser might produce slightly blurrier results than before. However, I think this is correct; we should adjust the default base denoiser range to mitigate that. The increase in baking times is strange; I will investigate. Just so you know: I edited your comment without any changes so that I can view your images in their original size again. This is a known GitHub bug and they might break again in a few hours :(
Just did some tests as well:
This time increase from the denoiser is probably caused by the multiplied range. The denoiser range input is limited to 20 pixels, so maybe for this initial implementation it'd be best to either just ignore it and not do the multiplication, or clamp it to 20. I don't remember if there was a technical reason for that limit; it's been quite a while since I implemented it. The increased cost does make sense though, and I should've predicted that; that's my bad! Its visual impact would be pretty small; missing the adjusted denoiser range is definitely not a dealbreaker for an initial implementation of supersampling. So the most pragmatic way forward right now would be to just not adjust it (or clamp it to 20, as mentioned before), or to resize the lightmap before denoising, though I'm not sure how easy that'd be. Dark edges along lightmap UV islands are fully resolved, and qualitatively I'm getting great results in my tests: no increased aliasing or anything of the like. Non-integer factors are a little less sharp, as expected, but not drastically so, which is great.
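The clamping option discussed above amounts to something like the sketch below. This is illustrative only (the function name and parameters are hypothetical, and the PR ultimately removed the range scaling entirely); the 20-pixel limit on the denoiser range input is the one mentioned in the comment:

```cpp
#include <algorithm>
#include <cassert>

// Scale the denoiser range by the supersampling factor, but never exceed
// the existing 20-pixel limit on the denoiser's range input.
int scaled_denoiser_range(int base_range, float supersampling_factor) {
    const int MAX_RANGE = 20;  // existing limit on the denoiser range input
    int scaled = static_cast<int>(base_range * supersampling_factor);
    return std::min(scaled, MAX_RANGE);
}
```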
Force-pushed from c7707d7 to 57cd53c.
After thinking about it again, I came to the conclusion that the denoiser range shouldn't depend on the lightmap's resolution (since the noise pattern doesn't scale), so I removed the adjustment completely.
This provides increased lightmap quality with less noise, smoother shadows and better small-scale shadow detail. The downside is that this significantly increases bake times and memory usage while baking lightmaps, so this option is disabled by default. Co-authored-by: Hugo Locurcio <[email protected]> Co-authored-by: landervr <[email protected]>
Force-pushed from 57cd53c to 054340b.
Looks great to me! I haven't tested locally, but I trust the testing done by others in the comments.
Thanks!
This shader-based approach would avoid the VRAM utilization and texture size limits that "naive" downsampling can encounter, so I think we should attempt it after 4.4. A shader-based approach still allows for high-quality downsampling algorithms. Area downsampling is a popular technique for downsampling from arbitrary supersampling factors, as it has great quality.
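Area downsampling can be sketched in 1D like this. This is an illustrative CPU version under stated assumptions (the comment proposes doing it in a shader, and a real lightmap version would operate on 2D RGB data): each destination sample averages the source samples it overlaps, weighted by overlap length, which is what makes non-integer factors work.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// 1D area downsampling: destination sample i covers the source interval
// [i * scale, (i + 1) * scale); every source sample contributes in
// proportion to how much of it falls inside that interval.
std::vector<float> area_downsample(const std::vector<float> &src, int dst_size) {
    std::vector<float> dst(dst_size, 0.0f);
    double scale = double(src.size()) / dst_size;  // supersampling factor
    for (int i = 0; i < dst_size; i++) {
        double begin = i * scale, end = (i + 1) * scale;
        int j_end = std::min<int>(int(std::ceil(end)), int(src.size()));
        double sum = 0.0, weight = 0.0;
        for (int j = int(begin); j < j_end; j++) {
            // Length of the overlap between source texel j and [begin, end).
            double overlap = std::min<double>(end, j + 1) - std::max<double>(begin, j);
            sum += src[j] * overlap;
            weight += overlap;
        }
        dst[i] = float(sum / weight);
    }
    return dst;
}
```

For an integer factor this degenerates to a plain box filter; for a factor like 1.5 the boundary texels are split fractionally between neighbouring destination samples instead of being dropped or duplicated.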
Alternative to #93445, incorporating some of the feedback.
Co-authored by @Calinou
Supersampling enhances lightmap quality by reducing noise and light leaking, and by producing smoother shadows with better small-scale detail. However, it significantly increases bake times and memory usage during lightmap generation, so supersampling is disabled by default.
Key differences:
- Renamed `downsampling` to `supersampling`
- `supersampling` (bool) to enable the feature
- `supersampling_factor` (float) to tweak the size at which the lightmap is rendered before downsampling

For some reason, the issue of the original PR with shadows not getting smoother when using supersampling is gone now (likely something got fixed since this one is rebased on current master):
Texel scale 2, supersampling off
Texel scale 2, supersampling on
Test with the Unreal Sun Temple scene, texel scale 1, supersampling on (with factor 4)
(Minor light leaks on the ceiling due to a very low texel density of that mesh.)
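The two new properties described in the key differences above combine roughly as follows. This is an illustrative model using the names from the PR description, not the actual LightmapGI code: when `supersampling` is enabled, the lightmap is baked at `supersampling_factor` times its base size and then downsampled back.

```cpp
#include <cassert>
#include <cmath>

// Compute the internal bake size for one lightmap dimension (hypothetical
// helper; the real implementation lives in the lightmapper, not here).
// When supersampling is disabled, the factor is effectively 1.0.
int internal_lightmap_size(int base_size, bool supersampling, float supersampling_factor) {
    if (!supersampling) {
        return base_size;
    }
    // Baked at this size, then downsampled back to base_size.
    return static_cast<int>(std::lround(base_size * supersampling_factor));
}
```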