feature: added and implemented set_viewport #227
Conversation
Force-pushed from a7a82ca to 76ed448
That's what happens when you're not at the Mac for the Metal tests xD
Thank you for this contribution!
In an attempt to reduce the redundancy here, do we really need to switch cases on a per-pipeline basis? That seems to be a rather weird case.
Some context here: this allows me to port our current cascaded shadow-map implementation verbatim. It renders the different cascades into a wide (1024 * cascades) * 1024 texture, which requires separate viewports and pipelines per cascade. Our material system has separate pipelines for alpha-discarded and non-alpha-discarded geometry, so even within the same shadow-map render pass we change both pipeline and viewport + scissor to achieve this.

Not a deal-breaker of course, as there are ways to do the same using parameters in uniforms to the shader - we could squeeze clip_pos.x based on which cascade we are rendering. Another solution is to use an array texture and render to the slices using separate texture views and multiple render passes, but if I remember correctly we measured at some point that changing viewport + scissor in a single pass was faster.

Our approach is inspired by The Witness (http://the-witness.net/news/2010/04/graphics-tech-shadow-maps-part-2-save-25-texture-memory-and-possibly-much-more/, https://blog.thomaspoulet.fr/the-witness-frame-part-1/).
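For illustration, here is a rough sketch of the single-pass cascade loop described above. It is not the actual engine code: `shadow_targets`, `cascades`, `pipelines.shadow_opaque`, `pipelines.shadow_alpha_discard`, and `objects_for` are hypothetical names, and the encoder calls follow the blade-style methods that appear elsewhere in this thread (`render`, `with`, `set_viewport`, `set_scissor_rect`, `draw`), which may differ from the exact current API.

```rust
// Rough sketch only: render N cascades into one wide (1024 * cascades) x 1024
// shadow atlas within a single render pass, changing viewport + scissor per
// cascade and switching pipelines for alpha-discarded vs. opaque geometry.
let cascade_size = 1024u32;
let mut pass = encoder.render("shadow-cascades", shadow_targets);
for (i, cascade) in cascades.iter().enumerate() {
    for pipeline in [&pipelines.shadow_opaque, &pipelines.shadow_alpha_discard] {
        let mut rc = pass.with(pipeline);
        // Offset horizontally into this cascade's slot of the atlas.
        rc.set_viewport(
            &gpu::Viewport {
                x: (i as u32 * cascade_size) as f32,
                y: 0.0,
                w: cascade_size as f32,
                h: cascade_size as f32,
            },
            0.0..1.0,
        );
        rc.set_scissor_rect(&gpu::ScissorRect {
            x: i as u32 * cascade_size,
            y: 0,
            w: cascade_size,
            h: cascade_size,
        });
        for object in objects_for(pipeline, cascade) {
            rc.draw(0, object.vertex_count, 0, 1);
        }
    }
}
```

The point is that a single pass covers every cascade: only the viewport, scissor, and bound pipeline change between draws.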
Re-measuring the performance win is of course in order xD But my first goal is just to port all the features from our current renderer to blade and see, so I will keep this in my fork regardless. Next up is planar reflections, refraction, water simulation, and DDGI irradiance probes! https://idno.se/swap/
That makes sense, thank you for the explanation!

```rust
for material in [opaque_material, transparent_material] {
    encoder.bind_pipeline(material.pipeline);
    for object in objects.filter_by(material) {
        encoder.set_viewport();
        encoder.draw();
    }
}
```

I'm just sad to see the code duplication here. But that's not something your PR introduces - it was already the case for the scissor rect. Carefully ignored :)
I'll think about how to avoid this redundancy.
Maybe the pipeline encoder can deref to the base pass.
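A minimal sketch of what that Deref idea could look like, with hypothetical, simplified type names rather than blade's actual internals:

```rust
use std::ops::{Deref, DerefMut};

// Hypothetical, simplified types for illustration only.
struct Viewport { x: f32, y: f32, w: f32, h: f32 }
struct ScissorRect { x: u32, y: u32, w: u32, h: u32 }

// Pass-level state setters live on the base pass encoder...
struct PassEncoder;

impl PassEncoder {
    fn set_viewport(&mut self, _vp: &Viewport, _depth: std::ops::Range<f32>) {
        // record the viewport once for the whole pass
    }
    fn set_scissor_rect(&mut self, _rect: &ScissorRect) {
        // record the scissor once for the whole pass
    }
}

// ...while the pipeline encoder only adds binding-related methods and
// derefs to the pass, so `set_viewport` / `set_scissor_rect` exist in one place.
struct PipelineEncoder<'p> {
    pass: &'p mut PassEncoder,
}

impl<'p> Deref for PipelineEncoder<'p> {
    type Target = PassEncoder;
    fn deref(&self) -> &PassEncoder {
        self.pass
    }
}

impl<'p> DerefMut for PipelineEncoder<'p> {
    fn deref_mut(&mut self) -> &mut PassEncoder {
        self.pass
    }
}
```

With that layout the user-facing calls stay the same, since method calls auto-deref from the pipeline encoder to the pass.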
The code duplication seems avoidable. One benefit that might arise from a restructure is that the different kinds of pipeline encoders could perhaps avoid exposing unrelated functions. Right now they do.
Like, why should I be able to call `set_viewport` from a compute pass?

```rust
if let mut cp = encoder.compute("downsample-depth") {
    let mut c = cp.with(&pipelines.depth_downsample);
    c.set_viewport( // ???
        &gpu::Viewport {
            x: 0.0,
            y: 0.0,
            w: 1920.0,
            h: 1080.0,
        },
        0.0..1.0,
    );
}
```

If the pipeline encoder were generic over a marker type, the per-pass traits could be implemented only where they make sense:

```rust
impl crate::traits::ComputeEncoderTrait for PipelineEncoder<ComputeMarker> {
    /// ...
}
```

Alternatively, make the pipeline encoders wrappers around an inner type:

```rust
struct ComputePipelineEncoder {
    inner: PipelineEncoder,
}
struct RenderPipelineEncoder {
    inner: PipelineEncoder,
}
```
The main purpose of those types is to be explicit about the semantics. It is not really to protect against mistakes - Blade is an unsafe (lean and mean) graphics API, after all. The pipeline context is there because it defines the lifetimes of all bindings. That is the important bit, not whether or not you can set the scissor in there.
Absolutely, these are the reasons I am so happy to read the code of blade and use it xD I do not intend to challenge those philosophies. My intention was to propose an improvement to the usage of the library at no extra runtime cost, but it is perhaps orthogonal to blade's design in ways I haven't considered yet. Let me think about it and examine the code more.
After some exploring in the code, I still think that, overall, making the restructure is worth it. It would make the implementations more consistent. It would not, however, address the redundancy in the backends.
I prototyped a few things, and I'm converging on the idea that the duplication of this code is acceptable.
So, it's totally cool. I may follow up with a small refactor, but overall we are good.
Nice! Yeah, adding the function to turn them into the native type made it much less duplicated xP

Have you considered my other point? After some further coding I found another argument: I wanted to write a utility function to help with dispatching my instances, but on Vulkan/GLES the return type of the pass methods is a backend-specific encoder type. It would really be nice to make the return type of those methods something that can be named portably.
I understand the pain point you faced. I don't think we can properly expose the `blade_graphics::RenderCommandEncoder` for all backends. The main issue is lifetimes: one backend will have no lifetimes, another backend will need one, another will need two. The lifetimes are really an implementation detail of the backend, but they screw up our ability to expose it as the same type.
We do have a public trait, though.
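To illustrate the trait route (all names below are hypothetical stand-ins, since the actual trait is not named here): a helper bounded on a public encoder trait works with every backend without the caller ever naming a backend-specific type or its lifetimes.

```rust
// Stand-in trait for illustration; not blade's actual public trait.
trait RenderEncoder {
    fn draw(&mut self, first_vertex: u32, vertex_count: u32, first_instance: u32, instance_count: u32);
}

// One backend's concrete encoder needs no lifetimes...
struct BackendAEncoder;
// ...while another borrows the command stream it records into.
struct BackendBEncoder<'cmd> {
    commands: &'cmd mut Vec<u8>,
}

impl RenderEncoder for BackendAEncoder {
    fn draw(&mut self, _fv: u32, _vc: u32, _fi: u32, _ic: u32) {}
}
impl<'cmd> RenderEncoder for BackendBEncoder<'cmd> {
    fn draw(&mut self, _fv: u32, _vc: u32, _fi: u32, _ic: u32) {
        self.commands.push(0); // placeholder for real command recording
    }
}

// The kind of utility function from the discussion: generic over the trait,
// so it never mentions a backend's concrete encoder type or its lifetimes.
fn draw_instances<E: RenderEncoder>(encoder: &mut E, vertex_counts: &[u32]) {
    for (i, &count) in vertex_counts.iter().enumerate() {
        encoder.draw(0, count, i as u32, 1);
    }
}
```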
Aah yeah, I see exactly now. Yeah, the lifetimes in there would propagate to the user - no, no... And I didn't know drop specialization didn't work xD Very surprising. The public trait might be the way to go then.