
Deep Slice #5588

Merged · 5 commits · Dec 18, 2023
Conversation

@danieldresser-ie (Contributor):

Pretty much as discussed.

Some minor adjustments to the fiddly details - I ended up with "taking 0% of an opaque volume sample omits it", and "point samples on the boundary are included by the near clip, but not the far clip". The justifications for these decisions are commented in the code, with the primary motivation being "do something reasonable that lets me move on".
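
For concreteness, a minimal sketch of that boundary rule - all names here are hypothetical, not taken from the actual implementation:

```cpp
// Sketch of the boundary rule described above : a point sample exactly
// on the near clip plane is kept, one exactly on the far clip plane is
// cut. All names are hypothetical, not taken from DeepSlice.cpp.
bool keepPointSample( float z, bool nearClip, float nearClipDepth, bool farClip, float farClipDepth )
{
	if( nearClip && z < nearClipDepth )
	{
		// In front of the near clip plane. A sample exactly on the
		// plane ( z == nearClipDepth ) is kept.
		return false;
	}
	if( farClip && z >= farClipDepth )
	{
		// At or beyond the far clip plane. A sample exactly on the
		// plane is cut.
		return false;
	}
	return true;
}
```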

I'm feeling reasonably good about the core math - the tests are pretty thorough at this point. I suspect discussion may focus on some of the more peripheral stuff (are the default values and descriptions reasonable? Are the specialized `maxDifferenceGreater` and `maxDifferenceLess` arguments to `assertImagesEqual` reasonable? I'm going to need them for testing deep support in Resample as well ... the alternative is to just reimplement `assertImagesEqual` in DeepSliceTest and ResampleTest, which isn't really that bad - it's a pretty short function).

@johnhaddon (Member) left a comment:

Thanks Daniel!

As expected, my inline comments are mostly peripheral to the core stuff - hopefully my suggestions for the OptionalPlug and default values make sense.

> Are the specialized `maxDifferenceGreater` and `maxDifferenceLess` arguments to `assertImagesEqual` reasonable?

The argument list is getting a bit bloated there, and I do find it a bit upsetting to parameterise things in a way that allows logical impossibilities (`maxDifferenceGreater = 1, maxDifferenceLess = 1, maxDifference = 2`, for example). I also find the failure messages a bit confusing (`-0.25 not greater than or equal to -0.0` is a confusing way of saying "`maxDifferenceLess` was exceeded").

> I'm going to need them for testing deep support in Resample as well ... the alternative is to just reimplement `assertImagesEqual` in DeepSliceTest and ResampleTest, which isn't really that bad - it's a pretty short function.

For ResampleTest, are you also going to be using this to check sample counts? If so, I think I'd propose that we add a specific `assertSampleCountsEqual()` method with just the greater and less arguments. It seems a bit odd to convert sample counts to a floating point image, take a difference, and then apply a floating point threshold when what we're interested in is a very specific integer range.

I apologise if we discussed this already and I've forgotten, but can you explain the rationale for having a flatten plug on DeepSlice? It introduces quite a bit of complexity into the implementation, but doesn't have the justification that Resample has, because DeepSlice reduces rather than amplifies the number of samples. The logical extension seems to be that almost all the deep nodes would have an optional flattening mode, and that seems questionable from both a maintenance perspective and a "do one thing and do it well" perspective. Plus, if we really cared about this, we'd have to deny ourselves the internal DeepTidy as well, because that's also generating intermediate data. Unless there's a really good justification (as in Resample), I'd be inclined to keep things simple.

Cheers...
John

[Resolved inline review threads on include/GafferImage/DeepSlice.h, python/GafferImageUI/DeepSliceUI.py and src/GafferImage/DeepSlice.cpp]
Comment on lines 609 to 581:

```cpp
nearClipPlug()->hash( h );
nearClipDepthPlug()->hash( h );
farClipPlug()->hash( h );
```
@johnhaddon (Member):

I think it'd be worth detecting the case where near and far clip are both disabled, and doing a perfect pass-through with the same hash and channelData as the input.

@danieldresser-ie (Contributor, Author):

The main question here is whether to pass through the tidy input or the raw input. I decided I didn't really like "the output of DeepSlice is always tidy unless near and far clip are both off", so instead I pass the tidy input, which loses a lot of the performance benefit of a pass-through. If we really wanted efficient pass-throughs for deep stuff in general, though, we'd probably need to track tidiness better.

@johnhaddon (Member):

This is missing the overriding principle : disabled nodes pass through their input unchanged. Tidying is a part of the process performed by the node, so disabling the node should also disable the tidying.

The hash pass-through could be better too - currently we're calling `ImageProcessor::hashChannelData()`, appending a few things, and then overwriting the result with `tidyInPlug()->channelDataPlug()->hash()`. We should move all that overwritten initialisation after the early-out for the pass-through.
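
A sketch of the structure being suggested here - the signature follows Gaffer's usual `hashChannelData()` convention, but the plug access is assumed (e.g. that `nearClipPlug()` and `farClipPlug()` are the boolean enable plugs), so treat it as illustrative rather than the actual code:

```cpp
// Illustrative sketch of the suggested early-out : do the cheap
// pass-through check first, so none of the hashing work below is
// computed and then discarded. Signatures and plug access are assumed,
// not copied from DeepSlice.cpp.
void DeepSlice::hashChannelData( const GafferImage::ImagePlug *parent, const Gaffer::Context *context, IECore::MurmurHash &h ) const
{
	if( !nearClipPlug()->getValue() && !farClipPlug()->getValue() )
	{
		// Neither clip enabled : perfect pass-through of the raw
		// ( untidied ) input, including its hash.
		h = inPlug()->channelDataPlug()->hash();
		return;
	}

	// Only now do the initialisation that would otherwise have been
	// computed and then overwritten.
	ImageProcessor::hashChannelData( parent, context, h );
	nearClipPlug()->hash( h );
	nearClipDepthPlug()->hash( h );
	farClipPlug()->hash( h );
	// ... remaining inputs ...
}
```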

```cpp
// Fill the result CompoundObject

CompoundObjectPtr result = new CompoundObject;
result->members()[ g_accumCountsName ] = std::move( accumCountsData );
```
@johnhaddon (Member):

Is the move worth it here? The only thing it's stealing is the reference count, which I wouldn't expect to be important given how much other work we're doing.

@danieldresser-ie (Contributor, Author):

Hmm, I've been trying to get into the habit of not incurring extra reference counts. It's true that the impact of that is probably not measurable in this case, but it seems like this is probably the better habit to be in? When you ask if it's worth it, what cost are we asking if it's worth? Typing 11 more characters? I guess my proposed principle is more: when initializing the members of a CompoundObject, you should only do a double-increment of the reference counts if there is some reason you need to do that?
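
For reference, a minimal sketch of what the two forms do differently - this is generic IECore usage, not the DeepSlice code itself:

```cpp
#include "IECore/CompoundObject.h"
#include "IECore/VectorTypedData.h"

#include <utility>

using namespace IECore;

void example()
{
	IntVectorDataPtr countsData = new IntVectorData;
	CompoundObjectPtr result = new CompoundObject;

	// Copy-assignment increments the reference count, leaving
	// countsData as a second live reference to the same object :
	//     result->members()["counts"] = countsData;

	// Move-assignment transfers the single reference instead, so no
	// increment/decrement pair is needed - but countsData is left
	// null, and dereferencing it afterwards would crash.
	result->members()["counts"] = std::move( countsData );
}
```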

@johnhaddon (Member):

> When you ask if it's worth it, what cost are we asking if it's worth?

I was mostly considering the mental cost to the reader and maintainer - understanding that this is a move on the intrusive_ptr, not the data; wondering if it actually improved performance; wondering why it's there when it seems unlikely to have done so and we don't do this elsewhere (did the writer think they were moving the data?); then having to check that we don't use the moved-from objects afterwards. That was my thought process when reading it, anyway...

> I guess my proposed principle is more: when initializing the members of a CompoundObject, you should only do a double-increment of the reference counts if there is some reason you need to do that?

I think my proposed principle would be a bit different : do the obvious thing unless there's a clear performance benefit to doing something else. Now we just need to debate what obvious means in this context :)

I don't think it's worth holding up the review right now, but I'd be interested in following up when we next chat.

@danieldresser-ie (Contributor, Author):

I guess my thought is that "constructing a CompoundObject or CompoundData" is a pretty common pattern, and we probably should be doing this everywhere we use this pattern, since some of them probably are a lot closer to being performance sensitive. And if we start doing this everywhere we construct a CompoundObject, then doing it here would potentially have low to negative mental cost - we should be asking "why not" any time we see this sort of CompoundObject construction without std::move.

> we don't do this elsewhere

My assumption was that the only reason we're not doing this generally is that it's only fairly recently that we've been able to use std::move.

Glancing at the codebase, I can see several places where it would certainly make sense to me:
https://github.com/GafferHQ/gaffer/blob/main/src/Gaffer/Spreadsheet.cpp#L1172
https://github.com/GafferHQ/gaffer/blob/main/src/GafferScene/Options.cpp#L124

Those are more plausibly performance sensitive than DeepSlice, though admittedly I can't see any smoking gun where the difference in performance would be really significant.

A lot of this does come down purely to perspective - you see it as a mental cost to notice an std::move; I kinda feel like we ought to be more aware of the reference counting that happens when a smart pointer assignment happens ... if you see an std::move, then less is actually going to happen, and you should have less to think about.

> having to check that we don't use the moved-from objects afterwards

This is another one where it almost feels to me more like reducing mental load than increasing it - an std::move clearly documents the intent that the variable will not be used afterwards (backed by a handy segfault if that documentation is wrong).

[Further resolved inline review threads on src/GafferImage/DeepSlice.cpp and python/GafferImageUI/DeepSliceUI.py]
@danieldresser-ie (Contributor, Author):

I've now addressed all feedback, and added changelog entries. I've squashed the changes to `assertImagesEqual`, but left one fix commit with all the changes to DeepSlice, because it felt like it might be easier that way to review the fixes.

Documenting our conversation this morning: you were OK with the flatten option in this case, on the basis that it doesn't set a precedent for needing it everywhere - this is just a case where it is commonly used (all known use cases involve flattening), and the performance difference is noticeable. I wasn't sure if you'd be happier if I defaulted it off, though? I had defaulted it on because it is faster and more convenient in the use cases I can think of (which are mostly about inspection), and it should be easy enough to turn it off if someone finds a use case that needs it off. But thinking about platonic nodes, rather than use cases, it might make sense to default it off, and I'm happy to do that if you prefer.

And also from the conversation this morning: rather than `maxDifferenceGreater`, you were OK with specifying a range to the existing parameter, e.g. `maxDifference = ( -0.3, 0.1 )`. This isn't very discoverable, but it works fine for the use cases where I need it, and I'm probably the main person writing image tests anyway.

@johnhaddon (Member):

> added changelog entries

Thanks - could you prefix the `assertImagesEqual()` one with `ImageTestCase : ` please, and add backtick formatting on the function.

> I wasn't sure if you'd be happier if I defaulted it off, though? ... thinking about platonic nodes, rather than use cases, it might make sense to default it off

Yeah, I think that does make sense - let's do that.

Thanks for the quick update. I've resolved all the review comments except the pass-through one, which I do feel strongly should omit the tidy. Feel free to squash and merge when that's sorted...

Cheers...
John

@danieldresser-ie (Contributor, Author):

Addressed feedback and squashed (aside from the std::move discussion). I haven't hit merge yet in case you want to glance at it one last time, but it should be good to go.

@johnhaddon (Member):

Thanks for the update - it looks like the channelDataHash() is still passing through the tidied input though (while the compute is passing through the raw input, as we discussed)? Could you sort that and merge?

I could not detect any reason for this copy - I can't see anything that would invalidate the string held by the context during the lifetime of this variable.
These images are intended to make it easy to exercise all cases in deep processing code.

Color values are uniform random. Alpha values are a mixture of uniform random values and the values most likely to trigger special cases in deep processing: values clustered very near 1, values of exactly 1, and values of 0.

Depth values range from 0 to 4. The four images differ in how depth values are generated: "Points" means Z and ZBack are set the same, whereas "Volumes" allows ZBack to range higher than Z; "Float" means the depths are uniformly distributed, whereas "Int" limits depths to integers (making it more likely that a deep merge will produce exact collisions).

I've put the script used to generate these images in contrib - we could save 170K of test file size by generating them on the fly, but we try to avoid having a dependency on OSL in other tests, and having constant test images makes sense.
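
Purely to illustrate the distributions described above, a small sketch of generating one such sample - the real generator is the OSL-based script in contrib, and every name here is hypothetical:

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Hypothetical stand-in for one deep sample in the test images.
struct DeepSample
{
	float alpha;
	float z;
	float zBack;
};

DeepSample makeSample( std::mt19937 &rng, bool volumes, bool intDepths )
{
	std::uniform_real_distribution<float> uniform01( 0.0f, 1.0f );
	std::uniform_real_distribution<float> depth( 0.0f, 4.0f );
	std::uniform_int_distribution<int> alphaKind( 0, 3 );

	DeepSample s;

	// Alpha : mostly uniform random, mixed with the values most likely
	// to trigger special cases - exactly 0, exactly 1, and very near 1.
	switch( alphaKind( rng ) )
	{
		case 0 : s.alpha = 0.0f; break;
		case 1 : s.alpha = 1.0f; break;
		case 2 : s.alpha = 1.0f - uniform01( rng ) * 1e-6f; break;
		default : s.alpha = uniform01( rng ); break;
	}

	// Depth : the "Int" variants limit depths to integers, making exact
	// collisions during a deep merge more likely.
	s.z = intDepths ? std::floor( depth( rng ) ) : depth( rng );

	// "Points" set ZBack == Z; "Volumes" let ZBack range beyond Z.
	s.zBack = volumes ? std::min( s.z + depth( rng ), 4.0f ) : s.z;

	return s;
}
```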
This allows using a different tolerance for pixels where A > B and pixels where A < B.
@danieldresser-ie (Contributor, Author):

Merging after fixing the last bug John caught, and one stray comment in the header that Murray thankfully caught right before I uploaded.

@danieldresser-ie merged commit c880dd7 into GafferHQ:main on Dec 18, 2023
4 checks passed