Remove deprecations #353
Conversation
grudge/models/euler.py (outdated diff under review):

    -    return self.mass.array_context
    +    return (
    +        self.mass.array_context
    +        if isinstance(self.mass, DOFArray)
    +        # NOTE: ConservedEulerField also holds the fluxes sometimes...
    +        else self.mass[0].array_context)
I'm not sure what the status quo is on this, where the variables and the flux share the same container. Is it ok to get the array context like this?
Well, I definitely do not love the way this looks. 🙂
The root cause of the mess, I think, is that we haven't decided the fate of inducer/arraycontext#162.
IMO, this can go one of two ways that are somewhat sane:
- Either: we say "array containers do not know their array context", this problem here goes away, `DOFArray.array_context` disappears, and we have to contend with large-scale interface breakage, because many places rely on fishing array contexts out of `DOFArray`s.
- Or: we double down on array containers knowing their array context, in which case this struct should receive its own stored `array_context` field that is set by `freeze` and `thaw`. This depends on registration with `with_array_context`, which could conceivably be automated by `with_container_arithmetic`. Note that this is all quite inefficient: for each `freeze` and `thaw`, we traverse the whole container hierarchy twice, once to do the freeze/thaw and once again to update all the buried array contexts.
I'm not loving our choices here, but I don't think that continuing without picking one of the two is a great option. We'll just get more mess like the above. I'm somewhat leaning towards the first. How do you all feel about this?
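For concreteness, here is a minimal sketch of what the second option might look like. The class and helper names below are made up (not actual grudge or arraycontext code), and a real container would also need the usual array-container registration.

```python
# Sketch only: the container stores its own array context, and freeze/thaw
# must make a second pass to keep it up to date.
from dataclasses import dataclass, replace
from typing import Optional

from arraycontext import ArrayContext
from meshmode.dof_array import DOFArray


@dataclass(frozen=True)
class MyState:
    mass: DOFArray
    energy: DOFArray

    # stored context: thaw would set this, freeze would reset it to None
    array_context: Optional[ArrayContext] = None


def update_stored_array_context(
        state: MyState, actx: Optional[ArrayContext]) -> MyState:
    # This is the second traversal mentioned above: after freeze/thaw has
    # handled the leaf arrays, walk the container again just to refresh the
    # buried context (here trivially, via dataclasses.replace). Hooking this
    # into freeze/thaw is what the with_array_context registration would do.
    return replace(state, array_context=actx)
```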
FWIW, for this particular instance, I've pushed a change that switches this to explicit-actx: 587690a. This generally feels like the right thing to do, IMO.
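(Sketch only, not the contents of 587690a: "explicit actx" here means the array context arrives as an argument rather than being recovered from the state container, so the `isinstance` dance above becomes unnecessary. The function name and signature below are hypothetical.)

```python
def apply_operator(actx, dcoll, state):
    # use the actx that was passed in rather than state.array_context
    return actx.np.sqrt(state.mass)
```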
Hm, this seems like a bigger change and I'm more than happy to revert that commit for this PR.
That said, I'm also mostly in favor of the first option, at least for the case where the callee needs an array context and is forced to fish one out of whatever container it received.
However, how is the first option supposed to work with arithmetic? From what I recall, that `_cls_has_array_context_attr` thing was introduced so that the arithmetic knows the array types of the context and can broadcast across object arrays and whatnot.
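For reference, a rough sketch of how that flag gets used: the decorator and its keywords are the arraycontext ones discussed here, but the container below is made up and the exact argument set may differ between versions.

```python
from dataclasses import dataclass

import numpy as np
from arraycontext import dataclass_array_container, with_container_arithmetic
from meshmode.dof_array import DOFArray


@with_container_arithmetic(
        bcast_obj_array=False,
        bcast_container_types=(DOFArray, np.ndarray),
        rel_comparison=True,
        _cls_has_array_context_attr=True)
@dataclass_array_container
@dataclass(frozen=True)
class DemoState:
    mass: DOFArray
    momentum: np.ndarray  # object array of DOFArrays

    @property
    def array_context(self):
        # the attribute the flag above refers to: arithmetic generated by
        # the decorator may consult this to learn the actual array types
        return self.mass.array_context
```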
Well, that would have to go, too (with reasonable deprecation periods, of course). I'm currently knee-deep in `with_container_arithmetic` to make inducer/arraycontext#190 happen, and I'm hating my life. All this semi-contradictory "well, if it's this kind of array, do something special" nonsense. The more we can simplify the logic there, the better.
> The more we can simplify the logic there, the better.

Hard agree on that! That chunk of code has always been a black box of spaghetti to me 😢
I'm possibly misunderstanding, but isn't this what `get_container_context_recursively` is for? Does the extra traversal really contribute much time? Seems like it would be negligible compared to all of the other processing going on.
It's not that `get_container_context_recursively` is necessarily costly time-wise (I don't know---it might be?). It's that it can fail: all items in an array container can be, more or less, anything, including scalars, so it can always fail. As a result, any code relying on it is not much better than the spaghetti above, at least IMO.
> It's not that `get_container_context_recursively` is necessarily costly time-wise (I don't know---it might be?). It's that it can fail: all items in an array container can be, more or less, anything, including scalars, so it can always fail. As a result, any code relying on it is not much better than the spaghetti above, at least IMO.

Failure = `get_container_context_recursively` returns `None`? I guess I might be the only fool waving a flag that says he prefers the 2nd option; double down.
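To make the failure mode concrete, a minimal example assuming the behavior described in this thread (the lookup returns `None` when there is nothing to find):

```python
import numpy as np
from arraycontext import get_container_context_recursively

# a perfectly valid array container whose leaves are all scalars
velocity = np.array([0.0, 0.0, 1.0], dtype=object)

# with no actual device arrays anywhere in the container, there is no array
# context to recover, and the caller is left to handle the None case itself
actx = get_container_context_recursively(velocity)
assert actx is None
```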
grudge/array_context.py (outdated diff under review):

    def __call__(self):
        actx = super().__call__()
        if actx.allocator is not None:
            return actx

        from pyopencl.tools import ImmediateAllocator, MemoryPool
        alloc = MemoryPool(ImmediateAllocator(actx.queue))

        return self.actx_class(
            actx.queue,
            allocator=alloc,
            force_device_scalars=self.force_device_scalars)
Added a memory pool to the tests, since the grudge `PyOpenCLArrayContext` recommended it. Not sure if this is a good idea?
That should be OK, as long as that array context gets freed.
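A hedged sketch of what "gets freed" can look like on the test side; the setup below is illustrative, but `MemoryPool.free_held()` is standard pyopencl.

```python
import pyopencl as cl
from pyopencl.tools import ImmediateAllocator, MemoryPool

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
pool = MemoryPool(ImmediateAllocator(queue))

# ... build the array context with allocator=pool and run the tests ...

# the pool keeps device allocations alive for reuse, so either drop all
# references to the array context (and thus the pool) when the tests are
# done, or release the held blocks explicitly:
pool.free_held()
```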
@inducer This should be good for a look now. I left two questions above, but otherwise it seems ok. The remaining (5k-ish, down from 20k-ish) warnings are about
Not quite sure what to do about those, since they're not simple ports, so left them alone 😁
@inducer The examples run on the CI seems to be taking ~20 min more than before (mostly on the Euler vortex). It doesn't seem to appreciate the changes for some reason... Previous run on this PR:
Thanks for pointing that out. That "previous run" was 33 min, and the most recent time on main was 35 min... optimistically, this is just noise.
There's quite a lot of them in the latest version.
This was mainly motivated by porting from the meshmode `flatten` to the arraycontext one, so that we can remove all the container traversal stuff from there, but I went on and cleaned some other things up too. Probably best viewed commit-by-commit, since some of them are straightforward.
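Roughly what the port amounts to, as a sketch only (actual call sites in this PR may differ): the arraycontext `flatten`/`unflatten` take the array context explicitly and work on arbitrary array containers, so grudge no longer needs its own traversal helpers for this.

```python
from arraycontext import ArrayContext, flatten, unflatten


def roundtrip(actx: ArrayContext, state):
    # state: any array container, e.g. a DOFArray or a conserved-state struct
    flat = flatten(state, actx)          # container -> one flat device array
    return unflatten(state, flat, actx)  # template + flat -> container again
```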