
Remove deprecations #353

Merged: 16 commits into inducer:main from fix-deprecations on Jul 16, 2024
Conversation

@alexfikl alexfikl (Collaborator) commented Jul 12, 2024

There's quite a lot of them in the latest version.

This was mainly motivated by porting from the meshmode flatten to the arraycontext one, so that we can remove all the container traversal stuff from there, but I went ahead and cleaned up some other things too.

Probably best viewed commit-by-commit, since some of them are straightforward.

Comment on lines 79 to 86
```diff
-return self.mass.array_context
+return (
+    self.mass.array_context
+    if isinstance(self.mass, DOFArray)
+    # NOTE: ConservedEulerField also holds the fluxes sometimes...
+    else self.mass[0].array_context)
```
Collaborator Author

I'm not sure what the status quo is on this, where the variables and the flux share the same container. Is it ok to get the array context like this?

Owner

Well, I definitely do not love the way this looks. 🙂

The root cause of the mess, I think, is that we haven't decided the fate of inducer/arraycontext#162.

IMO, this can go one of two ways that are somewhat sane:

  • Either: We say "array containers do not know their array context", this problem here goes away, DOFArray.array_context disappears, and we have to contend with large-scale interface breakage, because many places rely on fishing array contexts out of DOFArrays.
  • Or: we double down on array containers knowing their array context, in which case this struct should receive its own stored array_context field that is set by freeze and thaw. This depends on registration with with_array_context, which could conceivably be automated by with_container_arithmetic. Note that this is all quite inefficient: For each freeze and thaw, we traverse the whole container hierarchy twice. Once to do the freeze/thaw, and once again to update all the buried array contexts.

I'm not loving our choices here, but I don't think that continuing without picking one of the two is a great option. We'll just get more mess like the above. I'm somewhat leaning towards the first. How do you all feel about this?

cc @majosm @kaushikcfd @MTCam
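To make the cost of the second option concrete, here is a toy model of a container that caches its array context and must be walked a second time on every thaw. All names here (`ToyContext`, `ToyArray`, `ToyState`, `thaw`) are illustrative only, not arraycontext's actual API.

```python
# Toy model of option 2: containers cache their array context, and thaw
# must traverse the hierarchy twice, once to thaw the leaves and once to
# refresh the cached contexts. Illustrative only; not arraycontext code.
from dataclasses import dataclass, fields
from typing import Optional


class ToyContext:
    """Stand-in for an array context."""


@dataclass
class ToyArray:
    data: list
    array_context: Optional[ToyContext] = None


@dataclass
class ToyState:
    mass: ToyArray
    energy: ToyArray
    array_context: Optional[ToyContext] = None


def thaw(actx: ToyContext, state: ToyState) -> ToyState:
    # First traversal: "thaw" the leaves (here just a copy).
    thawed = ToyState(
        mass=ToyArray(list(state.mass.data)),
        energy=ToyArray(list(state.energy.data)))

    # Second traversal: update all the buried array contexts.
    for f in fields(thawed):
        child = getattr(thawed, f.name)
        if isinstance(child, ToyArray):
            child.array_context = actx
    thawed.array_context = actx
    return thawed
```

The doubled traversal is exactly the inefficiency noted above: the context update cannot piggyback on the thaw itself without threading the context through every container's constructor.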

Owner

FWIW, for this particular instance, I've pushed a change that switches this to explicit-actx: 587690a. This generally feels like the right thing to do, IMO.
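For reference, the explicit-actx style can be sketched as follows; the function and container names are hypothetical, not the actual change in 587690a.

```python
# Hypothetical sketch of the explicit-actx style: the caller passes the
# array context as an argument, so the callee never has to fish it out of
# the container (no isinstance() dance on state.mass).
from dataclasses import dataclass


@dataclass
class State:
    mass: list      # stand-in for a DOFArray or an object array of them
    energy: list


def scaled_mass(actx, state, factor):
    # `actx` would be used for device operations; here it is only threaded
    # through to show the calling convention.
    return [factor * m for m in state.mass]
```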

Collaborator Author

Hm, this seems like a bigger change and I'm more than happy to revert that commit for this PR.

That said, I'm also mostly in favor of the first option, at least for the case where the callee needs an array context and is forced to fish one out of whatever container it received.

However, how is the first option supposed to work with arithmetic? From what I recall, that `_cls_has_array_context_attr` thing was introduced so that the arithmetic knows the array types of the context and can broadcast across object arrays and whatnot.
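For context on the broadcasting question, here is a stripped-down illustration of scalar broadcasting over a container that needs no array context at all. This mirrors the intent of `with_container_arithmetic`, not its implementation; the `Fields` class is made up.

```python
# Toy container arithmetic: broadcast a scalar across all fields without
# consulting an array context. Hypothetical class, for illustration only.
from dataclasses import dataclass


@dataclass
class Fields:
    mass: float
    energy: float

    def __add__(self, other):
        if isinstance(other, Fields):
            return Fields(self.mass + other.mass, self.energy + other.energy)
        # Scalar broadcast: no array-type knowledge needed for this case.
        return Fields(self.mass + other, self.energy + other)

    __radd__ = __add__
```

The hard cases are the ones this toy dodges: deciding whether an unknown operand is "an array of this context" (and should broadcast) or another container (and should recurse), which is what the array-type registration currently answers.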

Owner

Well, that would have to go, too (with reasonable deprecation periods, of course). I'm currently knee-deep in with_container_arithmetic to make inducer/arraycontext#190 happen, and I'm hating my life. All this semi-contradictory "well if it's this kind of array, do something special" nonsense. The more we can simplify the logic there, the better.

@alexfikl alexfikl (Collaborator Author) commented Jul 16, 2024

> The more we can simplify the logic there, the better.

Hard agree on that! That chunk of code has always been a black box of spaghetti to me 😢

Collaborator

I'm possibly misunderstanding, but isn't this what get_container_context_recursively is for? Does the extra traversal really contribute much time? Seems like it would be negligible compared to all of the other processing going on.

Owner

It's not that get_container_context_recursively is necessarily costly time-wise (I don't know---it might be?). It's that it can fail: All items in an array container can be, more or less, anything, including scalars. As a result, it can always fail. As a result, any code relying on it is not much better than the spaghetti above, at least IMO.
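The failure mode can be seen in a toy version of the recursive lookup. This mimics only the shape of `get_container_context_recursively`, not its real code: when every leaf is a plain scalar, there is simply no context to find.

```python
# Toy recursive context lookup: returns None when no leaf carries an
# array context, which is exactly the failure case discussed above.
# Illustrative only; not arraycontext's implementation.
from typing import Any, Optional


class Context:
    """Stand-in for an array context."""


def find_context_recursively(obj: Any) -> Optional[Context]:
    if isinstance(obj, Context):
        return obj
    ctx = getattr(obj, "array_context", None)
    if isinstance(ctx, Context):
        return ctx
    if isinstance(obj, (tuple, list)):
        for item in obj:
            found = find_context_recursively(item)
            if found is not None:
                return found
    # All items were scalars (or otherwise context-free): nothing to return.
    return None
```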

Collaborator

> It's not that get_container_context_recursively is necessarily costly time-wise (I don't know---it might be?). It's that it can fail: All items in an array container can be, more or less, anything, including scalars. As a result, it can always fail. As a result, any code relying on it is not much better than the spaghetti above, at least IMO.

Failure = get_container_context_recursively returns None? I guess I might be the only fool waving a flag that says he prefers the 2nd option; double down.

(Review thread on grudge/models/em.py: outdated, resolved)
@alexfikl force-pushed the fix-deprecations branch 2 times, most recently from d0b588c to 3993350, July 13, 2024 08:14
Comment on lines 513 to 522
```python
def __call__(self):
    actx = super().__call__()
    if actx.allocator is not None:
        return actx

    from pyopencl.tools import ImmediateAllocator, MemoryPool
    alloc = MemoryPool(ImmediateAllocator(actx.queue))

    return self.actx_class(
        actx.queue,
        allocator=alloc,
        force_device_scalars=self.force_device_scalars)
```
Collaborator Author

Added a memory pool to the tests, since the grudge PyOpenCLArrayContext recommended it. Not sure if this is a good idea?

Owner

That should be OK, as long as that array context gets freed.

@alexfikl alexfikl marked this pull request as ready for review July 13, 2024 12:07
@alexfikl alexfikl (Collaborator Author) commented Jul 13, 2024

@inducer This should be good for a look now. I left two questions above, but otherwise it seems ok. The remaining (5k-ish, down from 20k-ish) warnings are about

  • Some SVM InconsistentOpenCLQueueWarning warnings.
  • Some warnings about the number of arguments in various kernels.
  • Warnings about some unevaluated arguments to functions.
  • Some unused inames.
  • Some warnings about accessing Discretization.mpi_communicator.

Not quite sure what to do about those, since they're not simple ports, so I left them alone 😁

(Review thread on grudge/op.py: resolved)
@inducer inducer enabled auto-merge (rebase) July 16, 2024 17:45
@alexfikl alexfikl (Collaborator Author) commented:

@inducer The examples CI run seems to be taking about 20 min longer than before (mostly on the Euler vortex). It doesn't seem to appreciate the changes, for some reason...

Previous run on this PR:
https://github.com/inducer/grudge/actions/runs/9925754509/job/27491794872?pr=353

@inducer inducer merged commit 7181e8b into inducer:main Jul 16, 2024
6 checks passed
@alexfikl alexfikl deleted the fix-deprecations branch July 16, 2024 19:43
@inducer inducer (Owner) commented Jul 23, 2024

Thanks for pointing that out. That "previous run" was 33m, the most recent time on main was 35m... optimistically, this is just noise.

4 participants