Currently, resources are exclusively owned by a single device. This model is fine in many cases, but it does not cover cases where multiple devices share a memory space. This ended up being an awkward limitation for our CPU cores architecture (see #39), and it will hit us again when we try to model memory locations accessible by both the CPU and GPU, as well as when managing memory use across different NUMA domains.
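To make the problem concrete, here is a minimal sketch of one possible direction: memory is modeled as a shared `MemorySpace` pool that devices reference rather than own, so a CPU and GPU can both account against the same managed-memory pool and each NUMA domain can be its own space. The class and attribute names here are hypothetical, not the current device model's API.

```python
from dataclasses import dataclass, field


@dataclass
class MemorySpace:
    """A pool of memory that one or more devices can access (hypothetical)."""
    name: str
    capacity_bytes: int
    used_bytes: int = 0

    def allocate(self, nbytes: int) -> bool:
        # Reserve memory from the shared pool; all attached devices see the same usage.
        if self.used_bytes + nbytes > self.capacity_bytes:
            return False
        self.used_bytes += nbytes
        return True


@dataclass
class Device:
    """A compute device that references, rather than owns, memory spaces."""
    name: str
    memory_spaces: list[MemorySpace] = field(default_factory=list)


# Example topology: CPU and GPU both reference a shared managed-memory space,
# while a NUMA domain is a separate space shared only by the cores attached to it.
managed = MemorySpace("managed", capacity_bytes=16 * 2**30)
numa0 = MemorySpace("numa0", capacity_bytes=32 * 2**30)

cpu = Device("cpu", memory_spaces=[numa0, managed])
gpu = Device("gpu0", memory_spaces=[managed])
```

The scheduler would then track usage per memory space instead of per device, which is presumably part of the scheduler change mentioned below.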
@arthurp mentioned in #39 (comment) that this will require changes to both the device model and the scheduler. I agree. @arthurp, if you have any additional thoughts beyond this, it'd be nice if you could write them up at some point for whoever ends up tackling it.