Update resource managing #125
base: main
Conversation
My main concern is that I think @sestephens73 had a bunch of reasons for moving resource allocation to mapping. I want to make sure we can resolve those before moving it back.
Yes, please point them out. Regardless, this PR works; the main bottleneck was that updating resources at mapping time is bad.
Let me also add @yinengy for the review.
if len(task.req.environment.placement) > 1:
    raise NotImplementedError("Multidevice not supported")
for device in task.req.environment.placement:
    self._available_resources.register_parray_move(parray, device)
This seems wrong. The data movement tasks already exist and are already run before this code executes. This is only updating the information once the "parent" task is starting to be run.
Wait, it also happens at creation, so I'm not sure then. Why is it registered twice onto the same device?
I think this is a mistake. Then is it fine to just remove this registration here?
@@ -1804,6 +1797,19 @@ def _map_tasks(self):
task: Optional[Task] = self._dequeue_spawned_task()
if task:
    if not task.assigned:
        _new_launched_task = []
        for n_task in self._launched_tasks:
Looping over all launched tasks for every mapping decision seems really expensive.
Agreed, this should also be refined (see the sketch below), but the point of this PR is that we should update resources at launch time, not at mapping time.
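As one possible refinement, not part of this PR, completed tasks could enqueue themselves so the mapper only walks tasks that actually finished instead of scanning every launched task per mapping decision. A minimal sketch, assuming the same n_task.req.devices / deallocate_resources interfaces shown in the diff; CompletionTracker and its method names are hypothetical:

from collections import deque

class CompletionTracker:
    """Illustrative only: drain completed tasks instead of rescanning all launched tasks."""

    def __init__(self):
        self._completed = deque()

    def notify_completed(self, task):
        # Called once when a task transitions to TaskCompleted.
        self._completed.append(task)

    def release_completed(self, resource_pool):
        # The mapper drains only tasks that actually finished.
        while self._completed:
            task = self._completed.popleft()
            for dev in task.req.devices:
                resource_pool.deallocate_resources(dev, task.req.resources)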
task_state = n_task.state
if isinstance(task_state, TaskCompleted):
    for dev in n_task.req.devices:
        self._available_resources.deallocate_resources(dev, n_task.req.resources)
I think that this doesn't handle Data Movement Tasks? Their resources are still bundled with their "parents", but their resources are no longer allocated 'ahead' of time. Only compute tasks are being tracked and deallocated. This means that they currently are considered to take 'no resources' and can be over-scheduled.
Hmm, maybe not. It seems like data movement tasks are still allocated at mapping time and freed when the "parent" completes. Sorry.
Although (sorry for the stream of consciousness comments on this), doesn't this mean it can still deadlock in the same way as the current mapper? Since it can allocate a bunch of data movement tasks out of order (as there is no guaranteed order to the mapping phase) and prevent the next needed dependency from being mapped (and running) due to being out of resources?
Yes, the deadlock can still happen. OK, let's treat this PR as the immediate fix for the performance issue. I don't think the problem you pointed out can be resolved by this PR; let's handle it in a separate PR with more discussion.
for name, v in resources.items():
    if not self._update_resource(d, name, v * multiplier, block):
        success = False
        break
If this function no longer uses the monitor, it should not be named "atomically".
@@ -1126,48 +1123,45 @@ def check_resources_availability(self, d: Device, resources: ResourceDict):
:param resources: The resources to deallocate.
"""
logger.debug("[ResourcePool] Acquiring monitor in check_resources_availability()")
with self._monitor:
Are we sure removing this monitor is safe?
I suppose it is if only the scheduling thread makes this call.
I checked by printing the calling thread, and it was only called by the scheduler. But let me double-check this.
It can also be called by any thread that creates a PArray:
Line 52 in d215415
get_scheduler_context().scheduler._available_resources.track_parray(self)
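For context, a minimal sketch (not the Parla code; ToyResourcePool and its fields are hypothetical) of the check-then-act race the monitor prevents when more than one thread, e.g. the scheduler plus a thread creating a PArray, touches the same pool:

import threading

class ToyResourcePool:
    """Illustrative only: a pool guarded by a monitor (condition variable)."""

    def __init__(self, memory):
        self._monitor = threading.Condition(threading.Lock())
        self._memory = memory

    def check_and_allocate(self, amount):
        # Without the monitor, two threads can both see enough memory,
        # both subtract, and drive the pool negative (over-allocation).
        with self._monitor:
            if self._memory >= amount:
                self._memory -= amount
                return True
            return False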
@@ -1126,48 +1123,45 @@ def check_resources_availability(self, d: Device, resources: ResourceDict):
:param resources: The resources to deallocate.
"""
logger.debug("[ResourcePool] Acquiring monitor in check_resources_availability()")
Delete this line too if deleting the monitor
if amount > dres[name]:
    is_available = False
logger.debug("Resource check for %d %s on device %r: %s", amount, name, d, "Passed" if is_available else "Failed")
logger.debug("[ResourcePool] Releasing monitor in check_resources_availability()")
Delete this too if removing the monitor.
assert dres[res] >= 0, "{}.{} was over allocated".format(dev, res)
# TODO(lhc): Due to floating point, it doesn't work.
# assert dres[res] <= dev.resources[res], "{}.{} was over deallocated".format(dev, res)
# assert dres[res] >= 0, "{}.{} was over allocated".format(dev, res)
Gross. Can we make these integers?
One possible way is to use the number of threads (or some larger number) as the default VCU total, instead of 1.
@dialecticDolt is that fine?
Also gross. We might add the resources back up and exceed the original amount because of rounding errors, and the order in which tasks complete matters too.
Using https://docs.python.org/3/library/fractions.html is probably the cleanest way to handle this.
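To illustrate the suggestion, a minimal sketch (not the PR's code; the VCU amounts are made-up) of why float accounting breaks the commented-out asserts while fractions.Fraction keeps the arithmetic exact regardless of completion order:

from fractions import Fraction

# Hypothetical device with 1.0 VCU and ten tasks requesting 0.1 VCU each.
allocations_float = [0.1] * 10
available = 1.0
for a in allocations_float:          # allocate at launch
    available -= a
for a in allocations_float:          # deallocate as tasks complete
    available += a
print(available)                     # typically slightly above 1.0, i.e. "over deallocated"

# With Fraction the bookkeeping is exact, so the over/under-allocation
# asserts can stay enabled.
allocations = [Fraction(1, 10)] * 10
available = Fraction(1)
for a in allocations:
    available -= a
for a in allocations:
    available += a
assert available == Fraction(1)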
This PR updates two main things:
With this update, a benchmark of 2000 independent tasks (with the GIL, 76000 microseconds per task, depth 100, 5 MB) took 7 seconds per iteration.
Without it, each iteration took more than 130 seconds.