Update resource managing #125

Open · wants to merge 1 commit into main

Conversation

@nicelhc13 (Contributor)

This PR updates two main things:

  1. Replace lock-based resource tracking with a lock-free scheme by making the scheduler manage resources (see the sketch below).
  2. Move resource updates from mapping to launching; updating resources at mapping critically blocked task scheduling.

With this update, on 2000 independent tasks (with the GIL, 76000 microseconds per task, depth 100, 5 MB), each iteration took 7 seconds.

Without it, each iteration took more than 130 seconds.
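
A minimal sketch of the intended "debit at launch, credit at completion" flow. The names `ResourcePool`, `allocate`, and `deallocate` are simplified stand-ins for the scheduler internals, not the actual API (the diff's `deallocate_resources` plays the role of `deallocate` here):

```python
# Sketch only: resources are debited when a task launches and credited back
# when the scheduler observes its completion, so mapping never blocks on them.
class ResourcePool:
    def __init__(self, capacity):
        # capacity: {device: {resource name: total amount}}
        self._free = {dev: dict(res) for dev, res in capacity.items()}

    def allocate(self, dev, resources):
        # Called only from the scheduler thread at launch time,
        # which is why no lock/monitor is needed here.
        if any(self._free[dev][name] < amount for name, amount in resources.items()):
            return False
        for name, amount in resources.items():
            self._free[dev][name] -= amount
        return True

    def deallocate(self, dev, resources):
        # Called when the scheduler sees a launched task reach a completed state.
        for name, amount in resources.items():
            self._free[dev][name] += amount


# Usage: launch a task only if allocation succeeds; free resources on completion.
pool = ResourcePool({"gpu0": {"memory": 16_000, "vcus": 1}})
if pool.allocate("gpu0", {"memory": 5_000, "vcus": 1}):
    # ... launch the task, later observe its completion ...
    pool.deallocate("gpu0", {"memory": 5_000, "vcus": 1})
```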

@nicelhc13 requested review from wlruys and sestephens73 · May 3, 2022 19:39
@wlruys (Contributor) commented May 3, 2022

My main concern is that I think @sestephens73 had a bunch of reasons for moving resource allocation to mapping. I want to make sure we can resolve those before moving it back.

@nicelhc13 (Contributor, Author)

Yes, please point them out. Regardless of whether this PR works as-is, the main bottleneck was that updating resources at mapping is bad.

@nicelhc13 (Contributor, Author)

Let me also add @yinengy as a reviewer.

@nicelhc13 requested a review from yinengy · May 3, 2022 20:08
if len(task.req.environment.placement) > 1:
    raise NotImplementedError("Multidevice not supported")
for device in task.req.environment.placement:
    self._available_resources.register_parray_move(parray, device)
Contributor

This seems wrong. The data movement tasks already exist and have already run before this code executes. This only updates the information once the "parent" task starts running.

Contributor

Wait, it also happens at creation, so now I'm not sure. Why is it registered twice on the same device?

Contributor Author

I think this must be a mistake. Then is it fine to just remove this registration here?

@@ -1804,6 +1797,19 @@ def _map_tasks(self):
task: Optional[Task] = self._dequeue_spawned_task()
if task:
if not task.assigned:
_new_launched_task = []
for n_task in self._launched_tasks:
Contributor

Looping over all launched tasks for every mapping decision seems really expensive.

Contributor Author

Agreed, this should also be refined, but the point of this PR is that we should update resources at launching, not at mapping.
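
One possible refinement (a sketch only, not part of this PR; `CompletionTracker`, `notify_completed`, and `drain_completed` are hypothetical names): instead of rescanning `self._launched_tasks` on every mapping pass, completed tasks could push themselves onto a queue that the scheduler drains, so the deallocation cost is proportional to the number of newly completed tasks.

```python
# Sketch of an event-driven alternative to scanning all launched tasks.
import queue


class CompletionTracker:
    def __init__(self, resource_pool):
        self._completed = queue.SimpleQueue()
        self._pool = resource_pool

    def notify_completed(self, task):
        # Called by the worker when a task finishes.
        self._completed.put(task)

    def drain_completed(self):
        # Called by the scheduler loop: O(newly completed tasks),
        # not O(all launched tasks).
        while True:
            try:
                task = self._completed.get_nowait()
            except queue.Empty:
                break
            for dev in task.req.devices:
                self._pool.deallocate_resources(dev, task.req.resources)
```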

task_state = n_task.state
if isinstance(task_state, TaskCompleted):
    for dev in n_task.req.devices:
        self._available_resources.deallocate_resources(dev, n_task.req.resources)
Contributor

I think that this doesn't handle Data Movement Tasks? Their resources are still bundled with their "parents", but their resources are no longer allocated 'ahead' of time. Only compute tasks are being tracked and deallocated. This means that they currently are considered to take 'no resources' and can be over-scheduled.

Contributor

Hmm, maybe not. It seems like in this PR, Data Movement tasks are still allocated at mapping time and freed when the "parent" completes. Sorry.

@wlruys (Contributor) · May 5, 2022

Although (sorry for the stream-of-consciousness comments on this), doesn't this mean it can still deadlock in the same way as the current mapper? It can allocate a bunch of data movement tasks out of order (as there is no guaranteed order to the mapping phase) and prevent the next needed dependency from being mapped (and run) by being out of resources.

Contributor Author

Yes, the deadlock can still happen. OK, let's treat this PR as the immediate fix for performance; I don't think the issue you pointed out can be resolved by this PR. Let's address it in a separate PR with more discussion.

for name, v in resources.items():
    if not self._update_resource(d, name, v * multiplier, block):
        success = False
        break
Contributor

If this function no longer uses the monitor, it should not be named "atomically".

@@ -1126,48 +1123,45 @@ def check_resources_availability(self, d: Device, resources: ResourceDict):
:param resources: The resources to deallocate.
"""
logger.debug("[ResourcePool] Acquiring monitor in check_resources_availability()")
with self._monitor:
Contributor

Are we sure removing this monitor is safe?

Contributor

I suppose it is if only the scheduling thread makes this call.

Contributor Author

I checked it by printing the calling thread, and it was called only by the scheduler. But let me double-check this.

Contributor

It can also be called by any thread that creates a PArray:

get_scheduler_context().scheduler._available_resources.track_parray(self)
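
One way to verify the single-caller assumption during testing (a debugging sketch, not part of this PR; `single_thread_only` is a made-up helper): fail fast if the now-unsynchronized method is ever entered from more than one thread.

```python
# Debugging sketch: assert that a function is only ever called from one thread.
import functools
import threading


def single_thread_only(func):
    callers = set()
    guard = threading.Lock()

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with guard:
            callers.add(threading.current_thread().name)
            assert len(callers) == 1, (
                f"{func.__name__} called from multiple threads: {callers}")
        return func(*args, **kwargs)

    return wrapper


# Hypothetical usage: decorate the methods whose monitor was removed
# (e.g. check_resources_availability and track_parray), then run the workload.
```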

@@ -1126,48 +1123,45 @@ def check_resources_availability(self, d: Device, resources: ResourceDict):
:param resources: The resources to deallocate.
"""
logger.debug("[ResourcePool] Acquiring monitor in check_resources_availability()")
Contributor

Delete this line too if deleting the monitor

if amount > dres[name]:
    is_available = False
logger.debug("Resource check for %d %s on device %r: %s", amount, name, d, "Passed" if is_available else "Failed")
logger.debug("[ResourcePool] Releasing monitor in check_resources_availability()")
Contributor

Delete this too if removing the monitor.

assert dres[res] >= 0, "{}.{} was over allocated".format(dev, res)
# TODO(lhc): Due to floating point, it doesn't work.
#assert dres[res] <= dev.resources[res], "{}.{} was over deallocated".format(dev, res)
#assert dres[res] >= 0, "{}.{} was over allocated".format(dev, res)
Contributor

Gross. Can we make these integers?

Contributor Author

One possible way is to use the number of threads (or some larger number) as the default VCU total instead of 1.
@dialecticDolt, would that be fine?

Contributor

Also gross. We might add the resources back up and exceed the original amount because of rounding errors, and the order in which tasks complete matters too.
Using https://docs.python.org/3/library/fractions.html is probably the cleanest way to handle this.
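
A minimal sketch of exact VCU accounting with `fractions.Fraction` (illustrative names, not the actual ResourcePool fields): additions and subtractions are exact, so completion order cannot push the total past the original capacity the way float rounding can.

```python
from fractions import Fraction

# Sketch only: exact VCU bookkeeping with Fraction instead of float.
capacity = Fraction(1)             # one device's worth of VCUs
free = capacity
allocations = [Fraction(1, 3)] * 3

for a in allocations:              # allocate
    assert free - a >= 0, "over allocated"
    free -= a

for a in allocations:              # deallocate; order does not matter
    free += a
    assert free <= capacity, "over deallocated"

assert free == capacity            # exact; the float 1/3 version can drift
```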
