Create dispatch system for executors #3263
base: main
Conversation
…ionExecutorCache instances are consistently named executor_cache.
@@ -845,13 +845,6 @@ bool Fusion::hasDynamicTransform() {
   return !ir_utils::getTVsWithDynamicTransform(this).empty();
 }
Just moved this function to executor.cpp as it wasn't used anywhere else.
@@ -326,17 +326,15 @@ SegmentProfiler::SegmentProfiler(uint32_t id, bool cupti_disabled)
       output_bytes_(0),
       kernel_profile_state_(ProfilerState::Ready) {}

-void SegmentProfiler::startCompile(int device) {
-  device_ = device;
+void SegmentProfiler::startCompile() {
Separated out setting the device into its own function. KernelExecutor knows the device at compile time since runtime information is needed for compilation; the other executors set it in run.
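For illustration, a minimal sketch of the split described here; only the relevant members are shown, and the bodies are assumptions rather than the actual nvfuser implementation:

```cpp
// Sketch of the SegmentProfiler interface after the split described above.
// Everything except the two affected members is elided.
class SegmentProfiler {
 public:
  // Device is no longer passed here: KernelExecutor knows it at compile time,
  // while the other executors only learn it at run time.
  void startCompile() { /* start the compile timer */ }

  // Separate setter, called by whichever executor knows the device.
  void setDevice(int device) { device_ = device; }

 private:
  int device_ = -1;
};
```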
csrc/fusion_profiler.h
Outdated
@@ -22,6 +22,7 @@ namespace nvfuser {
 //! \enum ProfilerState
 //! \brief An enum used to represent the state of a profiling state machine
 enum class ProfilerState {
+  None,
Just added this to initialize the state on construction.
I doubt this is needed. ProfilerState::Ready seems to be a good initial state already -- all reset* functions set the state to that. cc @kevinstephano
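For reference, a minimal sketch of the alternative being suggested, i.e. defaulting the member to `ProfilerState::Ready` instead of adding `None`; enumerators other than `Ready` are elided and the surrounding class is simplified:

```cpp
// Sketch of the suggested alternative: keep Ready as the initial state rather
// than introducing a None state just for construction.
enum class ProfilerState { Ready /*, ... other states elided ... */ };

class SegmentProfiler {
  // Matches what the reset* functions already do, so construction and reset
  // leave the state machine in the same place.
  ProfilerState kernel_profile_state_ = ProfilerState::Ready;
};
```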
-        inputs,
-        user_sched.fusion_id_,
-        user_sched.device_id_);
+        user_sched.scheduled_fusion.get(), inputs
Need Ryan's advice here.
@rdspring1 another place I could use your help, please see the comment below.
@rdspring1 Could you take a look here?
…idevice/executor.[cpp,h] and rename to HostIrExecutor.
I am not sure I understand why there is a specific challenge here. Don't we just need to accumulate across the host for-loop iterations? HostIrExecutor can support that through a HostUnit with aliased I/O, or we could also easily add support for …
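As a rough illustration of the aliased-I/O idea (plain C++, not nvfuser IR; all names here are hypothetical): the host for-loop's body takes the running accumulator as both an input and an output, so accumulation happens across iterations rather than inside any single iteration.

```cpp
#include <cstddef>
#include <vector>

// The "body" of the host for-loop: takes the running accumulator as an input
// and writes the updated accumulator back to the same storage (aliased I/O).
void fusedBody(const std::vector<float>& partial, std::vector<float>& acc) {
  for (std::size_t i = 0; i < acc.size(); ++i) {
    acc[i] += partial[i];
  }
}

void hostForLoop(const std::vector<std::vector<float>>& partials,
                 std::vector<float>& acc) {
  // Accumulation across iterations lives in the aliased buffer `acc`,
  // not inside any single iteration's body.
  for (const auto& partial : partials) {
    fusedBody(partial, acc);
  }
}
```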
@csarofeen I merged #3349. I did …
!test |
!test |
Maybe -- I'm unsure what it buys us to "leave the GEMM as a bulked HostUnit". Before host IR lowering, we already know which IterDomains in loop domains are host-parallelized. So I believe host IR lowering can run a compute-at-map analysis to generate a host for-loop regardless of how many HostUnits/segments the loop body contains.
Yes. I was talking about the mechanism to generate that HostUnit/Fusion with I/O aliases. It's easy to device-lower a sum into a loop of additions, because it's put in one IrContainer, the kernel. However, host IR lowering has to deal with multiple containers. (I wouldn't be surprised at all if you know how to implement this -- I am just unsure myself)
Yes, but suboptimal -- inputs to …
I mean that the scheduler will be applied in a hierarchical way. We will need to "host" schedule "AG+GEMM" on the one hand, and also to schedule the GEMM on the other hand. So the two schedulers need to be composable, and be applied one after the other on overlapping segments. As long as segmentation and scheduling are tied together, this also holds for segmentation.
By definition, if it's a host operation, we will need to produce the data in global memory. That is true for any host op, including the case where it is a HostUnit with aliased I/O. I think that the … There is no host lowering today, so creating this HostUnit with aliased I/O is not implemented; however, it doesn't seem hard to implement, unless I'm missing something...
I think the HostUnit alternative gives fewer global reads/writes when the addition can be fused to the preceding kernel.
With the HostUnit alternative, each iteration reads … With …
However, here the accumulation across iterations is still done in a globally allocated buffer (…). I think what you are describing here corresponds to fusing operations within the host for-loop's body, say into one HostUnit, which gives the classical benefit of fusing kernels. However, my point is that fusing across iterations is not possible, and the I/O of the for-loop's body (here, …
Agreed! I wasn't trying to argue about that.
@samnordmann @wujingyue this conversation seems pretty great, it'd be wonderful if you could capture it in a design doc.
    std::vector<at::Tensor> outputs) {
  FUSER_PERF_SCOPE("ExprEvalExecutor::run");

  if (isProfilerEnabled()) {
@csarofeen Don't we need to set the current device here like line 242?
The Fusion Profiler is just a logging system; it coordinates/accumulates information based on group_id_. As long as we set the device correctly once, it's fine.
I understand that, but when this `run` function is first called, is it guaranteed that the profiler already has the correct device set?
  FusionProfiler::segment(group_id_).stopKernel();
  FusionProfiler::segment(group_id_).setDevice(args.getDeviceIndex());
@csarofeen Why is `setDevice` done after the kernel execution?
No particular reason, either way should be functional.
@Priya2698 Could you check the benchmark profiling with this PR? There should be no performance change, but since there are a lot of code changes, we should make sure everything works as expected.
Are you interested in a complete sweep or only the host benchmarking? We can run a complete sweep on the CI.
Please do a complete sweep just in case. Either A100 or H100. Not necessary to check both.
Got it, we will need to use CI resources then since the runs time out due to dlcluster time limits.
!test --pybench-full
…r user scheduling. (#3357)

The goal is to set `fusion_id` and `device_id` when creating `KernelExecutor` for `UserSchedule`. Previously, they were set during `FusionExecutor::compileFusion`. This PR is stacked on `executor_dispatch`.

**Changes to the `UserSchedule` cache system:**

**Current:** The map key is the integer value of the input arguments, and the vector is indexed by `device id`:
`std::unordered_map<size_t, std::vector<UserSchedule>> user_def_schedules;`

**New:** The key to the first map is the integer value of the input arguments; the key to the second map is the `device`.

**Why?** We can set the `fusion_id` and `device_id` in the constructors of `UserSchedule` and `KernelExecutor`.
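A minimal sketch of the two-level cache described above; the `UserSchedule` fields shown and the integer type used for the device key are assumptions for illustration:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>

// Placeholder standing in for nvfuser's UserSchedule (hypothetical fields).
struct UserSchedule {
  int64_t fusion_id = -1;
  int64_t device_id = -1;
};

// Old layout: outer key = hash of the input arguments, inner vector indexed by device.
//   std::unordered_map<size_t, std::vector<UserSchedule>> user_def_schedules;

// New layout sketched from the description above: two nested maps, so both keys
// are known when the UserSchedule (and its KernelExecutor) is constructed.
std::unordered_map<size_t, std::unordered_map<int64_t, UserSchedule>>
    user_def_schedules;

UserSchedule& getUserSchedule(std::size_t input_args_hash, int64_t device_id) {
  UserSchedule& sched = user_def_schedules[input_args_hash][device_id];
  sched.device_id = device_id;  // device is known at construction time
  return sched;
}
```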
Separate out `ExprEvalExecutor` and `HostIrExecutor` from what's now called `KernelExecutor`. Create a dispatch system for them, as compile and run are simpler for the former two.

Also renamed instances of `FusionExecutorCache` to `executor_cache`, `KernelExecutor` to `ke`, `ExprEvalExecutor` to `eee`, and `HostIrExecutor` to `hire`. It makes this PR large, but it was critical to refactor all the instances of these classes.

For review, focus on the following files:
csrc/host_ir/executor.[cpp,h]
csrc/runtime/executor.[cpp,h]
csrc/runtime/executor_abstract.h
csrc/runtime/executor_dispatch.[cpp,h]
csrc/runtime/fusion_executor_cache.cpp
csrc/runtime/fusion_kernel_runtime.[cpp,h]
Remaining files are just renaming. I would break this into multiple PRs, but it would be difficult to do at this point.
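Below is a minimal sketch of what such a dispatch could look like; the `supported()` helpers, the factory function name, and the class bodies are assumptions for illustration, not the actual ExecutorDispatch API in csrc/runtime/executor_dispatch.[cpp,h]:

```cpp
#include <memory>

// Hypothetical stand-ins for the types named in this PR; the real classes live
// in csrc/runtime/executor.[cpp,h] and csrc/host_ir/executor.[cpp,h].
struct Fusion {};

struct ExecutorAbstract {
  virtual ~ExecutorAbstract() = default;
};
struct ExprEvalExecutor : ExecutorAbstract {
  static bool supported(const Fusion&) { return false; }  // assumption
};
struct HostIrExecutor : ExecutorAbstract {
  static bool supported(const Fusion&) { return false; }  // assumption
};
struct KernelExecutor : ExecutorAbstract {};

// Sketch of the dispatch: try the simpler executors first (their compile/run
// paths are simpler), and fall back to compiling a kernel.
std::unique_ptr<ExecutorAbstract> makeExecutor(const Fusion& fusion) {
  if (ExprEvalExecutor::supported(fusion)) {
    return std::make_unique<ExprEvalExecutor>();
  }
  if (HostIrExecutor::supported(fusion)) {
    return std::make_unique<HostIrExecutor>();
  }
  return std::make_unique<KernelExecutor>();
}
```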