
Enable grid reductions within loops. #1681

Closed · wants to merge 1 commit
Conversation

csarofeen (Owner):

Allow re-entrant grid reductions so that they can be placed inside loops without turning the kernel cooperative.

naoyam (Collaborator) left a comment:

Looks good so far. Some early comments.

Comment on lines +202 to +203
const nvfuser_index_t entrance_ind,
const nvfuser_index_t n_entrances) {
Collaborator:
nit: can't imagine these would need 64 bits...

if (fl->isTrivial()) {
continue;
}
if (fl->iter_domain()->isThread()) {
Collaborator:
fl->isTrivial() should include this condition.

(grouped_rop->isAllreduce() && is_within_a_loop ? 2 : 1)),
output->dtype(),
false);
});

 const auto sync_buffer = ir_utils::allocGlobalBufferForGridComm(
-    getGridSyncBufferSize(out_domain), DataType::Int, true);
+    getGridSyncBufferSize(out_domain, for_loops_), DataType::Int, true);
Collaborator:
GroupedReduction doesn't support reentrance, so this is not necessary right now.

@@ -271,12 +347,25 @@ void IndexLowering::handleGridReduction(

 const auto reduce_buffer = ir_utils::allocGlobalBufferForGridComm(
     getGridCommWorkBufferSize(
-        out_domain, rop->isAllreduce() && is_within_a_loop ? 2 : 1),
+        out_domain,
+        rop->isAllreduce() ? std::vector<kir::ForLoop*>() : for_loops_,
Collaborator:
Would be nice if we could refactor the code on the conditional processing of when to expand the buffer.

return buffer_size;
}

-Val* getGridSyncBufferSize(const TensorDomain* td) {
+Val* getGridSyncBufferSize(
Collaborator:
Do we need to expand the sync buffer?

Owner (author):
No, we shouldn't need to, because we wait until all iterations are done before cleaning any of them up. Maybe that's a reason it's slow; I think we should use multiple sync buffers, one for each reduction!

Owner (author):
I don't know if it would make a difference, but seems like it's low risk.

@csarofeen (Owner, author):

Closing in favor of #1698

@csarofeen csarofeen closed this May 17, 2022