add option to run group_fusion
Summary: add an Inductor option to easily enable `group_fusion` so we can iterate on its implementation

Reviewed By: xuzhao9

Differential Revision: D48134482

fbshipit-source-id: a80e0278dc6b9c92e6b0052a75ebc8261d6e1188
chaekit authored and facebook-github-bot committed Aug 8, 2023
1 parent f882312 commit 8a0f5e3
Showing 1 changed file with 7 additions and 0 deletions.
7 changes: 7 additions & 0 deletions torchbenchmark/util/backends/torchdynamo.py
@@ -48,6 +48,11 @@ def parse_torchdynamo_args(model: 'torchbenchmark.util.model.BenchmarkModel', dy
         type=distutils.util.strtobool,
         default="false",
     )
+    parser.add_argument(
+        "--torchinductor_enable_group_fusion",
+        action='store_true',
+        help="enable group fusion in Inductor"
+    )
     parser.add_argument(
         "--dynamo_disable_optimizer_step",
         type=distutils.util.strtobool,
@@ -79,6 +84,8 @@ def apply_torchdynamo_args(model: 'torchbenchmark.util.model.BenchmarkModel', ar
         torchinductor.config.triton.mm = "triton"
         # currently can't pass correctness with use_bmm = True
         # torchinductor.config.triton.use_bmm = True
+        if args.torchinductor_enable_group_fusion:
+            torchinductor.config.group_fusion = True
 
         # used for correctness checks, to avoid triton rand() behaving differently from torch rand().
         torchinductor.config.fallback_random = bool(args.torchinductor_fallback_random)
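
For reference, below is a minimal standalone sketch (not part of this commit) of what the new flag toggles outside the TorchBench harness: it sets the same `torch._inductor.config.group_fusion` attribute that `apply_torchdynamo_args` sets above, then compiles a toy function with the Inductor backend. It assumes a PyTorch build recent enough to expose `group_fusion` in the Inductor config; the function, tensor shapes, and device handling are illustrative and not taken from TorchBench.

# Standalone sketch: enable the same Inductor config attribute this commit wires up.
import torch
import torch._inductor.config as inductor_config

inductor_config.group_fusion = True  # what --torchinductor_enable_group_fusion turns on

def two_matmuls(x, w1, w2):
    # two independent matmuls that a group-fusion pass may batch together
    return x @ w1 + x @ w2

compiled = torch.compile(two_matmuls, backend="inductor")

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(64, 64, device=device)
w1 = torch.randn(64, 64, device=device)
w2 = torch.randn(64, 64, device=device)
print(compiled(x, w1, w2).shape)  # torch.Size([64, 64])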
