
fix: Add Torch-TRT IR pass-through argument #1947

Closed
wants to merge 1 commit

Conversation

gs-olive
Contributor

  • Fix issue where selected IR was not being propagated to backend

@gs-olive gs-olive temporarily deployed to docker-s3-upload September 28, 2023 21:28 — with GitHub Actions Inactive
@facebook-github-bot
Contributor

@xuzhao9 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@gs-olive
Contributor Author

Hi @xuzhao9 - thanks for your help getting these fixes merged quickly. If you could manually add a run of Torch-TRT to test these, it would be much appreciated.

@xuzhao9
Contributor

xuzhao9 commented Sep 29, 2023

@gs-olive Workflow: https://github.com/pytorch/benchmark/actions/runs/6355839393

Note that I can only start a workflow when the branch is on pytorch/benchmark, not on a forked branch.

@facebook-github-bot
Contributor

@xuzhao9 merged this pull request in 3f11b81.

@gs-olive
Contributor Author

gs-olive commented Sep 29, 2023

@xuzhao9 Understood - thank you for running the pipeline. I am noticing this error showing up from within the load_model_isolated code, and I want to make sure I am invoking it correctly. The error is this:

  File "/runner/_work/benchmark/benchmark/benchmark/torchbenchmark/util/experiment/instantiator.py", line 39, in load_model_isolated
    task.make_model_instance(test=config.test, device=config.device, batch_size=config.batch_size, extra_args=config.extra_args)
  File "/runner/_work/benchmark/benchmark/benchmark/components/_impl/tasks/base.py", line 278, in inner
    self.worker.run(src)
  File "/runner/_work/benchmark/benchmark/benchmark/components/_impl/workers/subprocess_worker.py", line 155, in run
    self._run(snippet)
  File "/runner/_work/benchmark/benchmark/benchmark/components/_impl/workers/subprocess_worker.py", line 320, in _run
    subprocess_rpc.SerializedException.raise_from(
  File "/runner/_work/benchmark/benchmark/benchmark/components/_impl/workers/subprocess_rpc.py", line 458, in raise_from
    raise e from ChildTraceException(traceback_str)
NotImplementedError: The instance variable 'model' does not exist or is not type 'torch.nn.Module', implement your own `set_module()` function.

I am calling this function here:

for model_name in list_models():
    config = TorchBenchModelConfig(
        name=model_name,
        test="eval",
        device="cuda",
        batch_size=parsed_args["bs"],
        extra_args=[
            "--backend",
        ]
        + unknown_args,
    )
    try:
        Model = load_model_isolated(config=config)

I adapted this usage from other userbenchmarks that also use this function. Do you have any suggestions on the usage, or on how I might improve it to resolve this error?

@gs-olive
Contributor Author

gs-olive commented Oct 3, 2023

Hi @xuzhao9 - I did some additional debugging and addressed the issues I found in #1957. It is mostly related to catching errors regarding models which cannot be found or loaded.
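[Editor's note: the per-model error handling described above (skipping models that cannot be found or loaded rather than aborting the run) could be sketched as follows. `load_models_safely` is a hypothetical helper for illustration, not the actual change in #1957:]

```python
# Hedged sketch: wrap per-model instantiation so that models which fail
# to load are recorded and skipped instead of failing the whole sweep.
# `loader` stands in for torchbenchmark's load_model_isolated; the
# exception types caught here are illustrative.

def load_models_safely(configs, loader):
    loaded, skipped = [], []
    for config in configs:
        try:
            loaded.append(loader(config))
        except (NotImplementedError, ValueError) as err:
            # Record the failure and continue with the next model.
            skipped.append((config, err))
    return loaded, skipped
```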

@xuzhao9
Contributor

xuzhao9 commented Oct 4, 2023

@gs-olive This is because the Background_Matting model needs to implement its own set_module function; I am looking into this.
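[Editor's note: the kind of fix described here, where a model whose network is not stored under `self.model` must override `set_module()`, might look like this minimal sketch. `BenchmarkModel`, `Model`, and the attribute name `net` are illustrative stand-ins, not the actual torchbenchmark classes:]

```python
# Hypothetical sketch of the set_module() override pattern behind the
# NotImplementedError above. The class names and attributes here are
# placeholders, not torchbenchmark's real API.

class BenchmarkModel:
    """Minimal stand-in for a benchmark model wrapper base class."""

    def set_module(self, new_model):
        # Default behavior: expects the network to live at `self.model`.
        if not hasattr(self, "model"):
            raise NotImplementedError(
                "The instance variable 'model' does not exist, "
                "implement your own `set_module()` function."
            )
        self.model = new_model


class Model(BenchmarkModel):
    """A model that keeps its network under a non-standard attribute."""

    def __init__(self):
        self.net = object()  # placeholder for a torch.nn.Module

    def set_module(self, new_model):
        # Override so the harness can swap in a compiled module.
        self.net = new_model
```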
