Remove attention_is_all_you_need and pytorch_struct (#1833)
Summary:
This is a follow-up to #1831.

Pull Request resolved: #1833

Reviewed By: janeyx99

Differential Revision: D48432770

Pulled By: xuzhao9

fbshipit-source-id: 2ba8dff02d2ab0df17703427abad126708252762
xuzhao9 authored and facebook-github-bot committed Aug 17, 2023
1 parent 987ed87 commit 2299f88
Showing 4 changed files with 3 additions and 7 deletions.
4 changes: 2 additions & 2 deletions torchbenchmark/models/ADDING_MODELS.md
@@ -62,7 +62,7 @@ the build steps from install.py. Avoid getting too fancy trying to be cross-platform
 compatible (e.g., Windows/Mac/etc., or using package managers like yum/dnf) - if it's
 not easy to build, there may be easier models to target.

-[Example install.py](attention_is_all_you_need_pytorch/install.py)
+[Example install.py](BERT_pytorch/install.py)

 ### Mini-dataset
 By the time install.py script runs, a miniature version of the dataset is expected to be
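
For context, install scripts in this tree are typically thin wrappers around pip. Below is a minimal sketch of what such an install.py can look like; it is illustrative only, and the actual BERT_pytorch/install.py may do more (e.g., fetch a mini-dataset).

```python
# Hypothetical minimal install.py for a benchmark model (illustrative sketch).
import subprocess
import sys


def pip_install_requirements():
    # Install this model's dependencies from the requirements.txt
    # that lives next to the script.
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "-q", "-r", "requirements.txt"]
    )


if __name__ == "__main__":
    pip_install_requirements()
```
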
@@ -98,7 +98,7 @@ Important: be deliberate about support for cpu/gpu and jit/no-jit. In the case
 your model is instantiated in an unsupported configuration, the convention is to return
 a model object from \_\_init\_\_ but raise NotImplementedError() from all its methods.

-See the [BenchmarkModel API](https://github.com/pytorch/benchmark/blob/master/torchbenchmark/util/model.py) to get started. The [attention is all you need](attention_is_all_you_need_pytorch/__init__.py) benchmark can serve as a good example.
+See the [BenchmarkModel API](https://github.com/pytorch/benchmark/blob/master/torchbenchmark/util/model.py) to get started. The [BERT_pytorch](BERT_pytorch/__init__.py) benchmark can serve as a good example.

 ### JIT
 As an optional step, make whatever modifications necessary to the model code to enable it to script or trace. If doing this,
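
The convention in the second hunk above (construct successfully, then raise from unsupported methods) looks roughly like this. This is a minimal sketch assuming a hypothetical model with no JIT support, not the actual BERT_pytorch code:

```python
# Sketch of the unsupported-configuration convention: __init__ still
# returns a model object, but every benchmark method raises
# NotImplementedError. Hypothetical example, not real torchbench code.
class Model:
    def __init__(self, device: str = "cpu", jit: bool = False) -> None:
        self.device = device
        self.jit = jit
        # Assume this particular model cannot be scripted or traced.
        self.supported = not jit

    def train(self) -> None:
        if not self.supported:
            raise NotImplementedError("JIT is not supported for this model")
        # ... real training step would go here ...

    def eval(self) -> None:
        if not self.supported:
            raise NotImplementedError("JIT is not supported for this model")
        # ... real inference step would go here ...
```
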
1 change: 0 additions & 1 deletion torchbenchmark/models/nvidia_deeprecommender/__init__.py
@@ -9,7 +9,6 @@

 import torch

-from torchbenchmark.models.attention_is_all_you_need_pytorch.train import train
 from ...util.model import BenchmarkModel
 from torchbenchmark.tasks import RECOMMENDATION
 from typing import Tuple
1 change: 0 additions & 1 deletion torchbenchmark/util/env_check.py
@@ -43,7 +43,6 @@
 # Need lower tolerance on GPU. GPU kernels have non deterministic kernels for these models.
 REQUIRE_HIGHER_TOLERANCE = {
     "alexnet",
-    "attention_is_all_you_need_pytorch",
     "densenet121",
     "hf_Albert",
     "vgg16",
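
REQUIRE_HIGHER_TOLERANCE is just a set of model names keyed on during accuracy checking. A sketch of how such a lookup can work follows; the tolerance values are made up for illustration and this is not the actual env_check.py logic:

```python
# Hypothetical tolerance lookup driven by the set edited above; the
# numeric values are invented for illustration.
REQUIRE_HIGHER_TOLERANCE = {"alexnet", "densenet121", "hf_Albert", "vgg16"}


def pick_tolerance(model_name: str, device: str) -> float:
    # Models with non-deterministic GPU kernels get a looser tolerance.
    if device == "cuda" and model_name in REQUIRE_HIGHER_TOLERANCE:
        return 1e-2
    return 1e-4


print(pick_tolerance("alexnet", "cuda"))  # 0.01
print(pick_tolerance("alexnet", "cpu"))   # 0.0001
```
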
4 changes: 1 addition & 3 deletions userbenchmark/optim/run.py
@@ -142,7 +142,6 @@ def get_unstable_models() -> Set[str]:
     'LearningToPaint',
     'Super_SloMo',
     'alexnet',
-    'attention_is_all_you_need_pytorch',
     'basic_gnn_edgecnn',
     'basic_gnn_gcn',
     'basic_gnn_gin',
@@ -205,7 +204,6 @@ def get_unstable_models() -> Set[str]:
     'phlippe_resnet',
     'pytorch_CycleGAN_and_pix2pix',
     'pytorch_stargan',
-    'pytorch_struct',
     'pytorch_unet',
     'resnet152',
     'resnet18',
@@ -285,7 +283,7 @@ def get_unstable_models() -> Set[str]:
     # torch.compile()'d optimizer.step() has too many arguments in C++
     # See GH issue: https://github.com/pytorch/pytorch/issues/97361
     {'model': m, 'device': 'cpu', 'func_str': 'pt2_', 'defaults': []} for m in [
-        'BERT_pytorch', 'Background_Matting', 'Super_SloMo', 'attention_is_all_you_need_pytorch',
+        'BERT_pytorch', 'Background_Matting', 'Super_SloMo',
         'densenet121', 'detectron2_fasterrcnn_r_101_c4', 'detectron2_fasterrcnn_r_101_dc5',
         'detectron2_fasterrcnn_r_101_fpn', 'detectron2_fasterrcnn_r_50_fpn', 'detectron2_maskrcnn',
         'detectron2_maskrcnn_r_101_c4', 'detectron2_maskrcnn_r_101_fpn',
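
Each element produced by that comprehension is a plain dict describing one (model, device, func_str, defaults) combination to skip. A matcher over such records might look like the following hypothetical helper, which is not code from run.py:

```python
from typing import Any, Dict, List

# Exclusion records in the same shape as the comprehension above builds;
# this subset and the helper below are hypothetical.
EXCLUSIONS: List[Dict[str, Any]] = [
    {"model": "BERT_pytorch", "device": "cpu", "func_str": "pt2_", "defaults": []},
    {"model": "Super_SloMo", "device": "cpu", "func_str": "pt2_", "defaults": []},
]


def is_excluded(model: str, device: str, func_str: str, defaults: List[str]) -> bool:
    # A run is skipped when some record matches all four fields.
    return any(
        e["model"] == model
        and e["device"] == device
        and e["func_str"] == func_str
        and e["defaults"] == defaults
        for e in EXCLUSIONS
    )


print(is_excluded("BERT_pytorch", "cpu", "pt2_", []))  # True
```
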
