[TVM] Model converters, tvm_pytorch features and inference tvm #436
Conversation
src/inference/tvm_auxiliary.py
Outdated
@@ -116,3 +119,6 @@ def prepare_output(result, task, output_names, not_softmax=False):
         else:
             result = softmax(result.asnumpy())
         return {output_names[0]: result}
+    if task == 'detection':
+        print(result.asnumpy())
+        return {output_names[0]: result.asnumpy()}
I suggest moving all of the changes related to detection models into a separate pull request.
@FenixFly, @n-berezina-nn, please take a look at the current changes so that we can merge them into the main branch.
    target = self.tvm.target.Target(target_str)
    dev = self.tvm.cpu(0)
else:
    raise ValueError('Another devices are not supported at this moment')
Suggested change:
-    raise ValueError('Another devices are not supported at this moment')
+    raise ValueError(f'Device {device} is not supported. Supported devices: CPU')
Fixed.
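For context, the fixed device-selection branch could look roughly like the sketch below. The function name get_target_device, the target_str default, and the use of a module-level tvm import (instead of the launcher's self.tvm handle) are illustrative assumptions; only the tvm.target.Target / tvm.cpu(0) calls and the suggested error message come from the snippet above.

import tvm


def get_target_device(device='CPU', target_str='llvm'):
    # Build the TVM compilation target and device handle; only CPU is supported for now.
    if device == 'CPU':
        target = tvm.target.Target(target_str)
        dev = tvm.cpu(0)
    else:
        raise ValueError(f'Device {device} is not supported. Supported devices: CPU')
    return target, dev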
                    dest='input_name')
parser.add_argument('-d', '--device',
                    help='Specify the target device to infer on CPU or '
                         'NVIDIA_GPU (CPU by default)',
Judging by the code that follows, only CPU is supported. If that is the case, it makes sense to drop the mention of NVIDIA GPU from the help text.
Fixed.
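The corrected argument definition could look like the following sketch; the default, type, and dest values are assumptions, since the diff only shows the help string.

import argparse

parser = argparse.ArgumentParser()
# Only CPU is supported, so the NVIDIA_GPU mention is dropped from the help text.
parser.add_argument('-d', '--device',
                    help='Specify the target device to infer on (CPU by default)',
                    default='CPU',
                    type=str,
                    dest='device')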
    scripted_model = self.torch.jit.trace(model, input_data).eval()
    return scripted_model
else:
    log.info(f'Loading model {model_name} from module')
The code here is deeply nested; it might make sense to split it into several helper methods.
Split it into methods.
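A minimal sketch of how the nested loading logic can be split into small helpers. The class and method names (PyTorchModelLoader, _load_from_module, _trace_model) and the getattr-based instantiation are illustrative assumptions, not the PR's actual code; only the torch.jit.trace(...).eval() call and the log message follow the snippet above.

import logging as log

import torch


class PyTorchModelLoader:
    def __init__(self):
        self.torch = torch

    def _trace_model(self, model, input_data):
        # Convert an eager PyTorch model to TorchScript by tracing it.
        return self.torch.jit.trace(model, input_data).eval()

    def _load_from_module(self, model_name, module):
        # Instantiate the model class exposed by a Python module (assumption).
        log.info(f'Loading model {model_name} from module')
        return getattr(module, model_name)()

    def load_model(self, model_name, module, input_data):
        # The top-level method stays flat: each branch becomes one helper call.
        model = self._load_from_module(model_name, module)
        return self._trace_model(model, input_data)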
elif ((model_path is not None) and (weights is not None)):
    log.info(f'Deserializing network from file ({model_path}, {weights})')
    with warnings.catch_warnings():
Why suppress warnings artificially? They often contain useful information and do not affect the result. Unless there is a reason beyond making the logs look cleaner, I would suggest removing this.
Removed the warnings suppression.
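After the change, the deserialization branch simply performs the load without a warnings.catch_warnings() block, so framework warnings reach the log. A minimal sketch, where deserialize_network and the loader callable are illustrative stand-ins for the launcher's actual load call:

import logging as log


def deserialize_network(model_path, weights, loader):
    # No warnings.catch_warnings() wrapper: deprecation and shape warnings
    # emitted by the framework remain visible to the user.
    log.info(f'Deserializing network from file ({model_path}, {weights})')
    return loader(model_path, weights)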
I suggest using a single universal tvm launcher with a converter-type input parameter, since there are few unique changes across these launchers.
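One way the proposed universal launcher could be organized is sketched below: a single entry point selects a converter class based on a converter-type parameter. All names here (TVMConverter, create_converter, source_framework, the registry contents) are illustrative assumptions, not code from this PR.

class TVMConverter:
    # Base converter; concrete subclasses override convert_model().
    def __init__(self, args):
        self.args = args

    def convert_model(self):
        raise NotImplementedError


class PyTorchToTVMConverter(TVMConverter):
    def convert_model(self):
        # Would trace the PyTorch model and convert it via relay.frontend.from_pytorch.
        ...


class ONNXToTVMConverter(TVMConverter):
    def convert_model(self):
        # Would load the ONNX graph and convert it via relay.frontend.from_onnx.
        ...


CONVERTERS = {
    'pytorch': PyTorchToTVMConverter,
    'onnx': ONNXToTVMConverter,
}


def create_converter(source_framework, args):
    # One launcher entry point: pick the converter from an input parameter.
    try:
        return CONVERTERS[source_framework.lower()](args)
    except KeyError:
        raise ValueError(f'Source framework {source_framework} is not supported. '
                         f'Supported: {", ".join(CONVERTERS)}')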
65fa7c9
@spartanezka, @FenixFly, @n-berezina-nn, please take another look. Thanks!
@ismukhin, I realized that something still needs to be updated.
Sorted out the issue with the resnet-50-pytorch model when running inference with batch size > 1.
TODO:
- ONNX
- PyTorch
- MXNet
- Caffe
- TVM
- Caffe (Python3.7.x)