Deployment error #1
Comments
```python
import torch
import torch.nn as nn

class NAMChannelAttention(nn.Module):
    def __init__(self, channels):
        super(NAMChannelAttention, self).__init__()
        self.channels = channels
        self.bn2 = nn.BatchNorm2d(self.channels, affine=True)

    def forward(self, x):
        residual = x
        x = self.bn2(x)
        # Normalize the absolute BN scale factors into per-channel weights.
        weight_bn = self.bn2.weight.data.abs() / torch.sum(self.bn2.weight.data.abs())
        # Move channels to the last dim so weight_bn broadcasts per channel.
        # x = x.permute(0, 2, 3, 1).contiguous()
        x = x.permute(0, 2, 3, 1)
        x = torch.mul(weight_bn, x)
        # x = x.permute(0, 3, 1, 2).contiguous()
        x = x.permute(0, 3, 1, 2)
        x = torch.sigmoid(x) * residual
        return x
```

This is what I wrote myself; you can compare it against yours.
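For reference, a minimal shape sanity check of the module above (the channel count and input size are arbitrary example values; the class is restated so the snippet runs on its own):

```python
import torch
import torch.nn as nn

class NAMChannelAttention(nn.Module):
    """NAM channel attention, restated from the snippet above."""
    def __init__(self, channels):
        super().__init__()
        self.bn2 = nn.BatchNorm2d(channels, affine=True)

    def forward(self, x):
        residual = x
        x = self.bn2(x)
        # Per-channel weights from the normalized absolute BN scale factors.
        weight_bn = self.bn2.weight.data.abs() / torch.sum(self.bn2.weight.data.abs())
        x = x.permute(0, 2, 3, 1)
        x = torch.mul(weight_bn, x)
        x = x.permute(0, 3, 1, 2)
        x = torch.sigmoid(x) * residual
        return x

m = NAMChannelAttention(8).eval()
x = torch.randn(2, 8, 4, 4)
y = m(x)
print(y.shape)  # torch.Size([2, 8, 4, 4]) — output keeps the input shape
# The normalized BN weights sum to 1 by construction:
w = m.bn2.weight.data.abs() / torch.sum(m.bn2.weight.data.abs())
print(float(w.sum()))
```

The shape check matters for the TensorRT port: the attention block must be a pure element-wise rescaling of its input, so any shape change would indicate a porting bug.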
Hello, after switching to your code and retraining, deployment still fails with the same error. I suspect the way I call the addNAMChannel function is wrong; here is my invocation: `auto conv22 = addNAMChannel(network, weightMap, *conv21->getOutput(0), "model.22");`
Here is the function declaration: `ILayer* addNAMChannel(INetworkDefinition* network, std::map<std::string, Weights>& weightMap, ITensor& input, std::string lname, float eps)` and my own call: `auto nam7 = addNAMChannel(network, weightMap, *ir6_2->getOutput(0), "model.7.", 1e-5);` Compare with your call: it looks like you are missing the final eps argument.
May I ask whether anything else in the tensorrtx project needs to be changed? I still haven't found the cause of the error.
Hello, I've found the problem now: my lname was wrong. But I've hit a new issue: after converting to the engine file, no targets are detected and no boxes appear.
For that problem there are too many possible causes; it could be the environment itself, or something I can't tell either.
Hello, is there the original Python attention code corresponding to the Tensorrtx-NAM attention? After embedding NAM into yolov7 and deploying with Tensorrtx, I get the following error: ``nvinfer1::IScaleLayer* addBatchNorm2d(nvinfer1::INetworkDefinition*, std::map<std::__cxx11::basic_string, nvinfer1::Weights>&, nvinfer1::ITensor&, std::__cxx11::string, float): Assertion `scale_1' failed.``
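The `scale_1` assertion in tensorrtx's addBatchNorm2d typically fires when the BN tensors for the given lname are absent from the weight map, so a useful first step is to list which keys actually exist in the generated .wts file. Below is a minimal sketch, assuming the usual tensorrtx .wts layout (an entry count on the first line, then one `name count hex...` line per tensor); the file contents and the `model.22.bn2` prefix are hypothetical examples:

```python
def load_wts_keys(path):
    """Return {tensor_name: element_count} parsed from a tensorrtx-style .wts file.

    Assumed format: first line is the number of entries; each following line
    is '<name> <count> <hex word> ...'.
    """
    keys = {}
    with open(path) as f:
        n = int(f.readline())
        for _ in range(n):
            parts = f.readline().split()
            keys[parts[0]] = int(parts[1])
    return keys

# Write a tiny example file so the sketch runs end-to-end (hypothetical content).
with open("example.wts", "w") as f:
    f.write("2\n")
    f.write("model.22.bn2.weight 2 3f800000 3f800000\n")
    f.write("model.22.bn2.bias 2 00000000 00000000\n")

keys = load_wts_keys("example.wts")
prefix = "model.22.bn2"  # hypothetical lname used in the addNAMChannel call
for suffix in (".weight", ".bias", ".running_mean", ".running_var"):
    status = "present" if prefix + suffix in keys else "MISSING"
    print(prefix + suffix, status)
```

If any of the four BN tensors print as MISSING for the prefix you pass as lname, the assertion failure above is expected; the fix is to make the lname match the key names in the .wts file exactly.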