(gs_model) root@autodl-container-f34d45a126-5b9fd9ad:~/data_user/ysl/DeViLoc# python evaluate.py configs/cambridge.yml --ckpt_path pretrained/deviloc_weights.ckpt
ERROR:albumentations.check_version:Error fetching version info
Traceback (most recent call last):
File "/root/miniconda3/envs/gs_model/lib/python3.9/site-packages/albumentations/check_version.py", line 29, in fetch_version_info
with opener.open(url, timeout=2) as response:
File "/root/miniconda3/envs/gs_model/lib/python3.9/urllib/request.py", line 517, in open
response = self._open(req, data)
File "/root/miniconda3/envs/gs_model/lib/python3.9/urllib/request.py", line 534, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/root/miniconda3/envs/gs_model/lib/python3.9/urllib/request.py", line 494, in _call_chain
result = func(*args)
File "/root/miniconda3/envs/gs_model/lib/python3.9/urllib/request.py", line 1389, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "/root/miniconda3/envs/gs_model/lib/python3.9/urllib/request.py", line 1350, in do_open
r = h.getresponse()
File "/root/miniconda3/envs/gs_model/lib/python3.9/http/client.py", line 1377, in getresponse
response.begin()
File "/root/miniconda3/envs/gs_model/lib/python3.9/http/client.py", line 339, in begin
self.headers = self.msg = parse_headers(self.fp)
File "/root/miniconda3/envs/gs_model/lib/python3.9/http/client.py", line 236, in parse_headers
headers = _read_headers(fp)
File "/root/miniconda3/envs/gs_model/lib/python3.9/http/client.py", line 216, in _read_headers
line = fp.readline(_MAXLINE + 1)
File "/root/miniconda3/envs/gs_model/lib/python3.9/socket.py", line 704, in readinto
return self._sock.recv_into(b)
File "/root/miniconda3/envs/gs_model/lib/python3.9/ssl.py", line 1275, in recv_into
return self.read(nbytes, buffer)
File "/root/miniconda3/envs/gs_model/lib/python3.9/ssl.py", line 1133, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
{'batch_size': 1,
'ckpt_path': 'pretrained/deviloc_weights.ckpt',
'covis_clustering': False,
'dataset': 'aachen',
'main_cfg_path': 'configs/cambridge.yml',
'num_workers': 4,
'out_dir': 'deviloc_outputs',
'out_file': 'aachen_v11_eval.txt'}
2024-08-17 16:31:26.794 | INFO | __main__:<module>:220 - Args and config initialized!
2024-08-17 16:31:26.794 | INFO | __main__:<module>:221 - Do covisibily clustering: False
Traceback (most recent call last):
File "/root/data_user/ysl/DeViLoc/evaluate.py", line 227, in <module>
model = Dense2D3DMatcher(config=config["model"])
File "/root/data_user/ysl/DeViLoc/deviloc/models/model.py", line 26, in __init__
self.matcher.load_state_dict(pretrained_matcher["state_dict"])
File "/root/data_user/ysl/DeViLoc/third_party/feat_matcher/TopicFM/src/models/topic_fm.py", line 90, in load_state_dict
return super().load_state_dict(state_dict, *args, **kwargs)
File "/root/miniconda3/envs/gs_model/lib/python3.9/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for TopicFM:
Missing key(s) in state_dict: "backbone.conv1.weight", "backbone.layer1.dwconv.weight", "backbone.layer1.dwconv.bias", "backbone.layer1.pwconv1.weight", "backbone.layer1.pwconv1.bias", "backbone.layer1.norm2.gamma", "backbone.layer1.norm2.beta", "backbone.layer1.pwconv2.weight", "backbone.layer1.pwconv2.bias", "backbone.layer2.0.0.weight", "backbone.layer2.1.dwconv.weight", "backbone.layer2.1.dwconv.bias", "backbone.layer2.1.pwconv1.weight", "backbone.layer2.1.pwconv1.bias", "backbone.layer2.1.norm2.gamma", "backbone.layer2.1.norm2.beta", "backbone.layer2.1.pwconv2.weight", "backbone.layer2.1.pwconv2.bias", "backbone.layer3.0.0.weight", "backbone.layer3.1.dwconv.weight", "backbone.layer3.1.dwconv.bias", "backbone.layer3.1.pwconv1.weight", "backbone.layer3.1.pwconv1.bias", "backbone.layer3.1.norm2.gamma", "backbone.layer3.1.norm2.beta", "backbone.layer3.1.pwconv2.weight", "backbone.layer3.1.pwconv2.bias", "backbone.layer4.0.0.weight", "backbone.layer4.1.dwconv.weight", "backbone.layer4.1.dwconv.bias", "backbone.layer4.1.pwconv1.weight", "backbone.layer4.1.pwconv1.bias", "backbone.layer4.1.norm2.gamma", "backbone.layer4.1.norm2.beta", "backbone.layer4.1.pwconv2.weight", "backbone.layer4.1.pwconv2.bias", "backbone.layer3_outconv2.0.dwconv.weight", "backbone.layer3_outconv2.0.dwconv.bias", "backbone.layer3_outconv2.0.pwconv1.weight", "backbone.layer3_outconv2.0.pwconv1.bias", "backbone.layer3_outconv2.0.norm2.gamma", "backbone.layer3_outconv2.0.norm2.beta", "backbone.layer3_outconv2.0.pwconv2.weight", "backbone.layer3_outconv2.0.pwconv2.bias", "backbone.layer3_outconv2.1.dwconv.weight", "backbone.layer3_outconv2.1.dwconv.bias", "backbone.layer3_outconv2.1.pwconv1.weight", "backbone.layer3_outconv2.1.pwconv1.bias", "backbone.layer3_outconv2.1.norm2.gamma", "backbone.layer3_outconv2.1.norm2.beta", "backbone.layer3_outconv2.1.pwconv2.weight", "backbone.layer3_outconv2.1.pwconv2.bias", "backbone.layer2_outconv2.0.dwconv.weight", "backbone.layer2_outconv2.0.dwconv.bias", 
"backbone.layer2_outconv2.0.pwconv1.weight", "backbone.layer2_outconv2.0.pwconv1.bias", "backbone.layer2_outconv2.0.norm2.gamma", "backbone.layer2_outconv2.0.norm2.beta", "backbone.layer2_outconv2.0.pwconv2.weight", "backbone.layer2_outconv2.0.pwconv2.bias", "backbone.layer2_outconv2.1.dwconv.weight", "backbone.layer2_outconv2.1.dwconv.bias", "backbone.layer2_outconv2.1.pwconv1.weight", "backbone.layer2_outconv2.1.pwconv1.bias", "backbone.layer2_outconv2.1.norm2.gamma", "backbone.layer2_outconv2.1.norm2.beta", "backbone.layer2_outconv2.1.pwconv2.weight", "backbone.layer2_outconv2.1.pwconv2.bias", "backbone.layer1_outconv2.0.dwconv.weight", "backbone.layer1_outconv2.0.dwconv.bias", "backbone.layer1_outconv2.0.pwconv1.weight", "backbone.layer1_outconv2.0.pwconv1.bias", "backbone.layer1_outconv2.0.norm2.gamma", "backbone.layer1_outconv2.0.norm2.beta", "backbone.layer1_outconv2.0.pwconv2.weight", "backbone.layer1_outconv2.0.pwconv2.bias", "backbone.layer1_outconv2.1.dwconv.weight", "backbone.layer1_outconv2.1.dwconv.bias", "backbone.layer1_outconv2.1.pwconv1.weight", "backbone.layer1_outconv2.1.pwconv1.bias", "backbone.layer1_outconv2.1.norm2.gamma", "backbone.layer1_outconv2.1.norm2.beta", "backbone.layer1_outconv2.1.pwconv2.weight", "backbone.layer1_outconv2.1.pwconv2.bias", "fine_net.encoder_layers.2.mlp1.0.weight", "fine_net.encoder_layers.2.mlp1.0.bias", "fine_net.encoder_layers.2.mlp1.2.weight", "fine_net.encoder_layers.2.mlp1.2.bias", "fine_net.encoder_layers.2.mlp2.0.weight", "fine_net.encoder_layers.2.mlp2.0.bias", "fine_net.encoder_layers.2.mlp2.2.weight", "fine_net.encoder_layers.2.mlp2.2.bias", "fine_net.encoder_layers.2.norm1.weight", "fine_net.encoder_layers.2.norm1.bias", "fine_net.encoder_layers.2.norm2.weight", "fine_net.encoder_layers.2.norm2.bias", "fine_net.encoder_layers.3.mlp1.0.weight", "fine_net.encoder_layers.3.mlp1.0.bias", "fine_net.encoder_layers.3.mlp1.2.weight", "fine_net.encoder_layers.3.mlp1.2.bias", 
"fine_net.encoder_layers.3.mlp2.0.weight", "fine_net.encoder_layers.3.mlp2.0.bias", "fine_net.encoder_layers.3.mlp2.2.weight", "fine_net.encoder_layers.3.mlp2.2.bias", "fine_net.encoder_layers.3.norm1.weight", "fine_net.encoder_layers.3.norm1.bias", "fine_net.encoder_layers.3.norm2.weight", "fine_net.encoder_layers.3.norm2.bias".
Unexpected key(s) in state_dict: "backbone.layer0.conv.weight", "backbone.layer0.norm.weight", "backbone.layer0.norm.bias", "backbone.layer0.norm.running_mean", "backbone.layer0.norm.running_var", "backbone.layer0.norm.num_batches_tracked", "backbone.layer1.0.conv.weight", "backbone.layer1.0.norm.weight", "backbone.layer1.0.norm.bias", "backbone.layer1.0.norm.running_mean", "backbone.layer1.0.norm.running_var", "backbone.layer1.0.norm.num_batches_tracked", "backbone.layer1.1.conv.weight", "backbone.layer1.1.norm.weight", "backbone.layer1.1.norm.bias", "backbone.layer1.1.norm.running_mean", "backbone.layer1.1.norm.running_var", "backbone.layer1.1.norm.num_batches_tracked", "backbone.layer2.0.conv.weight", "backbone.layer2.0.norm.weight", "backbone.layer2.0.norm.bias", "backbone.layer2.0.norm.running_mean", "backbone.layer2.0.norm.running_var", "backbone.layer2.0.norm.num_batches_tracked", "backbone.layer2.1.conv.weight", "backbone.layer2.1.norm.weight", "backbone.layer2.1.norm.bias", "backbone.layer2.1.norm.running_mean", "backbone.layer2.1.norm.running_var", "backbone.layer2.1.norm.num_batches_tracked", "backbone.layer3.0.conv.weight", "backbone.layer3.0.norm.weight", "backbone.layer3.0.norm.bias", "backbone.layer3.0.norm.running_mean", "backbone.layer3.0.norm.running_var", "backbone.layer3.0.norm.num_batches_tracked", "backbone.layer3.1.conv.weight", "backbone.layer3.1.norm.weight", "backbone.layer3.1.norm.bias", "backbone.layer3.1.norm.running_mean", "backbone.layer3.1.norm.running_var", "backbone.layer3.1.norm.num_batches_tracked", "backbone.layer4.0.conv.weight", "backbone.layer4.0.norm.weight", "backbone.layer4.0.norm.bias", "backbone.layer4.0.norm.running_mean", "backbone.layer4.0.norm.running_var", "backbone.layer4.0.norm.num_batches_tracked", "backbone.layer4.1.conv.weight", "backbone.layer4.1.norm.weight", "backbone.layer4.1.norm.bias", "backbone.layer4.1.norm.running_mean", "backbone.layer4.1.norm.running_var", 
"backbone.layer4.1.norm.num_batches_tracked", "backbone.layer3_outconv2.0.conv.weight", "backbone.layer3_outconv2.0.norm.weight", "backbone.layer3_outconv2.0.norm.bias", "backbone.layer3_outconv2.0.norm.running_mean", "backbone.layer3_outconv2.0.norm.running_var", "backbone.layer3_outconv2.0.norm.num_batches_tracked", "backbone.layer3_outconv2.1.weight", "backbone.layer2_outconv2.0.conv.weight", "backbone.layer2_outconv2.0.norm.weight", "backbone.layer2_outconv2.0.norm.bias", "backbone.layer2_outconv2.0.norm.running_mean", "backbone.layer2_outconv2.0.norm.running_var", "backbone.layer2_outconv2.0.norm.num_batches_tracked", "backbone.layer2_outconv2.1.weight", "backbone.layer1_outconv2.0.conv.weight", "backbone.layer1_outconv2.0.norm.weight", "backbone.layer1_outconv2.0.norm.bias", "backbone.layer1_outconv2.0.norm.running_mean", "backbone.layer1_outconv2.0.norm.running_var", "backbone.layer1_outconv2.0.norm.num_batches_tracked", "backbone.layer1_outconv2.1.weight".
size mismatch for backbone.layer3_outconv.weight: copying a param with shape torch.Size([384, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 384, 1, 1]).
size mismatch for backbone.layer2_outconv.weight: copying a param with shape torch.Size([256, 192, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 256, 1, 1]).
size mismatch for backbone.layer1_outconv.weight: copying a param with shape torch.Size([192, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 128, 1, 1]).
size mismatch for backbone.norm_outlayer1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for backbone.norm_outlayer1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.0.mlp2.0.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([96, 96]).
size mismatch for fine_net.encoder_layers.0.mlp2.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.0.mlp2.2.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([96, 96]).
size mismatch for fine_net.encoder_layers.0.mlp2.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.0.norm1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.0.norm1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.0.norm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.0.norm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.1.mlp2.0.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([96, 96]).
size mismatch for fine_net.encoder_layers.1.mlp2.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.1.mlp2.2.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([96, 96]).
size mismatch for fine_net.encoder_layers.1.mlp2.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.1.norm1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.1.norm1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.1.norm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.encoder_layers.1.norm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.detector.0.mlp2.0.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([96, 96]).
size mismatch for fine_net.detector.0.mlp2.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.detector.0.mlp2.2.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([96, 96]).
size mismatch for fine_net.detector.0.mlp2.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.detector.0.norm1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.detector.0.norm1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.detector.0.norm2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.detector.0.norm2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for fine_net.detector.1.weight: copying a param with shape torch.Size([1, 128]) from checkpoint, the shape in current model is torch.Size([1, 96]).
I tested several times, but the problem seems to arise in the weights: a network size mismatch.
Hi @luocha0107, I updated the code of TopicFM in the dev_2 branch, so the pre-trained weights do not match that TopicFM architecture. Please switch TopicFM from the dev_2 branch back to the main branch:
cd third_party/feat_matcher/TopicFM && git checkout main
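As a quick sanity check after switching branches, the mismatch reported in the RuntimeError above can be reproduced by diffing parameter names between the checkpoint and the model. A minimal sketch (not actual DeViLoc code — toy dicts stand in for the two state_dicts, with a few key names taken from the error message):

```python
# Toy state_dicts illustrating the key diff behind the load_state_dict error.
ckpt_state = {            # names as saved in the pre-trained (main-branch) weights
    "backbone.layer0.conv.weight": ...,
    "backbone.layer0.norm.weight": ...,
}
model_state = {           # names the dev_2-branch TopicFM model expects
    "backbone.conv1.weight": ...,
    "backbone.layer1.dwconv.weight": ...,
}

# "Missing keys": the model expects them, the checkpoint lacks them.
missing = sorted(set(model_state) - set(ckpt_state))
# "Unexpected keys": the checkpoint has them, the model does not.
unexpected = sorted(set(ckpt_state) - set(model_state))

print(missing)     # ['backbone.conv1.weight', 'backbone.layer1.dwconv.weight']
print(unexpected)  # ['backbone.layer0.conv.weight', 'backbone.layer0.norm.weight']
```

With the real checkpoint, both sets being non-empty (as in the traceback) means the weights were saved from a different architecture than the one currently instantiated; after checking out the matching branch, both diffs should be empty.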
Thank you for the reply; I have solved this problem.
Now I want to do the test and inference part, i.e. input an image and get back the image pose. How do I do that?