-
Hi @vikashg, may I know what this Thanks in advance.
-
It would be worth looking at the function code for `inception_v3` to see whether anything done there makes a difference to what you're doing; you might have to dig further into the model's implementation details as well. One thing to try: load a pretrained model, re-randomise the weights, then load your saved weights again. If the result is good, then something else in the network's state is responsible. Also, does your saved model state cover all the weights of the model, or only the ones you fine-tuned? If the latter, loading a non-pretrained model gives you random weights, with only some of them restored to what you saved when you call `load_state_dict`.
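One way to check the coverage question is to compare the checkpoint's keys against the model's own `state_dict`. A minimal sketch with a small stand-in module (`TinyNet` is made up for illustration; the same check works on a real `inception_v3` and a checkpoint loaded with `torch.load`):

```python
import torch
import torch.nn as nn

# Small stand-in network; the same check applies to torchvision's inception_v3.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(4, 8)
        self.classifier = nn.Linear(8, 2)

model = TinyNet()

# Simulate a checkpoint that only saved the fine-tuned classifier head.
partial_state = {k: v for k, v in model.state_dict().items()
                 if k.startswith("classifier")}

# strict=False reports the mismatch instead of raising; the keys listed
# under "missing" keep their random initial values after loading.
missing, unexpected = model.load_state_dict(partial_state, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```

If the "missing keys" list is non-empty for your checkpoint, those parameters were never restored, which would explain why the pretrained initialisation matters.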
-
Hello everyone,
I came across an issue that I had overlooked in the past. I wonder if any of you have faced a similar thing and/or know an explanation.

So, I am using a pretrained model, fine-tuning it, and then saving it. Nothing crazy here. However, when it comes to doing predictions with this model, I load it as follows. If I do this, I get very bad predictions (almost 50 percent accuracy). However, if I turn the `pretrained_flag` to `True`, my answers are correct. I wonder why that is. My understanding was that with the line `model = models.inception_v3(pretrained_flag=False)` I get the model architecture definition, and whatever weights it contains are overwritten by `model.load_state_dict(torch.load(saved_model_filename))`, so it should not matter whether I set the pretrained flag to 0 or 1. Am I missing something here? Please advise.

Thanks so much everyone.
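For reference, here is a minimal sketch of that understanding with plain `nn.Linear` layers: `load_state_dict` should overwrite whatever the constructor initialised, regardless of how the weights started out:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
a = nn.Linear(3, 3)

torch.manual_seed(1)
b = nn.Linear(3, 3)  # initialised differently from a

# Loading a's state into b should make their parameters identical,
# no matter what b's initial values were.
b.load_state_dict(a.state_dict())

same = all(torch.equal(p, q)
           for p, q in zip(a.state_dict().values(), b.state_dict().values()))
print(same)
```

So if loading truly restores every weight, the flag should be irrelevant, yet in my case it is not.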