bash scripts/DINO_train.sh /home/slam/Downloads/REU_Srivatsa/radar_img --pretrain_model_path /home/slam/Downloads/REU_Srivatsa/checkpoint0011_4scale.pth --finetune_ignore label_enc.weight class_embed
When I run the above command after modifying the config file to match the number of classes in my dataset, I get the following torch size-mismatch error:
RuntimeError: Error(s) in loading state_dict for DINO:
size mismatch for transformer.decoder.class_embed.0.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for transformer.decoder.class_embed.0.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for transformer.decoder.class_embed.1.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for transformer.decoder.class_embed.1.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for transformer.decoder.class_embed.2.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for transformer.decoder.class_embed.2.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for transformer.decoder.class_embed.3.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for transformer.decoder.class_embed.3.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for transformer.decoder.class_embed.4.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for transformer.decoder.class_embed.4.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for transformer.decoder.class_embed.5.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for transformer.decoder.class_embed.5.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for transformer.enc_out_class_embed.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for transformer.enc_out_class_embed.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for label_enc.weight: copying a param with shape torch.Size([92, 256]) from checkpoint, the shape in current model is torch.Size([15, 256]).
size mismatch for class_embed.0.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for class_embed.0.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for class_embed.1.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for class_embed.1.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for class_embed.2.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for class_embed.2.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for class_embed.3.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for class_embed.3.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for class_embed.4.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for class_embed.4.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for class_embed.5.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for class_embed.5.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([11]).
If I fine-tune without changing the config file, I instead get a very low AP of 0.26. I am training the dino_4scale model. Please help me resolve this error as soon as possible.
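The mismatch is expected: the COCO checkpoint's classification heads have 91 output classes, while the custom-dataset model has 11, so those tensors cannot be copied. The usual fix is what `--finetune_ignore` is meant to do: drop every parameter whose name contains one of the ignore keywords, then load the rest non-strictly. As a minimal sketch (the checkpoint key `"model"`, the path, and the `model` variable are assumptions about this setup, not confirmed from the repo):

```python
def filter_checkpoint(state_dict, ignore_keywords):
    """Drop parameters whose name contains any ignore keyword, so the
    remaining weights can be loaded with load_state_dict(strict=False).
    This mirrors the intent of a --finetune_ignore style flag."""
    kept, dropped = {}, []
    for name, tensor in state_dict.items():
        if any(kw in name for kw in ignore_keywords):
            dropped.append(name)
        else:
            kept[name] = tensor
    return kept, dropped

# Hypothetical usage for the checkpoint in this report:
# ckpt = torch.load("checkpoint0011_4scale.pth", map_location="cpu")["model"]
# kept, dropped = filter_checkpoint(ckpt, ["label_enc.weight", "class_embed"])
# model.load_state_dict(kept, strict=False)  # non-strict: heads stay randomly initialized
```

Every mismatched key in the traceback above (`label_enc.weight`, `class_embed.*`, `transformer.decoder.class_embed.*`, `transformer.enc_out_class_embed.*`) contains one of those two keywords, so filtering them out and loading with `strict=False` avoids the `RuntimeError`; the new heads are then trained from scratch during fine-tuning.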