
Underwhelming results when testing with personal images (+ bug fix) #2

Open
lrizzello opened this issue Aug 2, 2021 · 7 comments
lrizzello commented Aug 2, 2021

Hello,

First of all, thank you for sharing your code.
I ran into a few problems when trying to run your eval.py code. It seems the value of config.gdw_size under the condition if (config.net_size == "s"): should be 1024 instead of 512; without that change, the model crashes when the weights are loaded.
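For reference, the local change I applied looks roughly like this (the exact surrounding config code is an assumption; only the gdw_size value is the point):

# config.py -- sketch of the fix; the surrounding structure is assumed
if (config.net_size == "s"):
    config.gdw_size = 1024  # was 512; loading the published checkpoint crashes otherwise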

With that fixed, I tried running the model on some of my own images and I'm getting pretty underwhelming results. I haven't tried reproducing your results on the existing datasets, but I have no doubt your reported metrics are accurate, so I must be missing a step on my end. This is what I have tried so far:

import os
import sys

import torch
import numpy as np
from scipy.spatial.distance import cdist
from cv2 import imread, resize

import backbones.mixnetm as mx

sys.path.append('/root/xy/work_dir/xyface/')
from config import config as cfg


if __name__ == "__main__":
    # Load the two face crops and scale pixel values to [0, 1]
    img1 = imread("o1.jpg")
    img2 = imread("o2.jpg")
    img1 = resize(img1, (112, 112)) / 255
    img2 = resize(img2, (112, 112)) / 255
    # Normalize with mean 0.5 and std 0.5
    img1 = (img1 - 0.5) / 0.5
    img2 = (img2 - 0.5) / 0.5
    # Build the mixnet_s backbone and load the published checkpoint on CPU
    backbone = mx.mixnet_s(embedding_size=cfg.embedding_size, width_scale=cfg.scale, gdw_size=cfg.gdw_size).to("cpu")
    backbone.load_state_dict(torch.load(os.path.join('147836backbone.pth'), map_location=torch.device('cpu')))
    model = torch.nn.DataParallel(backbone)
    model.eval()
    # HWC -> CHW, add a batch dimension, and run a forward pass to get the embeddings
    img1_embeddings = model(
        torch.tensor(np.expand_dims(np.moveaxis(img1, -1, 0), axis=0)).type(torch.float)).detach().cpu().numpy()
    img2_embeddings = model(
        torch.tensor(np.expand_dims(np.moveaxis(img2, -1, 0), axis=0)).type(torch.float)).detach().cpu().numpy()
    # Cosine similarity between the two embeddings
    cos_sim = 1 - cdist(img1_embeddings, img2_embeddings, 'cosine')

o1.jpg and o2.jpg are faces detected and aligned by MTCNN, as you did in your paper. As you can see, I scale the RGB values to the [0, 1] range and normalize them with mean 0.5 and std 0.5, as I saw done elsewhere in your code.
I am not sure what is missing; could you provide me with some assistance, please?

You can find an example of two faces below. I get cos_sim = 0.92102719 when comparing them, which seems too high.
https://drive.google.com/drive/folders/1YUY3ZkxPQFemWzQ8byHdNFUidxw5OOKp?usp=sharing

hillaric commented Aug 5, 2021

I'm running into the same problem. Do you have a solution?

@lrizzello (Author)

Not yet, unfortunately.

fdbtrs (Owner) commented Aug 8, 2021

You need to align the testing data using https://github.com/fdbtrs/mixfacenets/blob/main/utils/align_trans.py
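For anyone trying this, a minimal sketch of that kind of alignment, assuming MTCNN's five facial landmarks are available: the landmarks are warped onto the standard five-point reference template for a 112x112 crop. This uses cv2 directly and the function name is hypothetical; the repo's align_trans.py provides its own helpers for the same step.

import cv2
import numpy as np

# Standard five-point reference landmarks for a 112x112 aligned face crop
REFERENCE_POINTS = np.float32([
    [38.2946, 51.6963],  # left eye
    [73.5318, 51.5014],  # right eye
    [56.0252, 71.7366],  # nose tip
    [41.5493, 92.3655],  # left mouth corner
    [70.7299, 92.2041],  # right mouth corner
])

def align_face(image, landmarks):
    # Estimate a similarity transform (rotation + uniform scale + translation)
    # that maps the detected MTCNN landmarks onto the reference template, then warp.
    src = np.float32(landmarks).reshape(5, 2)
    matrix, _ = cv2.estimateAffinePartial2D(src, REFERENCE_POINTS, method=cv2.LMEDS)
    return cv2.warpAffine(image, matrix, (112, 112))

The aligned 112x112 crop would then go through the same scaling and normalization as in the snippet above before being embedded.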

@hillaric

Can you show the result for two images? I also aligned the testing data, but nothing changed. You should give it a try yourself; that would be more convincing than just saying so.

xiakj commented Dec 9, 2021

Similar issue here; marking this to follow.

sellaziz commented Apr 7, 2022

Did you manage to fix the issue?

I tried different weights (from the linked Dropbox) and got different results (~0.5217 with 295672backbone.pth), which makes me wonder which backbone weights reproduce the author's reported results.

Thank you for your help.

@davidmcarreira

> Did you manage to fix the issue?
>
> I tried different weights (from the linked Dropbox) and got different results (~0.5217 with 295672backbone.pth), which makes me wonder which backbone weights reproduce the author's reported results.
>
> Thank you for your help.

I also tried to replicate the results for the LFW evaluation and so far the best result is ~0.58... Did you manage to fix the issue?
