
Hello! I want #258

Open
SZW-zjbd opened this issue Oct 17, 2023 · 5 comments
Labels
bug Something isn't working

Comments

@SZW-zjbd

Hello! Thanks for taking interest in EGG. We are glad you're here to help us improve it!
Please fill in this template as thoroughly as possible; that will make things much easier for us :)

Expected Behavior

Current Behavior

Steps to Reproduce

Detailed Description

Possible Implementation

SZW-zjbd added the bug label on Oct 17, 2023
@SZW-zjbd
Author

I would like to ask where the code link for the paper (Cross-Domain Image Captioning with Discriminative Finetuning) is. Why is the link empty? Please provide the code if possible. Thank you!

@robertodessi
Contributor

Hi,

The code for the paper is here: https://github.com/robertodessi/EGG/tree/rll_refactor/egg/zoo/emergent_captioner
It's just in my private branch; I'll try to merge it as soon as I can, but I lost push rights since I am no longer at Meta.

The command to launch it and reproduce the clipcap experiments in the paper is

python -m egg.zoo.emergent_captioner.finetuning.train \
    --dataset_dir <PATH TO YOUR COCO DIRECTORY> \
    --clipcap_model_path <PATH TO A CLIPCAP CHECKPOINT, YOU CAN GET IT FROM THEIR REPO> \
    --baseline mean \
    --n_epochs 20 \
    --batch_size 100 \
    --lr 1e-7 \
    --max_length 20

Hope this helps.

@JuneCly

JuneCly commented Nov 6, 2023


Hi! I'm very interested in your nice work! Could you please provide the pre-trained checkpoint file of the ClipCap model? Thank you!

@nidongpinyinme


Sorry to bother you, but I can't find the hard-negative files referenced in your utils.py:

DATASET2NEG_PATHS = {
    "flickr": (
        "/private/home/rdessi/EGG/egg/zoo/emergent_captioner/hard_negatives/flickr/train_flickr.emb.pt",
        "/private/home/rdessi/EGG/egg/zoo/emergent_captioner/hard_negatives/flickr/train_flickr.nns.pt",
    ),
    "coco": (
        "/private/home/rdessi/EGG/egg/zoo/emergent_captioner/hard_negatives/coco/train_coco.emb.pt",
        "/private/home/rdessi/EGG/egg/zoo/emergent_captioner/hard_negatives/coco/train_coco.nns.pt",
    ),
    "conceptual": (
        "/private/home/rdessi/EGG/egg/zoo/emergent_captioner/hard_negatives/conceptual/train_conceptual.emb.pt",
        "/private/home/rdessi/EGG/egg/zoo/emergent_captioner/hard_negatives/conceptual/train_conceptual.nns.pt",
    ),
}

@robertodessi
Contributor

Hi, thanks for your interest in the paper.
For the hard negatives, you need to recompute them yourself; you can use the script here
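For anyone landing here later: judging by the file names, train_*.emb.pt holds image embeddings for the training split and train_*.nns.pt the nearest-neighbour indices used to pick hard negatives. The sketch below only illustrates that idea and is not the script from the rll_refactor branch; the CLIP variant, the neighbourhood size k, and the output file names are assumptions.

from pathlib import Path

import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # assumed CLIP variant

# Paths to your training images (e.g. the COCO train split).
image_paths = sorted(Path("path/to/coco/train2014").glob("*.jpg"))

# Encode every training image with CLIP and L2-normalise the embeddings.
embs = []
with torch.no_grad():
    for path in image_paths:
        image = preprocess(Image.open(path)).unsqueeze(0).to(device)
        emb = model.encode_image(image)
        embs.append((emb / emb.norm(dim=-1, keepdim=True)).float().cpu())
embs = torch.cat(embs)

# For each image, keep the indices of its k most similar images (excluding
# itself) as candidate hard negatives. For a COCO-sized dataset you would
# compute this similarity matrix in chunks rather than all at once.
k = 100  # assumed neighbourhood size
sims = embs @ embs.t()
sims.fill_diagonal_(-float("inf"))
nns = sims.topk(k, dim=-1).indices

torch.save(embs, "train_coco.emb.pt")  # assumed output names, matching the
torch.save(nns, "train_coco.nns.pt")   # paths in DATASET2NEG_PATHS

The actual script in the rll_refactor branch may compute and store these differently, so treat this only as a starting point.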
