forked from open-mmlab/mmpretrain
Commit
This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.
[Feature] Support multiple multi-modal algorithms and inferencers. (open-mmlab#1561)

* [Feat] Migrate blip caption to mmpretrain. (open-mmlab#50)
* Migrate blip caption to mmpretrain
* minor fix
* support train
* [Feature] Support OFA caption task. (open-mmlab#51)
* [Feature] Support OFA caption task.
* Remove duplicated files.
* [Feature] Support OFA vqa task. (open-mmlab#58)
* [Feature] Support OFA vqa task.
* Fix lint.
* [Feat] Add BLIP retrieval to mmpretrain. (open-mmlab#55)
* init
* minor fix for train
* fix according to comments
* refactor
* Update Blip retrieval. (open-mmlab#62)
* [Feature] Support OFA visual grounding task. (open-mmlab#59)
* [Feature] Support OFA visual grounding task.
* minor add TODO

---------

Co-authored-by: yingfhu <[email protected]>

* [Feat] Add flamingos coco caption and vqa. (open-mmlab#60)
* first init
* init flamingo coco
* add vqa
* minor fix
* remove unnecessary modules
* Update config
* Use `ApplyToList`.

---------

Co-authored-by: mzr1996 <[email protected]>

* [Feature]: BLIP2 coco retrieval (open-mmlab#53)
* [Feature]: Add blip2 retriever
* [Feature]: Add blip2 all modules
* [Feature]: Refine model
* [Feature]: x1
* [Feature]: Runnable coco ret
* [Feature]: Runnable version
* [Feature]: Fix lint
* [Fix]: Fix lint
* [Feature]: Use 364 img size
* [Feature]: Refactor blip2
* [Fix]: Fix lint
* refactor files
* minor fix
* minor fix

---------

Co-authored-by: yingfhu <[email protected]>

* Remove
* fix blip caption inputs (open-mmlab#68)
* [Feat] Add BLIP NLVR support. (open-mmlab#67)
* first init
* init flamingo coco
* add vqa
* add nlvr
* refactor nlvr
* minor fix
* minor fix
* Update dataset

---------

Co-authored-by: mzr1996 <[email protected]>

* [Feature]: BLIP2 Caption (open-mmlab#70)
* [Feature]: Add language model
* [Feature]: blip2 caption forward
* [Feature]: Reproduce the results
* [Feature]: Refactor caption
* refine config

---------

Co-authored-by: yingfhu <[email protected]>

* [Feat] Migrate BLIP VQA to mmpretrain (open-mmlab#69)
* reformat
* change
* change
* change
* change
* change
* change
* change
* change
* change
* change
* change
* change
* change
* change
* change
* change
* change
* change
* change
* refactor code

---------

Co-authored-by: yingfhu <[email protected]>

* Update RefCOCO dataset
* [Fix] fix lint
* [Feature] Implement inference APIs for multi-modal tasks. (open-mmlab#65)
* [Feature] Implement inference APIs for multi-modal tasks.
* [Project] Add gradio demo.
* [Improve] Update requirements
* Update flamingo
* Update blip
* Add NLVR inferencer
* Update flamingo
* Update hugging face model register
* Update ofa vqa
* Update BLIP-vqa (open-mmlab#71)
* Update blip-vqa docstring (open-mmlab#72)
* Refine flamingo docstring (open-mmlab#73)
* [Feature]: BLIP2 VQA (open-mmlab#61)
* [Feature]: VQA forward
* [Feature]: Reproduce accuracy
* [Fix]: Fix lint
* [Fix]: Add blank line
* minor fix

---------

Co-authored-by: yingfhu <[email protected]>

* [Feature]: BLIP2 docstring (open-mmlab#74)
* [Feature]: Add caption docstring
* [Feature]: Add docstring to blip2 vqa
* [Feature]: Add docstring to retrieval
* Update BLIP-2 metafile and README (open-mmlab#75)
* [Feature]: Add readme and docstring
* Update blip2 results

---------

Co-authored-by: mzr1996 <[email protected]>

* [Feature] BLIP Visual Grounding on MMPretrain Branch (open-mmlab#66)
* blip grounding merge with mmpretrain
* remove commit
* blip grounding test and inference api
* refcoco dataset
* refcoco dataset refine config
* rebasing
* gitignore
* rebasing
* minor edit
* minor edit
* Update blip-vqa docstring (open-mmlab#72)
* rebasing
* Revert "minor edit"
  This reverts commit 639cec757c215e654625ed0979319e60f0be9044.
* blip grounding final
* precommit
* refine config
* refine config
* Update blip visual grounding

---------

Co-authored-by: Yiqin Wang 王逸钦 <[email protected]>
Co-authored-by: mzr1996 <[email protected]>

* Update visual grounding metric
* Update OFA docstring, README and metafiles. (open-mmlab#76)
* [Docs] Update installation docs and gradio demo docs. (open-mmlab#77)
* Update OFA name
* Update Visual Grounding Visualizer
* Integrate accelerate support
* Fix imports.
* Fix timm backbone
* Update imports
* Update README
* Update circle ci
* Update flamingo config
* Add gradio demo README
* [Feature]: Add scienceqa (open-mmlab#1571)
* [Feature]: Add scienceqa
* [Feature]: Change param name
* Update docs
* Update video

---------

Co-authored-by: Hubert <[email protected]>
Co-authored-by: yingfhu <[email protected]>
Co-authored-by: Yuan Liu <[email protected]>
Co-authored-by: Yiqin Wang 王逸钦 <[email protected]>
Co-authored-by: Rongjie Li <[email protected]>
1 parent 770eb8e · commit 6847d20
Showing 142 changed files with 17,961 additions and 414 deletions.
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
@@ -0,0 +1,69 @@
# data settings
data_preprocessor = dict(
    type='MultiModalDataPreprocessor',
    mean=[122.770938, 116.7460125, 104.09373615],
    std=[68.5005327, 66.6321579, 70.32316305],
    to_rgb=True,
)

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='RandomResizedCrop',
        scale=384,
        interpolation='bicubic',
        backend='pillow'),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(type='CleanCaption', keys='gt_caption'),
    dict(
        type='PackInputs',
        algorithm_keys=['gt_caption'],
        meta_keys=['image_id'],
    ),
]

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='Resize',
        scale=(384, 384),
        interpolation='bicubic',
        backend='pillow'),
    dict(type='PackInputs', meta_keys=['image_id']),
]

train_dataloader = dict(
    batch_size=32,
    num_workers=5,
    dataset=dict(
        type='COCOCaption',
        data_root='data/coco',
        ann_file='annotations/coco_karpathy_train.json',
        pipeline=train_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True),
    persistent_workers=True,
    drop_last=True,
)

val_dataloader = dict(
    batch_size=16,
    num_workers=5,
    dataset=dict(
        type='COCOCaption',
        data_root='data/coco',
        ann_file='annotations/coco_karpathy_val.json',
        pipeline=test_pipeline,
    ),
    sampler=dict(type='DefaultSampler', shuffle=False),
    persistent_workers=True,
)

val_evaluator = dict(
    type='COCOCaption',
    ann_file='data/coco/annotations/coco_karpathy_val_gt.json',
)

# If you want standard test, please manually configure the test dataset
test_dataloader = val_dataloader
test_evaluator = val_evaluator
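The pipeline entries in this config are plain dicts whose `type` field is resolved against a registry when the runner builds the transforms. Below is a minimal pure-Python sketch of that build mechanism; the classes are simplified stand-ins for illustration, not the real mmpretrain transforms (for example, this `RandomFlip` only records that it ran instead of flipping pixels):

```python
# Sketch of mmengine-style config resolution: each dict(type=...) entry
# is looked up in a registry and instantiated with the remaining keys.

REGISTRY = {}

def register(cls):
    """Register a transform class under its own name."""
    REGISTRY[cls.__name__] = cls
    return cls

def build(cfg):
    """Instantiate a transform from a config dict (stand-in for build_from_cfg)."""
    cfg = dict(cfg)                     # copy so we can pop 'type' safely
    return REGISTRY[cfg.pop('type')](**cfg)

@register
class RandomFlip:
    # Stand-in: real transform flips the image with probability `prob`.
    def __init__(self, prob=0.5, direction='horizontal'):
        self.prob, self.direction = prob, direction
    def __call__(self, results):
        results.setdefault('applied', []).append(f'flip({self.direction})')
        return results

@register
class CleanCaption:
    # Stand-in: real transform normalizes caption text more thoroughly.
    def __init__(self, keys):
        self.keys = [keys] if isinstance(keys, str) else keys
    def __call__(self, results):
        for k in self.keys:
            results[k] = results[k].strip().lower()
        return results

pipeline_cfg = [
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(type='CleanCaption', keys='gt_caption'),
]
pipeline = [build(c) for c in pipeline_cfg]

sample = {'gt_caption': '  A Dog on a Skateboard. '}
for transform in pipeline:
    sample = transform(sample)
print(sample['gt_caption'])  # a dog on a skateboard.
```

In the actual config, `train_pipeline` and `test_pipeline` are handed to the dataset, which applies the built transforms to each sample in order before the dataloader batches them.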