
How to implement cross-modal referencing? #62

Open
hzdzkjdxyjs opened this issue Mar 25, 2024 · 2 comments

Comments

@hzdzkjdxyjs

Your work is very inspiring to my own, and I would like to ask how you express cross-modal referential relations for regional information. For example, how do you ask questions across modalities using coordinates?
For comparison, the BLIP model fuses image and text information after tokenization. How do you perform such fusion?
Thank you very much for taking time out of your busy schedule to look at my question. :)
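For context, a common way Shikra-style models express region references is to normalize the box coordinates and write them directly into the prompt as plain text, so the LLM consumes them as ordinary tokens. A minimal sketch of that idea (the function name and the example box/values are illustrative, not taken from this repository):

```python
def box_to_text(box, image_w, image_h, precision=3):
    """Normalize an (x1, y1, x2, y2) pixel box to [0, 1] and render it as text.

    The coordinate string becomes ordinary tokens in the prompt, so a region
    can be referenced without a dedicated cross-modal fusion module.
    """
    x1, y1, x2, y2 = box
    norm = [x1 / image_w, y1 / image_h, x2 / image_w, y2 / image_h]
    return "[" + ",".join(f"{v:.{precision}f}" for v in norm) + "]"

# Hypothetical usage: splice the coordinate string into the question text.
question = f"What is the person in the region {box_to_text((120, 40, 380, 420), 640, 480)} doing?"
print(question)
# -> What is the person in the region [0.188,0.083,0.594,0.875] doing?
```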

@ajpsifadiosf

I have had this question before, but my view is that current MLLMs prefer a question-and-answer format. Specialized cross-modal fusion modules like those in the earlier ALBEF and BLIP are no longer mainstream. In fact, Shikra does not seem to address downstream tasks such as image-text retrieval.
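To illustrate that point: in LLaVA/Shikra-style designs, the image features are simply projected into the LLM's embedding space and concatenated with the text token embeddings, so the LLM's own self-attention does the "fusion". A rough PyTorch sketch, with dimensions chosen only for illustration and not taken from this repository:

```python
import torch
import torch.nn as nn

class VisualPrefix(nn.Module):
    """Rough sketch of LLaVA/Shikra-style visual conditioning (dims illustrative)."""

    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)  # learned adapter/projector

    def forward(self, image_feats, text_embeds):
        # image_feats: (batch, num_patches, vision_dim) from a frozen vision encoder
        # text_embeds: (batch, num_text_tokens, llm_dim) from the LLM's embedding table
        visual_tokens = self.proj(image_feats)
        # "Fusion" is just sequence concatenation; the LLM's self-attention lets
        # text tokens (including coordinate strings) attend to the image tokens.
        return torch.cat([visual_tokens, text_embeds], dim=1)
```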

@Yonggie

Yonggie commented Jun 19, 2024

I've got the same question. It seems BLIP-2 is the end of the work on this path.
