Your work has been very inspiring to mine, and I would like to ask how you express cross-modal referential relations for regional information. For example, how do you ask questions cross-modally through coordinates?
As another example, the BLIP model fuses image and text information after tokenization. How do you perform such fusion?
Thank you very much for taking the time out of your busy schedule to look at my question. :)
I have had this question before as well. My view is that current MLLMs prefer a question-and-answer format: specialized cross-modal fusion modules like those in ALBEF and BLIP are no longer mainstream. In fact, Shikra does not seem to handle downstream tasks such as image-text retrieval.
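For concreteness, my understanding (a hedged sketch, not the official implementation) is that Shikra-style models simply serialize region coordinates as plain decimal text inside the prompt, normalized by the image size, so the LLM consumes them as ordinary tokens alongside the projected visual features. The helper names `normalize_box` and `format_box_prompt` and the coordinate precision below are my own assumptions:

```python
def normalize_box(box, img_w, img_h, precision=3):
    """Normalize a pixel-space (x1, y1, x2, y2) box to [0, 1] and round.

    Writing coordinates as plain decimal text lets the LLM treat them as
    ordinary tokens; the exact precision Shikra uses is an assumption here.
    """
    x1, y1, x2, y2 = box
    return [round(x1 / img_w, precision), round(y1 / img_h, precision),
            round(x2 / img_w, precision), round(y2 / img_h, precision)]


def format_box_prompt(question, box, img_w, img_h):
    """Embed the normalized box directly in the question text (hypothetical helper)."""
    nx1, ny1, nx2, ny2 = normalize_box(box, img_w, img_h)
    return f"{question} [{nx1},{ny1},{nx2},{ny2}]"


# Example: asking about a region of a 640x480 image.
prompt = format_box_prompt(
    "What is the object in this region?",
    box=(120, 60, 320, 300), img_w=640, img_h=480,
)
print(prompt)
# -> "What is the object in this region? [0.188,0.125,0.5,0.625]"
```

Since the coordinates are just text, no dedicated fusion module is needed: the projected image features and the coordinate tokens are attended to jointly inside the LLM.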