🍉 I may be slow to respond before the due date of ACL.

Organizations

@dmlc @textmine


Pinned

  1. DAMO-NLP-SG/VideoLLaMA2

    VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs

    Python · 953 stars · 62 forks

  2. DAMO-NLP-SG/Video-LLaMA

    [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding

    Python · 2.8k stars · 264 forks

  3. DAMO-NLP-SG/CLEX

    [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models

    Python · 74 stars · 11 forks

  4. DAMO-NLP-SG/VCD

    [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding

    Python · 223 stars · 11 forks

  5. DAMO-NLP-SG/DiGIT

    [NeurIPS 2024] Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective

    Python · 50 stars · 2 forks

  6. DAMO-NLP-SG/Inf-CLIP

    The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss". A highly memory-efficient CLIP training scheme.

    Python · 211 stars · 9 forks