Update clip.md
boba-and-beer authored Jan 26, 2021
1 parent b033819 commit 93297fe
Showing 1 changed file with 2 additions and 3 deletions.
5 changes: 2 additions & 3 deletions vectorhub/bi_encoders/text_image/torch/clip.md
@@ -3,7 +3,8 @@ model_id: "text_image/clip"
model_name: "CLIP"
vector_length: "512 (default)"
release_date: "2021-01-01"
paper: "https://cdn.openai.com/papers/Learning_Transferable_Visual_Models_From_Natural_Language_Supervision.pdf)"
paper: "https://cdn.openai.com/papers/Learning_Transferable_Visual_Models_From_Natural_Language_Supervision.pdf"
repo: https://github.com/openai/CLIP
installation: "pip install vectorhub[clip]"
category: text-image
short_description: CLIP aims to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner.
@@ -19,8 +20,6 @@ model.encode_text("A purple V")
model.encode_image('https://getvectorai.com/assets/hub-logo-with-text.png')
```
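The diff only shows the tail of the usage snippet; the import and instantiation lines are collapsed above. As a minimal sketch of the complete example, assuming the module exports a `Clip2Vec` class (the import path and class name are assumptions, not shown in this diff):

```python
# Sketch of the complete usage example. The Clip2Vec class name and import
# path are assumptions; those lines are collapsed out of the diff above.
from vectorhub.bi_encoders.text_image.torch import Clip2Vec

model = Clip2Vec()
text_vector = model.encode_text("A purple V")  # 512-dimensional by default
image_vector = model.encode_image('https://getvectorai.com/assets/hub-logo-with-text.png')
```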

-Inspired by [Model Cards for Model Reporting (Mitchell et al.)](https://arxiv.org/abs/1810.03993) and [Lessons from Archives (Jo & Gebru)](https://arxiv.org/pdf/1912.10389.pdf), we’re providing some accompanying information about the multimodal model.
-
## Description

The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
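As a hedged illustration of what "zero-shot image classification" means here (not code from this repository): embed each candidate label as text, embed the image, and pick the label whose embedding is most similar to the image embedding. `Clip2Vec` and its methods are assumed as in the usage sketch above.

```python
# Hypothetical zero-shot classification sketch: choose the label whose
# text embedding has the highest cosine similarity to the image embedding.
# Clip2Vec is assumed as in the usage example above.
import numpy as np
from vectorhub.bi_encoders.text_image.torch import Clip2Vec

model = Clip2Vec()
labels = ["a photo of a cat", "a photo of a dog", "a photo of a logo"]
label_vectors = np.array([model.encode_text(label) for label in labels])
image_vector = np.array(model.encode_image('https://getvectorai.com/assets/hub-logo-with-text.png'))

# Normalize, then score each label against the image by cosine similarity.
label_vectors /= np.linalg.norm(label_vectors, axis=1, keepdims=True)
image_vector /= np.linalg.norm(image_vector)
scores = label_vectors @ image_vector
print(labels[int(np.argmax(scores))])
```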
