
Edge Vision-Language Model (Moondream)

This repository contains the Moondream vision-language model, which generates captions for images. It pairs a lightweight, experimental vision encoder with a language model to produce descriptions of input images.

Links: Website · Hugging Face Model · Hugging Face Spaces

Installation

  1. Clone the repository:

    git clone https://huggingface.co/irotem98/edge_vlm
    cd edge_vlm
  2. Install the required dependencies:

    pip install -r requirements.txt
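
After installing, you can sanity-check the environment. This is a minimal check, assuming PyTorch is among the pinned dependencies (the model code builds on torch):

    python -c "import torch; print(torch.__version__)"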

Usage

Here is a simple example to load the model, preprocess an image, and generate a caption:

from model import MoondreamModel

# Load the model and tokenizer
model = MoondreamModel.load_model()
tokenizer = MoondreamModel.load_tokenizer()

# Load and preprocess an image
image_path = 'img.jpg'  # Replace with your image path
image = MoondreamModel.preprocess_image(image_path)

# Generate the caption
caption = MoondreamModel.generate_caption(model, image, tokenizer)
print('Generated Caption:', caption)
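
The same functions can be applied in a loop to caption a directory of images. The sketch below is an illustration, not part of the repository: the 'images/' directory and the .jpg filter are hypothetical placeholders, and it relies only on the API shown above:

from pathlib import Path
from model import MoondreamModel

# Load the model and tokenizer once, then reuse them for every image
model = MoondreamModel.load_model()
tokenizer = MoondreamModel.load_tokenizer()

# 'images/' is a hypothetical directory; adjust the path and extension as needed
for image_path in Path('images').glob('*.jpg'):
    image = MoondreamModel.preprocess_image(str(image_path))
    caption = MoondreamModel.generate_caption(model, image, tokenizer)
    print(f'{image_path.name}: {caption}')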
