This repository provides a wrapper for using CLIP embeddings for few-shot classification. It contains:

- `app.py` to write the app
- `requirements.txt` to specify python dependencies
- `Containerfile` to containerize the app and specify system dependencies
- an empty `LICENSE` file to replace with the actual license information of the app
- this `README.md` file with basic instructions for app installation and execution
- some GH actions workflows for issue/bug-report management
- a GH actions workflow to build and upload app images upon any push of a git tag
Modify this file as needed to provide proper instructions for your users.
Generally, a CLAMS app requires:

- Python3 with the `clams-python` module installed, to run the app locally
- `docker`, to run the app in a Docker container (as an HTTP server)
- an HTTP client utility (such as `curl`), to invoke and execute analysis
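For a local (non-container) run, here is a minimal sketch, assuming the pinned dependencies in `requirements.txt` and the standard CLAMS template entry point in `app.py`:

```bash
# install the pinned Python dependencies, including clams-python
pip install -r requirements.txt

# start the app as a local HTTP server
# (the CLAMS app template listens on port 5000 by default)
python app.py
```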
From the project directory, run the following in your terminal to build the Docker image from the included Containerfile:

```bash
docker build . -f Containerfile -t <app_name>
```
Alternatively, the app may already be available on Docker Hub:

```bash
docker pull <app_name>
```
Then to create a Docker container using that image, run:

```bash
docker run -v /path/to/data/directory:/data -p <port>:5000 <app_name>
```

where `/path/to/data/directory` is the location of your media files or MMIF objects, and `<port>` is the host port number you want your container to listen on. The HTTP server inside the container listens on port 5000 by default.
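For example, a hypothetical invocation that mounts `$HOME/media` as the container's `/data` directory and maps host port 5000, with `my-clams-app` standing in for the image name built above:

```bash
# mount local media under /data and expose the container's port 5000 on the host
# ($HOME/media and my-clams-app are illustrative; substitute your own values)
docker run -v $HOME/media:/data -p 5000:5000 my-clams-app
```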
Once the app is running as an HTTP server, to invoke the app and get automatic annotations, simply send a POST request to the app with a MMIF input as the request body.
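A minimal sketch using `curl`, assuming the container was started with host port 5000 and an input file named `input.mmif` (both hypothetical):

```bash
# send the MMIF file as the POST body and save the annotated MMIF response
curl -X POST --data-binary @input.mmif http://localhost:5000 > output.mmif
```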
MMIF input files can be obtained from the outputs of other CLAMS apps, or you can create an empty MMIF containing only source media locations using the `clams source` command. See the help message for more detailed instructions:

```bash
clams source --help
```
(Make sure you installed the same `clams-python` package version specified in `requirements.txt`.)
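As a sketch of creating such an input MMIF, assuming a single video file at `/data/video.mp4` inside the mounted data directory (the path and MIME type here are illustrative):

```bash
# generate a skeleton MMIF that only points at the source media
clams source video/mp4:/data/video.mp4 > input.mmif
```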