Merge pull request #32 from clamsproject/31-symlink-getrid
dynamic symlinking
keighrim authored Mar 11, 2024
2 parents bbacf1a + 2015e6d commit 10eb0e2
Showing 36 changed files with 140 additions and 154 deletions.
1 change: 0 additions & 1 deletion .dockerignore
@@ -1,4 +1,3 @@
static/tmp*
*~
__pycache__
.git
3 changes: 0 additions & 3 deletions .gitignore
@@ -71,9 +71,6 @@ gdrive_shared*/
tags
.tags

# static archival files
static/tmp*

# VSCode
.devcontainer
devcontainer.json
121 changes: 49 additions & 72 deletions README.md
@@ -7,7 +7,6 @@ This application creates an HTML server that visualizes annotation components in
- Interactive, searchable MMIF tree view with [JSTree](https://www.jstree.com/).
- Embedded [Universal Viewer](https://universalviewer.io/) (assuming file refers to video and/or image document).


The application also includes tailored visualizations depending on the annotations present in the input MMIF:
| Visualization | Supported CLAMS apps |
|---|---|
@@ -16,9 +15,7 @@ The application also includes tailored visualizations depending on the annotatio
| Named entity annotations with [displaCy](https://explosion.ai/demos/displacy-ent) | [SPACY](https://github.com/clamsproject/app-spacy-wrapper) |
| Screenshots & HTML5 video navigation of TimeFrames | [Chyron text recognition](https://github.com/clamsproject/app-chyron-text-recognition), [Slate detection](https://github.com/clamsproject/app-slatedetection), [Bars detection](https://github.com/clamsproject/app-barsdetection) |



Requirements:
## Requirements:

- A command line interface.
- Git (to get the code).
@@ -31,7 +28,9 @@ To get this code if you don't already have it:
$ git clone https://github.com/clamsproject/mmif-visualizer
```

## Quick start
## Startup

### Quick start

If you just want to get the server up and running quickly, the repository contains a shell script `start_visualizer.sh` to immediately launch the visualizer in a container. You can invoke it with the following command:

@@ -42,72 +41,78 @@ If you just want to get the server up and running quickly, the repository contai
* The **required** `data_directory` argument should be the absolute or relative path of the media files on your machine which the MMIF files reference.
* The **optional** `mount_directory` argument should be specified if your MMIF files point to a different directory than where your media files are stored on the host machine. For example, if your video, audio, and text data is stored locally at `/home/archive` but your MMIF files refer to `/data/...`, you should set this variable to `/data`. (If this variable is not set, the mount directory will default to the data directory)

For example, if your media files are stored at `/llc_data` and your MMIF files specify the document location as `"location": "file:///data/...`, you can start the visualizer with the following command:
For example, if your media files are stored at `/my_data` and your MMIF files specify the document location as `"location": "file:///data/...`, you can start the visualizer with the following command:
```
./start_visualizer.sh /llc_data /data
./start_visualizer.sh /my_data /data
```

The server can then be accessed at `http://localhost:5000/upload`

## Running the server in a container

Download or clone this repository and build an image using the `Dockerfile` (you may use another name for the -t parameter, for this example we use `clams-mmif-visualizer` throughout). **NOTE**: if using podman, just substitute `docker` for `podman` in the following commands.
The server can then be accessed at `http://localhost:5001/upload`

```bash
$ docker build . -f Containerfile -t clams-mmif-visualizer
```
The following is a breakdown of the script's functionality:

In these notes we assume that the data are in a local directory named `/Users/Shared/archive` with sub directories `audio`, `image`, `text` and `video` (those subdirectories are standard in CLAMS, but the parent directory could be any directory depending on your local set up). We can now run a Docker container with

```bash
$ docker run --rm -d -p 5000:5000 -v /Users/Shared/archive:/data clams-mmif-visualizer
```
### Running the server natively

See the *Data source repository and input MMIF file* section below for a description of the MMIF file. Assuming you have not made any changes to the directory structure you can use the example MMIF files in the `input` folder.

**Some background**

With the docker command above we do two things of note:
First install the python dependencies listed in `requirements.txt`:

1. The container port 5000 (the default for a Flask server) is exposed to the same port on your Docker host (your local computer) with the `-p` option.
2. The local data repository `/Users/Shared/archive` is mounted to `/data` on the container with the `-v` option.
````bash
$ pip install -r requirements.txt
````

Another useful piece of information is that the Flask server on the Docker container has no direct access to `/data` since it can only see data in the `static` directory of this repository. Therefore we have created a symbolic link `static/data` that links to `/data`:
You will also need to install opencv-python if you are not running within a container (`pip install opencv-python`).
Then, to run the server do:

```bash
$ ln -s /data static/data
$ python app.py
```

With this, the mounted directory `/data` in the container is accessible from inside the `/app/static` directory of the container. You do not need to use this command unless you change your setup because the symbolic link is part of this repository.

Running the server natively means that the source media file paths in the target MMIF file are all accessible in the local file system, under the same directory paths.
If that's not the case, and the paths in the MMIF are beyond your FS permissions, using a container is recommended. See the next section for an example.

#### Data source repository and example MMIF file
This repository contains an example MMIF file in `example/whisper-spacy.json`. This file refers to three media files:

## Running the server locally
1. service-mbrs-ntscrm-01181182.mp4
2. service-mbrs-ntscrm-01181182.wav
3. service-mbrs-ntscrm-01181182.txt

> [!NOTE]
> Note on source/copyright: these documents are sourced from [the National Screening Room collection in the Library of Congress Online Catalog](https://hdl.loc.gov/loc.mbrsmi/ntscrm.01181182). The collection provides the following copyright information:
> > The Library of Congress is not aware of any U.S. copyright or other restrictions in the vast majority of motion pictures in these collections. Absent any such restrictions, these materials are free to use and reuse.
First install the python dependencies listed in `requirements.txt`:
These files can be found in the directory `example/example-documents`. But according to the `whisper-spacy.json` MMIF file, those three files should be found in their respective subdirectories in `/data`.
An easy way to align these paths is to create a symbolic link to the `example-documents` directory inside the `/data` directory (a sketch follows below).
However, since `/data` sits at the filesystem root, you might not have permission to write a new symlink there.
In that case you can more easily re-map the `example/example-documents` directory to `/data` by using the `-v` option in the docker-run command. See below.
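As a rough sketch of the symlink option (not part of the repository; it assumes you run it from the repository root and have permission to create `/data`):

```python
# sketch: expose example/example-documents as /data/{video,audio,text}
# so the file paths in example/whisper-spacy.json resolve on the host
import os
from pathlib import Path

src = Path.cwd() / "example" / "example-documents"  # assumes the repo root is the cwd
data_root = Path("/data")                            # creating this usually needs root privileges
data_root.mkdir(exist_ok=True)
for subdir in ("video", "audio", "text"):
    target = data_root / subdir
    if not target.exists():
        os.symlink(src, target, target_is_directory=True)
```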

````bash
$ pip install -r requirements.txt
````
### Running the server in a container

You will also need to install opencv-python if you are not running within a container (`pip install opencv-python`).
Download or clone this repository and build an image using the `Containerfile` (you may use another name for the -t parameter,
for this example we use `clams-mmif-visualizer` throughout).

Let's again assume that the data are in a local directory `/Users/Shared/archive` with subdirectories `audio`, `image`, `text` and `video`. You need to copy, symlink, or mount that local directory into the `static` directory. Note that the `static/data` symbolic link that is in the repository is set up to work with the docker containers; if you keep it in that form your data need to be in `/data`, otherwise you need to change the link to fit your needs. For example, you could remove the symbolic link and replace it with one that uses your local directory:
> [!NOTE]
> if using podman, just substitute `docker` for `podman` in the following commands.
```bash
$ rm static/data
$ ln -s /Users/Shared/archive static/data
$ docker build . -f Containerfile -t clams-mmif-visualizer
```

To run the server do:
In these notes we assume that the data are in a local directory named `/home/myuser/public` with subdirectories `audio`, `image`, `text` and `video`. We can now run a container with

```bash
$ python app.py
$ docker run --rm -d -p 5001:5000 -v /home/myuser/public:/data clams-mmif-visualizer
```

> [!NOTE]
> With the docker command above we do two things of note:
> 1. The container port 5000 (the default for a Flask server) is exposed to the same port on your host (your local computer) with the `-p` option.
> 2. The local data repository `/home/myuser/public` is mounted to `/data` on the container with the `-v` option.
Now, when you use the `example/example-documents` directory as the data source to visualize the `example/whisper-spacy.json` MMIF file, you need to triple-mount the example directory into the container, as `audio`, `video`, and `text` respectively.

## Uploading Files
MMIF files can be uploaded to the visualization server one of two ways:
$ docker run --rm -d -p 5001:5000 -v $(pwd)/example/example-documents:/data/audio -v $(pwd)/example/example-documents:/data/video -v $(pwd)/example/example-documents:/data/text clams-mmif-visualizer

## Usage
Use the visualizer by uploading files. MMIF files can be uploaded to the visualization server in one of two ways:
* Point your browser to http://0.0.0.0:5000/upload, click "Choose File" and then click "Visualize". This will generate a static URL containing the visualization of the input file (e.g. `http://localhost:5000/display/HaTxbhDfwakewakmzdXu5e`). Once the file is uploaded, the page will automatically redirect to the file's visualization.
* Using a command line, enter:
```
@@ -117,31 +122,3 @@ MMIF files can be uploaded to the visualization server one of two ways:

The server will maintain a cache of up to 50MB for these temporary files, so the visualizations can be repeatedly accessed without needing to re-upload any files. Once this limit is reached, the server will delete stored visualizations until enough space is reclaimed, drawing from oldest/least recently accessed pages first. If you attempt to access the /display URL of a deleted file, you will be redirected back to the upload page instead.
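For the curious, here is a minimal sketch of that eviction policy; the 50MB budget and the per-visualization `last_access.txt` marker follow the description above, but the helper names are illustrative rather than the server's actual code:

```python
# sketch: size-bounded cache eviction, oldest/least recently accessed first
import shutil
from pathlib import Path

MAX_CACHE_BYTES = 50 * 1024 * 1024  # the 50MB budget mentioned above

def cache_size(root: Path) -> int:
    return sum(f.stat().st_size for f in root.glob("**/*") if f.is_file())

def last_access(viz_dir: Path) -> float:
    stamp = viz_dir / "last_access.txt"
    # visualizations that were never accessed sort first and are evicted first
    return stamp.stat().st_mtime if stamp.exists() else 0.0

def evict_until_under_limit(root: Path) -> None:
    while cache_size(root) > MAX_CACHE_BYTES:
        viz_dirs = [d for d in root.iterdir() if d.is_dir()]
        if not viz_dirs:
            break
        shutil.rmtree(min(viz_dirs, key=last_access))
```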


## Data source repository and input MMIF file
The data source includes video, audio, and text (transcript) files that are subjects for the CLAMS analysis tools. As mentioned above, to make this visualizer work with those files and be able to display the contents on the web browser, those source files need to be accessible from inside the `static` directory.

This repository contains an example MMIF file in `input/whisper-spacy.json`. This file refers to three media files:

1. service-mbrs-ntscrm-01181182.mp4
2. service-mbrs-ntscrm-01181182.wav
3. service-mbrs-ntscrm-01181182.txt

These files can be found in the directory `input/example-documents`. They can be moved anywhere on the host machine, as long as they are placed in the subdirectories `video`, `audio`, and `text` respectively. (e.g. `/Users/Shared/archive/video`, etc.)

According to the MMIF file, those three files should be found in their respective subdirectories in `/data`. The Flask server will look for these files in `static/data/video`, `static/data/audio` and `static/data/text`, and those directories should point at the appropriate location:

- If you run the visualizer in a Docker container, then the `-v` option in the docker-run command is used to mount the local data directory `/Users/shared/archive` to the `/data` directory on the container and the `static/data` symlink already points to that.
- If you run the visualizer on your local machine without using a container, then you have a couple of options (where you may need to remove the current link first):
- Make sure that the `static/data` symlink points at the local data directory
`$ ln -s /Users/Shared/archive/ static/data`
- Copy the contents of `/Users/Shared/archive` into `static/data`.
- You could choose to copy the data to any spot in the `static` folder but then you would have to edit the MMIF input file.


---
Note on source/copyright: these documents are sourced from [the National Screening Room collection in the Library of Congress Online Catalog](https://hdl.loc.gov/loc.mbrsmi/ntscrm.01181182). The collection provides the following copyright information:

> The Library of Congress is not aware of any U.S. copyright or other restrictions in the vast majority of motion pictures in these collections. Absent any such restrictions, these materials are free to use and reuse.
---
37 changes: 24 additions & 13 deletions app.py
@@ -21,7 +21,7 @@ def index():
def ocr():
try:
data = dict(request.json)
mmif_str = open(cache.get_cache_path() / data["mmif_id"] / "file.mmif").read()
mmif_str = open(cache.get_cache_root() / data["mmif_id"] / "file.mmif").read()
mmif = Mmif(mmif_str)
ocr_view = mmif.get_view_by_id(data["view_id"])
return prepare_ocr_visualization(mmif, ocr_view, data["mmif_id"])
@@ -67,23 +67,29 @@ def upload():
def invalidate_cache():
app.logger.debug(f"Request to invalidate cache on {request.args}")
if not request.args.get('viz_id'):
app.logger.debug("Invalidating entire cache.")
cache.invalidate_cache()
return redirect("/upload")
viz_id = request.args.get('viz_id')
in_mmif = open(cache.get_cache_path() / viz_id / 'file.mmif', 'rb').read()
in_mmif = open(cache.get_cache_root() / viz_id / 'file.mmif', 'rb').read()
app.logger.debug(f"Invalidating {viz_id} from cache.")
cache.invalidate_cache([viz_id])
return upload_file(in_mmif)


@app.route('/display/<viz_id>')
def display(viz_id):
try:
path = cache.get_cache_path() / viz_id
path = cache.get_cache_root() / viz_id
app.logger.debug(f"Displaying visualization {viz_id} from {path}")
if os.path.exists(path / "index.html"):
app.logger.debug(f"Visualization {viz_id} found in cache.")
set_last_access(path)
with open(os.path.join(path, "index.html")) as f:
html_file = f.read()
return html_file
except FileNotFoundError:
else:
app.logger.debug(f"Visualization {viz_id} not found in cache.")
os.remove(path)
flash("File not found -- please upload again (it may have been deleted to clear up cache space).")
return redirect("/upload")

@@ -95,12 +101,12 @@ def send_js(path):

def render_mmif(mmif_str, viz_id):
mmif = Mmif(mmif_str)
media = documents_to_htmls(mmif, viz_id)
app.logger.debug(f"Prepared Media: {[m[0] for m in media]}")
htmlized_docs = documents_to_htmls(mmif, viz_id)
app.logger.debug(f"Prepared document: {[d[0] for d in htmlized_docs]}")
annotations = prep_annotations(mmif, viz_id)
app.logger.debug(f"Prepared Annotations: {[annotation[0] for annotation in annotations]}")
return render_template('player.html',
media=media, viz_id=viz_id, annotations=annotations)
docs=htmlized_docs, viz_id=viz_id, annotations=annotations)


def upload_file(in_mmif):
@@ -109,7 +115,7 @@ def upload_file(in_mmif):
in_mmif_str = in_mmif_bytes.decode('utf-8')
viz_id = hashlib.sha1(in_mmif_bytes).hexdigest()
app.logger.debug(f"Visualization ID: {viz_id}")
path = cache.get_cache_path() / viz_id
path = cache.get_cache_root() / viz_id
app.logger.debug(f"Visualization Directory: {path}")
try:
os.makedirs(path)
@@ -136,9 +142,14 @@ def upload_file(in_mmif):

if __name__ == '__main__':
# Make path for temp files
cache_path = cache.get_cache_path()
if not os.path.exists(cache_path):
os.makedirs(cache_path)
cache_path = cache.get_cache_root()
cache_symlink_path = os.path.join(app.static_folder, cache._CACHE_DIR_SUFFIX)
if os.path.islink(cache_symlink_path):
os.unlink(cache_symlink_path)
elif os.path.exists(cache_symlink_path):
raise RuntimeError(f"Expected {cache_symlink_path} to be a symlink (for re-linking to a new cache dir, "
f"but it is a real path.")
os.symlink(cache_path, cache_symlink_path)

# to avoid runtime errors for missing keys when using flash()
alphabet = 'abcdefghijklmnopqrstuvwxyz1234567890'
@@ -148,4 +159,4 @@ def upload_file(in_mmif):
if len(sys.argv) > 2 and sys.argv[1] == '-p':
port = int(sys.argv[2])

app.run(port=port, host='0.0.0.0', debug=True, use_reloader=False)
app.run(port=port, host='0.0.0.0', debug=True, use_reloader=True)
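The `__main__` block above is the core of this commit: instead of shipping a data/cache symlink as part of the repository, the app now creates a temporary cache directory at startup and re-links it under Flask's static folder so cached visualizations remain servable. A stripped-down sketch of that pattern (names are illustrative, not the app's exact API):

```python
# sketch: link a per-process temp cache dir under a Flask static folder at startup
import os
import tempfile
from pathlib import Path
from flask import Flask

app = Flask(__name__)
CACHE_SUFFIX = "mmif-viz-cache"                    # mirrors cache._CACHE_DIR_SUFFIX used above
cache_root = tempfile.TemporaryDirectory(suffix=CACHE_SUFFIX)

link = Path(app.static_folder) / CACHE_SUFFIX
if link.is_symlink():
    link.unlink()                                  # stale link from a previous run
elif link.exists():
    raise RuntimeError(f"{link} exists and is not a symlink; refusing to replace it")
os.symlink(cache_root.name, link)                  # cache content is now reachable under static/
```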
29 changes: 13 additions & 16 deletions cache.py
@@ -1,31 +1,28 @@
import os
import time
import pathlib
import shutil
import tempfile
import threading
import pathlib

from utils import app
import time

lock = threading.Lock()


def get_cache_path():
return pathlib.Path(app.static_folder) / "tmp"
# module constants are unchanged throughout multiple "imports"
_CACHE_DIR_SUFFIX = "mmif-viz-cache"
_CACHE_DIR_ROOT = tempfile.TemporaryDirectory(suffix=_CACHE_DIR_SUFFIX)


def get_cache_relpath(full_path):
return str(full_path)[len(app.static_folder):]
def get_cache_root():
return pathlib.Path(_CACHE_DIR_ROOT.name)


def invalidate_cache(viz_ids):
if not viz_ids:
app.logger.debug("Invalidating entire cache.")
shutil.rmtree(get_cache_path())
os.makedirs(get_cache_path())
shutil.rmtree(get_cache_root())
os.makedirs(get_cache_root())
else:
for v in viz_ids:
app.logger.debug(f"Invalidating {v} from cache.")
shutil.rmtree(get_cache_path() / v)
shutil.rmtree(get_cache_root() / v)


def set_last_access(path):
@@ -35,9 +32,9 @@ def set_last_access(path):

def scan_tmp_directory():
oldest_accessed_dir = {"dir": None, "access_time": None}
total_size = sum(f.stat().st_size for f in get_cache_path().glob('**/*') if f.is_file())
total_size = sum(f.stat().st_size for f in get_cache_root().glob('**/*') if f.is_file())
# this will be some visualization IDs
for p in get_cache_path().glob('*'):
for p in get_cache_root().glob('*'):
if not (p / 'last_access.txt').exists():
oldest_accessed_dir = {"dir": p, "access_time": 0}
elif oldest_accessed_dir["dir"] is None:
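For orientation, a hedged usage sketch of the revised helpers, assuming the signatures shown in the diff above (the visualization ID is just the example slug from the README):

```python
# sketch: how calling code can use the revised cache helpers
import cache

viz_id = "HaTxbhDfwakewakmzdXu5e"            # example ID borrowed from the README
viz_dir = cache.get_cache_root() / viz_id    # a pathlib.Path under the temp cache root
viz_dir.mkdir(parents=True, exist_ok=True)
(viz_dir / "file.mmif").write_text("{}")     # placeholder MMIF content

cache.invalidate_cache([viz_id])             # drop a single visualization
cache.invalidate_cache([])                   # a falsy argument clears the whole cache
```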
11 changes: 2 additions & 9 deletions displacy/__init__.py
@@ -1,15 +1,8 @@
import os

from spacy import displacy

from mmif.serialize import Mmif, View, Annotation
from mmif.vocabulary import AnnotationTypes
from mmif.vocabulary import DocumentTypes
from lapps.discriminators import Uri


def get_displacy(mmif: Mmif):
return displacy_dict_to_ent_html(mmif_to_displacy_dict(mmif))
from mmif.serialize import Mmif, View, Annotation
from spacy import displacy


def visualize_ner(mmif: Mmif, view: View, document_id: str, app_root: str) -> str:
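The retained `visualize_ner` path presumably hands pre-computed MMIF entity spans to spaCy's displacy in manual mode (the function body is collapsed above); a generic sketch of such a rendering call, with fabricated entity data purely for illustration:

```python
# sketch: rendering pre-computed entity spans to HTML with displacy's manual mode
from spacy import displacy

doc = {
    "text": "Jim bought 300 shares of Acme Corp. in 2006.",
    "ents": [
        {"start": 0, "end": 3, "label": "PERSON"},
        {"start": 25, "end": 35, "label": "ORG"},
        {"start": 39, "end": 43, "label": "DATE"},
    ],
    "title": None,
}
html = displacy.render(doc, style="ent", manual=True, page=False)
```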
17 files renamed without changes.
