
Feature Extraction - Refactor command-line script into a Python module #4

Merged: 18 commits, Dec 3, 2024
Changes from 12 commits
5 changes: 4 additions & 1 deletion .gitignore
@@ -94,4 +94,7 @@ dmypy.json
# misc
*.mp4
sweep*/
core*

features_outputs
*.pth
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@



- SSVP-SLT relies on masked autoencoding (MAE) on anonymized videos as a form of self-supervised pretraining to learn continuous sign language representations at scale. The learned representations are transferred to the supervised gloss-free sign language translation task. SSVP-SLT outperforms prior SOTA methods on the ASL-to-English How2Sign benchmark in the finetuned and zero-shot settings by over 3 BLEU points.
+ SSVP-SLT relies on masked autoencoding (MAE) on anonymized and unannotated videos as a form of self-supervised pretraining to learn continuous sign language representations at scale. The learned representations are transferred to the supervised gloss-free sign language translation task. SSVP-SLT outperforms prior SOTA methods on the ASL-to-English How2Sign benchmark in the finetuned and zero-shot settings by over 3 BLEU points.

----

38 changes: 38 additions & 0 deletions tests/feature_extraction_test.py
@@ -0,0 +1,38 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import unittest
from unittest.mock import MagicMock, patch

from translation.feature_extraction_module import (
    FeatureExtractionConfig,
    FeatureExtractionModule,
    LauncherConfig,
)
from utils.download_model import get_model_path

class TestFeatureExtractionModule(unittest.TestCase):
def setUp(self):
# Mock the configuration for the FeatureExtractionModule
model_path = get_model_path('https://dl.fbaipublicfiles.com/SONAR/asl/signhiera_mock.pth')
self.config = FeatureExtractionConfig(
data_dir="MOCK_dataset",
pretrained_model_path=model_path,
launcher=LauncherConfig(cluster="local")
)
self.module = FeatureExtractionModule(self.config)

@patch("torch.cuda.is_available", return_value=False) # Mock CUDA for CPU
def test_load_model(self, mock_cuda):
# Test if the model loads properly
model = self.module.load_model()
self.assertIsNotNone(model)

@patch("ssvp_slt.data.video_dataset.VideoDataset")
def test_get_dataloader(self, mock_video_dataset):
# Mock the VideoDataset and test the dataloader
mock_dataset = MagicMock()
mock_video_dataset.return_value = mock_dataset
dataloader = self.module.get_dataloader((0, 10))
self.assertIsNotNone(dataloader)

if __name__ == "__main__":
unittest.main()
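The test downloads a small mock checkpoint via `get_model_path` and runs on CPU thanks to the mocked CUDA check; it can be executed with the standard `unittest` runner:

```bash
python -m unittest tests/feature_extraction_test.py -v
```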
20 changes: 15 additions & 5 deletions translation/README.md
@@ -1,10 +1,11 @@
## Translation

### Feature Extraction

Once you have obtained a pretrained model and your data is prepared as outlined in [DATASETS.md](../DATASETS.md), you are ready for feature extraction. Below is an example of how to run it.

`num_items_per_shard` is tuned to the size of our dataset (fewer items per shard results in a larger Slurm job array).
`max_batch_size=2` is based on our setup using fp32 and 32GB V100 GPUs. When using a CLIP checkpoint with a SignHiera vision tower, pass `from_clip=true`. If you are not using a SLURM cluster, you can pass `launcher.cluster=local` to run feature extraction on the local machine.

For additional parameters, please refer to the `FeatureExtractionConfig` dataclass in [launch_feature_extraction.py](launch_feature_extraction.py).

@@ -27,6 +28,7 @@

```bash
python launch_feature_extraction.py \
    ...
```
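A minimal sketch of an invocation (the full example is collapsed in this diff; the paths are placeholders, and the parameter names come from the `FeatureExtractionConfig` dataclass):

```bash
python launch_feature_extraction.py \
    data_dir=/path/to/DATASET_NAME \
    pretrained_model_path=/path/to/pretrained_model.pth \
    num_items_per_shard=50 \
    max_batch_size=2 \
    launcher.cluster=local
```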

After extracting the features, make sure they follow the nested folder structure expected by our [`SignFeaturesDataset`](../src/ssvp_slt/data/sign_features_dataset.py), shown below. The manifest `.tsv` files should have the following columns: `["video_name", "length", "label"]`, where `video_name` uniquely identifies a video in the dataset, `length` denotes the length of the features tensor, and `label` is the translation. You can obtain manifest files with the length column populated based on the extracted features via our provided [get_feature_lengths.py](../scripts/get_feature_lengths.py) script; a loading sketch follows the tree below.

```
├── DATASET_NAME/
    ├── manifests/
    │   ├── train.tsv
    │   ├── val.tsv
    │   └── test.tsv
    └── features/
        ├── 0 (epoch)
        │   ├── 00000 (prefix)
        │   │   ├── 00000-abcdef.pt (video_name.pt)
        │   │   └── ...
        │   ├── 00001
        │   │   ├── 00001-cdefgh.pt
        │   │   └── ...
        │   └── ...
        └── ... (if extracted more than 1 epoch)
```
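As a sketch of how these pieces fit together (the `read_csv` arguments mirror the manifest parsing in `feature_extraction_module.py`; the path layout and the five-character prefix follow the tree above, and the final assert assumes the `length` column counts feature rows):

```python
import pandas as pd
import torch

# Load a features manifest; quoting=3 (QUOTE_NONE) leaves quotes in captions untouched.
manifest = pd.read_csv(
    "DATASET_NAME/manifests/test.tsv",
    delimiter="\t",
    names=["video_name", "length", "label"],
    quoting=3,
)

# Spot-check one entry: epoch 0, five-character prefix, <video_name>.pt
row = manifest.iloc[0]
features = torch.load(f"DATASET_NAME/features/0/{row.video_name[:5]}/{row.video_name}.pt")
assert features.shape[0] == row.length
```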

### Training

After you have successfully extracted the features, you can simply run the translation pipeline as follows.

For T5:

```bash
# sweep over 5 random seeds
python run_translation.py \
    ...
```

@@ -81,12 +85,12 @@

```bash
python run_translation.py \
    ...
    common.fp16=true
```


Adjust the number of GPUs and gradient accumulation steps to your setup. These settings assume fp32 training on 32GB V100 GPUs. You can also lower the max sequence length (`data.max_source_positions`) to reduce the memory footprint. While we recommend `common.fp16=true` for BART training and the `slt_tarres_fairseq` configuration, `fp16` does not work with T5. We have not tried `bfloat16` training but encourage adding support for it.
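For instance, a rough sketch of memory-saving overrides (`512` is an arbitrary illustration rather than a tuned value; `...` stands for the remaining arguments shown above):

```bash
python run_translation.py \
    ... \
    data.max_source_positions=512 \
    common.fp16=true  # for BART / slt_tarres_fairseq; fp16 does not work with T5
```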

For all additional training and evaluation parameters, refer to the dataclasses in [run_translation.py](run_translation.py).

### Evaluation

You can run the translation pipeline in eval mode to only perform evaluation on the validation and test sets.
To do this, simply pass the appropriate overrides, e.g.:

@@ -100,4 +104,10 @@

```bash
python run_translation.py \
    ...
    common.load_model=/path/to/finetuned-t5/best_model.pth
```

### Feature Extraction Demo

To help you get started with the feature extraction process, we've provided a Colab notebook that demonstrates the key steps. The notebook walks through the workflow, allowing you to interactively extract features from your data using our tools.

You can access the demo here: [Feature Extraction Demo on Colab](https://colab.research.google.com/drive/1EeL84ZrPRwaQBldPS9TqbWogJj52vn5Q?usp=sharing)

Simply open the notebook, connect to a runtime, and follow the instructions. The example shows how to use our feature extraction pipeline end to end.
215 changes: 215 additions & 0 deletions translation/feature_extraction_module.py
@@ -0,0 +1,215 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import math
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Generator, Tuple, Union

from einops import rearrange

import pandas as pd
import torch
from torch import nn
from torch.utils.data import DataLoader
from tqdm import tqdm

from src.ssvp_slt.data.video_dataset import VideoDataset
import src.ssvp_slt.modeling.sign_hiera as sign_hiera
import src.ssvp_slt.util.misc as misc
from src.ssvp_slt.modeling.sign_hiera import SignHiera
from stopes.core import Requirements, StopesModule

@dataclass
class LauncherConfig:
cluster: str = "slurm"
partition: str = "gpu"
max_jobarray_jobs: int = 128

@dataclass
class FeatureExtractionConfig:
data_dir: str
pretrained_model_path: str
model_name: str = "hiera_base_128x224"
output_dir: str = "features_outputs"
from_clip: bool = False
split: str = "test"
fp16: bool = False
do_aug: bool = False
epochs: int = 1
num_frames: int = 128
sampling_rate: int = 2
target_fps: int = 25
num_items_per_shard: int = 50
max_batch_size: int = 2
video_backend: str = "pyav"
    # default_factory: dataclasses reject mutable instance defaults on Python 3.11+
    launcher: LauncherConfig = field(default_factory=LauncherConfig)

def shard_generator(data: Any, shard_size: int) -> Generator[Any, None, None]:
    """Yield successive shards of `data` with at most `shard_size` items each."""
    for i in range(0, len(data), shard_size):
        yield data[i : i + shard_size]

class FeatureExtractionModule(StopesModule):
def __init__(self, config: FeatureExtractionConfig):
super().__init__(config, FeatureExtractionConfig)

manifest_file = Path(self.config.data_dir) / "manifests" / f"{self.config.split}.tsv"

self.num_items = len(
pd.read_csv(
manifest_file,
delimiter="\t",
names=["video_name", "duration", "caption"],
quoting=3,
)
)

Path(self.config.output_dir).mkdir(exist_ok=True, parents=True)

def requirements(self) -> Requirements:
return Requirements(
nodes=1,
mem_gb=6,
tasks_per_node=1,
gpus_per_node=1,
cpus_per_task=1,
timeout_min=60 * 72,
)

def name(self) -> str:
return (
f"feature_extractor_{self.config.split}_{self.config.model_name}_"
f"{Path(self.config.pretrained_model_path).stem}"
)

@property
def num_shards(self) -> int:
return math.ceil(self.num_items / self.config.num_items_per_shard)

def load_model(self, device: Union[torch.device, str] = "cuda") -> SignHiera:
if self.config.from_clip:
if "hiera" in self.config.model_name:
model = SignHiera.from_clip_model(
self.config.model_name, self.config.pretrained_model_path
)
else:
raise ValueError(
f"Loading `{self.config.model_name}` from a CLIP model is not supported."
)
else:
model = sign_hiera.__dict__[self.config.model_name](pretrained=True, strict=False)
misc.load_model(model, self.config.pretrained_model_path)

model.head = nn.Identity()
print(f"Number of parameters: {sum([p.numel() for p in model.parameters()])}")

model.eval()
model.to(device)

return model

def get_dataloader(self, indices: Tuple[int, int]) -> DataLoader:
dataset = VideoDataset(
mode=self.config.split,
video_backend=self.config.video_backend,
target_fps=self.config.target_fps,
data_dir=self.config.data_dir,
sampling_rate=self.config.sampling_rate,
num_frames=self.config.num_frames,
rand_aug=self.config.do_aug,
train_random_horizontal_flip=self.config.do_aug,
train_random_crop=self.config.do_aug,
feature_extraction=True,
feature_extraction_stride=self.config.num_frames // 2,
indices=indices,
gpu=(torch.cuda.current_device() if self.config.video_backend == "cuda" else None),
)

return DataLoader(
dataset,
batch_size=1,
num_workers=1 if self.config.video_backend == "cuda" else 2,
persistent_workers=True,
            pin_memory=(self.config.video_backend != "cuda"),
)

def run(self, iteration_value: Any, iteration_index: int):
output_dir = Path(self.config.output_dir)
start_id, end_id = (
self.config.num_items_per_shard * iteration_index,
self.config.num_items_per_shard * (iteration_index + 1),
)

print(f"Indices: {start_id}-{end_id}")
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device {device}")

model = self.load_model(device=device)
dataloader = self.get_dataloader(indices=(start_id, end_id))

fids = [
Path(dataloader.dataset.path_to_videos[i]).stem for i in range(len(dataloader.dataset))
]
prefixes = [fid[:5] for fid in fids]

for epoch in tqdm(range(self.config.epochs), desc="Creating folder structure"):
for prefix in prefixes:
prefix_path = output_dir / str(epoch) / prefix
prefix_path.mkdir(parents=True, exist_ok=True)

for epoch in range(self.config.epochs):
print(f"Epoch: {epoch}")

prefetcher = misc.Prefetcher(dataloader, device=device)

start_time = time.time()
idx = 0
pbar = tqdm(total=len(dataloader))

batch = next(prefetcher)
while batch is not None:
frames = batch["frames"].float()
padding = batch["padding"]

if frames.dim() == 6:
frames = rearrange(frames, "b r c t h w -> (b r) c t h w")
if padding.dim() == 2:
padding = rearrange(padding, "b r -> (b r)")

if len(frames) > self.config.max_batch_size:
shard_outputs = []
frames_shards = shard_generator(frames, self.config.max_batch_size)
padding_shards = shard_generator(padding, self.config.max_batch_size)

for frames_shard, padding_shard in zip(frames_shards, padding_shards):
with torch.inference_mode(), torch.cuda.amp.autocast(
enabled=self.config.fp16
):
shard_output = model.extract_features(
frames_shard, padding=padding_shard
).cpu()
if len(shard_output.shape) == 1:
shard_output = shard_output.unsqueeze(0)
shard_outputs.append(shard_output)

outputs = torch.concatenate(shard_outputs, dim=0)

else:
with torch.inference_mode(), torch.cuda.amp.autocast(enabled=self.config.fp16):
outputs = model.extract_features(frames, padding=padding).detach().cpu()

fid = fids[idx]
prefix = prefixes[idx]

output_file = output_dir / str(epoch) / prefix / f"{fid}.pt"
torch.save(outputs, output_file)

idx += 1
pbar.update(1)
batch = next(prefetcher)

print(f"Epoch time: {time.time() - start_time:.2f}s")
pbar.close()
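For a quick local smoke test without the stopes launcher, the module can also be driven directly; a minimal sketch, assuming a dataset prepared as in `DATASETS.md` (paths are placeholders):

```python
from translation.feature_extraction_module import (
    FeatureExtractionConfig,
    FeatureExtractionModule,
    LauncherConfig,
)

config = FeatureExtractionConfig(
    data_dir="/path/to/DATASET_NAME",
    pretrained_model_path="/path/to/pretrained_model.pth",
    launcher=LauncherConfig(cluster="local"),
)
module = FeatureExtractionModule(config)

# Each shard covers `num_items_per_shard` manifest rows.
print(f"Shards to process: {module.num_shards}")

# Process the first shard (manifest rows 0 to num_items_per_shard).
module.run(iteration_value=None, iteration_index=0)
```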
19 changes: 19 additions & 0 deletions utils/download_model.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,19 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import os
import wget

def get_model_path(url: str, model_path: str = 'signhiera_mock.pth') -> str:
# Check if the model file exists
if os.path.exists(model_path):
print(f"Model already exists at: {model_path}")
else:
print("Model not found, downloading...")
filename = wget.download(url, model_path)
print(f"Downloaded model to: {filename}")

return model_path