Add codespell: workflow, config + make it fix some typos #207

Open · wants to merge 5 commits into base: main
23 changes: 23 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,23 @@
# Codespell configuration is within setup.cfg
---
name: Codespell

on:
push:
branches: [main]
pull_request:
branches: [main]

permissions:
contents: read

jobs:
codespell:
name: Check for spelling errors
runs-on: ubuntu-latest

steps:
- name: Checkout
uses: actions/checkout@v4
- name: Codespell
uses: codespell-project/actions-codespell@v2
2 changes: 1 addition & 1 deletion docs/_config.yml
@@ -10,7 +10,7 @@ exclude_patterns : [ README.md, _build, Thumbs.db, .DS_Store,

execute:
execute_notebooks : off # Whether to execute notebooks at build time. Must be one of ("auto", "force", "cache", "off")
-cache                    : ""    # A path to the jupyter cache that will be used to store execution artifacs. Defaults to `_build/.jupyter_cache/`
+cache                    : ""    # A path to the jupyter cache that will be used to store execution artifacts. Defaults to `_build/.jupyter_cache/`
# exclude_patterns : [content/Download_Data.ipynb] # A list of patterns to *skip* in execution (e.g. a notebook that takes a really long time)
timeout : 30 # The maximum time (in seconds) each notebook cell is allowed to run.
run_in_temp : true # If `True`, then a temporary directory will be created and used as the command working directory (cwd),
2 changes: 1 addition & 1 deletion docs/basic_tutorials/01_basics.ipynb
@@ -32,7 +32,7 @@
"\n",
"A detector is a swiss-army-knife class that \"glues\" together a particular combination of a Face, Landmark, Action Unit, and Emotion detection model into a single object. This allows us to provide a very easy-to-use high-level API, e.g. `detector.detect_image('my_image.jpg')`, which will automatically make use of the correct underlying model to solve the sub-tasks of identifying face locations, getting landmarks, extracting action units, etc. \n",
"\n",
-"The first time you initialize a `Detector` instance on your computer will take a moment as Py-Feat will automatically download required pretrained model weights for you and save them to disk. Everytime after that it will use existing model weights:\n"
+"The first time you initialize a `Detector` instance on your computer will take a moment as Py-Feat will automatically download required pretrained model weights for you and save them to disk. Every time after that it will use existing model weights:\n"
]
},
{
4 changes: 2 additions & 2 deletions docs/basic_tutorials/02_detector_imgs.ipynb
@@ -590,7 +590,7 @@
"source": [
"#### Loading detection results from a saved file\n",
"\n",
-"We can load this output using the `read_feat()` function, which behaves just like `pd.read_csv` from Pandas, but returns a `Fex` data class instead of a DataFrame. This gives you the full suite of Fex funcionality right away."
+"We can load this output using the `read_feat()` function, which behaves just like `pd.read_csv` from Pandas, but returns a `Fex` data class instead of a DataFrame. This gives you the full suite of Fex functionality right away."
]
},
{
@@ -1386,7 +1386,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"However, it's easy to use pandas slicing sytax to just grab predictions for the image you want. For example you can use `.loc` and chain it to `.plot_detections()`:"
+"However, it's easy to use pandas slicing syntax to just grab predictions for the image you want. For example you can use `.loc` and chain it to `.plot_detections()`:"
]
},
{
6 changes: 3 additions & 3 deletions docs/basic_tutorials/04_plotting.ipynb
@@ -125,7 +125,7 @@
"source": [
"### Adding muscle heatmaps to the plot\n",
"\n",
-"We can also visualize how AU intensity affects the underyling facial muscle movement by passing in a dictionary of facial muscle names and colors (or the value `'heatmap'`) to `plot_face()`. \n",
+"We can also visualize how AU intensity affects the underlying facial muscle movement by passing in a dictionary of facial muscle names and colors (or the value `'heatmap'`) to `plot_face()`. \n",
"\n",
"Below we activate 2 AUs and use the key `'all'` with the value `'heatmap'` to overlay muscle movement intensities affected by these specific AUs:"
]
@@ -172,7 +172,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"But it's also possibile to arbitrarily highlight any facial muscle by setting it to a color instead. This ignores the AU intensity and useful for highlighting specific facial muscles. Below we higlight two different muscles on a neutral face:"
+"But it's also possible to arbitrarily highlight any facial muscle by setting it to a color instead. This ignores the AU intensity and is useful for highlighting specific facial muscles. Below we highlight two different muscles on a neutral face:"
]
},
{
@@ -504,7 +504,7 @@
"\n",
"While `animate_face()` is useful for animating a single facial expression, sometimes you might want to make more complex multi-face animations. We can do that using `plot_face()` along with the `interpolate_aus()` helper function which will generate intermediate AU intensity values between two arrays in a manner that creates graceful animations ([cubic bezier easing function](https://easings.net/)).\n",
"\n",
-"We can easily make a grid of all 20 AUs and animate their intensity changes one at a time from a netural facial expression. To generate the animation from matplotlib plots, we use the [`celluloid`](https://github.com/jwkvam/celluloid) library that makes it a bit easier to work with matplotlib animations. It's also what `animate_face` uses under the hood: "
+"We can easily make a grid of all 20 AUs and animate their intensity changes one at a time from a neutral facial expression. To generate the animation from matplotlib plots, we use the [`celluloid`](https://github.com/jwkvam/celluloid) library that makes it a bit easier to work with matplotlib animations. It's also what `animate_face` uses under the hood: "
]
},
{
10 changes: 5 additions & 5 deletions docs/basic_tutorials/05_fex_analysis.ipynb
@@ -13,14 +13,14 @@
"\n",
"In the original paper the authors had 3 speakers deliver *good* or *bad* news while filming their facial expressions. They found that could accurately \"decode\" each condition based on participants' facial expressions extracted either using a custom multi-chanel-gradient model or action units (AUs) extracted using [Open Face](https://github.com/TadasBaltrusaitis/OpenFace). \n",
"\n",
-"In this tutorial we'll show how easiy it is to not only reproduce their decoding analysis with py-feat, but just as easily perform additional analyses. Specifically we'll:\n",
+"In this tutorial we'll show how easy it is to not only reproduce their decoding analysis with py-feat, but just as easily perform additional analyses. Specifically we'll:\n",
"\n",
"1. Download 20 of the first subject's videos (the full dataset is available on [OSF](https://osf.io/6tbwj/))\n",
"2. Extract facial features using the `Detector`\n",
"3. Aggregate and summarize detections per video using `Fex`\n",
"2. Train and test a decoder to classify *good* vs *bad* news using extracted emotions, AUs, and poses\n",
"3. Run a fMRI style \"mass-univariate\" comparison across all AUs between conditions\n",
-"4. Run a time-series analysis comparing videos based on the time-courses of extracted facial fatures "
+"4. Run a time-series analysis comparing videos based on the time-courses of extracted facial features "
]
},
{
@@ -40,7 +40,7 @@
"source": [
"# 5.1 Download the data\n",
"\n",
-"Here's we'll download and save the first 20 video files and their corresponding attributes from OSF. The next cell should run quickly on Google Collab, but will depend on your own internet conection if you're executing this notebook locally. You can rerun this cell in case the download fails for any reason, as it should skip downloading existing files:"
+"Here we'll download and save the first 20 video files and their corresponding attributes from OSF. The next cell should run quickly on Google Colab, but will depend on your own internet connection if you're executing this notebook locally. You can rerun this cell in case the download fails for any reason, as it should skip downloading existing files:"
]
},
{
@@ -1204,7 +1204,7 @@
" )\n",
")\n",
"\n",
-"# Update sesssions to group by condition, compute means (per condition), and make a\n",
+"# Update sessions to group by condition, compute means (per condition), and make a\n",
"# barplot of the mean AUs for each condition\n",
"ax = (\n",
" by_video.update_sessions(video2condition)\n",
@@ -1319,7 +1319,7 @@
" X=\"sessions\", y=\"aus\", fit_intercept=True\n",
")\n",
"\n",
-"# We can perform bonferroni correction for multiple comparisions:\n",
+"# We can perform bonferroni correction for multiple comparisons:\n",
"p_bonf = p / p.shape[1]\n",
"\n",
"results = pd.concat(\n",
8 changes: 4 additions & 4 deletions docs/extra_tutorials/06_trainAUvisModel.ipynb
@@ -151,7 +151,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"We can examine the correlation between AU occurences across all the datasets to get a sense of what AU's tend to co-occur:"
+"We can examine the correlation between AU occurrences across all the datasets to get a sense of what AU's tend to co-occur:"
]
},
{
@@ -254,7 +254,7 @@
"source": [
"## Balance AU-occurences by sub-sampling\n",
"\n",
-"Because datasets differ in which AUs they contain and because AUs differ greatly in their occurence across samples, we sub-sample the aggregated data to generate a new dataset that contains at least 650 occurences of each AU. This number was chosen because it is the largest number of positive samples (samples where the AU was present) for the AU with the fewest positive samples (AU43). This helps balance the features out a bit:"
+"Because datasets differ in which AUs they contain and because AUs differ greatly in their occurrence across samples, we sub-sample the aggregated data to generate a new dataset that contains at least 650 occurrences of each AU. This number was chosen because it is the largest number of positive samples (samples where the AU was present) for the AU with the fewest positive samples (AU43). This helps balance the features out a bit:"
]
},
{
@@ -327,7 +327,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"We can see that our resampled dataset contains signficantly higher proportions of each AU, which will make it a bit easier to train the model."
+"We can see that our resampled dataset contains significantly higher proportions of each AU, which will make it a bit easier to train the model."
]
},
{
@@ -399,7 +399,7 @@
" _X = poly.fit_transform(_X)\n",
"\n",
" # It can also be helpful to scale AUs within each sample such that they reflect\n",
-" # z-scores relative to the mean/std AU occurences within that sample, rather than\n",
+" # z-scores relative to the mean/std AU occurrences within that sample, rather than\n",
" # values between 0-1. This can be helpful if you use a polynomial degree > 1\n",
" # But we don't do this by default\n",
" if scale_across_features:\n",
2 changes: 1 addition & 1 deletion docs/pages/changelog.md
@@ -143,7 +143,7 @@ This is a large overhaul and refactor of some of the core testing and API functi

### Breaking Changes

-- `Detector` no longer support unintialized models, e.g. `any_model = None`
+- `Detector` no longer supports uninitialized models, e.g. `any_model = None`
- This is also true for `Detector.change_model`
- Columns of interest on `Fex` data classes were previously accessed like class _methods_, i.e. `fex.aus()`. These have now been changed to class _attributes_, i.e. `fex.aus`
- Remove support for `DRML` AU detector
2 changes: 1 addition & 1 deletion docs/pages/models.md
@@ -39,7 +39,7 @@ Models names are case-insensitive: `'resmasknet' == 'ResMaskNet'`
- `svm`: SVM model trained on Histogram of Oriented Gradients\*\* extracted from BP4D, DISFA, CK+, UNBC-McMaster shoulder pain, and AFF-Wild2 datasets

```{note}
-\*For AU07, our `xbg` detector was trained with hinge-loss instead of cross-entropy loss like other AUs as this yielded substantially better detection peformance given the labeled data available for this AU. This means that while it returns continuous probability predictions, these are more likely to appear binary in practice (i.e. be 0 or 1) and should be interpreted as *proportion of decision-trees with a detection* rather than *average decision-tree confidence* like other AU values.
+\*For AU07, our `xgb` detector was trained with hinge-loss instead of cross-entropy loss like other AUs as this yielded substantially better detection performance given the labeled data available for this AU. This means that while it returns continuous probability predictions, these are more likely to appear binary in practice (i.e. be 0 or 1) and should be interpreted as *proportion of decision-trees with a detection* rather than *average decision-tree confidence* like other AU values.
```

```{note}
6 changes: 3 additions & 3 deletions feat/data.py
@@ -941,7 +941,7 @@ def ttest_1samp(self, popmean=0):

Args:
popmean (int, optional): Population mean to test against. Defaults to 0.
-            threshold_dict ([type], optional): Dictonary for thresholding. Defaults to None. [NOT IMPLEMENTED]
+            threshold_dict ([type], optional): Dictionary for thresholding. Defaults to None. [NOT IMPLEMENTED]

Returns:
t, p: t-statistics and p-values
@@ -999,7 +999,7 @@ def predict(

mX, my = self._parse_features_labels(X, y)

-        # user passes an unintialized class, e.g. LogisticRegression
+        # user passes an uninitialized class, e.g. LogisticRegression
if isinstance(model, type):
clf = model(*args, **kwargs)
else:
@@ -1042,7 +1042,7 @@ def isc(self, col, index="frame", columns="input", method="pearson"):
method (str, optional): Method to use for correlation pearson, kendall, or spearman. Defaults to "pearson".

Returns:
-            DataFrame: Correlation matrix with index as colmns
+            DataFrame: Correlation matrix with index as columns
"""
if index is None:
index = "frame"
8 changes: 4 additions & 4 deletions feat/detector.py
@@ -43,7 +43,7 @@
from tqdm import tqdm
import torchvision.transforms as transforms

-# Supress sklearn warning about pickled estimators and diff sklearn versions
+# Suppress sklearn warning about pickled estimators and diff sklearn versions
warnings.filterwarnings("ignore", category=UserWarning, module="sklearn")


@@ -77,12 +77,12 @@ def __init__(
info (dict):
n_jobs (int): Number of jobs to be used in parallel.
face_model (str, default=retinaface): Name of face detection model
-            landmark_model (str, default=mobilenet): Nam eof landmark model
+            landmark_model (str, default=mobilenet): Name of landmark model
au_model (str, default=svm): Name of Action Unit detection model
emotion_model (str, default=resmasknet): Path to emotion detection model.
facepose_model (str, default=img2pose): Name of headpose detection model.
identity_model (str, default=facenet): Name of identity detection model.
-            face_detection_columns (list): Column names for face detection ouput (x, y, w, h)
+            face_detection_columns (list): Column names for face detection output (x, y, w, h)
face_landmark_columns (list): Column names for face landmark output (x0, y0, x1, y1, ...)
emotion_model_columns (list): Column names for emotion model output
emotion_model_columns (list): Column names for emotion model output
@@ -170,7 +170,7 @@ def _init_detectors(

# Initialize model instances and any additional post init setup
# Only initialize a model if the currently initialized model is diff than the
-        # requested one. Lets us re-use this with .change_model
+        # requested one. Lets us reuse this with .change_model

# FACE MODEL
if self.info["face_model"] != face:
2 changes: 1 addition & 1 deletion feat/face_detectors/FaceBoxes/readme.md
@@ -1,4 +1,4 @@
-## Liscense:
+## License:
# MIT License

# Copyright (c) 2017 Max deGroot, Ellis Brown
2 changes: 1 addition & 1 deletion feat/facepose_detectors/img2pose/deps/rpn.py
@@ -362,7 +362,7 @@ def filter_proposals(
# -> Tuple[List[Tensor], List[Tensor]]
num_images = proposals.shape[0]
device = proposals.device
-    # do not backprop throught objectness
+    # do not backprop through objectness
objectness = objectness.detach()
objectness = objectness.reshape(num_images, -1)

2 changes: 1 addition & 1 deletion feat/facepose_detectors/img2pose/img2pose_test.py
@@ -38,7 +38,7 @@ def __init__(

Args:
device (str): device to execute code. can be ['auto', 'cpu', 'cuda', 'mps']
-            contrained (bool): whether to run constrained (default) or unconstrained mode
+            constrained (bool): whether to run constrained (default) or unconstrained mode

Returns:
Img2Pose object
2 changes: 1 addition & 1 deletion feat/utils/io.py
@@ -71,7 +71,7 @@ def validate_input(inputFname):
def download_url(*args, **kwargs):
"""By default just call download_url from torch vision, but we pass a verbose =
False keyword argument, then call download_url with a special context manager that
-    supresses the print messages"""
+    suppresses the print messages"""
verbose = kwargs.pop("verbose", True)

if verbose:
2 changes: 1 addition & 1 deletion feat/utils/stats.py
@@ -17,7 +17,7 @@ def wavelet(freq, num_cyc=3, sampling_freq=30.0):
Creates a complex Morlet wavelet by windowing a cosine function by a Gaussian. All formulae taken from Cohen, 2014 Chaps 12 + 13

Args:
-        freq: (float) desired frequence of wavelet
+        freq: (float) desired frequency of wavelet
num_cyc: (float) number of wavelet cycles/gaussian taper. Note that smaller cycles give greater temporal precision and that larger values give greater frequency precision; (default: 3)
sampling_freq: (float) sampling frequency of original signal.

7 changes: 7 additions & 0 deletions setup.cfg
@@ -19,3 +19,10 @@ universal = 1

[aliases]
test = pytest

[codespell]
# Ref: https://github.com/codespell-project/codespell#using-a-config-file
skip = .git
check-hidden = true
ignore-regex = ^\s*"image/\S+": ".*
ignore-words-list = ists,gaus
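
A quick gloss on the `[codespell]` options above: `skip = .git` avoids scanning repository internals, `check-hidden = true` makes sure dotfiles like the workflow config are checked, `ignore-words-list` whitelists tokens that look like typos but are intentional identifiers (`gaus`, `ists`), and `ignore-regex` keeps base64-encoded notebook image payloads from being flagged. Since `ignore-regex` is an ordinary Python regular expression, its effect can be sketched directly with the `re` module (the sample lines below are invented for illustration, not taken from the diff):

```python
import re

# Same pattern as ignore-regex in the [codespell] section above: it matches
# Jupyter notebook image-data entries like    "image/png": "iVBORw0...",
# so their base64 payloads are never spell-checked.
IGNORE = re.compile(r'^\s*"image/\S+": ".*')

lines = [
    '    "image/png": "iVBORw0KGgoAAAANSUhEUg...",',  # notebook image blob
    '"A detector is a swiss-army-knife class",',      # real prose, still checked
]

for line in lines:
    status = "ignored" if IGNORE.match(line) else "checked"
    print(status)
```

Lines matching the pattern are dropped before codespell tokenizes them, which is why the notebook diffs in this PR only touch human-readable markdown cells.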