Merge pull request #347 from pynapple-org/main
Pull updated main to fix links
sjvenditto authored Oct 2, 2024
2 parents 0931c28 + 82df628 commit 41c28eb
Showing 14 changed files with 575 additions and 187 deletions.
31 changes: 27 additions & 4 deletions .github/workflows/main.yml
@@ -16,11 +16,11 @@ jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v4
      - name: Set up Python
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v4
        with:
-          python-version: 3.8
+          python-version: "3.10"
      - name: Install dependencies
        run: |
          echo "testing: ${{github.ref}}"
@@ -45,7 +45,7 @@ jobs:
      # - os: windows-latest
      #   python-version: 3.7
    steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
@@ -63,6 +63,29 @@ jobs:
        uses: codecov/codecov-action@…
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
  check_links:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install ".[docs]"
      - name: Build site
        run: mkdocs build
      - name: Check html
        uses: chabad360/htmlproofer@master
        with:
          directory: "./site"
          # The directory to scan
          arguments: --checks Links,Scripts --ignore-urls "https://fonts.gstatic.com,https://mkdocs-gallery.github.io" --assume-extension --check-external-hash --ignore-status-codes 403
          # The arguments to pass to HTMLProofer


  check:
    if: always()
    needs:
5 changes: 1 addition & 4 deletions README.md
@@ -18,10 +18,7 @@ PYthon Neural Analysis Package.
pynapple is a light-weight python library for neurophysiological data analysis. The goal is to offer a versatile set of tools to study typical data in the field, i.e. time series (spike times, behavioral events, etc.) and time intervals (trials, brain states, etc.). It also provides users with generic functions for neuroscience such as tuning curves and cross-correlograms.

- Free software: MIT License
-- __Documentation__: <https://pynapple-org.github.io/pynapple>
-- __Notebooks and tutorials__ : <https://pynapple-org.github.io/pynapple/generated/gallery/>
-<!-- - __Collaborative repository__: <https://github.com/pynapple-org/pynacollada> -->
+- __Documentation__: <https://pynapple.org>

> **Note**
> :page_with_curl: If you are using pynapple, please cite the following [paper](https://elifesciences.org/reviewed-preprints/85786)
19 changes: 15 additions & 4 deletions docs/AUTHORS.md
@@ -1,5 +1,7 @@
-Credits
-=======
+---
+hide:
+  - navigation
+---

Development Lead
----------------
@@ -12,8 +14,17 @@ Contributors
------------

- Adrien Peyrache
- Dan Levenstein
- Sofia Skromne Carrasco
- Davide Spalla
- Luigi Petrucco
- ... [and many more!](https://github.com/pynapple-org/pynapple/graphs/contributors)

Special Credits
---------------

Special thanks to Francesco P. Battaglia
(<https://github.com/fpbattaglia>) for the development of the original
*TSToolbox* (<https://github.com/PeyracheLab/TStoolbox>) and
*neuroseries* (<https://github.com/NeuroNetMem/neuroseries>) packages,
the latter constituting the core of *pynapple*.
6 changes: 3 additions & 3 deletions docs/api_guide/tutorial_pynapple_core.py
@@ -70,7 +70,7 @@
# Interval Sets object
# --------------------
#
-# The [IntervalSet](https://peyrachelab.github.io/pynapple/core.interval_set/) object stores multiple epochs with a common time unit. It can then be used to restrict time series to this particular set of epochs.
+# The [IntervalSet](https://pynapple-org.github.io/pynapple/reference/core/interval_set/) object stores multiple epochs with a common time unit. It can then be used to restrict time series to this particular set of epochs.


epochs = nap.IntervalSet(start=[0, 10], end=[5, 15], time_units="s")
@@ -82,7 +82,7 @@
print(new_tsd)

# %%
-# Multiple operations are available for IntervalSet. For example, IntervalSet can be merged. See the full documentation of the class [here](https://peyrachelab.github.io/pynapple/core.interval_set/#pynapple.core.interval_set.IntervalSet.intersect) for a list of all the functions that can be used to manipulate IntervalSets.
+# Multiple operations are available for IntervalSet. For example, IntervalSet can be merged. See the full documentation of the class [here](https://pynapple-org.github.io/pynapple/reference/core/interval_set/#pynapple.core.interval_set.IntervalSet.intersect) for a list of all the functions that can be used to manipulate IntervalSets.


epoch1 = nap.IntervalSet(start=0, end=10) # no time units passed. Default is us.
@@ -132,7 +132,7 @@
print(count)

# %%
-# One advantage of grouping time series is that metainformation can be added directly on an element-wise basis. In this case, we add labels to each Ts object when instantiating the group and after. We can then use this label to split the group. See the [TsGroup](https://peyrachelab.github.io/pynapple/core.ts_group/) documentation for a complete methodology for splitting TsGroup objects.
+# One advantage of grouping time series is that metainformation can be added directly on an element-wise basis. In this case, we add labels to each Ts object when instantiating the group and after. We can then use this label to split the group. See the [TsGroup](https://pynapple-org.github.io/pynapple/reference/core/ts_group/) documentation for a complete methodology for splitting TsGroup objects.
#
# First we create a pandas Series for the label.

31 changes: 26 additions & 5 deletions docs/api_guide/tutorial_pynapple_io.py
@@ -25,22 +25,43 @@
# Navigating a structured dataset
# -------------------------------
#
-# The dataset in this example can be found [here](https://www.dropbox.com/s/pr1ze1nuiwk8kw9/MyProject.zip?dl=1).

# First let's import the necessary packages.
#
# mkdocs_gallery_thumbnail_path = '_static/treeview.png'

import numpy as np
import pynapple as nap
import os
import requests, math
import tqdm
import zipfile

# mkdocs_gallery_thumbnail_path = '_static/treeview.png'
# %%
# Here we download a small example dataset.

-project_path = "../../your/path/to/MyProject"
+project_path = "MyProject"


if project_path not in os.listdir("."):
    r = requests.get(f"https://osf.io/a9n6r/download", stream=True)
    block_size = 1024*1024
    with open(project_path+".zip", 'wb') as f:
        for data in tqdm.tqdm(r.iter_content(block_size), unit='MB', unit_scale=True,
                              total=math.ceil(int(r.headers.get('content-length', 0))//block_size)):
            f.write(data)

    with zipfile.ZipFile(project_path+".zip", 'r') as zip_ref:
        zip_ref.extractall(".")

# %%
# Let's load the project with `nap.load_folder`.

project = nap.load_folder(project_path)

print(project)

# %%
-# The pynapple IO offers a convenient way of visualizing and navigating a folder based dataset. To visualize the whole hierarchy of Folders, you can call the view property or the expand function.
+# The pynapple IO Folders class offers a convenient way of visualizing and navigating a folder based dataset. To visualize the whole hierarchy of Folders, you can call the view property or the expand function.

project.view

38 changes: 28 additions & 10 deletions docs/api_guide/tutorial_pynapple_nwb.py
@@ -8,16 +8,34 @@
- [NWB format](https://pynwb.readthedocs.io/en/stable/index.html#)
-This notebook focuses on the NWB format. Additionaly it demonstrates the capabilities of pynapple for lazy-loading different formats.
+This notebook focuses on the NWB format. Additionally it demonstrates the capabilities of pynapple for lazy-loading different formats.
-The dataset in this example can be found [here](https://www.dropbox.com/s/pr1ze1nuiwk8kw9/MyProject.zip?dl=1).
"""
# %%
#
# Let's import libraries.

import numpy as np
import pynapple as nap
import os
import requests, math
import tqdm
import zipfile

# %%
# Here we download the data.

project_path = "MyProject"

if project_path not in os.listdir("."):
    r = requests.get(f"https://osf.io/a9n6r/download", stream=True)
    block_size = 1024*1024
    with open(project_path+".zip", 'wb') as f:
        for data in tqdm.tqdm(r.iter_content(block_size), unit='MB', unit_scale=True,
                              total=math.ceil(int(r.headers.get('content-length', 0))//block_size)):
            f.write(data)

    with zipfile.ZipFile(project_path+".zip", 'r') as zip_ref:
        zip_ref.extractall(".")

# %%
# NWB
@@ -30,15 +48,15 @@
# Multiple tools exists to create NWB file automatically. You can check [neuroconv](https://neuroconv.readthedocs.io/en/main/), [NWBGuide](https://nwb-guide.readthedocs.io/en/latest/) or even [NWBmatic](https://github.com/pynapple-org/nwbmatic).


-data = nap.load_file("../../your/path/to/MyProject/sub-A2929/A2929-200711/pynapplenwb/A2929-200711.nwb")
+data = nap.load_file("MyProject/sub-A2929/A2929-200711/pynapplenwb/A2929-200711.nwb")

print(data)

# %%
# Pynapple will give you a table with all the entries of the NWB file that are compatible with a pynapple object.
-# When parsing the NWB file, nothing is loaded. The `NWBFile` keeps track of the position of the data whithin the NWB file with a key. You can see it with the attributes `key_to_id`.
+# When parsing the NWB file, nothing is loaded. The `NWBFile` keeps track of the position of the data within the NWB file with a key. You can see it with the attributes `key_to_id`.

-data.key_to_id
+print(data.key_to_id)


# %%
@@ -52,7 +70,7 @@
#
# Internally, the `NWBClass` has replaced the pointer to the data with the actual data.
#
-# While it looks like pynapple has loaded the data, in fact it did not. By default, calling the NWB object will return an HDF5 dataset.
+# While it looks like pynapple has loaded the data, in fact it still did not. By default, calling the NWB object will return an HDF5 dataset.
# !!! warning
#
# New in `0.6.6`
@@ -89,7 +107,7 @@

# %%
# To change this behavior, you can pass `lazy_loading=False` when instantiating the `NWBClass`.
-path = "../../your/path/to/MyProject/sub-A2929/A2929-200711/pynapplenwb/A2929-200711.nwb"
+path = "MyProject/sub-A2929/A2929-200711/pynapplenwb/A2929-200711.nwb"
data = nap.NWBFile(path, lazy_loading=False)

z = data['z']
@@ -103,7 +121,7 @@
#
# In fact, pynapple can work with any type of memory map. Here we read a binary file with [`np.memmap`](https://numpy.org/doc/stable/reference/generated/numpy.memmap.html).

-eeg_path = "../../your/path/to/MyProject/sub-A2929/A2929-200711/A2929-200711.eeg"
+eeg_path = "MyProject/sub-A2929/A2929-200711/A2929-200711.eeg"
frequency = 1250 # Hz
n_channels = 16
f = open(eeg_path, 'rb')
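The hunk cuts off before the memory map itself, but the idea can be sketched with a self-contained stand-in. The file name, sample count, and int16 layout below are invented for illustration; `np.memmap` is the NumPy call the tutorial names.

```python
import os
import numpy as np

# Write a small fake binary file standing in for the .eeg recording:
# one second of int16 samples across 16 channels.
frequency = 1250  # Hz
n_channels = 16
rng = np.random.default_rng(0)
fake = (rng.standard_normal((frequency, n_channels)) * 100).astype(np.int16)
fake.tofile("example.eeg")

# Infer the number of samples from the file size (int16 = 2 bytes per sample).
n_samples = os.path.getsize("example.eeg") // (2 * n_channels)
eeg = np.memmap("example.eeg", dtype=np.int16, mode="r",
                shape=(n_samples, n_channels))
print(eeg.shape)  # (1250, 16)
```

Nothing is read into memory until the array is sliced, which is what lets pynapple wrap such a map (e.g. in a `TsdFrame`) without loading the whole recording.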
