diff --git a/notebooks/tutorial.ipynb b/notebooks/tutorial.ipynb
index 54a2123e..954766ca 100644
--- a/notebooks/tutorial.ipynb
+++ b/notebooks/tutorial.ipynb
@@ -112,7 +112,7 @@
    "source": [
     "### **Activate the DataJoint Pipeline**\n",
     "\n",
-    "This tutorial activates the `ephys-acute.py` module from `element-array-ephys`, along\n",
+    "This tutorial activates the `ephys_acute.py` module from `element-array-ephys`, along\n",
     "with upstream dependencies from `element-animal` and `element-session`. Please refer to the\n",
     "[`tutorial_pipeline.py`](./tutorial_pipeline.py) for the source code."
    ]
   },
@@ -1065,7 +1065,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Every experimental session produces a set of data files. The purpose of the `SessionDirectory` table is to locate these files. It references a directory path relative to a root directory, defined in `dj.config[\\\"custom\\\"]`. More information about `dj.config` is provided in the [documentation](https://datajoint.com/docs/elements/user-guide/)."
+    "Every experimental session produces a set of data files. The purpose of the `SessionDirectory` table is to locate these files. It references a directory path relative to a root directory, defined in `dj.config[\"custom\"]`. More information about `dj.config` is provided in the [documentation](https://datajoint.com/docs/elements/user-guide/)."
    ]
   },
   {
@@ -1557,7 +1557,8 @@
    "source": [
     "### **Populate electrophysiology recording metadata**\n",
     "\n",
-    "In the upcoming cells, populate the `ephys.EphysRecording` table and its part table `ephys.EphysRecording.EphysFile` will extract and store the recording information from a given experimental session."
+    "In the upcoming cells, the `.populate()` method will automatically extract and store the\n",
+    "recording metadata for each experimental session in the `ephys.EphysRecording` table and its part table `ephys.EphysRecording.EphysFile`."
    ]
   },
   {
@@ -2194,8 +2195,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Now that we've inserted kilosort parameters into the `ClusteringParamSet` table,\n",
-    "we're almost ready to sort our data. DataJoint uses a `ClusteringTask` table to\n",
+    "DataJoint uses a `ClusteringTask` table to\n",
     "manage which `EphysRecording` and `ClusteringParamSet` should be used during processing. \n",
     "\n",
     "This table is important for defining several important aspects of\n",
@@ -2235,18 +2235,12 @@
    "metadata": {},
    "source": [
     "The `ClusteringTask` table contains two important attributes: \n",
-    "+ `paramset_idx` \n",
-    "+ `task_mode` \n",
-    "\n",
-    "The `paramset_idx` attribute tracks\n",
-    "your kilosort parameter sets. You can choose the parameter set using which \n",
-    "you want spike sort ephys data. For example, `paramset_idx=0` may contain\n",
-    "default parameters for kilosort processing whereas `paramset_idx=1` contains your custom parameters for sorting. This\n",
-    "attribute tells the `Processing` table which set of parameters you are processing in a given `populate()`.\n",
-    "\n",
-    "The `task_mode` attribute can be set to either `load` or `trigger`. When set to `load`,\n",
-    "running the processing step initiates a search for exisiting kilosort output files. When set to `trigger`, the\n",
-    "processing step will run kilosort on the raw data. "
+    "+ `paramset_idx` - Allows the user to choose the parameter set with which to\n",
+    "  run spike sorting.\n",
+    "+ `task_mode` - Can be set to `load` or `trigger`. When set to `load`, running the\n",
+    "  Clustering step initiates a search for existing output files of the spike sorting\n",
+    "  algorithm defined in `ClusteringParamSet`. When set to `trigger`, the processing step\n",
+    "  will run spike sorting on the raw data."
    ]
   },
   {
@@ -2266,6 +2260,13 @@
     ")"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's call `populate()` on the `Clustering` table, which will search for existing kilosort results since `task_mode=load`."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 28,
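For reference while reviewing, the workflow described by the new markdown cells - registering a `ClusteringTask` with a chosen `paramset_idx` and `task_mode`, then populating `Clustering` - looks roughly like the sketch below. The import follows the tutorial's `tutorial_pipeline.py`; the session key, insertion number, and output directory are hypothetical placeholders, and the remaining attribute names follow `element-array-ephys` conventions rather than anything introduced by this diff.

```python
from tutorial_pipeline import ephys  # pipeline activated as in tutorial_pipeline.py

# Hypothetical key for one probe insertion; real values come from the upstream tables.
key = dict(subject="subject5", session_datetime="2023-01-01 00:00:00", insertion_number=1)

# Register a clustering task: which parameter set to use and how to run it.
ephys.ClusteringTask.insert1(
    dict(
        **key,
        paramset_idx=0,  # entry in ClusteringParamSet to sort with
        clustering_output_dir="processed/subject5/kilosort",  # hypothetical relative path
        task_mode="load",  # "load" existing results, or "trigger" to run the sorter
    )
)

# With task_mode="load", populate() looks for existing spike-sorting output
# instead of launching the sorting algorithm.
ephys.Clustering.populate(key, display_progress=True)
```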