From 3ad4352b0098f074c7788fb58dad2a476f8a8fee Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Tue, 7 May 2024 12:46:53 -0700 Subject: [PATCH 01/22] add additional usage info for snakemake and communal lab resources --- doc/Food.rst | 2 +- doc/TSCC.rst | 30 +++++++++++++++++++++++++++--- 2 files changed, 28 insertions(+), 4 deletions(-) diff --git a/doc/Food.rst b/doc/Food.rst index 76467d9..19d5476 100644 --- a/doc/Food.rst +++ b/doc/Food.rst @@ -3,7 +3,7 @@ Ordering Food for the Lab ========================= -Choose from the catering locations listed below. Note that you'll need to get prior approval from Melissa for anywhere besides Domino's. +Choose from the catering locations listed below. **Note that you'll need to get prior approval from Melissa** for anywhere besides Domino's. If you'd like to order from somewhere that isn't listed here, you'll also need to keep the order <$200, in addition to requesting prior approval from Melissa. Afterwards, please also update this page with instructions for ordering from that location, so that others can order from there again in the future. diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 210b5fb..556fede 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -20,7 +20,7 @@ Logging in * To configure ssh for expedited access, consider following the directions under the section *Linux or Mac* on `the TSCC user guide `_ to add an entry to your :code:`~/.ssh/config` -* Windows users can use `Windows Subsystem for Linux `_ +* If you are running Windows, you can use the `Windows Subsystem for Linux `_ to acquire a Linux terminal with SSH The login nodes are often quite slow because there are too many users on them, and you're not supposed to run code that's at all computationally burdensome there. So if you want to use tscc as a workstation, you should immediately try to grab an @@ -48,9 +48,9 @@ Filesystem locations -------------------- We have 100TB of space in :code:`/tscc/projects/ps-gymreklab`, which is where all of our files are stored. Your personal storage directory is :code:`/tscc/projects/ps-gymreklab/`. Your home directory for config and the like is -:code:`/tscc/nfs/home/`, don't store any large files there, since you'll only get 100 GB there. +:code:`/tscc/nfs/home/`, but don't store any large files there, since you'll only get 100 GB there. -If you need some extra space just for a few months, consider using your Lustre *scratch* directory (:code:`/tscc/lustre/ddn/scratch/$USER`). Files here are deleted automatically after 90 days but there is more than 2 PB available, shared over all of the users of TSCC. Otherwise, if you simply need some extra space just until your job finishes running, you can refer to :code:`/scratch/$USER/job_$SLURM_JOBID` within your jobscript. This storage will be deleted once your job dies, but it's better than Lustre scratch for I/O intensive jobs. +If you need some extra space just for a few months, consider using your personal Lustre *scratch* directory (:code:`/tscc/lustre/ddn/scratch/$USER`). Files here are deleted automatically after 90 days but there is more than 2 PB available, shared over all of the users of TSCC. Otherwise, if you simply need some extra space just until your job finishes running, you can refer to :code:`/scratch/$USER/job_$SLURM_JOBID` within your jobscript. This storage will be deleted once your job dies, but it's better than Lustre scratch for I/O intensive jobs. Communal lab resources are in :code:`/tscc/projects/ps-gymreklab/resources/`. 
Feel free to contribute to these as appropriate. @@ -59,6 +59,10 @@ Communal lab resources are in :code:`/tscc/projects/ps-gymreklab/resources/`. Fe * :code:`/tscc/projects/ps-gymreklab/resources/dbase` contains reference genome builds for humans and mice and other non-project-specific datasets * :code:`/tscc/projects/ps-gymreklab/resources/datasets` contains project-specific datasets that are shared across the lab. +* :code:`/tscc/projects/ps-gymreklab/resources/datasets/ukbiobank` contains our local copy of the UK Biobank. You must have the proper Unix permissions to read these files. Ask Melissa to add you on the UK Biobank portal and then ask to be added to the :code:`gymreklab-ukb` group afterwards. +* :code:`/tscc/projects/ps-gymreklab/resources/datasets/1000Genomes` contains files for the 1000 Genomes dataset +* :code:`/tscc/projects/ps-gymreklab/resources/datasets/gtex` contains the GTEX dataset +* :code:`/tscc/projects/ps-gymreklab/resources/datasets/pangenome` contains pangenome files Sharing files with Snorlax ^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -341,6 +345,7 @@ Here's an example of one. #SBATCH --nodes 1 #SBATCH --ntasks 1 #SBATCH --cpus-per-task 1 + #SBATCH --mem 2G #SBATCH --time 1:00:00 #SBATCH --output /dev/null @@ -377,3 +382,22 @@ Here's an example of one. fi fi exit "$exit_code" + +Let's assume that you name the file :code:`run.bash` and mark it as executable with :code:`chmod u+x run.bash`. +Then you can run it on an interactive node with: + +.. code-block:: bash + + ./run.bash + +Or on a login node with: + +.. code-block:: bash + + sbatch run.bash + +You can override the default :code:`sbatch` parameters or :code:`snakemake` profile values directly from the command-line. For example, you can perform `a dry-run `_ of the workflow like this: + +.. code-block:: bash + + sbatch --time 0:10:00 run.bash -np From 1800ada5eb2bd4dfcbaf98c59388e4f3035b6308 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Tue, 7 May 2024 16:29:44 -0700 Subject: [PATCH 02/22] clean up debugging instructions --- doc/TSCC.rst | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 556fede..afd317f 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -118,8 +118,12 @@ Notes: * Don't request more than one node per job. That means you would be managing inter-node inter-process communication yourself. (e.g. message passing). Instead, just submit more jobs * If :code:`` is mistyped, the job will not run. Double check that location before you submit. +* There may be an optional shebang line at the start of the file, but no blank or other lines between the beginning and the :code:`#SBATCH` lines * None of the SLURM settings can access environment variables. If you want to set a value (e.g. the log directory) dynamically, you'll need to dynamically generate the SLURM file. +* SLURM does not support using environment variables in :code:`#SBATCH` lines in scripts. If you wish to use + environment variables to set such values, you must pass them to the :code:`sbatch` command directly + (e.g. :code:`sbatch --output=$SOMEWHERE/out slurm_script.sh`) Partitions ^^^^^^^^^^ @@ -205,14 +209,10 @@ Managing jobs Listing current jobs: :code:`squeue -u `. To look at a single job, use :code:`squeue -j `. To list maximum information about a job, use :code:`squeue -l -j ` -* States are Q for queued, R for running, C for cancelled, and D for done. 
(if I recall correctly) +The output flag determines the file that stdout is written to. This must be a file, not a directory. +You can use some placeholders in the output location such as `%x` for job name and `%j` for job id. -If your jobs are called :code:`22409804.tscc-mgr7.local` then :code:`22409804` is the job ID. - -To look at the stdout of a currently running job: :code:`qpeek `. To look at the stderr -:code:`qpeek -e `. Once the jobs finish the stdout and stderr will be written to the files -:code:`/.o` and :code:`/.e` respectively and -:code:`qpeek` will no longer work. +Use the error flag to choose stderr's output location. If not specifie, it will go to the output location. To delete a running or queued job: :code:`scancel `. To delete all running or queued jobs: :code:`scancel -u $USER` @@ -221,13 +221,13 @@ To figure out why a job is queued use :code:`scontrol show job Debugging jobs the OS killed ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -#. Look at the output file :code:`/.o`, the first line should contain the node - name. (e.g. :code:`Nodes: tscc-5-7`) -#. ssh into the node (you can do this to any node, but if you run a large process the OS will kill you because - you have not been scheduled to that node) -#. Scan the os logs for a killed process `dmesg -T | grep ` - -The OS normally kills jobs because you ran over your memory limit. +#. Look at the standard output and standard error files. Any error messages should be there. +#. ssh into the node. You can do this to any node, but if you run a large process the OS will kill you because + you have not been scheduled to that node. You can figure out the name of the node assigned to your job using + :code:`squeue` once the status of the job is "RUNNING". +#. Scan the os logs for a killed process :code:`dmesg -T | grep ` +#. If there are any messages stating that your job was "Killed", its usually a sign that you ran out of memory. + You can request more memory by resubmitting the job with the :code:`--mem` parameter. For ex: :code:`--mem 8G` Get Slack notifications when your jobs finish ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ From 54cb4318b8b7f9100de0c2ce6bdabf07b58dcf1a Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Tue, 7 May 2024 16:42:03 -0700 Subject: [PATCH 03/22] explain the module system and offer some caveats --- doc/TSCC.rst | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index afd317f..af15cba 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -246,6 +246,16 @@ Get Slack notifications when your jobs finish slack "your job terminated with exit status $?" +Installing software +------------------- +The best practice is for each user of TSCC to use :code:`Miniconda ("Miniconda3 Linux 64-bit") `_ to install software. You can install it in your home directory. + +If you are feeling lazy, you can also use :code:`module` system to load preconfigured software tools. +Refer to `the TSCC documentation `_ for more information. +Please note that software available through the module system is usually out of date and cannot be easily updated. +It's also unlikely that your collaborators/reviewers will be able to figure out which versions of the software you used. +For these reasons, we do not recommend using the :code:`module` system. 
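If you do end up using the module system anyway, the basic workflow looks roughly like the sketch below. The module name shown here is only a placeholder; run :code:`module avail` to see what is actually installed on TSCC.

.. code-block:: bash

    module avail            # list the software that TSCC provides as modules
    module load samtools    # load a tool into your current shell environment
    module list             # show which modules are currently loaded
    module unload samtools  # remove it again when you're done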
+ Managing funds -------------- :code:`gbalance -u ` will show the balance for our group, but I don't know how to see the balance on hotel vs condo, From 19061f22b2192a09f01fc7806b1b5e285ad14071 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 8 May 2024 10:22:04 -0700 Subject: [PATCH 04/22] provide instructions for installing from tscc --- doc/TSCC.rst | 22 ++++++++++++++++------ 1 file changed, 16 insertions(+), 6 deletions(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index af15cba..ef8171e 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -222,12 +222,9 @@ To figure out why a job is queued use :code:`scontrol show job Debugging jobs the OS killed ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #. Look at the standard output and standard error files. Any error messages should be there. -#. ssh into the node. You can do this to any node, but if you run a large process the OS will kill you because - you have not been scheduled to that node. You can figure out the name of the node assigned to your job using - :code:`squeue` once the status of the job is "RUNNING". +#. ssh into the node. You can do this to any node, but if you run a large process the OS will kill you because you have not been scheduled to that node. You can figure out the name of the node assigned to your job using :code:`squeue` once the status of the job is "RUNNING". #. Scan the os logs for a killed process :code:`dmesg -T | grep ` -#. If there are any messages stating that your job was "Killed", its usually a sign that you ran out of memory. - You can request more memory by resubmitting the job with the :code:`--mem` parameter. For ex: :code:`--mem 8G` +#. If there are any messages stating that your job was "Killed", its usually a sign that you ran out of memory. You can request more memory by resubmitting the job with the :code:`--mem` parameter. For ex: :code:`--mem 8G` Get Slack notifications when your jobs finish ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -248,12 +245,25 @@ Get Slack notifications when your jobs finish Installing software ------------------- -The best practice is for each user of TSCC to use :code:`Miniconda ("Miniconda3 Linux 64-bit") `_ to install software. You can install it in your home directory. +The best practice is for each user of TSCC to use Miniconda to install their own software. Run these commands to download, install, and configure Miniconda properly on TSCC: + +.. code-block:: bash + + wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh + bash Miniconda3-latest-Linux-x86_64.sh -b -u + source ~/miniconda3/bin/activate + conda init bash + conda config --remove channels defaults + conda config --add channels nodefaults + conda config --add channels bioconda + conda config --add channels conda-forge + conda config --set channel_priority strict If you are feeling lazy, you can also use :code:`module` system to load preconfigured software tools. Refer to `the TSCC documentation `_ for more information. Please note that software available through the module system is usually out of date and cannot be easily updated. It's also unlikely that your collaborators/reviewers will be able to figure out which versions of the software you used. +(Unlike with conda, there isn't a way to share your module environments with non-TSCC users.) For these reasons, we do not recommend using the :code:`module` system. 
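Once conda is installed, a common next step is to keep each project in its own environment rather than installing everything into the base environment. Here is a minimal sketch; the environment name and packages are only placeholders.

.. code-block:: bash

    # create an isolated environment for one project and install a few tools into it
    conda create -y -n myproject python=3.11 samtools bcftools
    # activate it whenever you work on that project
    conda activate myproject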
Managing funds From 7521a66988b26e439365d63a0f9ee21daf4c5c41 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 8 May 2024 12:35:05 -0700 Subject: [PATCH 05/22] explain that we no longer have a hotel allocation --- doc/TSCC.rst | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index ef8171e..4c631c0 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -137,6 +137,10 @@ First consider :code:`condo` * Jobs may be `preempted `_ after 8 hrs but can run for up to 14 days * The architectures of condo nodes vary wildly - if you might hit the mem/core or cores/node limit, go to hotel where (last I checked) you always get at least 4.57 GB memory/node and at least up to 28 cores/node. +.. warning:: + As of the migration to TSCC 2.0 (in Jan 2024), our lab no longer has a hotel allocation! + But we will continue to include the :code:`hotel` documentation below in case we ever obtain an allocation again. + If you need more than 8 hours, consider :code:`hotel`: * Compute hours are more expensive here than on :code:`condo` @@ -268,8 +272,11 @@ For these reasons, we do not recommend using the :code:`module` system. Managing funds -------------- -:code:`gbalance -u ` will show the balance for our group, but I don't know how to see the balance on hotel vs condo, -so I'm not actually sure what this output means. +.. code-block:: bash + + /cm/shared/apps/sdsc/1.0/bin/tscc_client.sh -A ddp268 + +Refer to `this page of the TSCC docs `_ for more info. Using Jupyter ------------- From e2b3277de7eb4279aa4ed31e205a2c6907b84c40 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 8 May 2024 13:51:58 -0700 Subject: [PATCH 06/22] explain how to get access to tscc --- doc/TSCC.rst | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 4c631c0..f5d85ad 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -9,6 +9,13 @@ Official docs * The `tscc description `_ * The `tscc 2.0 transitional workshow video `_ +Getting access +-------------- +Email tscc-support AT sdsc DOT edu from your UCSD email and CC Melissa. You can include the following in your email: + + Hello TSCC Support, + I'm a new member of the Gymrek lab. Is there any chance that you can create a TSCC account for me and add me to the Gymrek Lab group (gymreklab-group:*:11136)? + Logging in ---------- .. code-block:: bash From b9a183bbbd3526fdfacc0ead78cc12ef7490bfef Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 8 May 2024 13:56:50 -0700 Subject: [PATCH 07/22] escape asterisk symbol --- doc/TSCC.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index f5d85ad..f1f25b6 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -11,10 +11,10 @@ Official docs Getting access -------------- -Email tscc-support AT sdsc DOT edu from your UCSD email and CC Melissa. You can include the following in your email: +Email tscc-support AT sdsc DOT edu from your UCSD email and CC Melissa. You can include the following in your email. Hello TSCC Support, - I'm a new member of the Gymrek lab. Is there any chance that you can create a TSCC account for me and add me to the Gymrek Lab group (gymreklab-group:*:11136)? + I'm a new member of the Gymrek lab. Is there any chance that you can create a TSCC account for me and add me to the Gymrek Lab group (gymreklab-group:\*:11136)? 
Logging in ---------- From 8acacde8f32604aabac03c2c920d27b71b53bccc Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 8 May 2024 14:13:45 -0700 Subject: [PATCH 08/22] explain how to create a personal directory in our mount --- doc/TSCC.rst | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index f1f25b6..bfedac5 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -14,6 +14,7 @@ Getting access Email tscc-support AT sdsc DOT edu from your UCSD email and CC Melissa. You can include the following in your email. Hello TSCC Support, + I'm a new member of the Gymrek lab. Is there any chance that you can create a TSCC account for me and add me to the Gymrek Lab group (gymreklab-group:\*:11136)? Logging in @@ -54,8 +55,8 @@ between interactive sessions, you should use :code:`tmux` or :ref:`screen `. Your home directory for config and the like is -:code:`/tscc/nfs/home/`, but don't store any large files there, since you'll only get 100 GB there. +storage directory is :code:`/tscc/projects/ps-gymreklab/`. (If this directory doesn't yet exist, feel free to create it with the :code:`mkdir` command. +Your home directory for config and the like is :code:`/tscc/nfs/home/`, but don't store any large files there, since you'll only get 100 GB there. If you need some extra space just for a few months, consider using your personal Lustre *scratch* directory (:code:`/tscc/lustre/ddn/scratch/$USER`). Files here are deleted automatically after 90 days but there is more than 2 PB available, shared over all of the users of TSCC. Otherwise, if you simply need some extra space just until your job finishes running, you can refer to :code:`/scratch/$USER/job_$SLURM_JOBID` within your jobscript. This storage will be deleted once your job dies, but it's better than Lustre scratch for I/O intensive jobs. From 6236dc75e5af2b6aabce12238f0f08d2f7d9630d Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Mon, 13 May 2024 17:09:15 -0700 Subject: [PATCH 09/22] change last-updated date --- doc/TSCC.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index bfedac5..0f0a03c 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -1,7 +1,7 @@ TSCC ==== -Last update: 2024/01/25 +Last update: 2024/05/13 Official docs ------------- From a9482dd3ab00bd330a46551c50d6e8cb3ea7d818 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Tue, 4 Jun 2024 09:51:26 -0700 Subject: [PATCH 10/22] explain how to get info on usage limits --- doc/TSCC.rst | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 0f0a03c..1c7941b 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -159,7 +159,12 @@ If you need more than 8 hours, consider :code:`hotel`: sacctmgr show qos format=Name%20,priority,gracetime,PreemptExemptTime,maxwall,MaxTRES%30,GrpTRES%30 where qos=hcg-ddp268 -So if you start a 36-core / 192GB memory job (or multiple jobs that use either a total of 36 cores OR a total of 192GB memory), then everyone else in our lab who submits to the :code:`hotel` partition will see their jobs wait in the queue until yours are finished. These limits are set according to the number of nodes that our lab has contributed to the :code:`hotel` partition. Jobs submitted to the :code:`condo` partition are not subject to this group limit. 
+So if you start a 36-core / 192GB memory job (or multiple jobs that use either a total of 36 cores OR a total of 192GB memory), then everyone else in our lab who submits to the :code:`hotel` partition will see their jobs wait in the queue until yours are finished. These limits are set according to the number of nodes that our lab has contributed to the :code:`hotel` partition. Jobs submitted to the :code:`condo` partition are not subject to this group limit. For more information about account limits, including info about viewing your account usage, read `the section of the TSCC docs titled "Managing Your User Account" `_. For example, you can get a lot of information by using the `tscc_client`: + +... code-block:: bash + + module load sdsc + tscc_client -A ddp268 Env Variables and Submitting Many Jobs ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ From 58d792cb1307b14835eaee0a80cd22add1cc2217 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 31 Jul 2024 12:46:59 -0700 Subject: [PATCH 11/22] apply suggestions in comments --- doc/TSCC.rst | 121 +++++++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 108 insertions(+), 13 deletions(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 1c7941b..c82413b 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -26,7 +26,21 @@ Logging in * This will put you on a node such as `login1.tscc.sdsc.edu` or `login11.tscc.sdsc.edu` or `login2.tscc.sdsc.edu`. You can also ssh into those nodes directly (e.g. if you have :code:`tmux` sessions saved on one of them) -* To configure ssh for expedited access, consider following the directions under the section *Linux or Mac* on `the TSCC user guide `_ to add an entry to your :code:`~/.ssh/config` +* To configure ssh for expedited access, consider following the directions under the section *Linux or Mac* on `the TSCC user guide `_ to add an entry to your :code:`~/.ssh/config`. Here's an example. Remember to replace :code:`YOUR_USERNAME_GOES_HERE`! Afterwards, you should be able to log in with a simple: :code:`ssh tscc` command. + +.. code-block:: txt + + Host * + ControlMaster auto + ControlPath ~/.ssh/ssh_mux_%h_%p_%r + ControlPersist 1 + ServerAliveInterval 100 + + Host tscc + HostName login1.tscc.sdsc.edu + ForwardX11 yes + User YOUR_USERNAME_GOES_HERE + * If you are running Windows, you can use the `Windows Subsystem for Linux `_ to acquire a Linux terminal with SSH @@ -55,7 +69,14 @@ between interactive sessions, you should use :code:`tmux` or :ref:`screen `. (If this directory doesn't yet exist, feel free to create it with the :code:`mkdir` command. +storage directory is :code:`/tscc/projects/ps-gymreklab/`. (If this directory doesn't yet exist, feel free to create it with the :code:`mkdir` command.) + +You can check the available storage in the shared mount with the following command. + +.. code-block:: bash + + df -h | grep -E '^Filesystem|gymreklab' | column -t + Your home directory for config and the like is :code:`/tscc/nfs/home/`, but don't store any large files there, since you'll only get 100 GB there. If you need some extra space just for a few months, consider using your personal Lustre *scratch* directory (:code:`/tscc/lustre/ddn/scratch/$USER`). Files here are deleted automatically after 90 days but there is more than 2 PB available, shared over all of the users of TSCC. Otherwise, if you simply need some extra space just until your job finishes running, you can refer to :code:`/scratch/$USER/job_$SLURM_JOBID` within your jobscript. 
This storage will be deleted once your job dies, but it's better than Lustre scratch for I/O intensive jobs. @@ -67,11 +88,62 @@ Communal lab resources are in :code:`/tscc/projects/ps-gymreklab/resources/`. Fe * :code:`/tscc/projects/ps-gymreklab/resources/dbase` contains reference genome builds for humans and mice and other non-project-specific datasets * :code:`/tscc/projects/ps-gymreklab/resources/datasets` contains project-specific datasets that are shared across the lab. -* :code:`/tscc/projects/ps-gymreklab/resources/datasets/ukbiobank` contains our local copy of the UK Biobank. You must have the proper Unix permissions to read these files. Ask Melissa to add you on the UK Biobank portal and then ask to be added to the :code:`gymreklab-ukb` group afterwards. +* :code:`/tscc/projects/ps-gymreklab/resources/datasets/ukbiobank` contains our local copy of the UK Biobank. You must have the proper Unix permissions to read these files. First, create an account `here `_ and then once that's approved, ask Melissa to add you on the UK Biobank portal and the :code:`gymreklab-ukb` Unix group. * :code:`/tscc/projects/ps-gymreklab/resources/datasets/1000Genomes` contains files for the 1000 Genomes dataset * :code:`/tscc/projects/ps-gymreklab/resources/datasets/gtex` contains the GTEX dataset * :code:`/tscc/projects/ps-gymreklab/resources/datasets/pangenome` contains pangenome files +Access TSCC files locally on your computer +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +You can upload and download files from TSCC using the `scp` command. Assuming you've configured a host in your `~/.ssh/config` named `tscc`, you would download chrM of the hg19 reference genome like this, for example. + +.. code-block:: bash + + scp -r tscc:/tscc/projects/ps-gymreklab/resources/dbase/human_by_chrom/hg19/chrM.fa . + +However, if you would like to download many files from TSCC or edit files on TSCC in real time, you may opt to mount TSCC as a network drive, instead. A program called `sshfs` will allow you to view and edit TSCC files on your computer and keep them synced with TSCC. + +To set up :code:`sshfs`, you must first download and install it. With `homebrew `_ on MacOS, you can do :code:`brew install sshfs` or on Ubuntu or `Ubuntu `_ on `Windows Subsystem for Linux `_, you can just do :code:`sudo apt install sshfs`. Next, simply add the following snippet to your :code:`~/.bashrc`: + +.. 
code-block:: bash + + # mount a remote drive over ssh + # arg 1: the hostname of the server, as specified in your ssh config + # arg 2 (optional): the mount directory; defaults to arg1 in the current directory + sshopen() { + # perform validation checks, first + command -v sshfs >/dev/null 2>&1 || { echo >&2 "error: sshfs is not installed"; return 1; } + grep -q '^user_allow_other' /etc/fuse.conf || { echo >&2 "error: please uncomment the 'user_allow_other' option in /etc/fuse.conf"; return 1; } + ssh -q "$1" exit >/dev/null || { echo >&2 "error: cannot connect to '$1' via ssh; check that '$1' is in your ~/.ssh/config"; return 1; } + [ -d "${2:-$1}" ] && { ls -1qA "${2:-$1}" | grep -q .; } >/dev/null 2>&1 && { echo >&2 "error: '${2:-$1}' is not an empty directory; is it already mounted?"; return 1; } + # set up a trap to exit the mount before attempting to create it + trap "cd \"$PWD\" && { fusermount -u \"${2:-$1}\"; rmdir \"${2:-$1}\"; }" EXIT && mkdir -p "${2:-$1}" && { + # ServerAlive settings prevent the ssh connection from dying unexpectedly + # cache_timeout controls the number of seconds before sshfs retrieves new files from the server + sshfs -o allow_other,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,cache_timeout=900,follow_symlinks "$1": "${2:-$1}" + } || { + # if the sshfs command didn't work, store the exit code, clean up the dir and the trap, and then return the exit code + local exit_code=$? + rmdir "${2:-$1}" && trap - EXIT + return $exit_code + } + } + +After sourcing your :code:`~/.bashrc` you should now be able to run :code:`sshopen tscc`! This will create a folder in your working directory with all of your files from TSCC. The network mount will be automatically disconnected when you close your terminal. + +Some notes on usage: +* Depending on your network connection, :code:`sshopen` might choke on large files. Consider using :code:`scp` for such files, instead. +* In order to reduce network usage, sshopen will only retrieve new files from the server every 15 minutes. If you want this to happen more frequently, just change the cache_timeout setting in the sshfs command. +* The unmount will fail if any processes are still utilizing files in the mount, so you should close your File Explorer or any other applications before you close your terminal window. If the unmount fails, you can always unmount manually: :code:`pkill sshfs && rmdir tscc` will kill the :code:`sshfs` command and delete the mounted folder. + +Syncing TSCC files with Google Drive or OneDrive +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Ever wanted to share your plots with a collaborator or your PI? But you have too many and they're updated too often to use :code:`scp` to download and reupload each time? + +Consider using :code:`rclone` to automatically sync your files with a cloud storage provider! You can install :code:`rclone` `using conda `_ and then configure it according to the instructions for `Google Drive `_ or for `OneDrive `_. + +When configuring :code:`rclone`, you should answer **No** to the question *Use web browser to automatically authenticate rclone with remote?*. You can instead follow their directions to install :code:`rclone` on your laptop or personal computer to get the appropriate token. Or, if that doesn't work, you can try using the (less secure) `SSH tunneling approach `_. 
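As a rough sketch, assuming you named your remote :code:`gdrive` during configuration (the paths below are placeholders), you could keep a whole plots directory mirrored to Google Drive with :code:`rclone sync`:

.. code-block:: bash

    # preview what would be copied or deleted, then actually sync the directory
    rclone sync --dry-run /tscc/projects/ps-gymreklab/$USER/my_project/plots gdrive:my_project/plots
    rclone sync /tscc/projects/ps-gymreklab/$USER/my_project/plots gdrive:my_project/plots --progress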
+ Sharing files with Snorlax ^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -262,26 +334,49 @@ Get Slack notifications when your jobs finish Installing software ------------------- -The best practice is for each user of TSCC to use Miniconda to install their own software. Run these commands to download, install, and configure Miniconda properly on TSCC: +The best practice is for each user of TSCC to use conda to install their own software. Run these commands to download, install, and configure conda properly on TSCC: .. code-block:: bash - wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh - bash Miniconda3-latest-Linux-x86_64.sh -b -u - source ~/miniconda3/bin/activate + wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh + bash Miniforge3-Linux-x86_64.sh -b -u + rm Miniforge3-Linux-x86_64.sh + source ~/miniforge3/bin/activate conda init bash - conda config --remove channels defaults conda config --add channels nodefaults conda config --add channels bioconda conda config --add channels conda-forge conda config --set channel_priority strict + conda update -y --all -If you are feeling lazy, you can also use :code:`module` system to load preconfigured software tools. +If you are feeling lazy, you can also use the :code:`module` system to load preconfigured software tools. Refer to `the TSCC documentation `_ for more information. -Please note that software available through the module system is usually out of date and cannot be easily updated. -It's also unlikely that your collaborators/reviewers will be able to figure out which versions of the software you used. -(Unlike with conda, there isn't a way to share your module environments with non-TSCC users.) -For these reasons, we do not recommend using the :code:`module` system. +.. warning:: + Software available through the module system is usually out of date and cannot be easily updated. + It's also unlikely that collaborators/reviewers will be able to run your code once you're ready to share it with them, since, + unlike with conda, the module system doesn't offer a way to share your software environment with non-TSCC users. + For these reasons, we do not recommend using the :code:`module` system. + +Using containers +---------------- +You can also load software via containers. Unfortunately, Docker is not available on TSCC and cannot be installed. Instead, you can use singularity (which was recently renamed to apptainer). First, run :code:`module load singularity` to make the :code:`singularity` command available. Refer to `the apptainer documentation `_ for usage information. + +For example, to grab a bash shell with TRTools: + +.. code-block:: bash + + singularity shell --bind /tscc docker://quay.io/biocontainers/trtools:6.0.1--pyhdfd78af_0 + +Or, to run the :code:`dumpSTR --help` command, for example: + +.. code-block:: bash + + singularity exec --bind /tscc docker://quay.io/biocontainers/trtools:6.0.1--pyhdfd78af_0 dumpSTR --help + +You can find containers for all Bioconda packages on `the Biocontainers registry `_. + +.. warning:: + You must provide :code:`--bind /tscc` if you want to have access to files in the :code:`/tscc` directory within the container. 
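If you plan to use the same container repeatedly, it may be worth pulling it once into a local :code:`.sif` image and reusing that file. A sketch using the TRTools image from above:

.. code-block:: bash

    # download and convert the Docker image to a local SIF file once
    singularity pull trtools.sif docker://quay.io/biocontainers/trtools:6.0.1--pyhdfd78af_0
    # then run commands from the cached image without re-downloading it
    singularity exec --bind /tscc trtools.sif dumpSTR --help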
Managing funds -------------- From 017992220e7233844efc5fbdefe08c4eae6428f7 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 31 Jul 2024 12:50:36 -0700 Subject: [PATCH 12/22] fix indentation err --- doc/TSCC.rst | 1 + 1 file changed, 1 insertion(+) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index c82413b..b6ded1a 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -351,6 +351,7 @@ The best practice is for each user of TSCC to use conda to install their own sof If you are feeling lazy, you can also use the :code:`module` system to load preconfigured software tools. Refer to `the TSCC documentation `_ for more information. + .. warning:: Software available through the module system is usually out of date and cannot be easily updated. It's also unlikely that collaborators/reviewers will be able to run your code once you're ready to share it with them, since, From 0b4dcdfe961a5ec34a6df129dd7aa4cc8cff0e13 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 31 Jul 2024 13:01:36 -0700 Subject: [PATCH 13/22] change txt to text to resolve pygments err --- doc/TSCC.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index b6ded1a..150bb14 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -28,7 +28,7 @@ Logging in * To configure ssh for expedited access, consider following the directions under the section *Linux or Mac* on `the TSCC user guide `_ to add an entry to your :code:`~/.ssh/config`. Here's an example. Remember to replace :code:`YOUR_USERNAME_GOES_HERE`! Afterwards, you should be able to log in with a simple: :code:`ssh tscc` command. -.. code-block:: txt +.. code-block:: text Host * ControlMaster auto From 1320af8eea417cd62512b6b54451a6197c6427e5 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 7 Aug 2024 11:05:57 -0700 Subject: [PATCH 14/22] fix bulleted list under sshfs section --- doc/TSCC.rst | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 150bb14..1cddeaa 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -132,8 +132,9 @@ To set up :code:`sshfs`, you must first download and install it. With `homebrew After sourcing your :code:`~/.bashrc` you should now be able to run :code:`sshopen tscc`! This will create a folder in your working directory with all of your files from TSCC. The network mount will be automatically disconnected when you close your terminal. Some notes on usage: + * Depending on your network connection, :code:`sshopen` might choke on large files. Consider using :code:`scp` for such files, instead. -* In order to reduce network usage, sshopen will only retrieve new files from the server every 15 minutes. If you want this to happen more frequently, just change the cache_timeout setting in the sshfs command. +* In order to reduce network usage, sshopen will only retrieve new files from the server every 15 minutes. If you want this to happen more frequently, just change the :code:`cache_timeout` setting in the sshfs command. * The unmount will fail if any processes are still utilizing files in the mount, so you should close your File Explorer or any other applications before you close your terminal window. If the unmount fails, you can always unmount manually: :code:`pkill sshfs && rmdir tscc` will kill the :code:`sshfs` command and delete the mounted folder. 
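Uploading with :code:`scp` works the same way in the other direction. For example (a sketch, assuming :code:`tscc` is the host from your :code:`~/.ssh/config`; replace :code:`YOUR_USERNAME_GOES_HERE` with your TSCC username):

.. code-block:: bash

    # copy a local results folder into your personal lab storage directory on TSCC
    scp -r ./results tscc:/tscc/projects/ps-gymreklab/YOUR_USERNAME_GOES_HERE/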
Syncing TSCC files with Google Drive or OneDrive From e7add9b6435351debd1aeb286966e1480d35c8e4 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 7 Aug 2024 11:13:42 -0700 Subject: [PATCH 15/22] add code blocks and a quick note --- doc/TSCC.rst | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 1cddeaa..1a9af30 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -234,7 +234,7 @@ If you need more than 8 hours, consider :code:`hotel`: So if you start a 36-core / 192GB memory job (or multiple jobs that use either a total of 36 cores OR a total of 192GB memory), then everyone else in our lab who submits to the :code:`hotel` partition will see their jobs wait in the queue until yours are finished. These limits are set according to the number of nodes that our lab has contributed to the :code:`hotel` partition. Jobs submitted to the :code:`condo` partition are not subject to this group limit. For more information about account limits, including info about viewing your account usage, read `the section of the TSCC docs titled "Managing Your User Account" `_. For example, you can get a lot of information by using the `tscc_client`: -... code-block:: bash +.. code-block:: bash module load sdsc tscc_client -A ddp268 @@ -242,8 +242,12 @@ So if you start a 36-core / 192GB memory job (or multiple jobs that use either a Env Variables and Submitting Many Jobs ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ To pass an environment variable to a job, make sure the :code:`#SBATCH --export ALL` flag is set in the SLURM file or run -:code:`sbatch .slurm --export "=,=,..."`. You should then be able to access those -values in the script using :code:`$var1` and so on. + +.. code-block:: bash + + sbatch .slurm --export "=,=,..." + +You should then be able to access those values in the script using :code:`$var1` and so on. Here's an example for how to submit many jobs. Suppose your current directory is:: @@ -350,6 +354,9 @@ The best practice is for each user of TSCC to use conda to install their own sof conda config --set channel_priority strict conda update -y --all +.. note:: + Make sure to never install software with conda on a login node! It will take a long time and slow down the login node for other TSCC users. + If you are feeling lazy, you can also use the :code:`module` system to load preconfigured software tools. Refer to `the TSCC documentation `_ for more information. From 728fcc30bebe9fd1a7789a490da7f63cc379daeb Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 11 Sep 2024 09:58:18 +0300 Subject: [PATCH 16/22] add example rclone command --- doc/TSCC.rst | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 1a9af30..6421964 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -145,6 +145,12 @@ Consider using :code:`rclone` to automatically sync your files with a cloud stor When configuring :code:`rclone`, you should answer **No** to the question *Use web browser to automatically authenticate rclone with remote?*. You can instead follow their directions to install :code:`rclone` on your laptop or personal computer to get the appropriate token. Or, if that doesn't work, you can try using the (less secure) `SSH tunneling approach `_. +Read up on the `rclone commands `_ to figure out how to use it. For example, to upload a single file to Google Drive: + +.. 
code-block:: bash + + rclone copyto FILEPATH_ON_TSCC gdrive:FILEPATH_ON_GDRIVE + Sharing files with Snorlax ^^^^^^^^^^^^^^^^^^^^^^^^^^ From cb05e00f30010d94a1f1ed5f3286bbdaccf1bf06 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Mon, 16 Sep 2024 09:16:03 -0700 Subject: [PATCH 17/22] update conda install instructions to discourage usage of the base env --- doc/TSCC.rst | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 6421964..2245c5b 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -358,7 +358,8 @@ The best practice is for each user of TSCC to use conda to install their own sof conda config --add channels bioconda conda config --add channels conda-forge conda config --set channel_priority strict - conda update -y --all + conda config --set auto_activate_base false + conda update -n base -y --all .. note:: Make sure to never install software with conda on a login node! It will take a long time and slow down the login node for other TSCC users. From d9519e939f81b5f26ce854dbb6b353452158cb68 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Tue, 8 Oct 2024 08:36:59 -0700 Subject: [PATCH 18/22] update the current status of the compute clusters --- doc/ComputeResources.rst | 8 +++++++- doc/Onboarding.rst | 6 ++---- doc/TSCC.rst | 2 ++ 3 files changed, 11 insertions(+), 5 deletions(-) diff --git a/doc/ComputeResources.rst b/doc/ComputeResources.rst index cf510c2..84d376e 100644 --- a/doc/ComputeResources.rst +++ b/doc/ComputeResources.rst @@ -1,7 +1,13 @@ +.. _computeresources: + Lab Compute Resources ===================== -We use three primary compute resources. Our local server (snorlax) is used for smaller compute jobs and every day coding, data exploration, etc. TSCC is a shared compute cluster at UCSD where we run larger jobs, especially those involving large datasets that we share with UCSD collaborators. For large jobs where the data is already available on Amazon, we use Amazon Web Services. If you're just getting started, you'll probably mostly be using snorlax at first. +We use three primary compute resources. Our local server (snorlax) is used for smaller compute jobs and every day coding, data exploration, etc. However, it has very little storage on it these days, so we recommend avoiding it for now. + +TSCC and Expanse are shared compute clusters at UCSD where we run larger jobs, especially those involving large datasets that we share with UCSD collaborators. For large jobs where the data is already available on Amazon, we use Amazon Web Services. + +If you're just getting started, you'll probably want to choose either TSCC or Expanse. .. toctree:: :maxdepth: 1 diff --git a/doc/Onboarding.rst b/doc/Onboarding.rst index 3480163..5af9951 100644 --- a/doc/Onboarding.rst +++ b/doc/Onboarding.rst @@ -3,9 +3,7 @@ Onboarding ========== -If you are new to the group, this page has a list of things you should do to get set up with helpful lab resources. - -* Get an account on our lab server, :ref:`Snorlax `. +If you are new to the group, this page has a list of things you can do to get set up with helpful lab resources. * Request permission to join the "Gymrek Lab" google calendar and our mailing list gymrek-lab AT googlegroups DOT com. @@ -23,7 +21,7 @@ You should also add a picture of yourself to `the lab website `. +To get access to our lab's computing resources on Snorlax, TSCC, or Expanse, just follow :ref:`these directions `. 
Finally, you should request badge access to our lab by filling out `this form `_. You should mark that you are requesting access to our collaboratory, the "Center for Precision Genomics" (CPG). diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 2245c5b..60299fe 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -9,6 +9,8 @@ Official docs * The `tscc description `_ * The `tscc 2.0 transitional workshow video `_ +.. _tscc-access: + Getting access -------------- Email tscc-support AT sdsc DOT edu from your UCSD email and CC Melissa. You can include the following in your email. From 08d2398036431ced50572459ed35e13961b626a0 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Tue, 8 Oct 2024 08:41:24 -0700 Subject: [PATCH 19/22] update tscc date --- doc/TSCC.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 60299fe..785e238 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -1,7 +1,7 @@ TSCC ==== -Last update: 2024/05/13 +Last update: 2024/10/08 Official docs ------------- From 22f6d7972c6427deeedae93d9cbfc6cfc8a6aa57 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Sat, 19 Oct 2024 07:44:29 -0700 Subject: [PATCH 20/22] apply Helia's suggestions thanks for the review, @heliziii ! --- doc/TSCC.rst | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/doc/TSCC.rst b/doc/TSCC.rst index 785e238..ffeeb7b 100644 --- a/doc/TSCC.rst +++ b/doc/TSCC.rst @@ -212,7 +212,7 @@ Notes: need to dynamically generate the SLURM file. * SLURM does not support using environment variables in :code:`#SBATCH` lines in scripts. If you wish to use environment variables to set such values, you must pass them to the :code:`sbatch` command directly - (e.g. :code:`sbatch --output=$SOMEWHERE/out slurm_script.sh`) + For example, you can use :code:`--output` as a command-line parameter as in :code:`sbatch --output=$SOMEWHERE/out slurm_script.sh` to override :code:`--output` in the header of the script. Partitions ^^^^^^^^^^ @@ -314,7 +314,7 @@ To list maximum information about a job, use :code:`squeue -l -j ` The output flag determines the file that stdout is written to. This must be a file, not a directory. You can use some placeholders in the output location such as `%x` for job name and `%j` for job id. -Use the error flag to choose stderr's output location. If not specifie, it will go to the output location. +Use the error flag to choose stderr's output location. If not specified, it will go to the output location. To delete a running or queued job: :code:`scancel `. To delete all running or queued jobs: :code:`scancel -u $USER` @@ -324,8 +324,8 @@ To figure out why a job is queued use :code:`scontrol show job Debugging jobs the OS killed ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #. Look at the standard output and standard error files. Any error messages should be there. -#. ssh into the node. You can do this to any node, but if you run a large process the OS will kill you because you have not been scheduled to that node. You can figure out the name of the node assigned to your job using :code:`squeue` once the status of the job is "RUNNING". -#. Scan the os logs for a killed process :code:`dmesg -T | grep ` +#. ssh into the node while the job is running. You can do this to any node, but if you run a large process the OS will kill you because you have not been scheduled to that node. 
You can figure out the name of the node assigned to your job using :code:`squeue -u $USER` once the status of the job is "RUNNING". +#. Scan the os logs for a job once it's been killed via :code:`dmesg -T | grep `. You can get the jobid from :code:`squeue -u $USER` #. If there are any messages stating that your job was "Killed", its usually a sign that you ran out of memory. You can request more memory by resubmitting the job with the :code:`--mem` parameter. For ex: :code:`--mem 8G` Get Slack notifications when your jobs finish From 20e42ffba8a8882146c51b0b1ced8bad6b635f8d Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Wed, 6 Nov 2024 07:59:38 -0800 Subject: [PATCH 21/22] add melissa's preferences --- doc/Food.rst | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/doc/Food.rst b/doc/Food.rst index bfc2bdd..09ca900 100644 --- a/doc/Food.rst +++ b/doc/Food.rst @@ -55,16 +55,18 @@ Ike's Sandwiches .. code-block:: md - - :chicken: MENAGE A TROIS: Chicken (Halal), Honey Mustard, BBQ Sauce, Real Honey, Pepper Jack, Swiss, Cheddar. [1610 cal] - - :cut_of_meat: MADISON BUMGARNER: Steak, Yellow BBQ Sauce, (Light) Habanero, Pepper Jack, American. [1400 cal] - - :pig: DA VINCI: Turkey, Ham, Salami, Italian Dressing, Provolone. [1380 cal] - - :leafy_green: SOMETIMES I'M A VEGETARIAN: Marinated Artichoke Hearts, Mushrooms, Pesto, Provolone. [1300 cal] + - :chicken: MENAGE A TROIS: Chicken (Halal), Honey Mustard, BBQ Sauce, Real Honey, Pepper Jack, Swiss, Cheddar + - :cut_of_meat: MADISON BUMGARNER: Steak, Yellow BBQ Sauce, (Light) Habanero, Pepper Jack, American + - :cow: Hollywould’s SF Cheesesteak: Steak, Mushrooms, Provolone + - :pig: DA VINCI: Turkey, Ham, Salami, Italian Dressing, Provolone + - :leafy_green: SOMETIMES I'M A VEGETARIAN: Marinated Artichoke Hearts, Mushrooms, Pesto, Provolone 2. If you have the bandwidth, you can additionally offer people the choice of picking an arbitrary sandwich as long as they find someone in the lab with whom to split that sandwich. Providing this option is certainly not required and entirely up to you. -3. When ordering the sandwiches, request each half of the sandwich to be wrapped indivudally in the special instructions section. This will make it easier to split the sandwiches. For customizing the sandwiches, I kept it simple: Dutch crunch bread and just lettuce+tomato+onions as toppings. You can also opt to skip the lollipops to reduce waste (although they are sometimes still included). -4. Order online 2-2.5 hours before the event and request delivery to FAH (3180 Voigt Dr, La Jolla CA 92093) for 0.5-1 hour before the meeting. You will need to provide credit info online. -5. Keep your phone with you as the delivery person will contact you if they have any questions about the address. Meet them outside FAH. -6. After the event, make sure to :ref:`clean up the meeting room ` and :ref:`submit your receipt ` for reimbursement. Refer to the directions below. +3. Note that Melissa does not usually reply to the poll but should always be included in the headcount. She usually prefers a beef option like *Hollywould’s SF Cheesesteak*. +4. When ordering the sandwiches, request each half of the sandwich to be wrapped indivudally in the special instructions section. This will make it easier to split the sandwiches. For customizing the sandwiches, I kept it simple: Dutch crunch bread and just lettuce+tomato+onions as toppings. 
You can also opt to skip the lollipops to reduce waste (although they are sometimes still included). +5. Order online 2-2.5 hours before the event and request delivery to FAH (3180 Voigt Dr, La Jolla CA 92093) for 0.5-1 hour before the meeting. You will need to provide credit info online. +6. Keep your phone with you as the delivery person will contact you if they have any questions about the address. Meet them outside FAH. +7. After the event, make sure to :ref:`clean up the meeting room ` and :ref:`submit your receipt ` for reimbursement. Refer to the directions below. General Questions ~~~~~~~~~~~~~~~~~ From fa8a59bdf6ac0c581bef1c1634d2fcbc58f334d5 Mon Sep 17 00:00:00 2001 From: Arya Massarat <23412689+aryarm@users.noreply.github.com> Date: Tue, 12 Nov 2024 13:48:22 -0800 Subject: [PATCH 22/22] add dorit's phone number info to the food doc --- doc/Food.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/Food.rst b/doc/Food.rst index 09ca900..08e5473 100644 --- a/doc/Food.rst +++ b/doc/Food.rst @@ -72,7 +72,7 @@ General Questions ~~~~~~~~~~~~~~~~~ How should I pay? ----------------- -Do not use your own credit card! (Reimbursement requires Dorit to `register you on Concur `__.) Ask Melissa for the lab credit card info when you're ready to order. You can find her in `her office or you can call her office phone `_. If you can't get hold of her, contact Arya. **Make sure to submit a reimbursement request (see below) on behalf of the person who paid!** +Do not use your own credit card! (Reimbursement requires Dorit to `register you on Concur `__.) Ask Dorit for the lab credit card info when you're ready to order. You can find her phone number on her profile in the CAST slack. If you can't get hold of her, contact Arya. **Make sure to submit a reimbursement request (see below) on behalf of the person who paid!** .. _food-loadingdock: