From 3fdd161252be5407677127e2937fcd0721f855cc Mon Sep 17 00:00:00 2001
From: <>
Date: Tue, 15 Oct 2024 06:45:36 +0000
Subject: [PATCH] Deployed 2336109 with MkDocs version: 1.6.1

---
 0_setup/index.html                          |   2 +-
 1_rdm-guidelines/index.html                 |   2 +-
 2_starting-assay-project/index.html         |   2 +-
 3_pipelines/index.html                      |   2 +-
 404.html                                    |   2 +-
 4_conda/index.html                          |   2 +-
 5_vscode/index.html                         |  11 ++++++-----
 6_handy-scripts/index.html                  |   2 +-
 index.html                                  |   2 +-
 miscellaneous/dropbox/index.html            |   2 +-
 miscellaneous/ku-computer/index.html        |   2 +-
 miscellaneous/podman/index.html             |   2 +-
 search/search_index.json                    |   2 +-
 sitemap.xml.gz                              | Bin 127 -> 127 bytes
 tools_and_packages/alphafold2/index.html    |   2 +-
 tools_and_packages/dReg/index.html          |   2 +-
 tools_and_packages/packages/index.html      |   2 +-
 tools_and_packages/ucsc_liftover/index.html |   2 +-
 18 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/5_vscode/index.html b/5_vscode/index.html
@@ -883,7 +883,7 @@
Warning
-In this example we use version R/4.3.1. If you want to use a different one change the R version!
+In this example we use version R/4.2.1. If you want to use a different one change the R version!
ssh $USER@danhead01fl.unicph.domain
tmux new -s rstudio
srun -c 2 --mem=30gb --time=0-4:00:00 --pty bash
-module load vscode_cli gcc/11.2.0 R/4.3.1 miniconda/latest
+module load vscode_cli gcc/11.2.0 R/4.2.1 quarto
code tunnel
Microsoft account
when asked how you would like to log in to VSCode
To use R
, install additional packages by clicking Extensions
in the left panel.
Search for packages:
ssh $USER@danhead01fl.unicph.domain
tmux new -s rstudio
srun -c 2 --mem=30gb --time=0-4:00:00 --pty bash
-module load vscode_cli gcc/11.2.0 R/4.3.1 miniconda/latest
+module load vscode_cli gcc/11.2.0 R/4.2.1 quarto
code tunnel
Remote Explorer
dancmpn01flunicphdom
or dancmpn02flunicphdom
Welcome to the Brickman Lab wiki!
Here you can find documentation for our analysis workflows. For more information about our research, visit the Brickman Group website.
"},{"location":"#transcriptional-basis-for-cell-fate-choice","title":"Transcriptional basis for cell fate choice","text":"The Brickman Group aims to understand the transcriptional basis for early embryonic lineage specification.
We are interested in the dynamic mechanisms by which cells can both reversibly prime towards a particular fate or undergo a transition into commitment.
"},{"location":"#publications","title":"Publications","text":"Selected publicationsWong, Y. F., Kumar, Y., Proks, M., Herrera, J. A. R., Rothov\u00e1,M. M., Monteiro, R. S., Pozzi, S., Jennings, R. E., Hanley, N. A., Bickmore, W. A., and Brickman, J. M. (2023). Expansion of ventral foregut is linked to changes in the enhancer landscape for organ-specific differentiation. Nature Cell Biology, doi: 10.1038/s41556-022-01075-8.
Perera, M., Nissen, S. B., Proks, M., Pozzi, S., Monteiro, R. S., Trusina, A., and Brickman, J. M. (2022). Transcriptional heterogeneity and cell cycle regulation as central determinants of Primitive Endoderm priming. eLife, doi: 10.7554/eLife.78967.
Rothová, M. M., Nielsen, A. V., Proks, M., Wong, Y. F., Riveiro, A. R., Linneberg-Agerholm, M., David, E., Amit, I., Trusina, A., and Brickman, J. M. (2022). Identification of the central intermediate in the extra-embryonic to embryonic endoderm transition through single-cell transcriptomics. Nature Cell Biology, doi: 10.1038/s41556-022-00923-x.
Riveiro, A. R., and Brickman, J. M. (2020). From pluripotency to totipotency: an experimentalist's guide to cellular potency. Development, doi: 10.1242/dev.189845.
Hamilton, W.B., Mosesson, Y., Monteiro, R.S., Emdal, K.B., Knudsen, T.E., Francavilla, C., Barkai, N., Olsen, J.V. and Brickman, J.M. (2019). Dynamic lineage priming is driven via direct enhancer regulation by ERK. Nature, doi: 10.1038/s41586-019-1732-z.
Weinert, B.T., Narita, T., Satpathy, S., Srinivasan, B., Hansen, B.K., Scholz, C., Hamilton, W.B., Zucconi, B.E., Wang, W.W., Liu, W.R., Brickman, J.M., Kesicki, E.A., Lai, A., Bromberg, K.D., Cole, P.A., and Choudhary, C. (2018). Time-Resolved Analysis Reveals Rapid Dynamics and Broad Scope of the CBP/p300 Acetylome. Cell 174, 231-244.e212, doi:10.1016/j.cell.2018.04.033.
Anderson, K.G.V., Hamilton, W.B., Roske, F.V., Azad, A., Knudsen, T.E., Canham, M.A., Forrester, L.M., and Brickman, J.M. (2017). Insulin fine-tunes self-renewal pathways governing naive pluripotency and extra-embryonic endoderm. Nature Cell Biology 19, 1164-1177, doi:10.1038/ncb3617.
Nissen, S.B., Perera, M., Gonzalez, J.M., Morgani, S.M., Jensen, M.H., Sneppen, K., Brickman, J.M., and Trusina, A. (2017). Four simple rules that are sufficient to generate the mammalian blastocyst. PLoS Biol 15, e2000737, doi:10.1371/journal.pbio.2000737. *joint senior author
Migueles, R.P., Shaw, L., Rodrigues, N.P., May, G., Henseleit, K., Anderson, K.G., Goker, H., Jones, C.M., de Bruijn, M.F., Brickman, J.M., and Enver, T. (2017). Transcriptional regulation of Hhex in hematopoiesis and hematopoietic stem cell ontogeny. Developmental Biology 424, 236-245, doi:10.1016/j.ydbio.2016.12.021.
Illingworth, R.S., Hölzenspies, J.J., Roske, F.V., Bickmore, W.A., and Brickman, J.M. (2016). Polycomb enables primitive endoderm lineage priming in embryonic stem cells. Elife 5, doi:10.7554/eLife.14926.
Martin Gonzalez, J., Morgani, S.M., Bone, R.A., Bonderup, K., Abelchian, S., Brakebusch, C., and Brickman, J.M. (2016). Embryonic Stem Cell Culture Conditions Support Distinct States Associated with Different Developmental Stages and Potency. Stem Cell Reports 7, 177-191, doi:10.1016/j.stemcr.2016.07.009.
"},{"location":"#datasets","title":"Datasets","text":"Rothova et al., (2022). Nature Cell Biology. Single-cell RNA-seq datasets from FOXA2Venus reporter mouse embryos and embryonic stem cell differentiation towards endoderm.
"},{"location":"0_setup/","title":"First time on danserver","text":"For starting on the server make sure to read:
ssh $USER@danhead01fl.unicph.domain
nano ~/.bash_profile
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
nano ~/.bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# User specific environment
if ! [[ "$PATH" =~ "$HOME/.local/bin:$HOME/bin:" ]]
then
    PATH="$HOME/.local/bin:$HOME/bin:$PATH"
fi
export PATH

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# User specific aliases and functions
### Source DanGPU definitions
if [ -f /maps/projects/dan1/apps/etc/bashrc ]; then
    . /maps/projects/dan1/apps/etc/bashrc
fi

### Source Brickman definitions
if [ -f /maps/projects/dan1/data/Brickman/config/brickman.bashrc ]; then
    . /maps/projects/dan1/data/Brickman/config/brickman.bashrc
fi
Brickman
folder

This section provides guidelines for effective research data management within our lab. By adopting these guidelines, we aim to improve data organization and naming conventions, leading to enhanced data governance and research efficiency. The guidelines include the following steps:
- Folder organization and naming conventions for the Assays and Projects folders.
- A metadata.yml file in each folder.
- A data catalogue of the Assays and Projects folders, browsable with a Panel python app.
- Projects folders will be version controlled with GitHub and the Brickman organization.
- Projects reports will be displayed under the Brickman organization GitHub Pages.
- Projects will be synchronized and archived in Zenodo, which will give a DOI that can be used in a publication.
- Data in the Assays folder will be uploaded to GEO, with the information provided in the metadata file.

To ensure efficient data management, it is important to establish a consistent approach to organizing research data. We consider the following practices:
We are currently using a cookiecutter template to generate a folder structure. Use cruft when generating assay and project folders to allow us to validate and sync old templates with the latest version.
See this section to get started with a new project/assay.
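As a sketch, working with cruft could look like this (the template URL is a placeholder for the lab's actual cookiecutter repository):

# create a new assay/project folder from the lab template (placeholder URL)
cruft create https://github.com/brickmanlab/ngs-template

# inside an existing folder: check whether it is still in sync with the template
cruft check

# and pull in the latest template changes if it is not
cruft update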
"},{"location":"1_rdm-guidelines/#12-assay-folder","title":"1.2 Assay folder","text":"For each NGS experiment there should be an Assay
folder that will contain all experimental datasets (raw files and pipeline processed files). Inside Assay
there will be subfolders named after a unique NGS ID and the date it was created:
<Assay-ID>_YYYYMMDD
"},{"location":"1_rdm-guidelines/#assay-id-code-names","title":"Assay ID code names","text":"CHIP
: ChIP-seqRNA
: RNA-seqATAC
: ATAC-seqSCR
: scRNA-seqPROT
: Mass Spectrometry AssayCAT
: Cut&TagCAR
: Cut&RunRIME
: Rapid Immunoprecipitation Mass spectrometry of Endogenous proteinsFor example CHIP_20230101
is a ChIP-seq assay made on 1 January 2023.
CHIP_20230424
├── description.yaml
├── metadata.yaml
├── pipeline.md
├── processed
└── raw
    ├── .fastq.gz
    └── samplesheet.csv
There should be another folder called Projects
that will contain project information and data analysis.
A project may use one or more assays to answer a scientific question. This should be, for example, all the data analysis related to a publication.
The project folder should be named after a unique identifier, such as:
<Project-ID>_YYYYMMDD
<Project-ID>
should be the initials of the owner of the project folder and the publication year, e.g. JARH_et_al_20230101.
<Project-ID>_20230424
├── data
│   ├── assays
│   ├── external
│   └── processed
├── documents
│   └── Non-sensitive_NGS_research_project_template.docx
├── notebooks
│   └── 01_data_analysis.rmd
├── README.md
├── reports
│   ├── figures
│   │   └── 01_data_analysis
│   └── 01_data_analysis.html
├── requirements.txt
├── results
│   └── 01_data_analysis/
├── scripts
├── description.yml
└── metadata.yml
00_preprocessing
We will have to set up a cron job to perform a one-way sync between the /projects folder and the NGS_data folder. All the analysis will be done on the danGPU server, with no exceptions!
After a project is done and published, it will be moved to NGS_data.
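A minimal sketch of such a cron entry, assuming rsync and placeholder paths:

# hypothetical crontab entry: one-way sync of /projects into NGS_data every night at 02:00
0 2 * * * rsync -a /projects/ /path/to/NGS_data/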
General naming conventions:
- Dates are written as YYYYMMDD.
- Sections of a file name are separated by _. Words in each section are written in camelCase. For example: field1_word1Word2.txt.

Transcriptomics metadata standards and fields
More info on naming conventions for different types of files and analysis is in development.
| name | description | naming_convention | file format | example |
| --- | --- | --- | --- | --- |
| .fastq | raw sequencing reads | nan | nan | sampleID_run_read1.fastq |
| .fastqc | quality control from fastqc | nan | nan | sampleID_run_read1.fastqc |
| .bam | aligned reads | nan | nan | sampleID_run_read1.bam |
| GTF | sequence annotation | nan | nan | one of https://www.gencodegenes.org/ |
| GFF | sequence annotation | nan | nan | one of https://www.gencodegenes.org/ |
| .bed | genome locations | nan | nan | nan |
| .bigwig | genome coverage | nan | nan | nan |
| .fasta | sequence data (nucleotide/aminoacid) | nan | nan | one of https://www.gencodegenes.org/ |
| Multiqc report | QC aggregated report | <assayID>_YYYYMMDD.multiqc | multiqc | RNA_20200101.multiqc |
| Count matrix | final count matrix | <assayID>_cm_aligner_YYYYMMDD.tsv | tsv | RNA_cm_salmon_20200101.tsv |
| DEA | differential expression analysis results | DEA_<condition1-condition2>_LFC<absolute_threshold>_p<pvalue decimals>_YYYYMMDD.tsv | tsv | DEA_treat-untreat_LFC1_p01_20200101.tsv |
| DBA | differential binding analysis results | DBA_<condition1-condition2>_LFC<absolute_threshold>_p<pvalue decimals>_YYYYMMDD.tsv | tsv | DBA_treat-untreat_LFC1_p01_20200101.tsv |
| MAplot | MA plot | MAplot_<condition1-condition2>_YYYYMMDD.jpeg | jpeg | MAplot_treat-untreat_20200101.jpeg |
| Heatmap plot | Heatmap plot of anything | heatmap_<type>_YYYYMMDD.jpeg | jpeg | Heatmap_sampleCor_20200101.jpeg |
| Volcano plot | Volcano plot | volcano_<condition1-condition2>_YYYYMMDD.jpeg | jpeg | volcano_treat-untreat_20200101.jpeg |
| Venn diagram | Venn diagram | venn_<type>_YYYYMMDD.jpeg | jpeg | venn_consensus_20200101.jpeg |
| Enrichment table | Enrichment results | nan | tsv | nan |

2. Metadata and documentation
Accurate documentation and metadata play a crucial role in facilitating data discovery and interpretation. Consider the following guidelines:
In development.
| Metadata field | Definition | Format | Example |
| --- | --- | --- | --- |
| project | Project name | <name>_<keyword>_YYYY | lundregan_oct4_2023 |
| author | Owner of the project | <First name> <Surname> | Sarah Lundregran |
| date | Date of creation | YYYYMMDD | 20230101 |
| description | Short description of the project | Plain text | This is a project describing the effect of Oct4 perturbation after pERK activation |

3. Data catalogue and browser
@SLundregan is in the process of building a prototype for Assay
, using the metadata contained in all description.yml
and metadata.yml
files in the assay folder. This will be in the form of an SQLite database that is easily updatable by running a helper script.
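A rough bash sketch of what such a helper could look like, assuming the Python yq wrapper and the sqlite3 CLI are available (the database path, table layout, and metadata field are hypothetical):

#!/bin/bash
# rebuild a small catalogue database from every assay metadata.yml
db=assays.db
sqlite3 "$db" 'CREATE TABLE IF NOT EXISTS assays (id TEXT PRIMARY KEY, description TEXT);'

for meta in /maps/projects/dan1/data/Brickman/assays/*/metadata.yml; do
    id=$(basename "$(dirname "$meta")")
    desc=$(yq -r '.description // ""' "$meta")
    # naive quoting; acceptable for trusted, lab-internal metadata
    sqlite3 "$db" "INSERT OR REPLACE INTO assays VALUES ('$id', '$desc');"
done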
@SLundregan is also working on a browsable database using a Panel python app. The app will display the latest version of the SQLite database. Clicking on an item from the database will open a tab containing all available metadata for the assay.
Also, it would be nice if you could create an Assay
folder directly from there, making it easy to fill in the info for the metadata and GEO submission (see below).
In the future, you could ideally visualize an analysed single cell RNAseq dataset by opening a Cirrocumulus session.
"},{"location":"1_rdm-guidelines/#4-projects-version-control","title":"4.Projects
version control","text":"All projects should be version controlled using GitHub under the Brickman organization. After creating a cookiecutter template, initiate a git repository on the folder. The Git repository can stay private until it is ready for publication.
"},{"location":"1_rdm-guidelines/#5-projects-github-pages","title":"5.Projects
GitHub pages","text":"Using GitHub pages, it is possible to display your data analyses (or anything related to the project) inside the Projects
folder so that they are open to the public in a html format. This is great for transparency and reproducibility purposes. This can be done after the paper has been made public (it is not possible to do with a private repository without paying).
Info on how this is done should be put here
"},{"location":"1_rdm-guidelines/#6-project-archiving-in-zenodo","title":"6.Project
archiving in Zenodo","text":"Before submitting, link the repository to Zenodo and then create a Git release. This release will be caught by Zenodo and will give you a DOI that you can submit along the manuscript.
"},{"location":"1_rdm-guidelines/#7-data-upload-to-geo","title":"7. Data upload to GEO","text":"The raw data from NGS experiments will be uploaded to the Gene Expression Omnibus (GEO). Whenever a new Assay folder is created, the data owner must fill up the required documentation and information needed to make the GEO submission as smooth as possible.
"},{"location":"1_rdm-guidelines/#8-create-a-data-management-plan","title":"8. Create a Data Management Plan","text":"From the University of Copenhagen RDM team
A Data Management Plan (DMP) is a planning tool that helps researchers to establish good practices for working with physical material and data in a research project. A DMP covers all relevant aspects of research data management throughout the project. Writing a DMP early on in a project helps:
We have written a DMP template that is prefilled with repetitive information, using DMPonline and the Horizon Europe guidelines. This template contains all the necessary information regarding common practices that we will use, the repositories we use for NGS, etc. The template is part of the project
folder template, under documents
. You can check the file here.
The Horizon Europe template is mostly focused on digital data, so it may not be the best option for the needs of the Brickman Lab, which is mostly a wet lab with some bioinformatics. We will start working on another DMP based on the KU template, which is designed for both physical and digital data.
"},{"location":"2_starting-assay-project/","title":"Starting a new assay or project","text":"Whenever you obtain sequencing data from Genomic's Platform, you have to create an Assay. By running the commands below, you will have option to fill all required information about the experiment. This workflow will help us with tracking of all sequencing done in our lab.
"},{"location":"2_starting-assay-project/#assay","title":"Assay","text":"When you sequence an experiment, we create an Assay out of it, so we can use it in a project afterwards.
Login to danhead and run command:
create_assay
"},{"location":"2_starting-assay-project/#project","title":"Project","text":"Every time you want to make some analysis, you should create a project. Our folder structure will allow you to easily link various experiments to your project and make your analysis easier.
Please use the following naming convention: surname-<YOUR_CODENAME>
create_project
Link required assays to your project.
ln -s /maps/projects/dan1/data/Brickman/assays/<ASSAY_ID> /maps/projects/dan1/data/Brickman/projects/<PROJECT_ID>/data/assays/
Link external data if needed
ln -s /maps/projects/dan1/data/Brickman/shared /maps/projects/dan1/data/Brickman/projects/<PROJECT_ID>/data/external/
"},{"location":"3_pipelines/","title":"Running pipelines","text":"By default, we run nf-core pipelines. To run a pipeline, read the official documentation with an example.
"},{"location":"3_pipelines/#monitoring-runs-with-nextflow-tower","title":"Monitoring runs with Nextflow Tower","text":"This is a guide on how to use Nextflow Tower to monitor nf-core pipeline runs.
We have created an API token for our GitHub account (brickmanlab) and restricted it to run only pipelines, nothing else. The TOWER_WORKSPACE_ID
and TOWER_ACCESS_TOKEN
are stored in Brickman/config/brickman.bashrc.
To do more advanced things, you have to create your own personal access token.
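A short sketch of what monitoring looks like in practice (the pipeline and options are placeholders):

# both variables are exported by Brickman/config/brickman.bashrc
echo $TOWER_WORKSPACE_ID

# add -with-tower to any nextflow run to monitor it in Nextflow Tower
nextflow run nf-core/rnaseq -profile test --outdir results -with-tower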
"},{"location":"3_pipelines/#tower-cli-installation","title":"Tower CLI installation","text":"The tower cli1 is required to be installed only once to connect the server as a computing resource. Afterward, it's not required any more2.
# Download the latest version of Tower CLI:
wget https://github.com/seqeralabs/tower-cli/releases/download/v0.7.3/tw-0.7.3-linux-x86_64

# Make the file executable and move it to a directory on your $PATH:
mkdir -p ~/.local/bin && mv tw-* ~/.local/bin/tw && chmod +x ~/.local/bin/tw
[1] Tower CLI configuration
[2] Tower Agent
If you work with conda
you can use mamba
instead, which is a faster tool for installing packages.
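For example, a sketch of using mamba in place of conda (the package choice is arbitrary):

module load miniconda/latest
# mamba accepts the same arguments as conda, with a faster dependency solver
mamba install -c bioconda samtools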
We created shared conda
environments to simplify your life.
conda env list
source activate brickman
Here is an example of how we created the shared environment called brickman.
module load miniconda/latest

conda create --prefix /maps/projects/dan1/data/Brickman/conda/envs/brickman python=3.10
source activate brickman
pip install cruft cookiecutter

chmod -R 755 /maps/projects/dan1/data/Brickman/conda/envs/brickman
To install a shared conda
environment for the lab, follow the steps below.
brickman-<NGS>.yml
mamba env create -p /projects/dan1/data/Brickman/conda/envs/brickman-<NGS> -f brickman-<NGS>.yml
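The new environment is then activated through its prefix path, as with the other shared environments:

source activate /projects/dan1/data/Brickman/conda/envs/brickman-<NGS>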
"},{"location":"4_conda/#example-conda-environment","title":"Example conda environment","text":"Configuration for brickman-chipseq
environment.
name: brickman-chipseq
channels:
  - conda-forge
  - bioconda
  - anaconda
  - defaults
dependencies:
  - bioconda::bedtools==2.31.0
  - bioconda::deeptools==2.31.0
  - bioconda::homer==4.11
  - bioconda::intervene==0.6.4
  - bioconda::macs2==2.2.9.1
  - bioconda::pygenometracks==3.8
  - bioconda::seacr==1.3
  - bioconda::samtools==1.17
prefix: /projects/dan1/data/Brickman/conda/envs/brickman-chipseq
To install the environment, run
mamba env create -p /projects/dan1/data/Brickman/conda/envs/brickman-chipseq -f brickman-chipseq.yml
"},{"location":"4_conda/#modules","title":"Modules","text":"module avail\n\nmodule load miniconda/latest\n
"},{"location":"5_vscode/","title":"Setup R with Visual Studio Code","text":"This setup guides you through setting up R
in VSCode so you can use it on dancmpn01fl
and dancmpn02fl
computing nodes.
Info
The original RStudio server uses R version 4.0.5. If you want to stick with this version, make sure to specify it when loading modules.
Why do you need this?
Because RStudio server sucks when you don't have a license, which is our situation, so an alternative it is. Also, VSCode has a bunch of plugins.
"},{"location":"5_vscode/#setting-up-remote-tunnels","title":"Setting up Remote Tunnels","text":"Warning
In this example we use version R/4.3.1. If you want to use a different one change the R version!
"},{"location":"5_vscode/#requirements","title":"Requirements","text":"ssh $USER@danhead01fl.unicph.domain
tmux new -s rstudio
srun -c 2 --mem=30gb --time=0-4:00:00 --pty bash
module load vscode_cli gcc/11.2.0 R/4.3.1 miniconda/latest
code tunnel
Choose Microsoft account when asked how you would like to log in to VSCode
Open Remote Explorer -> Sign in to the tunnels registered with Microsoft
Connect to dancmpn01flunicphdom or dancmpn02flunicphdom
, install additional packages by clicking Extensions
in the left panel. Search for packages:
Quarto
Next, top panel lick View
-> Terminal
-> Write R
and hit ENTER
install.packages(\"languageserver\")
YES
then hit ENTER30
(Denmark servers to download packages)install.packages(\"httpgd\")
q()
to get outCode
-> Settings
-> Settings
r.plot.useHttpgd
If everything went well, you should be able to do this. If not, you know what to do.
"},{"location":"5_vscode/#i-already-did-the-setup-i-want-my-r-again","title":"I already did the setup, I want my R again","text":"ssh $USER@danhead01fl.unicph.domain
tmux new -s rstudio
srun -c 2 --mem=30gb --time=0-4:00:00 --pty bash
module load vscode_cli gcc/11.2.0 R/4.3.1 miniconda/latest
code tunnel
Remote Explorer
dancmpn01flunicphdom
or dancmpn02flunicphdom
curl -Lk 'https://code.visualstudio.com/sha/download?build=stable&os=cli-alpine-x64' --output vscode_cli.tar.gz\ntar -xf vscode_cli.tar.gz\n
"},{"location":"5_vscode/#known-issues","title":"Known issues","text":"VSCode can be installed as a server code-server
, however it is not possible to listen on the port when on computing node. This works only in the case of dangpu01fl
.
Error when trying to do reverse ssh:
error listen EADDRINUSE: address already in use 127.0.0.1:8080\n
VSCode code-server
is an alternative to code tunnel
that consists of running code-server on a compute node and accessing it via a web browser using reverse ssh
tunnel.
curl -fL https://github.com/coder/code-server/releases/download/v4.90.2/code-server-4.90.2-linux-amd64.tar.gz | tar -C /maps/projects/dan1/data/Brickman/shared/modules/software/code-server/4.90.2 -xz\n
ssh user@danhead01fl.unicph.domain\ntmux new\nsrun -c 2 --mem=30gb --time=0-4:00:00 -p gpuqueue --pty bash\nmodule load code-server\ncode-server\n# On local machine\nssh -fNL localhost:8080:localhost:8080 $USER@dangpu01fl.unicph.domain\n
"},{"location":"6_handy-scripts/","title":"Handy scripts","text":""},{"location":"6_handy-scripts/#geo-submission","title":"GEO submission","text":"~/Brickman/projects/
or ~/ucph/ndir/SUN-RENEW-Brickman/
Transfer files
and copy the login information for the ftpNOTE: before running the command below, make sure you are already in the folder and you see all the folder/files you want to upload. It will make the steps below simpler.
# we run tmux session in case we loose connection\ntmux new -s geo\n\n# this loges you to FTP\nsftp geoftp@sftp-private.ncbi.nlm.nih.gov\npassword: <PASSWORD>\n\ncd uploads/<FOLDER>\nmkdir <RNAseq>\ncd <RNAseq>\nmput *\n
"},{"location":"miscellaneous/dropbox/","title":"Moving Dropbox to SUND","text":"This is a step-by-step guide how I moved our Dropbox into SUND organized by KU IT. In first attempt I have tried moving the files into OneDrive, but because there might be issues with long filenames I eventually ran into more and more problems
Simpler solution is just to move things to SAMBA drives.
First, ssh into the server
ssh danhead01fl\ntmux new -s dropbox-transfer\nmodule load rclone/1.65.1\n
"},{"location":"miscellaneous/dropbox/#linking-remotes","title":"Linking remotes","text":""},{"location":"miscellaneous/dropbox/#dropbox","title":"Dropbox","text":"> n\n> dropbox\n> client_id <ENTER>\n> client_secret <ENTER>\n> y\nforward port `ssh -fNL localhost:53682:localhost:53682 danhead01fl` and access the website locally\n
"},{"location":"miscellaneous/dropbox/#onedrive","title":"Onedrive","text":"> n\n> onedrive\n> client_id <ENTER>\n> client_secret <ENTER>\n> region <ENTER>\n> y\nforward port `ssh -fNL localhost:53682:localhost:53682 danhead01fl` and access the website locally\n> config_type 3\n> https://alumni.sharepoint.com/sites/UCPH_BrickmanLab\n> y\n
"},{"location":"miscellaneous/dropbox/#test-connections","title":"Test connections","text":"rclone lsd Dropbox:\nrclone lsd dropbox_jb:\nrclone lsd Onedrive:\n
"},{"location":"miscellaneous/dropbox/#copy-files","title":"Copy files","text":"I have started first with manual folders because we had to many folders and sometimes there are timeout issues.
rclone copy --progress --checksum Dropbox:Computerome ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Computerome\nrclone copy --progress --checksum Dropbox:Courses ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Courses\nrclone copy --progress --checksum Dropbox:Grants ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Grants\nrclone copy --progress --checksum Dropbox:Other ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Other\nrclone copy --progress --checksum Dropbox:Papers ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Papers\nrclone copy --progress --checksum Dropbox:Pictures ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Pictures\nrclone copy --progress --checksum Dropbox:People ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/People\nrclone copy --progress --checksum Dropbox:sc_seq_analysis ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/sc_seq_analysis\n
After the initial copy, I ran again copy this time of all the folders, most of them should be present already. This is to make sure all files were moved.
rclone copy \\\n --progress --checksum \\\n --exclude=\"People/Fung/Home/IRCMS_interview_2024**\" \\\n --exclude=\"People/Fung/Home/MB1016613_backup**\" \\\n --exclude=\"GEO_data/**\" \\\n Dropbox: ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/\n\nrclone copy --progress --checksum Dropbox:GEO_data ~/ucph/ndir/SUN-RENEW-Brickman/GEO_data/\nrclone copy --progress --checksum dropbox_jb: ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/JoshBrickman\n
"},{"location":"miscellaneous/ku-computer/","title":"KU computer setup","text":""},{"location":"miscellaneous/ku-computer/#conda","title":"Conda","text":"Go here and download Miniconda PKG not BASH. If you're running M1/2 please follow this guideline.
"},{"location":"miscellaneous/ku-computer/#example-for-chip-seq-setup","title":"Example for CHIP-seq setup","text":"conda create --name chipseq python=3.6\nconda activate chipseq\nconda install -c bioconda deeptools bedtools\npip install intervene\n
"},{"location":"miscellaneous/podman/","title":"Podman","text":""},{"location":"miscellaneous/podman/#setup","title":"Setup","text":"Storage for Podman needs to be configured to fix UID errors when running on UTF filesystem:
mkdir -p ~/.config/containers\ncp /maps/projects/dan1/apps/podman/4.0.2/storage.conf $HOME/.config/containers/\n
Rootless Podman also requires username and allowed UID range to be listed in /etc/subuid and /etc/subgid
List running containers and run a publically available container image to confirm Podman is working:
podman ps\npodman run -it docker.io/library/busybox\n
"},{"location":"miscellaneous/podman/#running-the-ku-sund-dangpu-nf-core-config-with-podman","title":"Running the KU SUND DANGPU nf-core config with Podman","text":"Currently this is not practical because file permissions cause the following error:
error during container init: error setting cgroup config for procHooks process: cannot set memory limit: container could not join or create cgroup\n
The nf-core config file, podman.config, can be found at /scratch/Brickman/pipelines/
Specify podman.config in nextflow run options to run a pipeline with Podman, e.g. for the rnaseq test profile:
nextflow run nf-core/rnaseq -r 3.8.1 -c podman.config -profile test --outdir nfcore_test\n
"},{"location":"tools_and_packages/alphafold2/","title":"Alphafold 2","text":""},{"location":"tools_and_packages/alphafold2/#1-running","title":"1. Running","text":""},{"location":"tools_and_packages/alphafold2/#11-create-a-target-file","title":"1.1 Create a target file","text":"# cat target.fasta\n>query\nMAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH\n
"},{"location":"tools_and_packages/alphafold2/#12-setup-environments","title":"1.2. Setup environments","text":"srun -N 1 --ntasks-per-node=10 --gres=gpu:2 --pty bash\nmodule load miniconda/latest cuda/11.4 cudnn/8.2.2\nsource activate /maps/projects/dan1/data/Brickman/conda/envs/af2\n\ncd /maps/projects/dan1/data/Brickman/alphafold\nexport AF2_DATA_DIR=\"~/projects/data/Alphafold2/24022023\"\n
"},{"location":"tools_and_packages/alphafold2/#13-run-monomer-cli","title":"1.3. Run monomer (cli)","text":"python run_alphafold.py \\\n --fasta_paths=~/projects/data/Brickman/target_01.fasta \\\n --output_dir=/scratch/tmp/alphatest \\\n --model_preset=monomer \\\n --db_preset=full_dbs \\\n --data_dir=$AF2_DATA_DIR \\\n --uniref30_database_path=$AF2_DATA_DIR/uniref30/UniRef30_2021_03 \\\n --uniref90_database_path=$AF2_DATA_DIR/uniref90/uniref90.fasta \\\n --mgnify_database_path=$AF2_DATA_DIR/mgnify/mgy_clusters_2022_05.fa \\\n --pdb70_database_path=$AF2_DATA_DIR/pdb70/pdb70 \\\n --template_mmcif_dir=$AF2_DATA_DIR/pdb_mmcif/mmcif_files/ \\\n --obsolete_pdbs_path=$AF2_DATA_DIR/pdb_mmcif/obsolete.dat \\\n --bfd_database_path=$AF2_DATA_DIR/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \\\n --max_template_date=2022-01-01 \\\n --use_gpu_relax\n
"},{"location":"tools_and_packages/alphafold2/#14-run-multimer-cli","title":"1.4. Run multimer (cli)","text":"The example below generates 10 models.
python run_alphafold.py \\\n --fasta_paths=/home/fdb589/projects/data/Brickman/WTPU_1_WTC_EBPa.fasta \\\n --output_dir=/scratch/tmp/alphatest \\\n --model_preset=multimer \\\n --db_preset=full_dbs \\\n --data_dir=$AF2_DATA_DIR \\\n --uniref30_database_path=$AF2_DATA_DIR/uniref30/UniRef30_2021_03 \\\n --uniref90_database_path=$AF2_DATA_DIR/uniref90/uniref90.fasta \\\n --mgnify_database_path=$AF2_DATA_DIR/mgnify/mgy_clusters_2022_05.fa \\\n --template_mmcif_dir=$AF2_DATA_DIR/pdb_mmcif/mmcif_files/ \\\n --obsolete_pdbs_path=$AF2_DATA_DIR/pdb_mmcif/obsolete.dat \\\n --pdb_seqres_database_path=$AF2_DATA_DIR/pdb_seqres/pdb_seqres.txt \\\n --uniprot_database_path=$AF2_DATA_DIR/uniprot/uniprot.fasta \\\n --bfd_database_path=$AF2_DATA_DIR/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \\\n --max_template_date=2022-01-01 \\\n --num_multimer_predictions_per_model=10 \\\n --use_gpu_relax\n
"},{"location":"tools_and_packages/alphafold2/#15-example-sbatch-script","title":"1.5. Example SBATCH script","text":"#!/bin/bash\n#SBATCH --job-name=AF2\n#SBATCH --gres=gpu:2\n#SBATCH --cpus-per-task=10\n#SBATCH --mail-type=BEGIN,END\n#SBATCH --mail-user=YOUR-EMAIL\n\nmodule load miniconda/latest cuda/11.4 cudnn/8.2.2\nsource activate /maps/projects/dan1/data/Brickman/conda/envs/af2\ncd ~/projects/data/Brickman/alphafold\nmkdir -p /scratch/tmp/alphatest\nexport AF2_DATA_DIR=\"~/projects/data/Alphafold2/24022023\"\n\nsrun python run_alphafold.py \\\n--fasta_paths=~/projects/data/Brickman/target_01.fasta \\\n--output_dir=/scratch/tmp/alphatest \\\n--model_preset=monomer \\\n--db_preset=full_dbs \\\n--data_dir=$AF2_DATA_DIR \\\n--uniref30_database_path=$AF2_DATA_DIR/uniref30/UniRef30_2021_03 \\\n--uniref90_database_path=$AF2_DATA_DIR/uniref90/uniref90.fasta \\\n--mgnify_database_path=$AF2_DATA_DIR/mgnify/mgy_clusters_2022_05.fa \\\n--pdb70_database_path=$AF2_DATA_DIR/pdb70/pdb70 \\\n--template_mmcif_dir=$AF2_DATA_DIR/pdb_mmcif/mmcif_files/ \\\n--obsolete_pdbs_path=$AF2_DATA_DIR/pdb_mmcif/obsolete.dat \\\n--bfd_database_path=$AF2_DATA_DIR/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \\\n--max_template_date=2022-01-01 \\\n--use_gpu_relax\n
"},{"location":"tools_and_packages/alphafold2/#2-installation","title":"2. Installation","text":"conda create --prefix /maps/projects/dan1/data/Brickman/conda/envs/af2 python=3.8\nsource activate /maps/projects/dan1/data/Brickman/conda/envs/af2\n\nmamba install hmmer\npip install py3dmol\nmamba install pdbfixer==1.7\nmamba install -c conda-forge openmm=7.5.1\n\ncd /maps/projects/dan1/data/Brickman/\ngit clone --branch main https://github.com/deepmind/alphafold alphafold\npip install -r ./alphafold/requirements.txt\npip install --no-dependencies ./alphafold\n\n# stereo chemical props needs to be in common folder\nwget \u2013q \u2013P /maps/projects/dan1/data/Brickman/alphafold/alphafold/common/ https://git.scicore.unibas.ch/schwede/openstructure/-/raw/7102c63615b64735c4941278d92b554ec94415f8/modules/mol/alg/src/stereo_chemical_props.txt\n\n# skipping content part\nmkdir -p ./alphafold/data/params && cd ./alphafold/data/params\nwget https://storage.googleapis.com/alphafold/alphafold_params_colab_2022-12-06.tar\ntar --extract --verbose --preserve-permissions --file alphafold_params_colab_2022-12-06.tar\npip install ipykernel ipywidgets tqdm\npip install --upgrade scprep phate\n\n# Install jax\nmodule load miniconda/latest\nmodule load cuda/11.4 cudnn/8.2.2\nexport CUDA_VISIBLE_DEVICES='3'\npip install \"jax[cuda11_cudnn82]\" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\n\n# fix last issues\nmamba install -c conda-forge -c bioconda hhsuite\nmamba install -c bioconda kalign3\npip install numpy==1.21.6\n
"},{"location":"tools_and_packages/alphafold2/#21-download-references","title":"2.1. Download references","text":"Note
Downloading references will not work on one try, had to do a lot of manual re-running of scripts.
# create folder\nmkdir -p ~/projects/data/Alphafold2/24022023\ncd ~/projects/data/Alphafold2/24022023\n\n# Download all databases\nsh download_all_data.sh ~/projects/data/Alphafold2/24022023/ > download.log 2> download_all.log\n\n# Some fix-ups\n# mmCIF will not work because the firewall blocks the port, so I found this workaroud online\n# ref: https://github.com/deepmind/alphafold/issues/196\nwget -e robots=off -r --no-parent -nH --cut-dirs=7 -q ftp://ftp.ebi.ac.uk/pub/databases/pdb/data/structures/divided/mmCIF/ -P \"${RAW_DIR}\"\n\n# Last step is to fix all the permissions\nchmod -R 755 24022023/\n
"},{"location":"tools_and_packages/alphafold2/#references","title":"References","text":"If your nascent RNA-seq data is already aligned, bw suitable for use with dReg can be prepared using Danko-Lab RunOnBamToBigWig
If you have fastq files from PRO-seq, GRO-seq, or CHrO-seq, run the Danko-Lab's mapping pipeline using the shared dReg_dataprep
conda environment
Example SBATCH script for mapping pipeline
#!/bin/bash\n\n#SBATCH --job-name=pro_align\n#SBATCH -c 20\n#SBATCH --mem=30gb\n#SBATCH --time=00-24:00:00\n#SBATCH --output=01_proseq_alignment.out\n#SBATCH --mail-type=BEGIN,END\n#SBATCH --mail-user=YOUR-EMAIL\n\nmodule load miniconda/latest\nsource activate dReg_dataprep\n\nPROSEQ=(\"/maps/projects/dan1/data/Brickman/proseq2.0/proseq2.0.bsh\")\nGENO=(\"/scratch/Brickman/references/mus_musculus/ensembl/GRCm38_102/\")\nRESL=(\"/maps/projects/dan1/data/Brickman/projects/NAME_DATE/data/external/proseq/\")\nSAMPLES=(\"SRX14164616_SRR18010280 SRX14164617_SRR18010278\")\n\nfor sample in ${SAMPLES}; do\n bash ${PROSEQ} -i ${GENO}bwa \\\n -c ${GENO}GRCm38.102.genome \\\n -PE --RNA5=R2_5prime --UMI1=6 \\\n -O ${RESL} \\\n -I ${sample} \\\n --thread=20\ndone\n
"},{"location":"tools_and_packages/dReg/#gpu-check","title":"GPU check","text":"Check available GPUs and running processes before using dReg. GPU 0 is reserved for Brickman group
nvidia-smi\n
"},{"location":"tools_and_packages/dReg/#example-dreg-script","title":"Example dReg script","text":"#!/bin/bash\n\n#SBATCH --job-name=dREG\n#SBATCH -c 30\n#SBATCH --mem=30gb\n#SBATCH --time=00-24:00:00\n#SBATCH --output=01-1_dREG.out\n#SBATCH --mail-type=BEGIN,END\n#SBATCH --mail-user=YOUR-EMAIL\n\nmodule load miniconda/latest cuda/11.8-dangpu cudnn/8.6.0-dangpu\nsource activate dReg\n\nBW=(\"../data/assays/RNA_INITIAL_DATE/processed/bw/\")\nRESL=(\"../results/01/dREG/\")\ndREG=(\"/projects/dan1/data/Brickman/dREG/run_dREG.bsh\")\nMODEL=(\"/projects/dan1/data/Brickman/dREG/resources/asvm.gdm.6.6M.20170828.rdata\")\n\n\nSAMPLES=(\"0h_A 0h_B 2h_A 2h_B\")\n\nfor sample in ${SAMPLES}; do\n bash ${dREG} ${BW}${sample}_sorted_filt_dedup_plus.bw ${BW}${sample}_sorted_filt_dedup_minus.bw \\\n ${RESL}${sample}_test ${MODEL} \\\n 30 0\ndone\n
"},{"location":"tools_and_packages/dReg/#2-installation","title":"2. Installation","text":""},{"location":"tools_and_packages/dReg/#installing-dreg","title":"Installing dReg","text":"Note: Python version in conda env must be 3.8, and R version < 4.0
cd /maps/projects/dan1/data/Brickman/conda/\nmodule load miniconda/latest\nmamba env create -p /projects/dan1/data/Brickman/conda/envs/dReg -f dREG.yml\nsource activate dReg\n\ncd /maps/projects/dan1/data/Brickman/\ngit clone https://github.com/Danko-Lab/dREG\ncd dREG\nmake R_dependencies\n\nR\ndevtools::install_github(\"CshlSiepelLab/RPHAST\")\ndevtools::install_version(\"MASS\", version=\"7.3-51.5\", repos=\"https://mirrors.dotsrc.org/cran/\")\ninstall.packages(\"e1071\", repos=\"https://mirrors.dotsrc.org/cran/\")\ndevtools::install_version(\"randomForest\", version=\"4.6-14\", repos=\"https://mirrors.dotsrc.org/cran/\")\nquit()\n\nmake dreg\nmkdir resources\ncd resources\nwget ftp://cbsuftp.tc.cornell.edu/danko/hub/dreg.models/asvm.gdm.6.6M.20170828.rdata\n
"},{"location":"tools_and_packages/dReg/#installing-rgtsvm","title":"Installing Rgtsvm","text":"Rgtsvm is required for dReg to use GPU resources
# make sure in dREG repo and that dReg environment is activated\ncd /maps/projects/dan1/data/Brickman/dREG\nsource activate dReg\n\nR\ninstall.packages(c(\"bit64\", \"snow\", \"SparseM\"), repos=\"https://mirrors.dotsrc.org/cran/\")\ndevtools::install_version(\"lattice\", version=\"0.20-41\", repos=\"https://mirrors.dotsrc.org/cran/\")\ninstall.packages(\"Matrix\", repos=\"https://mirrors.dotsrc.org/cran/\")\nquit()\nmamba install -c conda-forge boost=1.70.0\n\nmkdir third-party\ncd third-party\ngit clone https://github.com/Danko-Lab/Rgtsvm.git\ncd Rgtsvm\n\nmodule load cuda/11.8-dangpu\nmodule load cudnn/8.6.0-dangpu\n\nR CMD INSTALL --configure-args=\"--with-boost-home=$CONDA_PREFIX\" Rgtsvm\n
"},{"location":"tools_and_packages/packages/","title":"Bioinformatics tools","text":"Tool Description NGS Language Link Functional enrichment on genomic regions CHIP-seq ATAC-seq R https://github.com/jokergoo/rGREAT Pseudotime inference scRNA-seq Python https://github.com/LouisFaure/scFates nan Single-cell analysis package scRNA-seq Python https://github.com/scverse/scanpy nan AI probabilistic package for transfer learning DR and more scRNA-seq Python https://github.com/scverse/scvi-tools Gene set enrichment analysis on steroids scRNA-seq Python https://github.com/zqfang/GSEApy nan UpsetR on stereoids (complicated Venn Diagrams) Plotting R https://github.com/krassowski/complex-upset nan Complex heatmap Plotting Python https://github.com/DingWB/PyComplexHeatmap nan"},{"location":"tools_and_packages/ucsc_liftover/","title":"UCSC liftover tool","text":"Documentation for UCSC liftover.
"},{"location":"tools_and_packages/ucsc_liftover/#issue-separate-peaks-map-to-same-coordinates-after-liftover","title":"Issue: separate peaks map to same coordinates after liftover","text":"Remove any peaks with overlapping coordinates after liftover before using the lifted over peak file:
#!/bin/bash\n\nmodule load bedtools\n\nEXTL=(\"../data/external/\")\n\n# Sort the lifted over peakfile for use with bedtools\nsort -k1,1 -k2,2n ${EXTL}wong_fig3c_peaks_GRCh38.bed > ${EXTL}peaks.tmp && mv ${EXTL}peaks.tmp ${EXTL}wong_fig3c_peaks_GRCh38.bed\n\n# Bedtools merge count rows contributing to merged peaks (overlapping peaks will have count > 1)\nbedtools merge -i ${EXTL}wong_fig3c_peaks_GRCh38.bed -c 1 -o count > ${EXTL}counted.bed\n\n# Get non-overlapping peaks\nawk '/\\t1$/{print}' ${EXTL}counted.bed > ${EXTL}filtered.bed\n\n# Intersect original file with non-overlapping peaks and output overlapping peaks\nbedtools intersect -wa -a ${EXTL}wong_fig3c_peaks_GRCh38.bed -b ${EXTL}filtered.bed > ${EXTL}wong_fig3c_peaks_GRCh38_correct_liftover.bed\nbedtools intersect -v -a ${EXTL}wong_fig3c_peaks_GRCh38.bed -b ${EXTL}filtered.bed > ${EXTL}wong_fig3c_peaks_GRCh38_overlapping.bed\n
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":"Welcome to the Brickman Lab wiki!
Here you can find documentation for our analysis workflows. For more information about our research, visit the Brickman Group website.
"},{"location":"#transcriptional-basis-for-cell-fate-choice","title":"Transcriptional basis for cell fate choice","text":"The Brickman Group aims to understand the transcriptional basis for early embryonic lineage specification.
We are interested in the dynamic mechanisms by which cells can both reversible prime towards a particular fate or undergo a transition into commitment.
"},{"location":"#publications","title":"Publications","text":"Selected publicationsWong, Y. F., Kumar, Y., Proks, M., Herrera, J. A. R., Rothov\u00e1,M. M., Monteiro, R. S., Pozzi, S., Jennings, R. E., Hanley, N. A., Bickmore, W. A., and Brickman, J. M. (2023). Expansion of ventral foregut is linked to changes in the enhancer landscape for organ-specific differentiation. Nature Cell Biology, doi: 10.1038/s41556-022-01075-8.
Perera, M., Nissen, S. B., Proks, M., Pozzi, S., Monteiro, R. S., Trusina, A., and Brickman, J. M. (2022). Transcriptional heterogeneity and cell cycle regulation as central determinants of Primitive Endoderm priming. eLife, doi: 10.7554/eLife.78967.
Rothov\u00e1, M. M., Nielsen, A. V., Proks, M., Wong, Y. F., Riveiro, A. R., Linneberg-Agerholm, M., David, E., Amit, I., Trusina, A., and Brickman, J. M. (2022). Identification of the central intermediate in the extra-embryonic to embryonic endoderm transition through single-cell transcriptomics. Nature Cell Biology, doi: 10.1038/s41556-022-00923-x.
Riveiro, A. R., and Brickman, J. M. (2020). From pluripotency to totipotency: an experimentalist's guide to cellular potency. Development, doi: 10.1242/dev.189845.
Hamilton, W.B., Mosesson, Y., Monteiro, R.S., Emdal, K.B., Knudsen, T.E., Francavilla, C., Barkai, N., Olsen, J.V. and Brickman, J.M. (2019). Dynamic lineage priming is driven via direct enhancer regulation by ERK. Nature, doi: 10.1038/s41586-019-1732-z.
Weinert, B.T., Narita, T., Satpathy, S., Srinivasan, B., Hansen, B.K., Scholz, C., Hamilton, W.B., Zucconi, B.E., Wang, W.W., Liu, W.R., Brickman, J.M., Kesicki, E.A., Lai, A., Bromberg, K.D., Cole, P.A., and Choudhary, C. (2018). Time-Resolved Analysis Reveals Rapid Dynamics and Broad Scope of the CBP/p300 Acetylome. Cell 174, 231-244.e212, doi:10.1016/j.cell.2018.04.033.
Anderson, K.G.V., Hamilton, W.B., Roske, F.V., Azad, A., Knudsen, T.E., Canham, M.A., Forrester, L.M., and Brickman, J.M. (2017). Insulin fine-tunes self-renewal pathways governing naive pluripotency and extra-embryonic endoderm. Nature Cell Biology 19, 1164-1177, doi:10.1038/ncb3617.
Nissen, S.B., Perera, M., Gonzalez, J.M., Morgani, S.M., Jensen, M.H., Sneppen, K., Brickman, J.M., and Trusina, A. (2017). Four simple rules that are sufficient to generate the mammalian blastocyst. PLoS Biol 15, e2000737, doi:10.1371/journal.pbio.2000737. *joint senior author
Migueles, R.P., Shaw, L., Rodrigues, N.P., May, G., Henseleit, K., Anderson, K.G., Goker, H., Jones, C.M., de Bruijn, M.F., Brickman, J.M., and Enver, T. (2017). Transcriptional regulation of Hhex in hematopoiesis and hematopoietic stem cell ontogeny. Developmental Biology 424, 236-245, doi:10.1016/j.ydbio.2016.12.021.
Illingworth, R.S., H\u00f6lzenspies, J.J., Roske, F.V., Bickmore, W.A., and Brickman, J.M. (2016). Polycomb enables primitive endoderm lineage priming in embryonic stem cells. Elife 5, doi:10.7554/eLife.14926.
Martin Gonzalez, J., Morgani, S.M., Bone, R.A., Bonderup, K., Abelchian, S., Brakebusch, C., and Brickman, J.M. (2016). Embryonic Stem Cell Culture Conditions Support Distinct States Associated with Different Developmental Stages and Potency. Stem Cell Reports 7, 177-191, doi:10.1016/j.stemcr.2016.07.009.
"},{"location":"#datasets","title":"Datasets","text":"Rothova et al., (2022). Nature Cell Biology. Single-cell RNA-seq datasets from FOXA2Venus reporter mouse embryos and embryonic stem cell differentiation towards endoderm.
"},{"location":"0_setup/","title":"First time on danserver","text":"For starting on the server make sure to read:
ssh $USER@danhead01fl.unicph.domain
nano ~/.bash_profile
if [ -f ~/.bashrc ]; then\n . ~/.bashrc\nfi\n
nano ~/.bashrc
# .bashrc\n\n# Source global definitions\nif [ -f /etc/bashrc ]; then\n . /etc/bashrc\nfi\n\n# User specific environment\nif ! [[ \"$PATH\" =~ \"$HOME/.local/bin:$HOME/bin:\" ]]\nthen\n PATH=\"$HOME/.local/bin:$HOME/bin:$PATH\"\nfi\nexport PATH\n\n# Uncomment the following line if you don't like systemctl's auto-paging feature:\n# export SYSTEMD_PAGER=\n\n# User specific aliases and functions\n### Source DanGPU definitions\nif [ -f /maps/projects/dan1/apps/etc/bashrc ]; then\n . /maps/projects/dan1/apps/etc/bashrc\nfi\n\n### Source Brickman definitions\nif [ -f /maps/projects/dan1/data/Brickman/config/brickman.bashrc ]; then\n . /maps/projects/dan1/data/Brickman/config/brickman.bashrc\nfi\n
Brickman
folderThis section provides guidelines for effective research data management within our lab. By adopting these guidelines, we aim to improve data organization and naming conventions, leading to enhanced data governance and research efficiency. The guidelines include the following steps:
Assays
and Projects
folders.metadata.yml
in each folderAssays
and Projects
folders and browse it with a Panel python app.Projects
folders will be version controlled with Github and the Brickman organization.Projects
reports will be displayed under the Brickman organization GitHub Pages.Projects
will be syncronized and archived in Zenodo, which will give a DOI that can be used in a publication.Assays
folder will be uploaded to GEO, with the information provided in the metadata file.To ensure efficient data management, it is important to establish a consistent approach to organizing research data. We consider the following practices:
We are currently using a cookiecutter template to generate a folder structure. Use cruft when generating assay and project folders to allow us to validate and sync old templates with the latest version.
See this section to get started with a new project/assay.
"},{"location":"1_rdm-guidelines/#12-assay-folder","title":"1.2 Assay folder","text":"For each NGS experiment there should be an Assay
folder that will contain all experimental datasets (raw files and pipeline processed files). Inside Assay
there will be subfolders named after a unique NGS ID and the date it was created:
<Assay-ID>_YYYYMMDD\n
"},{"location":"1_rdm-guidelines/#assay-id-code-names","title":"Assay ID code names","text":"CHIP
: ChIP-seqRNA
: RNA-seqATAC
: ATAC-seqSCR
: scRNA-seqPROT
: Mass Spectrometry AssayCAT
: Cut&TagCAR
: Cut&RunRIME
: Rapid Immunoprecipitation Mass spectrometry of Endogenous proteinsFor example CHIP_20230101
is a ChIPseq assay made on 1st January 2023.
CHIP_20230424\n\u251c\u2500\u2500 description.yaml\n\u251c\u2500\u2500 metadata.yaml\n\u251c\u2500\u2500 pipeline.md\n\u251c\u2500\u2500 processed\n\u2514\u2500\u2500 raw\n \u251c\u2500\u2500 .fastq.gz\n \u2514\u2500\u2500 samplesheet.csv\n
There should be another folder called Projects
that will contain project information and data analysis.
A project may use one or more assays to answer a scientific question. This should be, for example, all the data analysis related to a publication.
The project folder should be named after a unique identifier, such as:
<Project-ID>_YYYYMMDD\n
<Project-ID>
should be the initials of the owner of the project folder and the publication year, e.g. JARH_et_al_20230101
.
<Project-ID>_20230424\n\u251c\u2500\u2500 data\n\u2502 \u251c\u2500\u2500 assays\n\u2502 \u251c\u2500\u2500 external\n\u2502 \u2514\u2500\u2500 processed\n\u251c\u2500\u2500 documents\n\u2502 \u2514\u2500\u2500 Non-sensitive_NGS_research_project_template.docx\n\u251c\u2500\u2500 notebooks\n\u2502 \u2514\u2500\u2500 01_data_analysis.rmd\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 reports\n\u2502 \u251c\u2500\u2500 figures\n\u2502 \u2502 \u2514\u2500\u2500 01_data_analysis\n\u2502 \u2514\u2500\u2500 01_data_analysis.html\n\u251c\u2500\u2500 requirements.txt\n\u251c\u2500\u2500 results\n\u2502 \u2514\u2500\u2500 01_data_analysis/\n\u251c\u2500\u2500 scripts\n\u251c\u2500\u2500 description.yml\n\u2514\u2500\u2500 metadata.yml\n
00_preprocessing
We will have to setup a cron job to perform one-way sync between the /projects
folder and NGS_data
folder. All the analysis will be done on danGPU server, with no exceptions!
After project is done and published, it will be moved to NGS_data
.
YYYYMMDD
_
. Words in each section are written in camelCase. For example: field1_word1Word2.txt
.Transcriptomics metadata standards and fields
More info on naming conventions for different types of files and analysis is in development.
name description naming_convention file format example .fastq raw sequencing reads nan nan sampleID_run_read1.fastq .fastqc quality control from fastqc nan nan sampleID_run_read1.fastqc .bam aligned reads nan nan sampleID_run_read1.bam GTF sequence annotation nan nan one of https://www.gencodegenes.org/ GFF sequence annotation nan nan one of https://www.gencodegenes.org/ .bed genome locations nan nan nan .bigwig genome coverage nan nan nan .fasta sequence data (nucleotide/aminoacid) nan nan one of https://www.gencodegenes.org/ Multiqc report QC aggregated report <assayID>_YYYYMMDD.multiqc multiqc RNA_20200101.multiqc Count matrix final count matrix <assayID>_cm_aligner_YYYYMMDD.tsv tsv RNA_cm_salmon_20200101.tsv DEA differential expression analysis results DEA_<condition1-condition2>_LFC<absolute_threshold>_p<pvalue decimals>_YYYYMMDD.tsv tsv DEA_treat-untreat_LFC1_p01_20200101.tsv DBA differential binding analysis results DBA_<condition1-condition2>_LFC<absolute_threshold>_p<pvalue decimals>_YYYYMMDD.tsv tsv DBA_treat-untreat_LFC1_p01_20200101.tsv MAplot MA plot MAplot_<condition1-condition2>_YYYYMMDD.jpeg jpeg MAplot_treat-untreat_20200101.jpeg Heatmap plot Heatmap plot of anything heatmap_<type>_YYYYMMDD.jpeg jpeg Heatmap_sampleCor_20200101.jpeg Volcano plot Volcano plot volcano_<condition1-condition2>_YYYYMMDD.jpeg jpeg volcano_treat-untreat_20200101.jpeg Venn diagram Venn diagram venn_<type>_YYYYMMDD.jpeg jpeg venn_consensus_20200101.jpeg Enrichment table Enrichment results nan tsv nan"},{"location":"1_rdm-guidelines/#2-metadata-and-documentation","title":"2. Metadata and documentation","text":"Accurate documentation and metadata play a crucial role in facilitating data discovery and interpretation. Consider the following guidelines:
In development.
Metadata field Definition Format Example project Project name <name>_<keyword>_YYYY lundregan_oct4_2023 author Owner of the project <First name> <Surname> Sarah Lundregran date Date of creation YYYYMMDD 20230101 description Short description of the project Plain text This is a project describing the effect of Oct4 perturbation after pERK activation"},{"location":"1_rdm-guidelines/#3-data-catalogue-and-browser","title":"3. Data catalogue and browser","text":"@SLundregan is in the process of building a prototype for Assay
, using the metadata contained in all description.yml
and metadata.yml
files in the assay folder. This will be in the form of an SQLite database that that is easily updatable by running a helper script.
@SLundregan is also working on a browsable database using Panel python app. The app will display the latest version of the SQLite database. Clicking on an item from the database will open a tab containing all available metadata for the assay.
Also, it would be nice if you can create an Assay
folder directly from there, making it easy to fill up the info for the metadata and GEO submission (see below)
In the future, you could ideally visualize an analysed single cell RNAseq dataset by opening Cirrocumulus session.
"},{"location":"1_rdm-guidelines/#4-projects-version-control","title":"4.Projects
version control","text":"All projects should be version controlled using GitHub under the Brickman organization. After creating a cookiecutter template, initiate a git repository on the folder. The Git repository can stay private until it is ready for publication.
"},{"location":"1_rdm-guidelines/#5-projects-github-pages","title":"5.Projects
GitHub pages","text":"Using GitHub pages, it is possible to display your data analyses (or anything related to the project) inside the Projects
folder so that they are open to the public in a html format. This is great for transparency and reproducibility purposes. This can be done after the paper has been made public (it is not possible to do with a private repository without paying).
Info on how this is done should be put here
"},{"location":"1_rdm-guidelines/#6-project-archiving-in-zenodo","title":"6.Project
archiving in Zenodo","text":"Before submitting, link the repository to Zenodo and then create a Git release. This release will be caught by Zenodo and will give you a DOI that you can submit along the manuscript.
"},{"location":"1_rdm-guidelines/#7-data-upload-to-geo","title":"7. Data upload to GEO","text":"The raw data from NGS experiments will be uploaded to the Gene Expression Omnibus (GEO). Whenever a new Assay folder is created, the data owner must fill up the required documentation and information needed to make the GEO submission as smooth as possible.
"},{"location":"1_rdm-guidelines/#8-create-a-data-management-plan","title":"8. Create a Data Management Plan","text":"From the University of Copenhagen RDM team
\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200bA Data Management Plan (DMP) is a planning tool that helps researchers to establish good practices for working with physical m\u200baterial and data in a research project. A DMP covers all relevant aspects of research data management throughout the project. Writing a DMP early on in a project helps:
We are have written a DMP template that it is prefilled with repetitive information using DMPonline and the Horizon Europe guidelines. This template contains all the necessary information regarding common practices that we will use, the repositories we use for NGS, etc. The template is part of the project
folder template, under documents
. You can check the file here.
The Horizon Europe template is mostly focused on digital data, so it may not be the best fit for the needs of the Brickman Lab, which is mostly a wet lab with some bioinformatics. We will start working on another DMP based on the KU template, which is designed for both physical and digital data.
"},{"location":"2_starting-assay-project/","title":"Starting a new assay or project","text":"Whenever you obtain sequencing data from Genomic's Platform, you have to create an Assay. By running the commands below, you will have option to fill all required information about the experiment. This workflow will help us with tracking of all sequencing done in our lab.
"},{"location":"2_starting-assay-project/#assay","title":"Assay","text":"When you sequence an experiment, we create an Assay out of it, so we can use it in a project afterwards.
Log in to danhead and run the command:
create_assay\n
"},{"location":"2_starting-assay-project/#project","title":"Project","text":"Every time you want to make some analysis, you should create a project. Our folder structure will allow you to easily link various experiments to your project and make your analysis easier.
Please use the following naming convention: surname-<YOUR_CODENAME>
create_project\n
Link required assays to your project.
ln -s /maps/projects/dan1/data/Brickman/assays/<ASSAY_ID> /maps/projects/dan1/data/Brickman/projects/<PROJECT_ID>/data/assays/\n
Link external data if needed.
ln -s /maps/projects/dan1/data/Brickman/shared /maps/projects/dan1/data/Brickman/projects/<PROJECT_ID>/data/external/\n
"},{"location":"3_pipelines/","title":"Running pipelines","text":"By default, we run nf-core pipelines. To run a pipeline, read the official documentation with an example.
"},{"location":"3_pipelines/#monitoring-runs-with-nextflow-tower","title":"Monitoring runs with Nextflow Tower","text":"This is a guide on how to use Nextflow Tower to monitor nf-core pipeline runs.
We have created an API token for our GitHub account (brickmanlab) and restricted it to run only pipelines, nothing else. The TOWER_WORKSPACE_ID
and TOWER_ACCESS_TOKEN
are stored in Brickman/config/brickman.bashrc
.
To do more advanced things, you have to create your own personal access token.
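As a sketch of how the stored credentials are used (the full path to brickman.bashrc below is an assumption), any nf-core run can then be monitored by adding -with-tower; Nextflow reads TOWER_ACCESS_TOKEN and TOWER_WORKSPACE_ID from the environment:

```bash
# Hedged sketch: load the shared Tower credentials, then launch a monitored run.
source /maps/projects/dan1/data/Brickman/config/brickman.bashrc  # path assumed
nextflow run nf-core/rnaseq -profile test --outdir tower_test -with-tower
```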
"},{"location":"3_pipelines/#tower-cli-installation","title":"Tower CLI installation","text":"The tower cli1 is required to be installed only once to connect the server as a computing resource. Afterward, it's not required any more2.
# Download the latest version of Tower CLI:\nwget https://github.com/seqeralabs/tower-cli/releases/download/v0.7.3/tw-0.7.3-linux-x86_64\n\n# Make the file executable and move it to a directory accessible via your $PATH:\nmkdir -p ~/.local/bin && mv tw-* ~/.local/bin/tw && chmod +x ~/.local/bin/tw\n
Tower CLI configuration \u21a9
Tower Agent \u21a9
If you work with conda
you can use mamba
instead, which is a faster tool for installing packages.
We created shared conda
environments to simplify your life.
conda env list
source activate brickman
Here is an example of how we created the shared environment called brickman
.
module load miniconda/latest\n\nconda create --prefix /maps/projects/dan1/data/Brickman/conda/envs/brickman python=3.10\nsource activate brickman\npip install cruft cookiecutter\n\nchmod -R 755 /maps/projects/dan1/data/Brickman/conda/envs/brickman\n
To install a shared conda
environment for the lab, follow the steps below.
brickman-<NGS>.yml
mamba env create -p /projects/dan1/data/Brickman/conda/envs/brickman-<NGS> -f brickman-<NGS>.yml\n
"},{"location":"4_conda/#example-conda-environment","title":"Example conda environment","text":"Configuration for brickman-chipseq
environment.
name: brickman-chipseq\nchannels:\n - conda-forge\n - bioconda\n - anaconda\n - defaults\ndependencies:\n - bioconda::bedtools==2.31.0\n - bioconda::deeptools==2.31.0\n - bioconda::homer==4.11\n - bioconda::intervene==0.6.4\n - bioconda::macs2==2.2.9.1\n - bioconda::pygenometracks==3.8\n - bioconda::seacr==1.3\n - bioconda::samtools==1.17\nprefix: /projects/dan1/data/Brickman/conda/envs/brickman-chipseq\n
To install the environment, run
mamba env create -p /projects/dan1/data/Brickman/conda/envs/brickman-chipseq -f brickman-chipseq.yml\n
"},{"location":"4_conda/#modules","title":"Modules","text":"module avail\n\nmodule load miniconda/latest\n
"},{"location":"5_vscode/","title":"Setup R with Visual Studio Code","text":"This setup guides you through setting up R
in VSCode so you can use it on dancmpn01fl
and dancmpn02fl
computing nodes.
Info
The original RStudio server is using 4.0.5 version. If you want to stick this version, make sure to specify it when loading modules.
Why do you need this?
Because RStudio server sucks when you don't have a license and our place, so alternative it is. Also, VSCode has a bunch of plugins.
"},{"location":"5_vscode/#setting-up-remote-tunnels","title":"Setting up Remote Tunnels","text":"Warning
In this example we use version R/4.2.1. If you want to use a different one change the R version!
"},{"location":"5_vscode/#requirements","title":"Requirements","text":"ssh $USER@danhead01fl.unicph.domain
tmux new -s rstudio
srun -c 2 --mem=30gb --time=0-4:00:00 --pty bash
module load vscode_cli gcc/11.2.0 R/4.2.1 quarto
code tunnel
Microsoft account
when asked how you would like to log in to VScodeRemote Explorer
Sign in to the tunnels registered with Microsoft
dancmpn01flunicphdom
or dancmpn02flunicphdom
->
To use R
, install additional packages by clicking Extensions
in the left panel. Search for packages:
Quarto
Next, top panel lick View
-> Terminal
-> Write R
and hit ENTER
install.packages(\"languageserver\")
YES
then hit ENTER30
(Denmark servers to download packages)install.packages(\"httpgd\")
q()
to get outCode
-> Settings
-> Settings
r.plot.useHttpgd
If everything went well, you should be able to do this. If not, you know what to do.
"},{"location":"5_vscode/#i-already-did-the-setup-i-want-my-r-again","title":"I already did the setup, I want my R again","text":"ssh $USER@danhead01fl.unicph.domain
tmux new -s rstudio
srun -c 2 --mem=30gb --time=0-4:00:00 --pty bash
module load vscode_cli gcc/11.2.0 R/4.2.1 quarto
code tunnel
Remote Explorer
dancmpn01flunicphdom
or dancmpn02flunicphdom
curl -Lk 'https://code.visualstudio.com/sha/download?build=stable&os=cli-alpine-x64' --output vscode_cli.tar.gz\ntar -xf vscode_cli.tar.gz\n
"},{"location":"5_vscode/#known-issues","title":"Known issues","text":"VSCode can be installed as a server code-server
, however it is not possible to listen on the port when on computing node. This works only in the case of dangpu01fl
.
Error when trying to do reverse ssh:
error listen EADDRINUSE: address already in use 127.0.0.1:8080\n
VSCode code-server
is an alternative to code tunnel
that consists of running code-server on a compute node and accessing it via a web browser using reverse ssh
tunnel.
curl -fL https://github.com/coder/code-server/releases/download/v4.90.2/code-server-4.90.2-linux-amd64.tar.gz | tar -C /maps/projects/dan1/data/Brickman/shared/modules/software/code-server/4.90.2 -xz\n
ssh user@danhead01fl.unicph.domain\ntmux new\nsrun -c 2 --mem=30gb --time=0-4:00:00 -p gpuqueue --pty bash\nmodule load code-server\ncode-server\n# On local machine\nssh -fNL localhost:8080:localhost:8080 $USER@dangpu01fl.unicph.domain\n
"},{"location":"6_handy-scripts/","title":"Handy scripts","text":""},{"location":"6_handy-scripts/#geo-submission","title":"GEO submission","text":"~/Brickman/projects/
or ~/ucph/ndir/SUN-RENEW-Brickman/
Transfer files
and copy the login information for the ftpNOTE: before running the command below, make sure you are already in the folder and you see all the folder/files you want to upload. It will make the steps below simpler.
# we run tmux session in case we loose connection\ntmux new -s geo\n\n# this loges you to FTP\nsftp geoftp@sftp-private.ncbi.nlm.nih.gov\npassword: <PASSWORD>\n\ncd uploads/<FOLDER>\nmkdir <RNAseq>\ncd <RNAseq>\nmput *\n
"},{"location":"miscellaneous/dropbox/","title":"Moving Dropbox to SUND","text":"This is a step-by-step guide how I moved our Dropbox into SUND organized by KU IT. In first attempt I have tried moving the files into OneDrive, but because there might be issues with long filenames I eventually ran into more and more problems
Simpler solution is just to move things to SAMBA drives.
First, ssh into the server
ssh danhead01fl\ntmux new -s dropbox-transfer\nmodule load rclone/1.65.1\n
"},{"location":"miscellaneous/dropbox/#linking-remotes","title":"Linking remotes","text":""},{"location":"miscellaneous/dropbox/#dropbox","title":"Dropbox","text":"> n\n> dropbox\n> client_id <ENTER>\n> client_secret <ENTER>\n> y\nforward port `ssh -fNL localhost:53682:localhost:53682 danhead01fl` and access the website locally\n
"},{"location":"miscellaneous/dropbox/#onedrive","title":"Onedrive","text":"> n\n> onedrive\n> client_id <ENTER>\n> client_secret <ENTER>\n> region <ENTER>\n> y\nforward port `ssh -fNL localhost:53682:localhost:53682 danhead01fl` and access the website locally\n> config_type 3\n> https://alumni.sharepoint.com/sites/UCPH_BrickmanLab\n> y\n
"},{"location":"miscellaneous/dropbox/#test-connections","title":"Test connections","text":"rclone lsd Dropbox:\nrclone lsd dropbox_jb:\nrclone lsd Onedrive:\n
"},{"location":"miscellaneous/dropbox/#copy-files","title":"Copy files","text":"I have started first with manual folders because we had to many folders and sometimes there are timeout issues.
rclone copy --progress --checksum Dropbox:Computerome ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Computerome\nrclone copy --progress --checksum Dropbox:Courses ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Courses\nrclone copy --progress --checksum Dropbox:Grants ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Grants\nrclone copy --progress --checksum Dropbox:Other ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Other\nrclone copy --progress --checksum Dropbox:Papers ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Papers\nrclone copy --progress --checksum Dropbox:Pictures ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/Pictures\nrclone copy --progress --checksum Dropbox:People ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/People\nrclone copy --progress --checksum Dropbox:sc_seq_analysis ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/sc_seq_analysis\n
After the initial copy, I ran again copy this time of all the folders, most of them should be present already. This is to make sure all files were moved.
rclone copy \\\n --progress --checksum \\\n --exclude=\"People/Fung/Home/IRCMS_interview_2024**\" \\\n --exclude=\"People/Fung/Home/MB1016613_backup**\" \\\n --exclude=\"GEO_data/**\" \\\n Dropbox: ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/\n\nrclone copy --progress --checksum Dropbox:GEO_data ~/ucph/ndir/SUN-RENEW-Brickman/GEO_data/\nrclone copy --progress --checksum dropbox_jb: ~/ucph/ndir/SUN-RENEW-Brickman/Dropbox/JoshBrickman\n
"},{"location":"miscellaneous/ku-computer/","title":"KU computer setup","text":""},{"location":"miscellaneous/ku-computer/#conda","title":"Conda","text":"Go here and download Miniconda PKG not BASH. If you're running M1/2 please follow this guideline.
"},{"location":"miscellaneous/ku-computer/#example-for-chip-seq-setup","title":"Example for CHIP-seq setup","text":"conda create --name chipseq python=3.6\nconda activate chipseq\nconda install -c bioconda deeptools bedtools\npip install intervene\n
"},{"location":"miscellaneous/podman/","title":"Podman","text":""},{"location":"miscellaneous/podman/#setup","title":"Setup","text":"Storage for Podman needs to be configured to fix UID errors when running on UTF filesystem:
mkdir -p ~/.config/containers\ncp /maps/projects/dan1/apps/podman/4.0.2/storage.conf $HOME/.config/containers/\n
Rootless Podman also requires username and allowed UID range to be listed in /etc/subuid and /etc/subgid
List running containers and run a publically available container image to confirm Podman is working:
podman ps\npodman run -it docker.io/library/busybox\n
"},{"location":"miscellaneous/podman/#running-the-ku-sund-dangpu-nf-core-config-with-podman","title":"Running the KU SUND DANGPU nf-core config with Podman","text":"Currently this is not practical because file permissions cause the following error:
error during container init: error setting cgroup config for procHooks process: cannot set memory limit: container could not join or create cgroup\n
The nf-core config file, podman.config, can be found at /scratch/Brickman/pipelines/
Specify podman.config in nextflow run options to run a pipeline with Podman, e.g. for the rnaseq test profile:
nextflow run nf-core/rnaseq -r 3.8.1 -c podman.config -profile test --outdir nfcore_test\n
"},{"location":"tools_and_packages/alphafold2/","title":"Alphafold 2","text":""},{"location":"tools_and_packages/alphafold2/#1-running","title":"1. Running","text":""},{"location":"tools_and_packages/alphafold2/#11-create-a-target-file","title":"1.1 Create a target file","text":"# cat target.fasta\n>query\nMAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH\n
"},{"location":"tools_and_packages/alphafold2/#12-setup-environments","title":"1.2. Setup environments","text":"srun -N 1 --ntasks-per-node=10 --gres=gpu:2 --pty bash\nmodule load miniconda/latest cuda/11.4 cudnn/8.2.2\nsource activate /maps/projects/dan1/data/Brickman/conda/envs/af2\n\ncd /maps/projects/dan1/data/Brickman/alphafold\nexport AF2_DATA_DIR=\"~/projects/data/Alphafold2/24022023\"\n
"},{"location":"tools_and_packages/alphafold2/#13-run-monomer-cli","title":"1.3. Run monomer (cli)","text":"python run_alphafold.py \\\n --fasta_paths=~/projects/data/Brickman/target_01.fasta \\\n --output_dir=/scratch/tmp/alphatest \\\n --model_preset=monomer \\\n --db_preset=full_dbs \\\n --data_dir=$AF2_DATA_DIR \\\n --uniref30_database_path=$AF2_DATA_DIR/uniref30/UniRef30_2021_03 \\\n --uniref90_database_path=$AF2_DATA_DIR/uniref90/uniref90.fasta \\\n --mgnify_database_path=$AF2_DATA_DIR/mgnify/mgy_clusters_2022_05.fa \\\n --pdb70_database_path=$AF2_DATA_DIR/pdb70/pdb70 \\\n --template_mmcif_dir=$AF2_DATA_DIR/pdb_mmcif/mmcif_files/ \\\n --obsolete_pdbs_path=$AF2_DATA_DIR/pdb_mmcif/obsolete.dat \\\n --bfd_database_path=$AF2_DATA_DIR/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \\\n --max_template_date=2022-01-01 \\\n --use_gpu_relax\n
"},{"location":"tools_and_packages/alphafold2/#14-run-multimer-cli","title":"1.4. Run multimer (cli)","text":"The example below generates 10 models.
python run_alphafold.py \\\n --fasta_paths=/home/fdb589/projects/data/Brickman/WTPU_1_WTC_EBPa.fasta \\\n --output_dir=/scratch/tmp/alphatest \\\n --model_preset=multimer \\\n --db_preset=full_dbs \\\n --data_dir=$AF2_DATA_DIR \\\n --uniref30_database_path=$AF2_DATA_DIR/uniref30/UniRef30_2021_03 \\\n --uniref90_database_path=$AF2_DATA_DIR/uniref90/uniref90.fasta \\\n --mgnify_database_path=$AF2_DATA_DIR/mgnify/mgy_clusters_2022_05.fa \\\n --template_mmcif_dir=$AF2_DATA_DIR/pdb_mmcif/mmcif_files/ \\\n --obsolete_pdbs_path=$AF2_DATA_DIR/pdb_mmcif/obsolete.dat \\\n --pdb_seqres_database_path=$AF2_DATA_DIR/pdb_seqres/pdb_seqres.txt \\\n --uniprot_database_path=$AF2_DATA_DIR/uniprot/uniprot.fasta \\\n --bfd_database_path=$AF2_DATA_DIR/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \\\n --max_template_date=2022-01-01 \\\n --num_multimer_predictions_per_model=10 \\\n --use_gpu_relax\n
"},{"location":"tools_and_packages/alphafold2/#15-example-sbatch-script","title":"1.5. Example SBATCH script","text":"#!/bin/bash\n#SBATCH --job-name=AF2\n#SBATCH --gres=gpu:2\n#SBATCH --cpus-per-task=10\n#SBATCH --mail-type=BEGIN,END\n#SBATCH --mail-user=YOUR-EMAIL\n\nmodule load miniconda/latest cuda/11.4 cudnn/8.2.2\nsource activate /maps/projects/dan1/data/Brickman/conda/envs/af2\ncd ~/projects/data/Brickman/alphafold\nmkdir -p /scratch/tmp/alphatest\nexport AF2_DATA_DIR=\"~/projects/data/Alphafold2/24022023\"\n\nsrun python run_alphafold.py \\\n--fasta_paths=~/projects/data/Brickman/target_01.fasta \\\n--output_dir=/scratch/tmp/alphatest \\\n--model_preset=monomer \\\n--db_preset=full_dbs \\\n--data_dir=$AF2_DATA_DIR \\\n--uniref30_database_path=$AF2_DATA_DIR/uniref30/UniRef30_2021_03 \\\n--uniref90_database_path=$AF2_DATA_DIR/uniref90/uniref90.fasta \\\n--mgnify_database_path=$AF2_DATA_DIR/mgnify/mgy_clusters_2022_05.fa \\\n--pdb70_database_path=$AF2_DATA_DIR/pdb70/pdb70 \\\n--template_mmcif_dir=$AF2_DATA_DIR/pdb_mmcif/mmcif_files/ \\\n--obsolete_pdbs_path=$AF2_DATA_DIR/pdb_mmcif/obsolete.dat \\\n--bfd_database_path=$AF2_DATA_DIR/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \\\n--max_template_date=2022-01-01 \\\n--use_gpu_relax\n
"},{"location":"tools_and_packages/alphafold2/#2-installation","title":"2. Installation","text":"conda create --prefix /maps/projects/dan1/data/Brickman/conda/envs/af2 python=3.8\nsource activate /maps/projects/dan1/data/Brickman/conda/envs/af2\n\nmamba install hmmer\npip install py3dmol\nmamba install pdbfixer==1.7\nmamba install -c conda-forge openmm=7.5.1\n\ncd /maps/projects/dan1/data/Brickman/\ngit clone --branch main https://github.com/deepmind/alphafold alphafold\npip install -r ./alphafold/requirements.txt\npip install --no-dependencies ./alphafold\n\n# stereo chemical props needs to be in common folder\nwget \u2013q \u2013P /maps/projects/dan1/data/Brickman/alphafold/alphafold/common/ https://git.scicore.unibas.ch/schwede/openstructure/-/raw/7102c63615b64735c4941278d92b554ec94415f8/modules/mol/alg/src/stereo_chemical_props.txt\n\n# skipping content part\nmkdir -p ./alphafold/data/params && cd ./alphafold/data/params\nwget https://storage.googleapis.com/alphafold/alphafold_params_colab_2022-12-06.tar\ntar --extract --verbose --preserve-permissions --file alphafold_params_colab_2022-12-06.tar\npip install ipykernel ipywidgets tqdm\npip install --upgrade scprep phate\n\n# Install jax\nmodule load miniconda/latest\nmodule load cuda/11.4 cudnn/8.2.2\nexport CUDA_VISIBLE_DEVICES='3'\npip install \"jax[cuda11_cudnn82]\" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html\n\n# fix last issues\nmamba install -c conda-forge -c bioconda hhsuite\nmamba install -c bioconda kalign3\npip install numpy==1.21.6\n
"},{"location":"tools_and_packages/alphafold2/#21-download-references","title":"2.1. Download references","text":"Note
Downloading references will not work on one try, had to do a lot of manual re-running of scripts.
# create folder\nmkdir -p ~/projects/data/Alphafold2/24022023\ncd ~/projects/data/Alphafold2/24022023\n\n# Download all databases\nsh download_all_data.sh ~/projects/data/Alphafold2/24022023/ > download.log 2> download_all.log\n\n# Some fix-ups\n# mmCIF will not work because the firewall blocks the port, so I found this workaroud online\n# ref: https://github.com/deepmind/alphafold/issues/196\nwget -e robots=off -r --no-parent -nH --cut-dirs=7 -q ftp://ftp.ebi.ac.uk/pub/databases/pdb/data/structures/divided/mmCIF/ -P \"${RAW_DIR}\"\n\n# Last step is to fix all the permissions\nchmod -R 755 24022023/\n
"},{"location":"tools_and_packages/alphafold2/#references","title":"References","text":"If your nascent RNA-seq data is already aligned, bw suitable for use with dReg can be prepared using Danko-Lab RunOnBamToBigWig
If you have fastq files from PRO-seq, GRO-seq, or CHrO-seq, run the Danko-Lab's mapping pipeline using the shared dReg_dataprep
conda environment
Example SBATCH script for mapping pipeline
#!/bin/bash\n\n#SBATCH --job-name=pro_align\n#SBATCH -c 20\n#SBATCH --mem=30gb\n#SBATCH --time=00-24:00:00\n#SBATCH --output=01_proseq_alignment.out\n#SBATCH --mail-type=BEGIN,END\n#SBATCH --mail-user=YOUR-EMAIL\n\nmodule load miniconda/latest\nsource activate dReg_dataprep\n\nPROSEQ=(\"/maps/projects/dan1/data/Brickman/proseq2.0/proseq2.0.bsh\")\nGENO=(\"/scratch/Brickman/references/mus_musculus/ensembl/GRCm38_102/\")\nRESL=(\"/maps/projects/dan1/data/Brickman/projects/NAME_DATE/data/external/proseq/\")\nSAMPLES=(\"SRX14164616_SRR18010280 SRX14164617_SRR18010278\")\n\nfor sample in ${SAMPLES}; do\n bash ${PROSEQ} -i ${GENO}bwa \\\n -c ${GENO}GRCm38.102.genome \\\n -PE --RNA5=R2_5prime --UMI1=6 \\\n -O ${RESL} \\\n -I ${sample} \\\n --thread=20\ndone\n
"},{"location":"tools_and_packages/dReg/#gpu-check","title":"GPU check","text":"Check available GPUs and running processes before using dReg. GPU 0 is reserved for Brickman group
nvidia-smi\n
"},{"location":"tools_and_packages/dReg/#example-dreg-script","title":"Example dReg script","text":"#!/bin/bash\n\n#SBATCH --job-name=dREG\n#SBATCH -c 30\n#SBATCH --mem=30gb\n#SBATCH --time=00-24:00:00\n#SBATCH --output=01-1_dREG.out\n#SBATCH --mail-type=BEGIN,END\n#SBATCH --mail-user=YOUR-EMAIL\n\nmodule load miniconda/latest cuda/11.8-dangpu cudnn/8.6.0-dangpu\nsource activate dReg\n\nBW=(\"../data/assays/RNA_INITIAL_DATE/processed/bw/\")\nRESL=(\"../results/01/dREG/\")\ndREG=(\"/projects/dan1/data/Brickman/dREG/run_dREG.bsh\")\nMODEL=(\"/projects/dan1/data/Brickman/dREG/resources/asvm.gdm.6.6M.20170828.rdata\")\n\n\nSAMPLES=(\"0h_A 0h_B 2h_A 2h_B\")\n\nfor sample in ${SAMPLES}; do\n bash ${dREG} ${BW}${sample}_sorted_filt_dedup_plus.bw ${BW}${sample}_sorted_filt_dedup_minus.bw \\\n ${RESL}${sample}_test ${MODEL} \\\n 30 0\ndone\n
"},{"location":"tools_and_packages/dReg/#2-installation","title":"2. Installation","text":""},{"location":"tools_and_packages/dReg/#installing-dreg","title":"Installing dReg","text":"Note: Python version in conda env must be 3.8, and R version < 4.0
cd /maps/projects/dan1/data/Brickman/conda/\nmodule load miniconda/latest\nmamba env create -p /projects/dan1/data/Brickman/conda/envs/dReg -f dREG.yml\nsource activate dReg\n\ncd /maps/projects/dan1/data/Brickman/\ngit clone https://github.com/Danko-Lab/dREG\ncd dREG\nmake R_dependencies\n\nR\ndevtools::install_github(\"CshlSiepelLab/RPHAST\")\ndevtools::install_version(\"MASS\", version=\"7.3-51.5\", repos=\"https://mirrors.dotsrc.org/cran/\")\ninstall.packages(\"e1071\", repos=\"https://mirrors.dotsrc.org/cran/\")\ndevtools::install_version(\"randomForest\", version=\"4.6-14\", repos=\"https://mirrors.dotsrc.org/cran/\")\nquit()\n\nmake dreg\nmkdir resources\ncd resources\nwget ftp://cbsuftp.tc.cornell.edu/danko/hub/dreg.models/asvm.gdm.6.6M.20170828.rdata\n
"},{"location":"tools_and_packages/dReg/#installing-rgtsvm","title":"Installing Rgtsvm","text":"Rgtsvm is required for dReg to use GPU resources
# make sure in dREG repo and that dReg environment is activated\ncd /maps/projects/dan1/data/Brickman/dREG\nsource activate dReg\n\nR\ninstall.packages(c(\"bit64\", \"snow\", \"SparseM\"), repos=\"https://mirrors.dotsrc.org/cran/\")\ndevtools::install_version(\"lattice\", version=\"0.20-41\", repos=\"https://mirrors.dotsrc.org/cran/\")\ninstall.packages(\"Matrix\", repos=\"https://mirrors.dotsrc.org/cran/\")\nquit()\nmamba install -c conda-forge boost=1.70.0\n\nmkdir third-party\ncd third-party\ngit clone https://github.com/Danko-Lab/Rgtsvm.git\ncd Rgtsvm\n\nmodule load cuda/11.8-dangpu\nmodule load cudnn/8.6.0-dangpu\n\nR CMD INSTALL --configure-args=\"--with-boost-home=$CONDA_PREFIX\" Rgtsvm\n
"},{"location":"tools_and_packages/packages/","title":"Bioinformatics tools","text":"Tool Description NGS Language Link Functional enrichment on genomic regions CHIP-seq ATAC-seq R https://github.com/jokergoo/rGREAT Pseudotime inference scRNA-seq Python https://github.com/LouisFaure/scFates nan Single-cell analysis package scRNA-seq Python https://github.com/scverse/scanpy nan AI probabilistic package for transfer learning DR and more scRNA-seq Python https://github.com/scverse/scvi-tools Gene set enrichment analysis on steroids scRNA-seq Python https://github.com/zqfang/GSEApy nan UpsetR on stereoids (complicated Venn Diagrams) Plotting R https://github.com/krassowski/complex-upset nan Complex heatmap Plotting Python https://github.com/DingWB/PyComplexHeatmap nan"},{"location":"tools_and_packages/ucsc_liftover/","title":"UCSC liftover tool","text":"Documentation for UCSC liftover.
"},{"location":"tools_and_packages/ucsc_liftover/#issue-separate-peaks-map-to-same-coordinates-after-liftover","title":"Issue: separate peaks map to same coordinates after liftover","text":"Remove any peaks with overlapping coordinates after liftover before using the lifted over peak file:
#!/bin/bash\n\nmodule load bedtools\n\nEXTL=(\"../data/external/\")\n\n# Sort the lifted over peakfile for use with bedtools\nsort -k1,1 -k2,2n ${EXTL}wong_fig3c_peaks_GRCh38.bed > ${EXTL}peaks.tmp && mv ${EXTL}peaks.tmp ${EXTL}wong_fig3c_peaks_GRCh38.bed\n\n# Bedtools merge count rows contributing to merged peaks (overlapping peaks will have count > 1)\nbedtools merge -i ${EXTL}wong_fig3c_peaks_GRCh38.bed -c 1 -o count > ${EXTL}counted.bed\n\n# Get non-overlapping peaks\nawk '/\\t1$/{print}' ${EXTL}counted.bed > ${EXTL}filtered.bed\n\n# Intersect original file with non-overlapping peaks and output overlapping peaks\nbedtools intersect -wa -a ${EXTL}wong_fig3c_peaks_GRCh38.bed -b ${EXTL}filtered.bed > ${EXTL}wong_fig3c_peaks_GRCh38_correct_liftover.bed\nbedtools intersect -v -a ${EXTL}wong_fig3c_peaks_GRCh38.bed -b ${EXTL}filtered.bed > ${EXTL}wong_fig3c_peaks_GRCh38_overlapping.bed\n
"}]}
\ No newline at end of file
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
index 86adead17c36d78fce94b5d04f3e51a0be8345ab..d2fe56b82e27a38a1e278a6ef8498ac390b72be6 100644
GIT binary patch
delta 14
Vcmb=g=aBE_;Aq&un?8}F8~`BI1a$xa
delta 14
Vcmb=g=aBE_;Apt_H*F$EIRGY=1(E;&
diff --git a/tools_and_packages/alphafold2/index.html b/tools_and_packages/alphafold2/index.html
index 4104135..3728e46 100644
--- a/tools_and_packages/alphafold2/index.html
+++ b/tools_and_packages/alphafold2/index.html
@@ -16,7 +16,7 @@
-
+
diff --git a/tools_and_packages/dReg/index.html b/tools_and_packages/dReg/index.html
index 0a73bb5..cf39f86 100644
--- a/tools_and_packages/dReg/index.html
+++ b/tools_and_packages/dReg/index.html
@@ -16,7 +16,7 @@
-
+
diff --git a/tools_and_packages/packages/index.html b/tools_and_packages/packages/index.html
index bb4cadf..e280ee4 100644
--- a/tools_and_packages/packages/index.html
+++ b/tools_and_packages/packages/index.html
@@ -16,7 +16,7 @@
-
+
diff --git a/tools_and_packages/ucsc_liftover/index.html b/tools_and_packages/ucsc_liftover/index.html
index fa0a8f9..eb91633 100644
--- a/tools_and_packages/ucsc_liftover/index.html
+++ b/tools_and_packages/ucsc_liftover/index.html
@@ -14,7 +14,7 @@
-
+