
Update readmes (#28)
* fix cdna-genertor

* refactor: update folder structure

* update READMEs
balajtimate authored Oct 27, 2023
1 parent 1747d65 commit a810311
Showing 51 changed files with 275 additions and 327 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -174,3 +174,5 @@ poetry.toml
pyrightconfig.json

# End of https://www.toptal.com/developers/gitignore/api/python

.vscode
25 changes: 24 additions & 1 deletion README.md
@@ -1,4 +1,27 @@
# Simulating single cell RNA library generation (scRNA-seq)
# scRNAsim: Simulating single cell RNA (scRNA-seq) library generation
The project implements a simulation of single cell RNA sequencing (scRNA-seq), accounting for some common sources of noise that complicate the analysis of the resulting data.

### Setting up the virtual environment

Create and activate the environment with necessary dependencies with Conda:

```bash
conda env create -f environment.yml
conda activate scrnasim-toolz
```

### Tools

The tools available in this repo are:
1. Transcript sampler
2. Structure generator
3. Sequence extractor
4. Priming site predictor
5. cDNA generator
6. Fragment selector
7. Read sequencer

### Description

Although all cells in a multicellular organism carry the same genomic information, they differ greatly in function because they are equipped with distinct toolboxes of molecular functions, implemented by different proteins and RNAs. Thus, being able to detect and measure the abundance of gene products (RNAs and/or proteins) in individual cells holds the key to understanding how organisms are organized and function. In the past decade, much progress has been made in the development of technologies for single cell RNA sequencing. They make use of microfluidic devices that allow RNA-seq sample preparation for individual cells encapsulated in droplets, followed by pooling of the resulting DNA fragments and sequencing. The broadly used 10x Genomics technology uses oligo-dT primers to initiate cDNA synthesis from the poly(A) tails of fragmented RNAs. Subsequent sequencing yields *libraries* of relatively short (100-200 nucleotides) *reads* that come predominantly from the 3’ ends of RNAs, given the priming on the poly(A) tail. Since in the ideal case (no amplification bias) each read comes from the end of one mRNA, simply counting the reads that map to mRNAs of individual genes provides estimates of the expression levels of those genes within the respective cell. Currently, typical data sets cover thousands of genes in tens-to-hundreds of thousands of cells. However, we are still far from being able to prepare ideal libraries, for many reasons. First, as gene expression is a bursty, stochastic process, there will be fluctuations in the number of RNAs (corresponding to a given gene) that are present in any one cell at the time of sampling, even if the time-average of those RNA numbers were the same across cells. Second, the sample preparation steps are carried out by various enzymes with limited efficiency.
This leads to substantial fluctuations in the number of molecules that are “captured” for a gene in a given cell, even if all cells were to have the same abundance of these molecules at the time of sampling. Third, the biochemical reactions that are part of sample preparation do not have absolute specificity. A clear example is the priming of cDNA synthesis with oligo(dT): although the primer is intended for the poly(A) tails at the 3’ ends of RNAs, it also binds to A-rich stretches that are internal to transcripts, especially in intronic regions. Finally, a conceptual issue with single cell data is that one cannot apply the principle of averaging measurement values across replicate experiments to obtain more precise estimates, because we do not know which cells could be considered replicates of each other (if that is at all conceivable).
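The counting idea described above can be illustrated with a toy sketch (the gene names and read assignments below are invented for the example; real pipelines obtain them from read alignment):

```python
# Toy sketch: if each read comes from the 3' end of one mRNA, per-gene
# expression estimates are simply read counts. Gene names and the read
# assignments below are invented for illustration.
from collections import Counter

# gene that each mapped read was assigned to, for one cell
read_assignments = ["GAPDH", "ACTB", "GAPDH", "MYC", "GAPDH", "ACTB"]

expression_estimate = Counter(read_assignments)
print(expression_estimate)  # Counter({'GAPDH': 3, 'ACTB': 2, 'MYC': 1})
```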

42 changes: 25 additions & 17 deletions scRNAsim_toolz/cdna_generator/README.md
@@ -1,23 +1,31 @@
# cDNA Generator module
## Usage
```
usage: cdna-generator [-h] -ifa INPUT_FASTA -igtf INPUT_GTF -icpn INPUT_COPY_NUMBER -ofa OUTPUT_FASTA -ocsv OUTPUT_CSV [-v]

Generate cDNA sequences based on primer probabilities.

options:
  -h, --help            show this help message and exit
  -ifa INPUT_FASTA, --input_fasta INPUT_FASTA
                        genome fasta file
  -igtf INPUT_GTF, --input_gtf INPUT_GTF
                        gtf file
  -icpn INPUT_COPY_NUMBER, --input_copy_number INPUT_COPY_NUMBER
                        input copy number (csv) file
  -ofa OUTPUT_FASTA, --output_fasta OUTPUT_FASTA
                        output fasta file
  -ocsv OUTPUT_CSV, --output_csv OUTPUT_CSV
                        output csv file
```
Example:
```
cdna-generator -ifa tests/cdna_generator/files/transcript.fasta -igtf tests/cdna_generator/files/Example_GTF_Input.GTF -icpn tests/cdna_generator/files/copy_number_input.csv -ofa cdna_seq.fa -ocsv cdna_counts.csv
```

## Overview
Generate cDNA based on mRNA transcript sequences and the corresponding priming probabilities.

## Example usage
A simple example can be run from the test_files directory:

python ../cdna/cli.py -ifa yeast_example.fa -icpn copy_number_input.csv -igt Example_GTF_Input.GTF -ofa cDNA.fasta -ocsv cDNA.csv

## Installation

pip install .

## Docker
A docker image is available, to fetch this image:

docker pull ericdb/my-image

To run a simple example using this image:

docker run my-image python cdna/cli.py -ifa test_files/yeast_example.fa -icpn test_files/copy_number_input.csv -igt test_files/Example_GTF_Input.GTF -ofa test_files/cDNA.fasta -ocsv test_files/cDNA.csv

## License

157 changes: 78 additions & 79 deletions scRNAsim_toolz/cdna_generator/cdna.py
@@ -32,8 +32,10 @@ def complement(res: str) -> str:


def seq_complement(sequence: str) -> Optional[str]:
"""Return the corresponding cDNA sequence by finding the complementary \
base pairs and returning the reversed sequence.
"""Return the corresponding cDNA sequence.
Find the complementary base pairs and
return the reversed sequence.
Args:
sequence: sequence to be converted into cDNA.
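A minimal standalone sketch of the reverse-complement step this docstring describes (assuming plain A/C/G/T input; it is not the module's actual implementation, which delegates per-base work to `complement()`):

```python
# Standalone sketch of reverse-complementing an mRNA-side sequence into
# its cDNA counterpart. Assumes plain A/C/G/T input.
from typing import Optional

_PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(sequence: Optional[str]) -> Optional[str]:
    if sequence is None:
        return None
    # complement each base, then reverse to get the cDNA orientation
    return "".join(_PAIRS[base] for base in reversed(sequence.upper()))

print(reverse_complement("ATGC"))  # GCAT
```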
Expand Down Expand Up @@ -78,99 +80,39 @@ def run(self) -> None:
Returns: None
"""
self.read_csv()
self.read_fasta()
self.read_gtf()
self.process_csv()
self.process_fasta()
self.process_gtf()
self.add_sequences()
self.add_complement()
self.add_records()
self.write_fasta()
self.write_csv()

def add_records(self) -> None:
"""Add data records to fasta file.
Adds the copy number information to the fasta records.
Returns: None
"""
self.fasta_records = []
for _, row in self.gtf_df.iterrows():
if row["complement"] is not None:
copy_number = row["Transcript_Copy_Number"]
for _ in range(int(copy_number)):
record = SeqRecord(
Seq(row["complement"]),
row["cdna_ID"],
f"Transcript copy number: {copy_number}",
"",
)
self.fasta_records.append(record)

def add_sequences(self) -> None:
"""Add the sequence for a given priming site.
Returns: None
"""
self.gtf_df["priming_site"] = self.gtf_df.apply(
lambda row: self.read_primingsite(row["seqname"], row["start"]),
axis=1,
)

def add_complement(self) -> None:
"""Add the complementary cDNA sequence.
Returns: None
"""
self.gtf_df["complement"] = self.gtf_df["priming_site"].apply(
seq_complement
)

def read_primingsite(self, sequence: str, end: int) -> None:
"""Read a fasta file from a given start character.
Reads a fasta sequence with ID (sequence) and returns the
sequence starting from the index start.
def process_csv(self) -> None:
"""Read a given copy number csv file.
Args:
sequence: sequence ID to be read.
end: end index of the priming site.
Wrapper for Pandas read_csv.
Returns: None
"""
if sequence not in self.fasta_dict.keys():
return None
return self.fasta_dict[sequence].seq[:end]
df_csv = pd.read_csv(self.cpn, index_col=False)
df_csv = df_csv.reset_index() # make sure indexes pair with number of rows # noqa: E501
self.csv_df = df_csv

def read_fasta(self) -> None:
def process_fasta(self) -> None:
"""Read a given fasta file.
Wrapper for SeqIO.parse.
Returns: None
"""
record = SeqIO.parse(self.fasta, "fasta")
records = list(record)
records = list(SeqIO.parse(self.fasta, "fasta"))
self.fasta_dict = {x.name: x for x in records}

def read_csv(self) -> None:
"""Read a given copy number csv file.
Wrapper for Pandas read_csv.
Returns: None
"""
df_csv = pd.read_csv(self.cpn, index_col=False)
df_csv = df_csv.reset_index() # make sure indexes pair with number of rows # noqa: E501
self.csv_df = df_csv

def read_gtf(self) -> None:
def process_gtf(self) -> None:
"""Read and process the GTF file.
Reads a GTF file and determines copy numbers from
@@ -183,9 +125,7 @@ def read_gtf(self) -> None:
# "feature", "seqname", "start", "end"
# alongside the names of any optional keys
# which appeared in the attribute column
gtf_df = read_gtf(self.gtf)

gtf_df = gtf_df.to_pandas() # convert polars df to pandas df
gtf_df = read_gtf(self.gtf, result_type="pandas") # from gtfparse

gtf_df["Binding_Probability"] = pd.to_numeric(
gtf_df["Binding_Probability"]
@@ -205,14 +145,14 @@ def read_gtf(self) -> None:
else:
count = 0 # reset count
# CSV transcript ID
id_csv = str(row["seqname"]).split("_")[1]
id_csv = f"{id_}_{count}"
# Calculate Normalized_Binding_Probability and add to GTF dataframe
gtf_df.loc[index, "Normalized_Binding_Probability"] = (
row["Binding_Probability"] / df_norm_bind_prob[id_]
)
# Calculate Normalized_Binding_Probability and add to GTF dataframe
csv_transcript_copy_number = self.csv_df.loc[
self.csv_df.iloc[:, 1] == int(id_csv),
self.csv_df["ID of transcript"] == id_csv,
"Transcript copy number",
].iloc[0] # pop the first value in the frame
gtf_df.loc[index, "Transcript_Copy_Number"] = round(
@@ -227,6 +167,65 @@ def read_gtf(self) -> None:
].astype(int)
self.gtf_df = gtf_df
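The normalisation this method performs can be sketched on toy data (column names follow the snippet above; the transcript IDs, probabilities, and copy numbers are invented — the real code reads them from the GTF and copy-number CSV):

```python
# Toy sketch of per-transcript binding-probability normalisation and
# copy-number assignment. Values and transcript names are invented.
import pandas as pd

gtf = pd.DataFrame({
    "transcript": ["t1", "t1", "t2"],
    "Binding_Probability": [0.25, 0.75, 0.5],
})
# normalise so probabilities sum to 1 within each transcript
totals = gtf.groupby("transcript")["Binding_Probability"].transform("sum")
gtf["Normalized_Binding_Probability"] = gtf["Binding_Probability"] / totals

# distribute each transcript's copy number over its priming sites
copy_numbers = {"t1": 8, "t2": 4}
gtf["Transcript_Copy_Number"] = (
    gtf["Normalized_Binding_Probability"] * gtf["transcript"].map(copy_numbers)
).round().astype(int)
```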

def add_sequences(self) -> None:
"""Add the sequence for a given priming site.
Returns: None
"""
self.gtf_df["priming_site"] = self.gtf_df.apply(
lambda row: self.read_primingsite(row["seqname"], row["start"]),
axis=1,
)

def read_primingsite(self, sequence: str, end: int) -> Optional[str]:
"""Read a priming site from a parsed fasta record.
Looks up the fasta record with ID (sequence) and returns
its sequence up to index end.
Args:
sequence: sequence ID to be read.
end: end index of the priming site.
Returns: the priming site sequence, or None if the ID is unknown.
"""
if sequence not in self.fasta_dict:
return None
return self.fasta_dict[sequence].seq[:end]

def add_complement(self) -> None:
"""Add the complementary cDNA sequence.
Returns: None
"""
self.gtf_df["complement"] = self.gtf_df["priming_site"].apply(
seq_complement
)

def add_records(self) -> None:
"""Add data records to fasta file.
Adds the copy number information to the fasta records.
Returns: None
"""
self.fasta_records = []
for _, row in self.gtf_df.iterrows():
if row["complement"] is not None:
copy_number = row["Transcript_Copy_Number"]
for _ in range(int(copy_number)):
record = SeqRecord(
Seq(row["complement"]),
row["cdna_ID"],
f"Transcript copy number: {copy_number}",
"",
)
self.fasta_records.append(record)
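The record expansion above (one FASTA record per transcript copy, skipping rows without a cDNA sequence) can be sketched without Biopython by representing records as plain tuples (the rows and IDs below are invented; the module itself builds `Bio.SeqRecord` objects):

```python
# Sketch of expanding rows into one record per transcript copy.
# Rows and IDs are invented; the real code emits Bio.SeqRecord objects.
rows = [
    {"complement": "GCAT", "cdna_ID": "t1_0", "Transcript_Copy_Number": 2},
    {"complement": None, "cdna_ID": "t2_0", "Transcript_Copy_Number": 5},
]

records = []
for row in rows:
    if row["complement"] is None:  # no cDNA sequence for this site
        continue
    for _ in range(int(row["Transcript_Copy_Number"])):
        records.append((row["cdna_ID"], row["complement"]))

print(len(records))  # 2
```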

def write_fasta(self) -> None:
"""Write cDNA fasta records to file.
2 changes: 1 addition & 1 deletion scRNAsim_toolz/cdna_generator/cli.py
@@ -22,7 +22,7 @@ def main():
"""
parser = argparse.ArgumentParser(
prog="cDNA generator",
prog="cdna-generator",
description="Generate cDNA sequences based on primer probabilities.",
)
parser.add_argument(
44 changes: 27 additions & 17 deletions scRNAsim_toolz/fragment_selector/README.md
@@ -1,16 +1,40 @@
# Terminal fragment selecting
# Terminal fragment selector

## Usage
```
usage: fragment-selector [-h] --fasta FASTA --counts COUNTS -o OUTPUT [--mean MEAN] [--std STD] [-s SIZE] [--sep SEP] [-v]

Takes as input a FASTA file of cDNA sequences, a CSV/TSV with sequence counts, the mean and standard deviation of fragment lengths, and 4 nucleotide probabilities for the cuts. Outputs the most terminal fragment (within the desired length range) for each sequence.

options:
  -h, --help            show this help message and exit
  --fasta FASTA         Path to FASTA file with cDNA sequences
  --counts COUNTS       Path to CSV/TSV file with sequence counts
  -o OUTPUT, --output OUTPUT
                        output file path
  --mean MEAN           Mean fragment length (default: 300)
  --std STD             Standard deviation of fragment length (default: 60)
  -s SIZE, --size SIZE  Chunk size for batch processing
  --sep SEP             Sequence counts file separator
```

Example:

`fragment-selector --fasta tests/fragment_selector/files/test.fasta --counts tests/fragment_selector/files/test.csv --mean 50 --output fragments.fa`
## Overview
Simulating single cell RNA library generation (scRNA-seq)

This repository is part of the Uni Basel course <E3: Programming for Life Science – 43513>. To test the accuracy of scRNA-seq analyses, we generate *synthetic data*: we reconstruct the properties of an experimental data set and determine whether the computational analysis can recover the properties assumed in the simulation. This is far from trivial, since a known ground truth is needed to evaluate the results of a computational method.

# Synopsis

As part of this sub-project, we implemented Python code for selecting terminal fragments. Details of the distribution used for selecting fragments can be found below, summarised in [this paper](https://www.nature.com/articles/srep04532#MOESM1).
> Next Generation Sequencing (NGS) technology is based on cutting DNA into small fragments and their massive parallel sequencing. The multiple overlapping segments termed “reads” are assembled into a contiguous sequence. To reduce sequencing errors, every genome region should be sequenced several dozen times. This sequencing approach is based on the assumption that genomic DNA breaks are random and sequence-independent. However, previously we showed that for the sonicated restriction DNA fragments the rates of double-stranded breaks depend on the nucleotide sequence. In this work we analyzed genomic reads from NGS data and discovered that fragmentation methods based on the action of the hydrodynamic forces on DNA, produce similar bias. Consideration of this non-random DNA fragmentation may allow one to unravel what factors and to what extent influence the non-uniform coverage of various genomic regions.
For the project as a whole, we implemented a procedure for sampling reads from mRNA sequences, incorporating several sources of “noise”. These include the presence of multiple transcript isoforms from a given gene, some incompletely spliced, stochastic binding of primers to RNA fragments, and stochastic sampling of DNA fragments for sequencing. We will then use standard methods to estimate gene expression from the simulated data. We will repeat the process multiple times, each time corresponding to a single cell. We will then compare the estimates obtained from the simulated cells with the gene expression values assumed in the simulation. We will also try to explore which steps in sample preparation have the largest impact on the accuracy of gene expression estimates.


# Usage

CLI arguments:
- fasta (required): Path to FASTA file with cDNA sequences
- counts (required): Path to CSV/TSV file with sequence counts
@@ -26,20 +50,6 @@ CLI arguments:
Output:
- Text file with most terminal fragments for each sequence.

To install package, run

```
pip install .
```


# Development

To build Docker image, run

```
docker build -t terminal_fragment_selector .
```

# License

