====== From Nanopore to Gene Prediction: a pathway ======
By Greg and Jon
  
  
  
The following are **REQUIRED flags**:
<code>--in_dir</code> [the directory that your MinION data has been output to]

<code>--out_dir</code> [the directory where you want the program to place the folders that it will sort the reads into]
  
Deepbinner is designed to be run in parallel with the sequencing: in real time, it can take the output from the nanopore and sort it. While we don't usually do this in this lab, it's a feature to keep in mind.
  
There are a number of additional flags that can help refine the program's behavior to your needs.

Importantly, the fast5 reads will be physically moved from the input folder to their new respective folders, so don't be alarmed when the inputs 'disappear'!
  
**Model presets:**
  
<code>--native</code> preset for EXP-NBD103 read start and end models. This is the default.

<code>--rapid</code> preset for SQK-RBK004 read start model.
  
**Models:**

These are used if the presets are not being used, and can be invoked with:
  
<code>-s</code> or <code>--start_model</code> for a model trained on the starts of reads

and

<code>-e</code> or <code>--end_model</code> for a model trained on the ends of reads.
  
Largely we don’t have to worry about this in this lab.

Two flags:
  
<code>--scan_size [value]</code> This flag determines how much of a read’s start and end will be examined for barcode signals. **Defaults to 6144.**

<code>--score_diff [value]</code> This flag sets the minimum difference required between the best and second-best barcode scores in order for classification to occur. **Default is 0.5**
  
**Two model (read start and read end) behaviour:**
  
Three mutually exclusive flags which determine how lenient the program is in classifying a read. They are, listed here from most lenient to most stringent:

<code>
--require_either
--require_start
--require_both
</code>
The first flag will allow the program to classify a read based on the barcode call of either the start or end of the read, so long as they do not disagree.
  
The second flag will classify a read based on a start barcode, and having an end barcode is optional. **This is the default behaviour.**
  
The third flag requires the same barcode on both ends of the read in order for it to be classified.
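A toy sketch of how the three modes differ (the barcode calls are hypothetical, not actual deepbinner internals; "none" stands for no barcode detected):

```shell
#!/bin/sh
# Toy model of the three classification modes (illustrative only)
classify() {  # usage: classify <mode> <start_call> <end_call>
    mode=$1; start=$2; end=$3
    case $mode in
        require_either)
            # either end may supply the call, so long as they don't disagree
            if [ "$start" = none ]; then echo "$end"
            elif [ "$end" = none ] || [ "$start" = "$end" ]; then echo "$start"
            else echo unclassified; fi ;;
        require_start)
            # a start barcode is mandatory; the end barcode is optional
            if [ "$start" = none ]; then echo unclassified
            elif [ "$end" = none ] || [ "$start" = "$end" ]; then echo "$start"
            else echo unclassified; fi ;;
        require_both)
            # the same barcode must appear at both ends
            if [ "$start" != none ] && [ "$start" = "$end" ]; then echo "$start"
            else echo unclassified; fi ;;
    esac
}

classify require_either none barcode03   # barcode03
classify require_start  none barcode03   # unclassified
classify require_both   barcode03 none   # unclassified
```

Note how the same pair of calls can be classified under a lenient mode and discarded under a stricter one.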
There are four flags here that govern how the program runs. You will probably have no reason to alter these from their defaults.
  
<code>--batch_size [value]</code> This is the neural network batch size. **Default is 256**

<code>--intra_op_parallelism_threads [value]</code> TensorFlow’s intra_op_parallelism_threads config option. **Default is 12**

<code>--inter_op_parallelism_threads [value]</code> TensorFlow’s inter_op_parallelism_threads config option. **Default is 1**

<code>--device_count [value]</code> TensorFlow’s device_count config option. **Default is 1.**

<code>--omp_num_threads [value]</code> OMP_NUM_THREADS environment variable. **Default is 12.**
  
**Other:**
  
<code>--stop</code> Automatically stops deepbinner when it runs out of reads to process. **By default, the program will continue to run until manually stopped.**

<code>-h</code> or <code>--help</code> Shows a help message.
  
Here is an example shell script for deepbinner. A copy can be found at /home/gseaton/public_scripts/
<code>#!/bin/bash
#$ -S /bin/bash
. /etc/profile
...
--start_model /scratch2/software/Deepbinner-git-June-27-2019/models/EXP-NBD103_read_starts \
--end_model /scratch2/software/Deepbinner-git-June-27-2019/models/EXP-NBD103_read_ends --stop
source deactivate</code>
  
  
  
  
Nanopore technology generates a truly absurd number of data files, which can be unwieldy both for day-to-day work and for programs like Terminal to handle. Therefore, we will use another script to combine fast5 files into multi-fast5 files. Multi-fast5 is now the default format; however, programs like Deepbinner work with the individual files, and most of our older data sets are still in single-file format, which is important to keep in mind.
  
<code>#!/bin/bash
#$ -S /bin/bash
. /etc/profile
...
single_to_multi_fast5 --input_path <path to barcode folder created by deepbinner IE: barcode03> \
--save_path <path to the directory the output should be saved as> \
--filename_base <the prefix for the files, ie Species_barcode03> --batch_size 8000 --recursive

conda deactivate</code>
        
Some notes:
    
<code>--batch_size</code> refers to the number of fast5 files the program will combine into one single file. 8000 is a good default to work with.
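A quick way to estimate how many multi-fast5 files you will end up with is ceiling division of the read count by the batch size (the read count below is made up for illustration):

```shell
#!/bin/sh
# How many multi-fast5 files will single_to_multi_fast5 produce?
# Ceiling division: any remainder still gets its own output file.
n_reads=20000      # hypothetical number of single fast5 files
batch_size=8000    # the --batch_size used above

n_files=$(( (n_reads + batch_size - 1) / batch_size ))
echo "$n_files"    # 3
```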
  
<code>--recursive</code> will run the shell on both the files in the directory you specify, as well as in any directories inside the directory you specified, IE /scratch3/yourname/MINION/deepbinner/barcode03/another_file_level
  
Additionally, there is another command, multi_to_single_fast5, that can be run using the ont-fast5-api program. As the name implies, it does the reverse process, breaking apart a single multi-fast5 file into individual fast5 files. Deepbinner will do this for you if it detects a multi-fast5 file; however, it is always good to know how to do it by hand as well.
  
I have also created a script in /home/gseaton/public_scripts that combines both deepbinner and single-to-multi-fast5, called deepbinner-combopack. This will launch deepbinner and then combine the sorted reads into multi-fast5 files.
Once we've binned the reads, and reduced the number of files for easier use, we now have to //basecall// the fast5 to obtain usable results. Nanopore technology records the electrical differences in the membrane as the DNA strand (or RNA) passes through the pore, and as such, it takes some doing to decode this information into an actual sequence of bases.
  
Guppy is the current basecaller from Oxford Nanopore Technologies and is actively being developed and updated.
  
The input for guppy will be the contents of the binned reads (example: /barcode04/). Below is a sample script that can be found at /home/gseaton/public_scripts/
  
<code>#!/bin/bash
#$ -S /bin/bash
. /etc/profile
...
--num_callers 20 --cpu_threads_per_caller 2 \
--barcode_kits "EXP-NBD103" \
--trim_strategy dna</code>
  
Other than the input and output directory, most of this probably will not need to be altered. The flags flowcell and kit refer to the make of the flowcell used as well as the kit used in preparing the sample. Importantly, the num_callers and cpu_threads_per_caller values must multiply together to equal the value specified in the line #$ -pe threaded. Here, we're dedicating 2 threads to each of the 20 callers we're using.
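That bookkeeping is just multiplication, using the values from the sample script above:

```shell
#!/bin/sh
# The SGE line '#$ -pe threaded N' must equal num_callers * cpu_threads_per_caller
num_callers=20
cpu_threads_per_caller=2

total_threads=$(( num_callers * cpu_threads_per_caller ))
echo "$total_threads"   # 40, so the script should request '#$ -pe threaded 40'
```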
The next step is to trim the barcodes from the samples. While assembly programs might catch these elements and remove them, it's usually better to remove them as a separate step apart from assembly.
  
Before this step, however, you should merge all the files together into one single .fastq file for each barcode directory created by guppy, using the 'cat' command.

On the command line, write:
<code>cat *.fastq > merged_reads.fq</code>
This will take all files ending in the fastq suffix and merge them into a single file. You can name the output however you like. This is the file you will use for porechop.
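Since each fastq record is exactly four lines, a quick sanity check after merging is to count lines and divide by four. A self-contained sketch, using two tiny made-up fastq files in a temporary directory:

```shell
#!/bin/sh
# Make two tiny fake fastq files (one read each), merge them, count reads
dir=$(mktemp -d)
printf '@read1\nACGT\n+\nIIII\n' > "$dir/a.fastq"
printf '@read2\nTTGG\n+\nIIII\n' > "$dir/b.fastq"

cat "$dir"/*.fastq > "$dir/merged_reads.fq"

# 4 lines per fastq record, so line count / 4 = read count
n_reads=$(( $(wc -l < "$dir/merged_reads.fq") / 4 ))
echo "$n_reads"   # 2
rm -rf "$dir"
```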
  
Here we use porechop, which uses fastq files and trims off the barcodes. Additionally, with the defaults outlined here, any barcode detected //within// a read results in that read being discarded. This is not the only option: porechop has flags that will have the program attempt to split a read containing a barcode in its middle into two reads and keep them. But this is usually more trouble than it's worth.
  
<code>#!/bin/bash
#$ -S /bin/bash
#$ -cwd
...
conda deactivate</code>

For the output file, it's usually a good idea to include some sort of indication that the file has been trimmed, such as species.fq > species.chop.fq.
  
An example script has been provided in /home/gseaton/public_scripts/
  
===== Filtlong =====
  
This step is important if you have lots and lots of data. Here, filtlong attempts to take the 'best' of the given data set, and creates a file containing that for later use. The 'bestness' can be determined by the researcher with different flags. For example, if one wanted to take only the most accurate reads, this program would do that for you.
  
In our lab, we typically use filtlong to obtain the longest reads.
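Filtlong's scoring is more sophisticated than this (it weighs quality as well as length), but the core idea of a length cutoff can be sketched in plain shell with awk. The file names and threshold below are made up for illustration:

```shell
#!/bin/sh
# Toy length filter: keep fastq records whose sequence is >= min_len bases.
# (Filtlong does much more -- weighted quality/length scoring -- this only
# sketches the length part.)
dir=$(mktemp -d)
printf '@short\nACG\n+\nIII\n@long\nACGTACGTAC\n+\nIIIIIIIIII\n' > "$dir/reads.fq"

min_len=5
awk -v min="$min_len" 'NR % 4 == 1 { h=$0 } NR % 4 == 2 { s=$0 }
    NR % 4 == 3 { p=$0 }
    NR % 4 == 0 { if (length(s) >= min) print h"\n"s"\n"p"\n"$0 }' \
    "$dir/reads.fq" > "$dir/filtered.fq"

kept=$(( $(wc -l < "$dir/filtered.fq") / 4 ))
echo "$kept"   # 1 (only the long read survives)
rm -rf "$dir"
```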
An example script can be found in /home/gseaton/public_scripts
  
<code>#!/bin/bash
#$ -S /bin/bash
#$ -cwd
...
<input file> > <output file name>

conda deactivate</code>
        
Additional commands can be found [[https://github.com/rrwick/Filtlong#full-usage|here.]]
Example script can be found at /home/gseaton/public_scripts
  
===== Assembly =====

==== Flye ====

There are a number of assembly tools that can be used. Here, we will use Flye, which is relatively quick, and fairly decent at assembly.
Shell can be found at /home/gseaton/public_scripts/
  
<code>#!/bin/bash
#$ -S /bin/bash
. /etc/profile
...
source activate flye

flye --nano-raw \
<input file. Should be trimmed, and if necessary, filtered using filtlong> \
--meta \
--genome-size <estimate it in the format 20m> --out-dir <out directory> --threads 30 --iterations 2

conda deactivate</code>
        
==== Canu ====

Canu is another assembly program with higher fidelity at the cost of greater run time per assembly. It is specialized in assembling nanopore and PacBio data, which makes it a good choice for our lab and work. There are three stages to Canu: correction, trimming, and assembly. This means it can take the raw nanopore data and process it through most of the steps described above, should one choose. It can also restart an assembly should the worst happen and a power or program failure occur.
  
A basic example of the overall layout of canu looks like this (from [[https://canu.readthedocs.io/en/latest/tutorial.html|the tutorial]]):
<code>canu [-correct | -trim | -assemble | -trim-assemble] \
  [-s <assembly-specifications-file>] \
  -p <assembly-prefix> \
  [-d <assembly-directory>] \
  genomeSize=<number>[g|m|k] \
  [other-options] \
  [<these are the type of data you're feeding it> -pacbio-raw | -pacbio-corrected | -nanopore-raw | -nanopore-corrected] *fastq</code>
              
  
The p flag is required, as it has to do with the naming of the output files, while the d flag is not mandatory. Without it, canu will run within the current directory. However, since it is **not possible** to run two assemblies in one directory, it's best to fill this option with something unique. The s flag allows you to import parameters if you have them, which allows commonly used parameters to be replaced with ones specific to your assembly.
  
Parameters are written as 'something=value', and the most common ones are maxMemory and maxThreads. genomeSize is always required, while all others can be left at their defaults.
  
Below is an example script, which can be accessed at /home/gseaton/public_scripts:
<code>#!/bin/bash
#$ -S /bin/bash
. /etc/profile
...
'corMinCoverage=[number]' 'corOutCoverage=all' 'corMhapSensitivity=high' 'corMaxEvidenceErate=<0 to 1>' 'correctedErrorRate=<0 to 1>' 'genomeSize=<size>' 'corMaxEvidenceCoverageLocal=#' 'corMaxEvidenceCoverageGlobal=#' 'oeaMemory=#' 'batMemory=#' \
-nanopore-raw <path to the guppy basecalled, chopped fastq file> \
UseGrid=false</code>
  
//**Parameters explained:**//
  
<code>corMinCoverage</code> This governs the correction: if set to 0, the correction is non-lossy, such that the output is the same length as the input. This keeps errors, which will be removed later in the process. Default is 4.
  
<code>corOutCoverage</code> This can have an integer value or 'all'. A higher-than-your-total input coverage, or 'all', forces canu to correct and assemble all the input, which takes much longer to run. Default is 40.
  
<code>corMhapSensitivity</code> Settings: low/normal/high. Governs how sensitive Mhap is, in a rough fashion. It should be set based on the read coverage: >60 = low, 60 to 30 = normal, <30 = high.
  
<code>corMaxEvidenceErate</code> Setting this limits read correction to only overlaps at or below this value (between 0 and 1). Default is unlimited.
  
<code>correctedErrorRate</code> Setting is a fraction (between 0 and 1). Default depends on the technology used: 0.045 for PacBio, or 0.144 for nanopore reads. It is recommended that for low-coverage data the default value be increased by 1%, and for high coverage, decreased by 1%.
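Reading 'by 1%' as one percentage point (an interpretation, so double-check against the canu docs for your version), the low-coverage nanopore adjustment works out like this:

```shell
#!/bin/sh
# Nanopore default correctedErrorRate, bumped by one percentage point
# for low-coverage data
adjusted=$(awk 'BEGIN { printf "%.3f", 0.144 + 0.01 }')
echo "$adjusted"   # 0.154
```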
  
<code>genomeSize</code> This is the only **required parameter**. It has no default value, and is based on the size of the genome you're working with.
  
<code>corMaxEvidenceCoverageGlobal</code> Limits reads used for correction to support at most this coverage. Default is 1.0 times the estimated coverage.
  
<code>corMaxEvidenceCoverageLocal</code> Limits reads being corrected to at most this much evidence coverage. Default is 10 times the estimated coverage.
  
<code>[prefix]Memory</code> The memory limit, set in gigabytes; by default it is unset. Prefixes are used to set the memory limits for specific tasks:
  
Prefix:

<code>oea</code> error adjustment in overlaps

<code>red</code> error detection in reads

<code>bat</code> unitig/contig construction
  
A fuller list can be found here: [[https://canu.readthedocs.io/en/latest/parameter-reference.html#mhapsensitivity|parameter-reference]]