# Wiki
# Background
Ulas Karaoz from Lawrence Berkeley National Laboratory (one of Heather Jaspan's collaborators) has been advising me on how to set up a WGS pipeline for UCT, as this is his area of expertise. The pipeline is primarily aimed at profiling relatively complex metagenomic samples, both in terms of taxonomic composition and gene content. Accordingly, a study profiling infant stool samples was selected as the test dataset.
# Test pipeline (from Ulas Karaoz): starting material, software installed & input data
* Input data: https://www.ncbi.nlm.nih.gov/bioproject/PRJNA290380 (the relevant paper for this study is attached). This is a longitudinal study of stool samples from 11 infants; Ulas suggested selecting 1 or 2 infants with the maximum number of longitudinal samples, keeping the total number of samples under 8. The input data size is 3.25 × 10^10 base pairs (estimated with https://github.com/billzt/readfq), which is about 30 GB raw; Ulas estimates assembly will require 50-100 GB of memory. (Download sketch after this list.)
* Andrew (HPC) suggests running this on the hex high-memory machine, which consists of two nodes, each with 1 TB of memory.
* QC: FastQC, based on the scripts from our 16S pipeline: requires a base script (fastqc.single.sh), a batch script (fastqc.batch.sh), a config file and a file listing all files to be checked (with full file paths). (Sketch below.)
* Read trimming: Trimmomatic (Cutadapt is perhaps a better option: it is more intuitive, whereas Trimmomatic can behave unpredictably). (Sketch below.)
* Co-assembly: MEGAHIT (https://github.com/voutcn/megahit). (Sketch below.)
* Index the assembled contigs (MEGAHIT output): Bowtie2.
* Map reads back to the assembled scaffolds and summarise per-contig coverage: bedtools. (Mapping and coverage sketch below.)
* Prepare the input files for CONCOCT: custom script from Ulas (inputs = coverage table + contigs file).
* Binning (tetranucleotide-frequency and coverage-based clustering): CONCOCT (https://github.com/BinPro/CONCOCT). (Sketch below.)
* Evaluate bins visually with the R script ClusterPlot.R (supplied with CONCOCT).
* Validate binning using single-copy core genes: CheckM (http://ecogenomics.github.io/CheckM/). (Sketch below.)
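
The command sketches below correspond to the steps above. They are hedged outlines rather than the final scripts: run accessions, sample names, adapter sequences, thread counts, memory settings and paths are placeholders/assumptions. First, a sketch of pulling a subset of runs from the BioProject with sra-tools (the specific SRR accessions for the chosen infants still need to be looked up in the SRA Run Selector).

```bash
# Fetch selected runs from BioProject PRJNA290380 with sra-tools.
# The SRR accessions below are placeholders - pick the runs for the 1-2 chosen infants.
for acc in SRRXXXXXXX SRRYYYYYYY; do
    prefetch "$acc"                                     # download the .sra archive
    fasterq-dump --split-files -O raw_reads "$acc"      # convert to paired FASTQ
done
gzip raw_reads/*.fastq                                  # compress to save space
```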
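
QC step: a minimal FastQC invocation corresponding to what the fastqc.single.sh/fastqc.batch.sh scripts would run per file; the output directory and thread count are assumptions.

```bash
# Run FastQC on all raw read files; HTML/zip reports go to qc_fastqc/
mkdir -p qc_fastqc
fastqc --threads 4 --outdir qc_fastqc raw_reads/*_1.fastq.gz raw_reads/*_2.fastq.gz
```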
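
Read trimming: a sketch using Cutadapt (the option the list above leans towards). The adapter sequence, quality cut-off and minimum length are assumptions to be adjusted for the actual library prep.

```bash
# Adapter + quality trimming for one paired-end sample with Cutadapt.
# AGATCGGAAGAGC is the generic Illumina adapter prefix - adjust for the actual kit.
mkdir -p trimmed
cutadapt \
    -a AGATCGGAAGAGC -A AGATCGGAAGAGC \
    -q 20,20 --minimum-length 50 \
    -o trimmed/sample_1.fastq.gz -p trimmed/sample_2.fastq.gz \
    raw_reads/sample_1.fastq.gz raw_reads/sample_2.fastq.gz
```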
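
Co-assembly: a MEGAHIT sketch that co-assembles all trimmed samples by passing comma-separated read lists; the minimum contig length and thread count are assumptions for the hex high-memory nodes.

```bash
# Co-assemble all trimmed samples with MEGAHIT (comma-separated R1/R2 lists).
R1=$(ls trimmed/*_1.fastq.gz | paste -sd, -)
R2=$(ls trimmed/*_2.fastq.gz | paste -sd, -)
megahit -1 "$R1" -2 "$R2" \
    --min-contig-len 1000 \
    -t 16 \
    -o megahit_coassembly        # contigs end up in megahit_coassembly/final.contigs.fa
```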
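
Indexing, mapping and coverage: a sketch that indexes the MEGAHIT contigs with Bowtie2, maps each sample back with Bowtie2/samtools, and summarises coverage with bedtools genomecov. The per-sample loop, file naming and thread count are assumptions; the CONCOCT coverage table itself is built from these outputs by the custom script from Ulas.

```bash
# Index the co-assembly once.
bowtie2-build megahit_coassembly/final.contigs.fa assembly_index

# Map each trimmed sample back to the contigs and compute coverage.
mkdir -p mapping coverage
for r1 in trimmed/*_1.fastq.gz; do
    sample=$(basename "$r1" _1.fastq.gz)
    bowtie2 -x assembly_index -p 8 \
        -1 "trimmed/${sample}_1.fastq.gz" -2 "trimmed/${sample}_2.fastq.gz" \
        | samtools view -bS - \
        | samtools sort -o "mapping/${sample}.sorted.bam" -
    samtools index "mapping/${sample}.sorted.bam"
    # Per-contig coverage histogram; the custom script (not shown) turns these
    # into the coverage table that CONCOCT expects.
    bedtools genomecov -ibam "mapping/${sample}.sorted.bam" > "coverage/${sample}.genomecov.txt"
done
```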
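
Binning: a CONCOCT sketch. The coverage table is assumed to be the output of Ulas's custom script (not shown); extract_fasta_bins.py is assumed to be the helper shipped with CONCOCT that writes one FASTA file per bin, which the CheckM step below needs.

```bash
# Run CONCOCT on the contigs plus the coverage table produced by the custom script.
mkdir -p concoct_output
concoct \
    --composition_file megahit_coassembly/final.contigs.fa \
    --coverage_file coverage_table.tsv \
    -b concoct_output/

# Bins can then be inspected visually with ClusterPlot.R (supplied with CONCOCT).

# Write one FASTA per bin for downstream validation with CheckM.
mkdir -p bins
extract_fasta_bins.py megahit_coassembly/final.contigs.fa \
    concoct_output/clustering_gt1000.csv --output_path bins/
```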
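
Bin validation: a CheckM lineage workflow sketch over the extracted bins (the bins/ directory and .fa extension follow from the previous step; the thread count is an assumption).

```bash
# Assess completeness/contamination of each bin using single-copy marker genes.
checkm lineage_wf -x fa -t 8 bins/ checkm_output/
```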
# Methods research - for potential future implementation
* Metagenomics research and software development are progressing rapidly, with several tools available for each step in the pipeline and no clear gold standards.
* "Critically, methodological improvements are difficult to gauge due to the lack of a general standard for comparison." This issue is currently being addressed by a community-driven initiative, the Critical Assessment of Metagenome Interpretation (CAMI), whose aim is an independent, comprehensive and bias-free evaluation of methods: https://www.nature.com/articles/nmeth.4458.pdf and https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4607242/pdf/13742_2015_Article_87.pdf
* CAMI requires software containerization and standardization of user interfaces (using Docker and bioboxes). See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4607242/pdf/13742_2015_Article_87.pdf for details on bioboxes.
* Current CAMI recommendations on methods:
> * Assembly (tested MEGAHIT, Minia, Meraga (Meraculous + MEGAHIT), A* (using the OperaMS Scaffolder), Ray Meta and Velour) - recommends MEGAHIT, Minia or Meraga
> * Binning (tested MyCC, MaxBin 2.0, MetaBAT, MetaWatt 3.5, CONCOCT, PhyloPythiaS+, taxator-tk, MEGAN6, and Kraken) - "MetaWatt 3.5, followed by MaxBin 2.0, recovered the most genomes with high purity and completeness from all data sets"
> * Taxonomic profiling (tested CLARK; Common Kmers (an early version of MetaPalette); DUDes; FOCUS; MetaPhlAn 2.0; MetaPhyler; mOTU; a combination of Quikr, ARK and SEK (abbreviated Quikr); Taxy-Pro; and TIPP) - "On the basis of the average of precision and recall, over all samples and taxonomic ranks, Taxy-Pro version 0 (mean = 0.616), MetaPhlAn 2.0 (mean = 0.603) and DUDes version 0 (mean = 0.596) performed best."