Wiki

Background

Ulas Karaoz from Lawrence Berkeley National Laboratory (one of Heather Jaspan's collaborators) has been advising me on how to set up a WGS pipeline for UCT, as this is his area of expertise. The pipeline is primarily aimed at profiling relatively complex metagenomic samples, both in terms of taxonomic composition and gene content. Accordingly, a study profiling infant stool samples was selected as the test dataset.

Starting material: software installed & input data

*Input data: https://www.ncbi.nlm.nih.gov/bioproject/PRJNA290380 (the relevant paper for this study is attached). This is a longitudinal study of stool samples from 11 infants; Ulas suggested selecting 1 or 2 infants with the maximum number of longitudinal samples (keeping the total below 8). The input data size is 3.25×10^10 base pairs (estimated with https://github.com/billzt/readfq ), i.e. about 30 GB of raw data, which Ulas estimates will require 50-100 GB of memory to assemble.
*Andrew (HPC) suggests running this on the hex high-memory machine, which consists of two nodes, each with 1 TB of memory
*For QC: FastQC
*Read trimming: Trimmomatic or cutadapt (cutadapt is the more intuitive of the two and is preferred)
*Co-assembly: Megahit (https://github.com/voutcn/megahit)
*Index the assembled contigs (Megahit output): bowtie2
*Map reads to the assembled contigs and compute per-contig coverage: bowtie2 + bedtools
*Prepare the input (coverage) file for CONCOCT: custom script from Ulas
*Binning (tetranucleotide- and coverage-based clustering): CONCOCT (https://github.com/BinPro/CONCOCT)
*Validate the bins using single-copy core genes: CheckM (http://ecogenomics.github.io/CheckM/)

Hedged example commands for the steps above are sketched below.
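
As a quick sanity check of the input size, total bases can also be counted with a plain awk one-liner (a generic alternative to readfq; the file glob is a placeholder):

<pre>
# Count total bases across gzipped FASTQ files (the sequence is every 4th line, starting at line 2)
zcat infant_stool_*.fastq.gz | awk 'NR % 4 == 2 {bases += length($0)} END {print bases " bp"}'
</pre>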
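
A minimal sketch of the QC and trimming steps, assuming paired-end gzipped FASTQ input; the sample names and adapter sequence are placeholders (check the library prep for the actual adapters):

<pre>
# Raw-read QC (one HTML report per input file)
mkdir -p qc_raw
fastqc -o qc_raw sampleA_R1.fastq.gz sampleA_R2.fastq.gz

# Adapter and quality trimming with cutadapt in paired-end mode
# -a/-A: 3' adapters on R1/R2 (placeholder sequence); -q 20: quality-trim 3' ends; -m 50: drop short reads
cutadapt \
    -a AGATCGGAAGAGC -A AGATCGGAAGAGC \
    -q 20 -m 50 \
    -o sampleA_R1.trimmed.fastq.gz -p sampleA_R2.trimmed.fastq.gz \
    sampleA_R1.fastq.gz sampleA_R2.fastq.gz
</pre>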
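
For the co-assembly, Megahit accepts all samples' reads as comma-separated lists; a sketch assuming two samples (names and parameter values are placeholders to be tuned):

<pre>
# Megahit co-assembly of all selected samples; output directory must not already exist
megahit \
    -1 sampleA_R1.trimmed.fastq.gz,sampleB_R1.trimmed.fastq.gz \
    -2 sampleA_R2.trimmed.fastq.gz,sampleB_R2.trimmed.fastq.gz \
    -t 16 --min-contig-len 1000 \
    -o megahit_out
# contigs land in megahit_out/final.contigs.fa
</pre>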
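
Indexing the co-assembly and mapping each sample back with bowtie2, then computing coverage with bedtools; samtools is assumed here for the SAM-to-sorted-BAM conversion:

<pre>
# Build a bowtie2 index from the co-assembled contigs
bowtie2-build megahit_out/final.contigs.fa contigs_idx

# Map one sample's trimmed reads and sort the alignments
bowtie2 -x contigs_idx -p 16 \
    -1 sampleA_R1.trimmed.fastq.gz -2 sampleA_R2.trimmed.fastq.gz \
    | samtools sort -o sampleA.sorted.bam -
samtools index sampleA.sorted.bam

# Per-base coverage over the contigs (raw material for the CONCOCT coverage table)
bedtools genomecov -ibam sampleA.sorted.bam -bga > sampleA.coverage.bedgraph
</pre>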
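
Binning with CONCOCT, assuming the per-sample coverage table (concoct_coverage_table.tsv is a placeholder name) has been produced from the bedtools output by the custom script mentioned above:

<pre>
# CONCOCT takes the contig FASTA plus a tab-separated table of
# per-contig coverage in each sample (here produced by the custom script)
concoct \
    --composition_file megahit_out/final.contigs.fa \
    --coverage_file concoct_coverage_table.tsv \
    -b concoct_out
</pre>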
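
Bin validation with CheckM's lineage workflow, assuming the CONCOCT bins have been written out as one FASTA file per bin in bins/ (a placeholder directory name):

<pre>
# Place single-copy marker genes and estimate completeness/contamination per bin
checkm lineage_wf -x fa -t 16 bins/ checkm_out/
</pre>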
