Project FOMC0000_Endo includes NGS sequencing of the V3-V4 region of the 16S rRNA gene amplicons from the samples. First and foremost, please
download this report, as well as the raw sequence data, from the download links provided below.
These links will expire after 60 days. We cannot guarantee the availability of your data after 60 days.
The full bioinformatics analysis service was requested. We provide many analyses, starting with raw sequence quality and noise filtering, read-pair merging, and chimera filtering, using the
DADA2 denoising algorithm and pipeline.
We also provide many downstream analyses, such as taxonomy assignment, alpha and beta diversity analyses, and differential abundance analysis.
For taxonomy assignment, the most informative output is the taxonomy bar plots. We provide interactive bar plots showing the relative abundance of microbes at the taxonomy level of your choice (from phylum to species).
If you specify which groups of samples you want to compare for differential abundance, we provide both ANCOM and LEfSe differential abundance analyses.
The samples were processed and analyzed with the ZymoBIOMICS® Service: Targeted
Metagenomic Sequencing (Zymo Research, Irvine, CA).
DNA Extraction: If DNA extraction was performed, one of three different DNA
extraction kits was used, depending on the sample type and sample volume, according to the
manufacturer’s instructions unless otherwise stated. The kit used
in this project is marked below:
☐ ZymoBIOMICS® DNA Miniprep Kit (Zymo Research, Irvine, CA)
☐ ZymoBIOMICS® DNA Microprep Kit (Zymo Research, Irvine, CA)
☐ ZymoBIOMICS®-96 MagBead DNA Kit (Zymo Research, Irvine, CA)
☑ N/A (DNA Extraction Not Performed)
Elution Volume: 50µL
Additional Notes: NA
Targeted Library Preparation: The DNA samples were prepared for targeted
sequencing with the Quick-16S™ NGS Library Prep Kit (Zymo Research, Irvine, CA).
These primers were custom designed by Zymo Research to provide the best coverage
of the 16S gene while maintaining high sensitivity. The primer sets used in this project
are marked below:
☐ Quick-16S™ Primer Set V1-V2 (Zymo Research, Irvine, CA)
☐ Quick-16S™ Primer Set V1-V3 (Zymo Research, Irvine, CA)
☑ Quick-16S™ Primer Set V3-V4 (Zymo Research, Irvine, CA)
☐ Quick-16S™ Primer Set V4 (Zymo Research, Irvine, CA)
☐ Quick-16S™ Primer Set V6-V8 (Zymo Research, Irvine, CA)
☐ Other: NA
Additional Notes: NA
The sequencing library was prepared using an innovative library preparation process in
which PCR reactions were performed in real-time PCR machines to control cycles and
therefore limit PCR chimera formation. The final PCR products were quantified with
qPCR fluorescence readings and pooled together based on equal molarity. The final
pooled library was cleaned up with the Select-a-Size DNA Clean & Concentrator™
(Zymo Research, Irvine, CA), then quantified with TapeStation® (Agilent Technologies,
Santa Clara, CA) and Qubit® (Thermo Fisher Scientific, Waltham, MA).
Control Samples: The ZymoBIOMICS® Microbial Community Standard (Zymo
Research, Irvine, CA) was used as a positive control for each DNA extraction, if
performed. The ZymoBIOMICS® Microbial Community DNA Standard (Zymo Research,
Irvine, CA) was used as a positive control for each targeted library preparation.
Negative controls (i.e. blank extraction control, blank library preparation control) were
included to assess the level of bioburden carried by the wet-lab process.
Sequencing: The final library was sequenced on Illumina® MiSeq™ with a V3 reagent kit
(600 cycles). The sequencing was performed with 10% PhiX spike-in.
Absolute Abundance Quantification*: A quantitative real-time PCR was set up with a
standard curve. The standard curve was made with plasmid DNA containing one copy
of the 16S gene and one copy of the fungal ITS2 region prepared in 10-fold serial
dilutions. The primers used were the same as those used in Targeted Library
Preparation. The equation generated by the plasmid DNA standard curve was used to
calculate the number of gene copies in the reaction for each sample. The PCR input
volume (2 µl) was used to calculate the number of gene copies per microliter in each
DNA sample.
The number of genome copies per microliter DNA sample was calculated by dividing
the gene copy number by an assumed number of gene copies per genome. The value
used for 16S copies per genome is 4. The value used for ITS copies per genome is 200.
The amount of DNA per microliter DNA sample was calculated using an assumed
genome size of 4.64 × 10^6 bp, the genome size of Escherichia coli, for 16S samples, or
an assumed genome size of 1.20 × 10^7 bp, the genome size of Saccharomyces
cerevisiae, for ITS samples. This calculation is shown below:
Calculated Total DNA = Calculated Total Genome Copies × Assumed Genome Size (4.64 × 10^6 bp) × Average Molecular Weight of a DNA bp (660 g/mol/bp) ÷ Avogadro’s Number (6.022 × 10^23/mol)
* Absolute Abundance Quantification is only available for 16S and ITS analyses.
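As a small illustration of the arithmetic above, the sketch below converts a qPCR-derived 16S gene copy number into genome copies and total DNA mass for a 16S sample. The constants follow the text above; the qPCR copy number itself is a hypothetical example, not a value from this project.

```python
# Convert a qPCR-derived 16S gene copy number into genome copies and DNA mass.
# The qPCR value below is a hypothetical example; the constants follow the text above.

AVOGADRO = 6.022e23          # molecules per mole
BP_WEIGHT = 660              # g per mole per base pair
GENOME_SIZE_16S = 4.64e6     # assumed genome size (E. coli), bp
COPIES_PER_GENOME_16S = 4    # assumed 16S copies per genome
PCR_INPUT_UL = 2             # PCR input volume, microliters

gene_copies_per_reaction = 1.2e6   # hypothetical value read off the standard curve

gene_copies_per_ul = gene_copies_per_reaction / PCR_INPUT_UL
genome_copies_per_ul = gene_copies_per_ul / COPIES_PER_GENOME_16S

# Calculated Total DNA = genome copies x genome size x 660 g/mol/bp / Avogadro's number
grams_per_ul = genome_copies_per_ul * GENOME_SIZE_16S * BP_WEIGHT / AVOGADRO

print(f"{gene_copies_per_ul:.3e} 16S gene copies/uL")
print(f"{genome_copies_per_ul:.3e} genome copies/uL")
print(f"{grams_per_ul * 1e9:.4f} ng DNA/uL")
```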
The absolute abundance standard curve data can be viewed in Excel here:
The absolute abundance standard curve is shown below:
The complete report for your project, including all links in this report, can be downloaded by clicking the link provided below. The downloaded file is a compressed ZIP file; once it is unzipped, open the file “REPORT.html” (which may be shown only as “REPORT” on your computer) by double-clicking it. Your default web browser will open it and you will see the exact content of this report.
Please download and save the file to your computer storage device. The download link will expire 60 days after you receive this report.
Complete report download link:
To view the report, please follow these steps:
1. Download the .zip file from the report link above.
2. Extract all the contents of the downloaded .zip file to your desktop.
3. Open the extracted folder and find "REPORT.html" (it may be shown only as "REPORT").
4. Open the REPORT.html file by double-clicking it. Your default browser will open the top page of the complete report. Within the
report, there are links to view all the analyses performed for the project.
The raw NGS sequence data are available for download with the link provided below. The data are delivered as a compressed ZIP file that can be unzipped to individual sequence files.
Since this is paired-end sequencing, each of your samples is represented by two sequence files: one for READ 1,
with the file suffix “*_R1.fastq.gz”, and another for READ 2, with the file suffix “*_R2.fastq.gz”.
The files are in FASTQ format and are compressed. FASTQ is a text-based format for storing both a biological sequence
and its corresponding quality scores. Most sequence analysis software will be able to open them.
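For orientation, here is a minimal sketch of how the paired files can be enumerated and the read pairs counted with only the Python standard library. The directory name and file pattern are assumptions based on the file table that follows; any FASTQ-aware toolkit can be used instead.

```python
# List R1/R2 file pairs and count reads in a gzipped FASTQ (4 lines per record).
# The "raw_data" directory and naming pattern are assumptions based on the table below.
import glob
import gzip
import os

def count_reads(fastq_gz):
    """Count records in a gzipped FASTQ file (one record = 4 lines)."""
    with gzip.open(fastq_gz, "rt") as handle:
        return sum(1 for _ in handle) // 4

for r1 in sorted(glob.glob("raw_data/*_R1.fastq.gz")):
    r2 = r1.replace("_R1.fastq.gz", "_R2.fastq.gz")
    if os.path.exists(r2):
        print(os.path.basename(r1), count_reads(r1), "read pairs")
```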
The Sample IDs associated with the R1 and R2 fastq files are listed in the table below:
Sample ID | Original Sample ID | Read 1 File Name | Read 2 File Name
F0000.S10 | original sample ID here | zr0000_10_V3V4_R1.fastq.gz | zr0000_10_V3V4_R2.fastq.gz
F0000.S11 | original sample ID here | zr0000_11_V3V4_R1.fastq.gz | zr0000_11_V3V4_R2.fastq.gz
F0000.S12 | original sample ID here | zr0000_12_V3V4_R1.fastq.gz | zr0000_12_V3V4_R2.fastq.gz
F0000.S13 | original sample ID here | zr0000_13_V3V4_R1.fastq.gz | zr0000_13_V3V4_R2.fastq.gz
F0000.S14 | original sample ID here | zr0000_14_V3V4_R1.fastq.gz | zr0000_14_V3V4_R2.fastq.gz
F0000.S15 | original sample ID here | zr0000_15_V3V4_R1.fastq.gz | zr0000_15_V3V4_R2.fastq.gz
F0000.S16 | original sample ID here | zr0000_16_V3V4_R1.fastq.gz | zr0000_16_V3V4_R2.fastq.gz
F0000.S17 | original sample ID here | zr0000_17_V3V4_R1.fastq.gz | zr0000_17_V3V4_R2.fastq.gz
F0000.S18 | original sample ID here | zr0000_18_V3V4_R1.fastq.gz | zr0000_18_V3V4_R2.fastq.gz
F0000.S19 | original sample ID here | zr0000_19_V3V4_R1.fastq.gz | zr0000_19_V3V4_R2.fastq.gz
F0000.S01 | original sample ID here | zr0000_1_V3V4_R1.fastq.gz | zr0000_1_V3V4_R2.fastq.gz
F0000.S20 | original sample ID here | zr0000_20_V3V4_R1.fastq.gz | zr0000_20_V3V4_R2.fastq.gz
F0000.S21 | original sample ID here | zr0000_21_V3V4_R1.fastq.gz | zr0000_21_V3V4_R2.fastq.gz
F0000.S22 | original sample ID here | zr0000_22_V3V4_R1.fastq.gz | zr0000_22_V3V4_R2.fastq.gz
F0000.S23 | original sample ID here | zr0000_23_V3V4_R1.fastq.gz | zr0000_23_V3V4_R2.fastq.gz
F0000.S24 | original sample ID here | zr0000_24_V3V4_R1.fastq.gz | zr0000_24_V3V4_R2.fastq.gz
F0000.S25 | original sample ID here | zr0000_25_V3V4_R1.fastq.gz | zr0000_25_V3V4_R2.fastq.gz
F0000.S26 | original sample ID here | zr0000_26_V3V4_R1.fastq.gz | zr0000_26_V3V4_R2.fastq.gz
F0000.S27 | original sample ID here | zr0000_27_V3V4_R1.fastq.gz | zr0000_27_V3V4_R2.fastq.gz
F0000.S28 | original sample ID here | zr0000_28_V3V4_R1.fastq.gz | zr0000_28_V3V4_R2.fastq.gz
F0000.S29 | original sample ID here | zr0000_29_V3V4_R1.fastq.gz | zr0000_29_V3V4_R2.fastq.gz
F0000.S02 | original sample ID here | zr0000_2_V3V4_R1.fastq.gz | zr0000_2_V3V4_R2.fastq.gz
F0000.S30 | original sample ID here | zr0000_30_V3V4_R1.fastq.gz | zr0000_30_V3V4_R2.fastq.gz
F0000.S31 | original sample ID here | zr0000_31_V3V4_R1.fastq.gz | zr0000_31_V3V4_R2.fastq.gz
F0000.S32 | original sample ID here | zr0000_32_V3V4_R1.fastq.gz | zr0000_32_V3V4_R2.fastq.gz
F0000.S33 | original sample ID here | zr0000_33_V3V4_R1.fastq.gz | zr0000_33_V3V4_R2.fastq.gz
F0000.S34 | original sample ID here | zr0000_34_V3V4_R1.fastq.gz | zr0000_34_V3V4_R2.fastq.gz
F0000.S35 | original sample ID here | zr0000_35_V3V4_R1.fastq.gz | zr0000_35_V3V4_R2.fastq.gz
F0000.S36 | original sample ID here | zr0000_36_V3V4_R1.fastq.gz | zr0000_36_V3V4_R2.fastq.gz
F0000.S37 | original sample ID here | zr0000_37_V3V4_R1.fastq.gz | zr0000_37_V3V4_R2.fastq.gz
F0000.S38 | original sample ID here | zr0000_38_V3V4_R1.fastq.gz | zr0000_38_V3V4_R2.fastq.gz
F0000.S39 | original sample ID here | zr0000_39_V3V4_R1.fastq.gz | zr0000_39_V3V4_R2.fastq.gz
F0000.S03 | original sample ID here | zr0000_3_V3V4_R1.fastq.gz | zr0000_3_V3V4_R2.fastq.gz
F0000.S40 | original sample ID here | zr0000_40_V3V4_R1.fastq.gz | zr0000_40_V3V4_R2.fastq.gz
F0000.S41 | original sample ID here | zr0000_41_V3V4_R1.fastq.gz | zr0000_41_V3V4_R2.fastq.gz
F0000.S42 | original sample ID here | zr0000_42_V3V4_R1.fastq.gz | zr0000_42_V3V4_R2.fastq.gz
F0000.S43 | original sample ID here | zr0000_43_V3V4_R1.fastq.gz | zr0000_43_V3V4_R2.fastq.gz
F0000.S44 | original sample ID here | zr0000_44_V3V4_R1.fastq.gz | zr0000_44_V3V4_R2.fastq.gz
F0000.S45 | original sample ID here | zr0000_45_V3V4_R1.fastq.gz | zr0000_45_V3V4_R2.fastq.gz
F0000.S46 | original sample ID here | zr0000_46_V3V4_R1.fastq.gz | zr0000_46_V3V4_R2.fastq.gz
F0000.S47 | original sample ID here | zr0000_47_V3V4_R1.fastq.gz | zr0000_47_V3V4_R2.fastq.gz
F0000.S48 | original sample ID here | zr0000_48_V3V4_R1.fastq.gz | zr0000_48_V3V4_R2.fastq.gz
F0000.S49 | original sample ID here | zr0000_49_V3V4_R1.fastq.gz | zr0000_49_V3V4_R2.fastq.gz
F0000.S04 | original sample ID here | zr0000_4_V3V4_R1.fastq.gz | zr0000_4_V3V4_R2.fastq.gz
F0000.S50 | original sample ID here | zr0000_50_V3V4_R1.fastq.gz | zr0000_50_V3V4_R2.fastq.gz
F0000.S51 | original sample ID here | zr0000_51_V3V4_R1.fastq.gz | zr0000_51_V3V4_R2.fastq.gz
F0000.S52 | original sample ID here | zr0000_52_V3V4_R1.fastq.gz | zr0000_52_V3V4_R2.fastq.gz
F0000.S53 | original sample ID here | zr0000_53_V3V4_R1.fastq.gz | zr0000_53_V3V4_R2.fastq.gz
F0000.S54 | original sample ID here | zr0000_54_V3V4_R1.fastq.gz | zr0000_54_V3V4_R2.fastq.gz
F0000.S55 | original sample ID here | zr0000_55_V3V4_R1.fastq.gz | zr0000_55_V3V4_R2.fastq.gz
F0000.S56 | original sample ID here | zr0000_56_V3V4_R1.fastq.gz | zr0000_56_V3V4_R2.fastq.gz
F0000.S57 | original sample ID here | zr0000_57_V3V4_R1.fastq.gz | zr0000_57_V3V4_R2.fastq.gz
F0000.S58 | original sample ID here | zr0000_58_V3V4_R1.fastq.gz | zr0000_58_V3V4_R2.fastq.gz
F0000.S59 | original sample ID here | zr0000_59_V3V4_R1.fastq.gz | zr0000_59_V3V4_R2.fastq.gz
F0000.S05 | original sample ID here | zr0000_5_V3V4_R1.fastq.gz | zr0000_5_V3V4_R2.fastq.gz
F0000.S60 | original sample ID here | zr0000_60_V3V4_R1.fastq.gz | zr0000_60_V3V4_R2.fastq.gz
F0000.S61 | original sample ID here | zr0000_61_V3V4_R1.fastq.gz | zr0000_61_V3V4_R2.fastq.gz
F0000.S62 | original sample ID here | zr0000_62_V3V4_R1.fastq.gz | zr0000_62_V3V4_R2.fastq.gz
F0000.S63 | original sample ID here | zr0000_63_V3V4_R1.fastq.gz | zr0000_63_V3V4_R2.fastq.gz
F0000.S64 | original sample ID here | zr0000_64_V3V4_R1.fastq.gz | zr0000_64_V3V4_R2.fastq.gz
F0000.S65 | original sample ID here | zr0000_65_V3V4_R1.fastq.gz | zr0000_65_V3V4_R2.fastq.gz
F0000.S66 | original sample ID here | zr0000_66_V3V4_R1.fastq.gz | zr0000_66_V3V4_R2.fastq.gz
F0000.S67 | original sample ID here | zr0000_67_V3V4_R1.fastq.gz | zr0000_67_V3V4_R2.fastq.gz
F0000.S68 | original sample ID here | zr0000_68_V3V4_R1.fastq.gz | zr0000_68_V3V4_R2.fastq.gz
F0000.S69 | original sample ID here | zr0000_69_V3V4_R1.fastq.gz | zr0000_69_V3V4_R2.fastq.gz
F0000.S06 | original sample ID here | zr0000_6_V3V4_R1.fastq.gz | zr0000_6_V3V4_R2.fastq.gz
F0000.S70 | original sample ID here | zr0000_70_V3V4_R1.fastq.gz | zr0000_70_V3V4_R2.fastq.gz
F0000.S71 | original sample ID here | zr0000_71_V3V4_R1.fastq.gz | zr0000_71_V3V4_R2.fastq.gz
F0000.S72 | original sample ID here | zr0000_72_V3V4_R1.fastq.gz | zr0000_72_V3V4_R2.fastq.gz
F0000.S73 | original sample ID here | zr0000_73_V3V4_R1.fastq.gz | zr0000_73_V3V4_R2.fastq.gz
F0000.S74 | original sample ID here | zr0000_74_V3V4_R1.fastq.gz | zr0000_74_V3V4_R2.fastq.gz
F0000.S75 | original sample ID here | zr0000_75_V3V4_R1.fastq.gz | zr0000_75_V3V4_R2.fastq.gz
F0000.S76 | original sample ID here | zr0000_76_V3V4_R1.fastq.gz | zr0000_76_V3V4_R2.fastq.gz
F0000.S77 | original sample ID here | zr0000_77_V3V4_R1.fastq.gz | zr0000_77_V3V4_R2.fastq.gz
F0000.S78 | original sample ID here | zr0000_78_V3V4_R1.fastq.gz | zr0000_78_V3V4_R2.fastq.gz
F0000.S79 | original sample ID here | zr0000_79_V3V4_R1.fastq.gz | zr0000_79_V3V4_R2.fastq.gz
F0000.S07 | original sample ID here | zr0000_7_V3V4_R1.fastq.gz | zr0000_7_V3V4_R2.fastq.gz
F0000.S80 | original sample ID here | zr0000_80_V3V4_R1.fastq.gz | zr0000_80_V3V4_R2.fastq.gz
F0000.S81 | original sample ID here | zr0000_81_V3V4_R1.fastq.gz | zr0000_81_V3V4_R2.fastq.gz
F0000.S82 | original sample ID here | zr0000_82_V3V4_R1.fastq.gz | zr0000_82_V3V4_R2.fastq.gz
F0000.S83 | original sample ID here | zr0000_83_V3V4_R1.fastq.gz | zr0000_83_V3V4_R2.fastq.gz
F0000.S84 | original sample ID here | zr0000_84_V3V4_R1.fastq.gz | zr0000_84_V3V4_R2.fastq.gz
F0000.S85 | original sample ID here | zr0000_85_V3V4_R1.fastq.gz | zr0000_85_V3V4_R2.fastq.gz
F0000.S86 | original sample ID here | zr0000_86_V3V4_R1.fastq.gz | zr0000_86_V3V4_R2.fastq.gz
F0000.S87 | original sample ID here | zr0000_87_V3V4_R1.fastq.gz | zr0000_87_V3V4_R2.fastq.gz
F0000.S88 | original sample ID here | zr0000_88_V3V4_R1.fastq.gz | zr0000_88_V3V4_R2.fastq.gz
F0000.S89 | original sample ID here | zr0000_89_V3V4_R1.fastq.gz | zr0000_89_V3V4_R2.fastq.gz
F0000.S08 | original sample ID here | zr0000_8_V3V4_R1.fastq.gz | zr0000_8_V3V4_R2.fastq.gz
F0000.S90 | original sample ID here | zr0000_90_V3V4_R1.fastq.gz | zr0000_90_V3V4_R2.fastq.gz
F0000.S09 | original sample ID here | zr0000_9_V3V4_R1.fastq.gz | zr0000_9_V3V4_R2.fastq.gz
Please download and save the file to your computer storage device. The download link will expire 60 days after you receive this report.
DADA2 is a software package that models and corrects Illumina-sequenced amplicon errors.
DADA2 infers sample sequences exactly, without coarse-graining into OTUs,
and resolves differences of as little as one nucleotide. DADA2 has been shown to identify more real variants
and output fewer spurious sequences than other methods.
DADA2's advantage is that it uses more of the data. The DADA2 error model incorporates quality information,
which is ignored by other methods after filtering. The DADA2 error model incorporates quantitative abundances,
whereas most other methods use abundance ranks, if they use abundance at all.
The DADA2 error model identifies the specific differences between sequences, e.g., A→C,
whereas other methods merely count the mismatches. DADA2 can parameterize its error model from the data itself,
rather than relying on previous datasets that may or may not reflect the PCR and sequencing protocols used in your study.
The DADA2 pipeline includes several tools for read quality control, including quality filtering, trimming, denoising, pair merging, and chimera filtering. Below are the major processing steps of DADA2:
Step 1. Read trimming based on sequence quality
The quality of Illumina NGS sequences often decreases toward the end of the reads.
DADA2 allows trimming of the poor-quality read ends in order to improve the error
model building and read-pair merging performance.
Step 2. Learn the Error Rates
The DADA2 algorithm makes use of a parametric error model (err) and every
amplicon dataset has a different set of error rates. The learnErrors method
learns this error model from the data, by alternating estimation of the error
rates and inference of sample composition until they converge on a jointly
consistent solution. As in many machine-learning problems, the algorithm must
begin with an initial guess, for which the maximum possible error rates in
this data are used (the error rates if only the most abundant sequence is
correct and all the rest are errors).
Step 3. Infer amplicon sequence variants (ASVs) based on the error model built in the previous step. This step is also called sequence "denoising".
The outcome of this step is a list of ASVs, which are the higher-resolution analogue of traditional OTUs.
Step 4. Merge paired reads. If the sequencing products are read pairs, DADA2 will merge the R1 and R2 ASVs into single sequences.
Merging is performed by aligning the denoised forward reads with the reverse-complement of the corresponding
denoised reverse reads, and then constructing the merged “contig” sequences.
By default, merged sequences are only output if the forward and reverse reads overlap by
at least 12 bases, and are identical to each other in the overlap region (but these conditions can be changed via function arguments).
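As a rough illustration of this default merging rule (an exact overlap of at least 12 bases), here is a simplified sketch in Python. It mirrors the rule described above, not DADA2's actual mergePairs implementation, which works on the denoised sequences and supports additional options such as mismatch tolerance.

```python
# Simplified illustration of pair merging: join a forward ASV and the
# reverse-complement of its reverse ASV when they share an exact overlap
# of at least `min_overlap` bases. This mirrors the default rule described
# above; it is not DADA2's actual implementation.

def reverse_complement(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def merge_pair(fwd, rev, min_overlap=12):
    rev_rc = reverse_complement(rev)
    # Try the longest possible overlap first.
    for k in range(min(len(fwd), len(rev_rc)), min_overlap - 1, -1):
        if fwd[-k:] == rev_rc[:k]:
            return fwd + rev_rc[k:]       # merged "contig"
    return None                           # no acceptable overlap -> pair rejected

# Toy example: the two reads overlap by exactly 12 identical bases.
print(merge_pair("AAAACCCCGGGGTTTTAC", "ACGTACGTAAAACCCCGG"))
# -> AAAACCCCGGGGTTTTACGTACGT
```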
Step 5. Remove chimeras.
The core dada method corrects substitution and indel errors, but chimeras remain. Fortunately, the accuracy of the sequence variants
after denoising makes identifying chimeric ASVs simpler than it is for fuzzy OTUs.
Chimeric sequences are identified if they can be exactly reconstructed by
combining a left segment and a right segment from two more abundant "parent" sequences. The frequency of chimeric sequences varies substantially
from dataset to dataset, and depends on factors including experimental procedures and sample complexity.
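The "exact reconstruction from two parents" idea can be sketched as follows. This is a simplified, equal-length illustration only; DADA2's isBimeraDenovo aligns sequences and handles length differences.

```python
# Simplified chimera (bimera) test: flag a sequence if it can be built exactly
# from the left part of one more-abundant "parent" ASV and the right part of
# another. Equal sequence lengths are assumed here for simplicity.

def is_bimera(seq, parents):
    n = len(seq)
    candidates = [p for p in parents if len(p) == n and p != seq]
    for left in candidates:
        for right in candidates:
            if left == right:
                continue
            for i in range(1, n):
                if seq[:i] == left[:i] and seq[i:] == right[i:]:
                    return True
    return False

parents = ["AAAAAAAACCCCCCCC", "GGGGGGGGTTTTTTTT"]
print(is_bimera("AAAAAAAATTTTTTTT", parents))   # True: left half of one + right half of the other
print(is_bimera("AAAACCCCGGGGTTTT", parents))   # False: cannot be reconstructed from the parents
```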
Results
1. Read Quality Plots
NGS sequence analysis starts with visualizing the quality of the sequencing run. Below are the quality plots of the first
sample, for the R1 and R2 reads separately. The gray-scale heat map shows the frequency of each quality score at each base position. The mean
quality score at each position is shown by the green line, and the quartiles of the quality score distribution by the orange lines.
The forward reads are usually of better quality. It is common practice to trim the last few nucleotides to avoid the less well-controlled errors
that can arise there. The trimming affects the downstream steps, including error model building, merging, and chimera calling. FOMC uses an empirical
approach that tests many combinations of trim lengths in order to achieve the best final amplicon sequence variants (ASVs); see the next
section, “Optimal trim length for ASVs”.
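A minimal sketch of how such a quality profile can be computed directly from a gzipped FASTQ file is shown below; Phred+33 encoding is assumed and the file name is a placeholder.

```python
# Compute the mean Phred quality score at each read position from a gzipped
# FASTQ file (Phred+33 encoding assumed). The file name is a placeholder.
import gzip

def mean_quality_by_position(fastq_gz, max_reads=10000):
    sums, counts = [], []
    with gzip.open(fastq_gz, "rt") as handle:
        for rec_no, record in enumerate(zip(*[handle] * 4)):   # 4 lines per record
            if rec_no >= max_reads:
                break
            qual = record[3].rstrip("\n")
            for pos, ch in enumerate(qual):
                if pos >= len(sums):
                    sums.append(0)
                    counts.append(0)
                sums[pos] += ord(ch) - 33
                counts[pos] += 1
    return [s / c for s, c in zip(sums, counts)]

profile = mean_quality_by_position("zr0000_1_V3V4_R1.fastq.gz")
for pos in range(0, len(profile), 50):
    print(f"position {pos + 1}: mean Q = {profile[pos]:.1f}")
```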
2. Optimal trim length for ASVs
The final number of merged and chimera-filtered ASVs depends on the quality filtering (and hence trimming) at the very beginning of the DADA2 pipeline.
In order to achieve the highest number of ASVs, an empirical approach was used (a sketch of the selection step is shown after this list):
- Create a random subset of each sample consisting of 5,000 R1 and 5,000 R2 reads (to reduce computation time)
- Trim 10 bases at a time from the ends of both R1 and R2, up to 50 bases
- For each combination of trimmed lengths (e.g., 300x300, 300x290, 290x290, etc.), subject the trimmed reads
to the entire DADA2 pipeline to obtain chimera-filtered merged ASVs
- Select the combination with the highest percentage of input reads becoming final ASVs for the complete dataset
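The selection step itself is simple: pick the trim-length pair with the highest surviving-read percentage. The sketch below does this for a subset of the values from the table that follows (the per-combination percentages come from running the subsampled reads through the full DADA2 pipeline, which is not reproduced here).

```python
# Pick the (R1, R2) trim-length combination with the highest fraction of input
# reads surviving as merged, chimera-free ASVs. The percentages are a subset of
# the values shown in the table below.
fractions = {
    (250, 251): 58.60, (250, 241): 73.09, (250, 231): 73.47,
    (240, 251): 60.03, (240, 241): 75.09, (240, 231): 75.52,
    (230, 251): 60.42, (230, 241): 75.72, (230, 231): 76.10,
    (220, 251): 60.55, (220, 241): 75.90, (220, 231): 76.33,
    # ... remaining combinations omitted for brevity
}
best = max(fractions, key=fractions.get)
print("best (R1, R2) lengths:", best, "->", f"{fractions[best]}%")
```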
Below is the result of this search, showing the percentage of total input reads that became final ASVs for each trimming combination (first column = R1 length in bases; first row = R2 length in bases):
R1/R2 | 251 | 241 | 231 | 221 | 211 | 201
250 | 58.60% | 73.09% | 73.47% | 73.55% | 73.66% | 73.80%
240 | 60.03% | 75.09% | 75.52% | 75.63% | 75.78% | 68.05%
230 | 60.42% | 75.72% | 76.10% | 76.20% | 68.48% | 30.41%
220 | 60.55% | 75.90% | 76.33% | 68.75% | 30.44% | 29.16%
210 | 60.90% | 76.30% | 68.90% | 30.45% | 29.16% | 0.00%
200 | 61.14% | 68.58% | 30.52% | 29.34% | 0.00% | 0.00%
Based on the above result, the trim length combination of R1 = 220 bases and R2 = 231 bases (76.33%, the highest value in the table above) was chosen for generating the final ASVs for all sequences.
This combination generated the highest number of merged non-chimeric ASVs and was used for the downstream analyses, where requested.
3. Error plots from learning the error rates
After DADA2 builds the error model for the dataset, it is always worthwhile, as a sanity check if nothing else, to visualize the estimated error rates.
The error rates for each possible transition (A→C, A→G, …) are shown below. Points are the observed error rates for each consensus quality score.
The black line shows the estimated error rates after convergence of the machine-learning algorithm.
The red line shows the error rates expected under the nominal definition of the Q-score.
Ideally, the estimated error rates (black line) are a good fit to the observed rates (points), and the error rates drop
with increased quality, as expected.
Forward Read R1 Error Plot
Reverse Read R2 Error Plot
The PDF versions of these plots are available here:
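For reference, the nominal ("red line") error rate implied by a Phred quality score Q is 10^(-Q/10); a quick check:

```python
# Nominal per-base error probability implied by a Phred quality score Q,
# i.e., the "red line" in the error plots: p = 10 ** (-Q / 10).
for q in (10, 20, 30, 40):
    print(f"Q{q}: expected error rate = {10 ** (-q / 10):.4f}")
```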
4. DADA2 Result Summary
The table below summarizes the DADA2 analysis, tracking the paired read counts of each sample (rows) through all the steps of the DADA2 denoising process (columns),
including end-trimming (filtered), denoising (denoisedF, denoisedR), pair merging (merged), and chimera removal (nonchim).
The last two rows give the totals over all samples and the percentage of input reads retained at each step.
Sample ID | input | filtered | denoisedF | denoisedR | merged | nonchim
F0000.S01 | 166,456 | 158,205 | 156,633 | 153,637 | 149,875 | 137,338
F0000.S02 | 75,885 | 74,990 | 73,957 | 73,374 | 67,740 | 50,546
F0000.S03 | 86,458 | 84,832 | 83,742 | 82,301 | 77,984 | 57,697
F0000.S04 | 78,672 | 77,393 | 76,169 | 75,593 | 71,365 | 56,522
F0000.S05 | 106,139 | 104,868 | 103,987 | 103,541 | 100,311 | 82,436
F0000.S06 | 90,395 | 88,911 | 87,904 | 87,483 | 81,968 | 57,650
F0000.S07 | 48,272 | 47,571 | 46,763 | 46,371 | 42,450 | 26,459
F0000.S08 | 78,318 | 77,040 | 75,563 | 75,034 | 67,713 | 45,907
F0000.S09 | 78,532 | 39,841 | 39,374 | 38,974 | 37,619 | 31,271
F0000.S10 | 900,010 | 883,359 | 878,697 | 873,275 | 845,245 | 699,128
F0000.S11 | 59,271 | 29,935 | 29,245 | 28,872 | 25,890 | 17,448
F0000.S12 | 96,341 | 95,242 | 94,836 | 94,230 | 93,271 | 91,679
F0000.S13 | 131,183 | 129,134 | 128,477 | 127,263 | 126,148 | 124,018
F0000.S14 | 117,076 | 115,342 | 114,809 | 114,402 | 113,076 | 109,751
F0000.S15 | 69,324 | 68,152 | 67,703 | 67,402 | 66,694 | 65,916
F0000.S16 | 87,357 | 85,706 | 84,920 | 84,456 | 81,968 | 71,898
F0000.S17 | 123,519 | 120,569 | 119,001 | 118,213 | 113,474 | 83,788
F0000.S18 | 6,101 | 5,995 | 5,940 | 5,877 | 5,727 | 5,690
F0000.S19 | 32,264 | 31,660 | 31,376 | 31,187 | 30,685 | 30,427
F0000.S20 | 27,881 | 27,394 | 27,166 | 27,020 | 26,673 | 26,476
F0000.S21 | 18,606 | 18,309 | 18,113 | 18,052 | 17,762 | 17,675
F0000.S22 | 21,832 | 21,440 | 21,142 | 21,050 | 20,659 | 20,518
F0000.S23 | 95,427 | 94,305 | 93,491 | 93,151 | 91,392 | 88,264
F0000.S24 | 99,779 | 98,049 | 97,333 | 96,626 | 95,443 | 94,712
F0000.S25 | 94,209 | 92,809 | 91,928 | 90,846 | 89,819 | 89,419
F0000.S26 | 5,634 | 5,527 | 5,389 | 5,305 | 5,182 | 5,091
F0000.S27 | 55,432 | 54,540 | 54,129 | 53,714 | 53,005 | 52,447
F0000.S28 | 74,323 | 73,136 | 71,973 | 71,527 | 66,476 | 54,281
F0000.S29 | 10,479 | 10,335 | 10,217 | 10,129 | 9,997 | 9,986
F0000.S30 | 6,989 | 6,790 | 6,676 | 6,565 | 6,428 | 6,315
F0000.S31 | 51,512 | 26,445 | 26,056 | 25,767 | 24,184 | 19,261
F0000.S32 | 38,002 | 37,222 | 36,753 | 35,863 | 35,120 | 34,813
F0000.S33 | 13,313 | 13,052 | 12,926 | 12,816 | 12,642 | 12,591
F0000.S34 | 11,519 | 11,377 | 11,289 | 11,250 | 11,115 | 11,040
F0000.S35 | 125,058 | 123,492 | 114,204 | 112,695 | 89,406 | 82,068
F0000.S36 | 47,143 | 46,243 | 45,209 | 44,789 | 40,478 | 28,012
F0000.S37 | 71,567 | 70,586 | 70,126 | 69,328 | 68,514 | 68,387
F0000.S38 | 39,282 | 38,699 | 38,426 | 38,093 | 37,713 | 37,643
F0000.S39 | 113,309 | 111,058 | 109,714 | 109,022 | 105,314 | 69,512
F0000.S40 | 85,819 | 84,387 | 83,325 | 82,897 | 78,797 | 57,533
F0000.S41 | 58,822 | 30,362 | 29,862 | 29,535 | 26,960 | 20,939
F0000.S42 | 70,238 | 69,083 | 68,398 | 67,731 | 62,910 | 41,056
F0000.S43 | 36,691 | 19,105 | 18,590 | 18,304 | 15,608 | 12,537
F0000.S44 | 64,494 | 32,630 | 31,761 | 31,354 | 28,109 | 20,762
F0000.S45 | 64,007 | 62,967 | 62,428 | 61,035 | 60,030 | 59,221
F0000.S46 | 32,543 | 31,811 | 31,262 | 31,040 | 30,468 | 30,312
F0000.S47 | 127,686 | 125,891 | 125,225 | 124,065 | 123,044 | 122,843
F0000.S48 | 101,932 | 100,007 | 98,732 | 97,286 | 91,640 | 73,693
F0000.S49 | 18,971 | 18,418 | 18,123 | 17,826 | 17,522 | 17,410
F0000.S50 | 12,061 | 11,788 | 11,452 | 11,187 | 10,752 | 10,510
F0000.S51 | 72,713 | 71,694 | 71,231 | 70,398 | 69,798 | 69,702
F0000.S52 | 89,593 | 87,279 | 85,433 | 84,398 | 77,005 | 56,724
F0000.S53 | 128,726 | 125,663 | 124,230 | 123,313 | 118,728 | 103,063
F0000.S54 | 96,870 | 94,373 | 92,839 | 91,530 | 83,437 | 51,796
F0000.S55 | 110,340 | 108,959 | 108,115 | 107,344 | 103,504 | 89,958
F0000.S56 | 84,729 | 82,610 | 80,975 | 80,022 | 75,822 | 62,947
F0000.S57 | 85,639 | 83,837 | 82,986 | 82,283 | 80,371 | 75,338
F0000.S58 | 91,531 | 89,727 | 89,148 | 88,335 | 87,630 | 87,034
F0000.S59 | 45,197 | 43,839 | 43,127 | 42,333 | 41,697 | 41,568
F0000.S60 | 82,664 | 81,193 | 80,113 | 79,551 | 73,766 | 52,919
F0000.S61 | 158,590 | 155,489 | 154,375 | 152,131 | 150,674 | 149,866
F0000.S62 | 69,954 | 68,480 | 67,641 | 66,698 | 63,685 | 52,520
F0000.S63 | 57,084 | 56,254 | 55,469 | 54,512 | 52,470 | 38,079
F0000.S64 | 73,542 | 72,111 | 71,577 | 70,514 | 69,566 | 65,785
F0000.S65 | 116,693 | 113,222 | 112,006 | 109,549 | 106,998 | 95,084
F0000.S66 | 106,886 | 102,212 | 99,594 | 97,516 | 90,070 | 48,770
F0000.S67 | 77,425 | 76,401 | 75,086 | 74,284 | 67,768 | 48,351
F0000.S68 | 91,697 | 89,628 | 88,189 | 86,763 | 79,482 | 58,493
F0000.S69 | 56,726 | 55,266 | 54,571 | 53,473 | 49,624 | 39,161
F0000.S70 | 84,119 | 81,954 | 80,562 | 79,169 | 73,707 | 50,716
F0000.S71 | 65,042 | 63,958 | 63,153 | 62,533 | 58,578 | 45,465
F0000.S72 | 86,795 | 85,499 | 84,488 | 83,528 | 76,368 | 53,289
F0000.S73 | 65,567 | 64,221 | 63,272 | 62,226 | 59,449 | 46,573
F0000.S74 | 99,034 | 94,950 | 92,462 | 90,547 | 81,990 | 55,236
F0000.S75 | 78,867 | 77,310 | 75,587 | 73,994 | 66,195 | 47,088
F0000.S76 | 44,119 | 43,113 | 42,079 | 41,431 | 36,896 | 23,291
F0000.S77 | 82,843 | 81,674 | 80,712 | 79,634 | 74,726 | 56,220
F0000.S78 | 68,762 | 67,365 | 65,739 | 64,680 | 57,715 | 40,623
F0000.S79 | 106,344 | 104,574 | 103,359 | 102,113 | 94,103 | 62,166
F0000.S80 | 82,966 | 80,587 | 78,769 | 77,038 | 70,213 | 55,685
F0000.S81 | 95,061 | 92,436 | 91,310 | 89,710 | 83,997 | 53,444
F0000.S82 | 75,290 | 73,755 | 72,881 | 71,540 | 67,609 | 52,380
F0000.S83 | 78,337 | 76,782 | 75,243 | 73,795 | 67,274 | 51,736
F0000.S84 | 77,648 | 75,889 | 74,371 | 73,123 | 66,716 | 49,212
F0000.S85 | 89,995 | 87,167 | 86,020 | 83,810 | 78,538 | 49,424
F0000.S86 | 91,642 | 88,962 | 86,877 | 83,198 | 73,730 | 50,482
F0000.S87 | 126,503 | 124,112 | 122,753 | 120,998 | 114,875 | 81,589
F0000.S88 | 83,766 | 82,114 | 80,491 | 78,638 | 73,056 | 55,685
F0000.S89 | 103,877 | 101,846 | 100,923 | 99,037 | 94,122 | 58,756
F0000.S90 | 59,056 | 58,089 | 56,837 | 55,525 | 48,747 | 30,504
Total | 7,568,365 | 7,250,596 | 7,155,107 | 7,068,567 | 6,713,064 | 5,497,618
Percentage | 100.00% | 95.80% | 94.54% | 93.40% | 88.70% | 72.64%
This table can be downloaded as an Excel table below:
5. DADA2 Amplicon Sequence Variants (ASVs)
A total of 6,696 unique merged and chimera-free ASV sequences were identified, and their corresponding
read counts for each sample are available in the "ASV Read Count Table", with rows for the ASV sequences and columns for the samples. This read count table can be used for
microbial profile comparison among different samples, and the sequences provided in the table can be used for taxonomy assignment.
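A minimal sketch of loading such a read count table and converting the counts to per-sample relative abundances is shown below; the file name and tab-separated layout (rows = ASVs, columns = samples) are assumptions based on the description above.

```python
# Load the ASV read count table (rows = ASV sequences, columns = samples) and
# convert raw counts to per-sample relative abundances.
# The file name and tab-separated layout are assumptions.
import pandas as pd

counts = pd.read_csv("ASV_read_counts.tsv", sep="\t", index_col=0)
relative = counts.div(counts.sum(axis=0), axis=1)   # each sample column now sums to 1
print(relative.iloc[:5, :3])                        # peek at the first few entries
```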
The species-level, open-reference 16S rRNA NGS reads taxonomy assignment pipeline
Version 20210310
1. Raw sequence reads in FASTA format were BLASTN-searched against a combined set of 16S rRNA reference sequences.
It consists of MOMD (version 0.1), HOMD (version 15.2, http://www.homd.org/index.php?name=seqDownload&file&type=R),
HOMD 16S rRNA RefSeq Extended Version 1.1 (EXT), Greengenes Gold (GG)
(http://greengenes.lbl.gov/Download/Sequence_Data/Fasta_data_files/gold_strains_gg16S_aligned.fasta.gz),
and the NCBI 16S rRNA reference sequence set (https://ftp.ncbi.nlm.nih.gov/blast/db/16S_ribosomal_RNA.tar.gz).
These sequences were screened and combined to remove short sequences (<1,000 nt), chimeric, duplicated, and sub-sequences,
as well as sequences with poor taxonomy annotation (e.g., without species information).
This process resulted in 1,015 sequences from HOMD V15.22, 495 from EXT, 3,940 from GG, and 18,044 from NCBI, a total of 25,120 sequences.
Altogether these sequences represent a total of 15,601 oral and non-oral microbial species.
The NCBI BLASTN version 2.7.1+ (Zhang et al., 2000) was used with the default parameters.
Reads with ≥ 98% sequence identity to the matched reference and ≥ 90% alignment length
(i.e., ≥ 90% of the read length was aligned to the reference and used to calculate
the sequence percent identity) were classified based on the taxonomy of the reference sequence
with the highest sequence identity. If a read matched reference sequences representing
more than one species with equal percent identity and alignment length, it was subjected
to chimera checking with the USEARCH program version v8.1.1861 (Edgar 2010). Non-chimeric reads with multi-species
best hits were considered valid and were assigned a unique species
notation (e.g., spp) denoting unresolvable multiple species.
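As an illustration of the ≥ 98% identity / ≥ 90% alignment-length rule, the sketch below filters a tabular BLASTN output. It assumes the output was requested with a custom tabular format containing qseqid, sseqid, pident, length, and qlen columns; this is only an illustration, not necessarily how the pipeline itself is implemented.

```python
# Keep BLASTN hits with >= 98% identity and >= 90% query coverage.
# Assumes tabular output requested as:
#   blastn ... -outfmt "6 qseqid sseqid pident length qlen"
def filter_hits(blast_tab, min_ident=98.0, min_cov=90.0):
    best = {}
    with open(blast_tab) as handle:
        for line in handle:
            qseqid, sseqid, pident, length, qlen = line.split("\t")[:5]
            pident, coverage = float(pident), 100.0 * int(length) / int(qlen)
            if pident >= min_ident and coverage >= min_cov:
                # keep the hit with the highest identity per read
                if qseqid not in best or pident > best[qseqid][1]:
                    best[qseqid] = (sseqid, pident)
    return best

for read, (ref, ident) in filter_hits("reads_vs_16S_refs.tsv").items():
    print(read, "->", ref, f"{ident:.2f}%")
```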
2. Unassigned reads (i.e., reads with < 98% identity or < 90% alignment length) were pooled together and reads < 200 bases were
removed. The remaining reads were subjected to de novo
operational taxonomic unit (OTU) calling and chimera checking using the USEARCH program version v8.1.1861 (Edgar 2010).
The de novo OTU calling and chimera checking were done using 98% as the sequence identity cutoff, i.e., the species level.
This step produced species-level de novo clustered OTUs at 98% identity.
Representative reads from each of these OTUs/species were then BLASTN-searched
against the same reference sequence set again to determine the closest species for
these potential novel species. The potential novel species were then pooled together with the reads that were assigned to the species level in
the previous step for the downstream analyses.
Reference:
Edgar RC. Search and clustering orders of magnitude faster than BLAST.
Bioinformatics. 2010 Oct 1;26(19):2460-1. doi: 10.1093/bioinformatics/btq461. Epub 2010 Aug 12. PubMed PMID: 20709691.
3. Designations used in the taxonomy:
1) Taxonomy levels are indicated by these prefixes:
k__: domain/kingdom
p__: phylum
c__: class
o__: order
f__: family
g__: genus
s__: species
Example:
k__Bacteria;p__Firmicutes;c__Clostridia;o__Clostridiales;f__Lachnospiraceae;g__Blautia;s__faecis
2) Unique level identified – known species:
k__Bacteria;p__Firmicutes;c__Clostridia;o__Clostridiales;f__Lachnospiraceae;g__Roseburia;s__hominis
The above example shows some reads match to a single species (all levels are unique)
3) Non-unique level identified – known species:
k__Bacteria;p__Firmicutes;c__Clostridia;o__Clostridiales;f__Lachnospiraceae;g__Roseburia;s__multispecies_spp123_3
The above example “s__multispecies_spp123_3” indicates that certain reads match equally to 3 species of the
genus Roseburia; “spp123” is a temporarily assigned species ID.
k__Bacteria;p__Firmicutes;c__Clostridia;o__Clostridiales;f__Lachnospiraceae;g__multigenus;s__multispecies_spp234_5
The above example indicates that certain reads match equally to 5 different species, which belong to multiple genera;
“spp234” is a temporarily assigned species ID.
4) Unique level identified – unknown species, potential novel species:
k__Bacteria;p__Firmicutes;c__Clostridia;o__Clostridiales;f__Lachnospiraceae;g__Roseburia;s__hominis_nov_97%
The above example indicates that some reads have no match to any of the reference sequences with
sequence identity ≥ 98% and percent coverage (alignment length) ≥ 90%. However, this group
of reads (actually the representative read from a de novo OTU) has 97% identity to
Roseburia hominis; thus it is a potential novel species, closest to Roseburia hominis
(but not the same species).
5) Multiple level identified – unknown species, potential novel species:
k__Bacteria;p__Firmicutes;c__Clostridia;o__Clostridiales;f__Lachnospiraceae;g__Roseburia;s__multispecies_sppn123_3_nov_96%
The above example indicates that some reads have no match to any of the reference sequences
with sequence identity ≥ 98% and percent coverage (alignment length) ≥ 90%.
However, this group of reads (actually the representative read from a de novo OTU)
has 96% identity equally to 3 species in Roseburia. Thus there is no single
closest species; instead this group of reads matches equally to multiple species at 96%.
Since they have passed the chimera check, they represent a potential novel species. “sppn123” is a
temporary ID for this potential novel species.
4. The taxonomy assignment algorithm is illustrated in the flow chart below:
Read Taxonomy Assignment - Result Summary *
Code | Category | MPC=0% (>=1 read) | MPC=0.01% (>=548 reads)
A | Total reads | 5,497,618 | 5,497,618
B | Total assigned reads | 5,487,409 | 5,487,409
C | Assigned reads in species with read count < MPC | 0 | 116,160
D | Assigned reads in samples with read count < 500 | 0 | 0
E | Total samples | 90 | 90
F | Samples with reads >= 500 | 90 | 90
G | Samples with reads < 500 | 0 | 0
H | Total assigned reads used for analysis (B-C-D) | 5,487,409 | 5,371,249
I | Reads assigned to single species | 4,169,265 | 4,116,319
J | Reads assigned to multiple species | 1,120,669 | 1,102,299
K | Reads assigned to novel species | 197,475 | 152,631
L | Total number of species | 2,979 | 303
M | Number of single species | 888 | 198
N | Number of multi-species | 312 | 83
O | Number of novel species | 1,779 | 22
P | Total unassigned reads | 10,209 | 10,209
Q | Chimeric reads | 571 | 571
R | Reads without BLASTN hits | 51 | 51
S | Others: short, low quality, singletons, etc. | 9,587 | 9,587
A=B+P=C+D+H+Q+R+S
E=F+G
B=C+D+H
H=I+J+K
L=M+N+O
P=Q+R+S
* MPC = minimal percent (of all assigned reads) read count per species; species with read count < MPC were removed.
* Samples with < 500 reads were removed from downstream analyses.
* The assignment results from MPC = 0.01% were used in the downstream analyses.
Read Taxonomy Assignment - ASV Species-Level Read Counts Table
This table shows the read counts for each sample (columns) and each species (rows) identified based on the ASV sequences.
The downstream analyses were based on this table.
Each species listed in the table has a full taxonomy and a dynamically assigned species ID specific to this report.
When some reads match the reference sequences of more than one species equally well (i.e., same percent identity and alignment coverage),
they cannot be assigned to a particular species. Instead, they are assigned to multiple species with the species notation
"s__multispecies_spp2_2". In this notation, spp2 is the dynamic ID assigned to these reads that hit multiple sequences, and the "_2"
at the end of the notation means there are two species in spp2.
You can look up which species are included in a multi-species assignment in the table below:
Another type of notation is "s__multispecies_sppn2_2", in which the "n" in sppn2 means it is a potential novel species, because all the reads in this species
have < 98% identity to any of the reference sequences. They were grouped together based on de novo OTU clustering at a 98% identity cutoff, and then
a representative sequence was chosen for a BLASTN search against the reference database to find the closest match (which will still be < 98%). This representative
sequence also matched equally to more than one species, hence the "spp" in the label.
In ecology, alpha diversity (α-diversity) is the mean species diversity in sites or habitats at a local scale.
The term was introduced by R. H. Whittaker[1][2] together with the terms beta diversity (β-diversity)
and gamma diversity (γ-diversity). Whittaker's idea was that the total species diversity in a landscape
(gamma diversity) is determined by two different things, the mean species diversity in sites or habitats
at a more local scale (alpha diversity) and the differentiation among those habitats (beta diversity).
Diversity measures are affected by sampling depth. Rarefaction is a technique to assess species richness from the results of sampling. Rarefaction allows
the calculation of species richness for a given number of individuals sampled, based on the construction
of so-called rarefaction curves. A rarefaction curve is a plot of the number of species as a function of the
number of individuals (reads) sampled. Rarefaction curves generally grow rapidly at first, as the most common species are found,
but plateau as only the rarest species remain to be sampled.
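A minimal sketch of computing a rarefaction curve for one sample by repeated subsampling without replacement is shown below; numpy is assumed and the count vector is purely illustrative.

```python
# Rarefaction curve for one sample: expected number of species observed when
# subsampling the reads without replacement at increasing depths.
import numpy as np

def rarefaction_curve(counts, depths, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    reads = np.repeat(np.arange(len(counts)), counts)   # one label per read
    curve = []
    for depth in depths:
        richness = [
            np.unique(rng.choice(reads, size=depth, replace=False)).size
            for _ in range(n_iter)
        ]
        curve.append(np.mean(richness))
    return curve

counts = np.array([500, 300, 120, 50, 20, 7, 2, 1])      # illustrative species counts
depths = [10, 50, 100, 250, 500, 900]
print(list(zip(depths, rarefaction_curve(counts, depths))))
```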
The two main factors taken into account when measuring diversity are richness and evenness.
Richness is a measure of the number of different kinds of organisms present in a particular area.
Evenness compares the similarity of the population size of each of the species present. There are
many different ways to measure the richness and evenness. These measurements are called "estimators" or "indices".
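A minimal sketch of the three indices used below (observed features, Shannon, Simpson), computed from the read counts of a single sample, is shown here; numpy is assumed and the counts are illustrative.

```python
# Alpha diversity of one sample from its species read counts:
# observed features (richness), Shannon index, and Simpson index.
import numpy as np

def alpha_diversity(counts):
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    observed = counts.size                     # richness
    shannon = -np.sum(p * np.log(p))           # Shannon entropy (natural log)
    simpson = 1.0 - np.sum(p ** 2)             # Simpson diversity (1 - dominance)
    return observed, shannon, simpson

print(alpha_diversity([500, 300, 120, 50, 20, 7, 2, 1]))
```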
Below are box plots of three commonly used indices, showing the values for all samples (dots) and by group (boxes).
Alpha Diversity Box Plots for All Groups
Alpha Diversity Box Plots for Individual Comparisons
Comparison 1
Endo_Before vs Endo_After vs Perio_Before vs Perio_After vs Control_Endo_Before vs Control_Perio_Before
To test whether the alpha diversity differs statistically among the comparison groups, we use the Kruskal-Wallis H test
provided by the "alpha-group-significance" function in the QIIME 2 "diversity" package. The Kruskal-Wallis H test is the non-parametric alternative
to the one-way ANOVA. Non-parametric means that the test does not assume the data come from a particular distribution. The H test is used
when the assumptions for ANOVA are not met (such as the assumption of normality). It is sometimes called the one-way ANOVA on ranks,
as the ranks of the data values are used in the test rather than the actual data points. The H test determines whether the medians of two
or more groups are different.
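A minimal sketch of the same kind of test on per-sample alpha diversity values (e.g., Shannon indices), using scipy rather than the QIIME 2 plugin, is shown below; the group values are illustrative only.

```python
# Kruskal-Wallis H test comparing an alpha diversity measure (e.g., Shannon
# index) across groups; the values are illustrative, scipy is assumed.
from scipy.stats import kruskal

endo_before = [3.1, 2.8, 3.4, 3.0, 2.9]
endo_after = [2.2, 2.5, 2.0, 2.4, 2.6]
perio_before = [3.3, 3.5, 3.1, 3.6, 3.2]

h_stat, p_value = kruskal(endo_before, endo_after, perio_before)
print(f"H = {h_stat:.3f}, p = {p_value:.4f}")
```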
Below are the Kruskal-Wallis H test results for each comparison, based on three different alpha diversity measures: 1) observed species (features),
2) Shannon index, and 3) Simpson index.
Comparison 1.
Endo_Before vs Endo_After vs Perio_Before vs Perio_After vs Control_Endo_Before vs Control_Perio_Before
Beta diversity compares the similarity (or dissimilarity) of microbial profiles between different
groups of samples. There are many different similarity/dissimilarity metrics.
In general, they can be quantitative (using sequence abundance, e.g., Bray-Curtis or weighted UniFrac)
or binary (considering only presence-absence of sequences, e.g., binary Jaccard or unweighted UniFrac).
They can also be based on phylogeny (e.g., UniFrac metrics) or not (non-UniFrac metrics, such as Bray-Curtis, etc.).
For microbiome studies, species profiles of samples can be compared with the Bray-Curtis dissimilarity,
which is based on the count data type. The pair-wise Bray-Curtis dissimilarity matrix of all samples can then be
subject to either multi-dimensional scaling (MDS, also known as PCoA) or non-metric MDS (NMDS).
MDS/PCoA is a scaling or ordination method that starts with a matrix of similarities or dissimilarities
between a set of samples and aims to produce a low-dimensional graphical plot of the data
in such a way that the distances between points in the plot are close to the original dissimilarities.
NMDS is similar to MDS; however, it does not use the dissimilarity values directly but instead converts them into
ranks and uses those ranks in the calculation.
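A minimal sketch of computing a Bray-Curtis dissimilarity matrix from a samples-by-species count matrix and running a basic PCoA (classical MDS) on it is shown below; numpy is assumed and the counts are illustrative.

```python
# Bray-Curtis dissimilarity matrix from a samples x species count matrix,
# followed by a basic PCoA (classical MDS) via eigendecomposition.
import numpy as np

def bray_curtis_matrix(x):
    n = x.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = np.abs(x[i] - x[j]).sum() / (x[i] + x[j]).sum()
    return d

def pcoa(d, n_axes=2):
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j                 # double-centered matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_axes]     # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

counts = np.array([[120, 30, 0, 5], [100, 40, 2, 8], [5, 80, 60, 0], [2, 90, 55, 1]])
print(pcoa(bray_curtis_matrix(counts)))         # 2-D sample coordinates
```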
In our beta diversity analysis, the Bray-Curtis dissimilarity matrix was first calculated and then plotted by PCoA and
NMDS separately. Below are the beta diversity results for all groups together:
NMDS and PCoA Plots for All Groups
The above PCoA and NMDS plots are based on count data. The count data can also be transformed into the centered log ratio (CLR)
for each species. CLR data are no longer counts and cannot be used in the Bray-Curtis dissimilarity calculation. Instead,
CLR data can be compared with Euclidean distances; when CLR data are compared by Euclidean distance, the distance is also called
the Aitchison distance.
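A minimal sketch of the CLR transform (with a pseudocount, since zeros cannot be log-transformed) and the resulting Aitchison (Euclidean) distance between two samples is shown below; numpy is assumed and the counts are illustrative.

```python
# Centered log-ratio (CLR) transform of per-sample counts (pseudocount added
# to handle zeros) and the Aitchison distance = Euclidean distance on CLR data.
import numpy as np

def clr(counts, pseudocount=0.5):
    x = np.asarray(counts, dtype=float) + pseudocount
    logx = np.log(x)
    return logx - logx.mean()

a = clr([120, 30, 0, 5])
b = clr([5, 80, 60, 0])
print(f"Aitchison distance = {np.linalg.norm(a - b):.3f}")
```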
Below are the NMDS and PCoA plots of the Aitchison distances of the samples:
NMDS and PCoA Plots for Individual Comparisons
Comparison No. | Comparison Name | NMDS: Bray-Curtis | NMDS: CLR Euclidean | PCoA: Bray-Curtis | PCoA: CLR Euclidean
Comparison 1 | Endo_Before vs Endo_After vs Perio_Before vs Perio_After vs Control_Endo_Before vs Control_Perio_Before
Interactive 3D PCoA Plots - Bray-Curtis Dissimilarity
Interactive 3D PCoA Plots - Euclidean Distance
Interactive 3D PCoA Plots - Correlation Coefficients
Group Significance of Beta-diversity Indices
To test whether the between-group dissimilarities are significantly greater than the within-group dissimilarities,
the "beta-group-significance" function provided in the QIIME 2 "diversity" package was used with PERMANOVA
(permutational multivariate analysis of variance) as the group significance testing method.
Three beta diversity metrics were used: 1) Bray-Curtis dissimilarity, 2) correlation coefficient matrix, and 3) Aitchison distance
(Euclidean distance between CLR-transformed compositions).
Comparison 1.
Endo_Before vs Endo_After vs Perio_Before vs Perio_After vs Control_Endo_Before vs Control_Perio_Before
16S rRNA next generation sequencing (NGS) generates a fixed number of reads that reflect the proportions of different
species in a sample, i.e., the relative abundance of species, rather than the absolute abundance.
In mathematics, measurements involving probabilities, proportions, percentages, and ppm can all
be thought of as compositional data. This makes microbiome read count data "compositional"
(Gloor et al., 2017). In general, compositional data represent parts of a whole which only
carry relative information (http://www.compositionaldata.com/).
The compositional nature of microbiome data becomes a problem when comparing two groups of samples to
identify "differentially abundant" species. Even if a species has the same absolute abundance in two
conditions, its relative abundance (e.g., percent abundance) can differ between them
if the relative abundances of other species change greatly. This problem can lead to incorrect conclusions
about differential abundance for microbial species in the samples.
When studying differential abundance (DA), the currently preferred approach is to transform the read count
data into log-ratio data. The ratios are calculated between the read counts of all species in a sample and
a "reference" count (e.g., the mean read count of the sample). The log-ratio data allow the detection of DA
species without being affected by the percentage bias mentioned above.
In this report, the compositional DA analysis tool "ANCOM" (analysis of composition of microbiomes)
was used. ANCOM transforms the count data into log-ratios and thus is better suited for comparing
the composition of microbiomes in two or more populations. ANCOM generates a table of features with
W-statistics and whether the null hypothesis is rejected. The "W" is the W-statistic: the number of
features against which a given feature is tested and found to be significantly different. Hence, the higher the "W",
the more statistically significant it is that a feature/species is differentially abundant.
References:
Gloor GB, Macklaim JM, Pawlowsky-Glahn V, Egozcue JJ. Microbiome Datasets Are Compositional: And This Is Not Optional. Front Microbiol.
2017 Nov 15;8:2224. doi: 10.3389/fmicb.2017.02224. PMID: 29187837; PMCID: PMC5695134.
Mandal S, Van Treuren W, White RA, Eggesbø M, Knight R, Peddada SD. Analysis of composition of
microbiomes: a novel method for studying microbial composition. Microb Ecol Health Dis.
2015 May 29;26:27663. doi: 10.3402/mehd.v26.27663. PMID: 26028277; PMCID: PMC4450248.
Lin H, Peddada SD. Analysis of compositions of microbiomes with bias correction.
Nat Commun. 2020 Jul 14;11(1):3514. doi: 10.1038/s41467-020-17041-7.
PMID: 32665548; PMCID: PMC7360769.
Starting with version V1.2, we include the results of ANCOM-BC (Analysis of Compositions of
Microbiomes with Bias Correction) (Lin and Peddada 2020). ANCOM-BC is an updated version of "ANCOM" that:
(a) provides a statistically valid test with appropriate p-values,
(b) provides confidence intervals for differential abundance of each taxon,
(c) controls the False Discovery Rate (FDR),
(d) maintains adequate power, and
(e) is computationally simple to implement.
The bias correction (BC) addresses a challenging problem of the bias introduced by differences in
the sampling fractions across samples. This bias has been a major hurdle in performing DA analysis of microbiome data.
ANCOM-BC estimates the unknown sampling fractions and corrects the bias induced by their differences among samples.
The absolute abundance data are modeled using a linear regression framework.
Starting with version V1.43, ANCOM-BC2 is used instead of ANCOM-BC, so that multiple pairwise directional tests can be performed (when there are more than two groups in a comparison).
When performing pairwise directional tests, the mixed directional false discovery rate (mdFDR) is taken into account. The mdFDR
is the combination of false discovery rates due to multiple testing, multiple pairwise comparisons, and directional tests within
each pairwise comparison. The mdFDR is adopted from Guo, Sarkar, and Peddada (2010) and Grandhi, Guo, and Peddada (2016). For a more detailed
explanation and additional features of ANCOM-BC2, please see the authors' documentation.
References:
Lin H, Peddada SD. Analysis of compositions of microbiomes with bias correction.
Nat Commun. 2020 Jul 14;11(1):3514. doi: 10.1038/s41467-020-17041-7.
PMID: 32665548; PMCID: PMC7360769.
Guo W, Sarkar SK, Peddada SD. Controlling false discoveries in multidimensional directional decisions, with applications to gene expression data on ordered categories. Biometrics. 2010 Jun;66(2):485-92. doi: 10.1111/j.1541-0420.2009.01292.x. Epub 2009 Jul 23. PMID: 19645703; PMCID: PMC2895927.
Grandhi A, Guo W, Peddada SD. A multiple testing procedure for multi-dimensional pairwise comparisons with application to gene expression studies. BMC Bioinformatics. 2016 Feb 25;17:104. doi: 10.1186/s12859-016-0937-5. PMID: 26917217; PMCID: PMC4768411.
LEfSe (Linear Discriminant Analysis Effect Size) is an alternative method to find "organisms, genes, or
pathways that consistently explain the differences between two or more microbial communities" (Segata et al., 2011).
Specifically, LEfSe uses the rank-based Kruskal-Wallis (KW) sum-rank test to detect features with significant
differential (relative) abundance with respect to the class of interest. Since it is rank-based rather than proportion-based,
the differential species identified among the comparison groups are less biased than those from percent-abundance-based tests.
Reference:
Segata N, Izard J, Waldron L, Gevers D, Miropolsky L, Garrett WS, Huttenhower C. Metagenomic biomarker discovery and explanation. Genome Biol. 2011 Jun 24;12(6):R60. doi: 10.1186/gb-2011-12-6-r60. PMID: 21702898; PMCID: PMC3218848.
 
Endo_Before vs Endo_After vs Perio_Before vs Perio_After vs Control_Endo_Before vs Control_Perio_Before
To analyze the co-occurrence or co-exclusion of microbial species among different samples, network correlation
analysis tools are usually used. However, microbiome count data are compositional. If the counts are normalized to the total number of counts in the
sample, the data are not independent, and traditional statistical metrics (e.g., correlation) for the detection
of species-species relationships can lead to spurious results. In addition, sequencing-based studies typically
measure hundreds of OTUs (species) on few samples; thus, inference of OTU-OTU association networks is severely
underpowered. Here we use SPIEC-EASI (SParse InversE Covariance Estimation
for Ecological Association Inference), a statistical method for the inference of microbial
ecological networks from amplicon sequencing datasets that addresses both of these issues (Kurtz et al., 2015).
SPIEC-EASI combines data transformations developed for compositional data analysis with a graphical model
inference framework that assumes the underlying ecological association network is sparse. SPIEC-EASI provides
two algorithms for network inference: 1) Meinshausen-Bühlmann neighborhood selection (MB method) and 2) inverse covariance selection
(GLASSO method, i.e., graphical least absolute shrinkage and selection operator). This is fundamentally distinct from SparCC, which essentially estimates pairwise correlations. In addition
to these two methods, we provide the results of a third method, SparCC (Sparse Correlations for Compositional Data) (Friedman & Alm 2012), which
is also a method for inferring correlations from compositional data. SparCC estimates the linear Pearson correlations between
the log-transformed components.
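For intuition only, the sketch below computes plain Pearson correlations between CLR-transformed species abundances across samples. This is a naive proxy, not the SparCC or SPIEC-EASI procedures themselves, which use iterative basis-correlation estimation and sparse graphical-model inference, respectively; numpy is assumed and the counts are illustrative.

```python
# Naive co-occurrence intuition: Pearson correlations between CLR-transformed
# species abundances across samples. This is NOT SparCC or SPIEC-EASI; it only
# illustrates the log-ratio idea behind those methods.
import numpy as np

counts = np.array([                # samples x species, illustrative
    [120, 30, 0, 5],
    [100, 40, 2, 8],
    [5, 80, 60, 0],
    [2, 90, 55, 1],
    [60, 50, 10, 3],
])
x = np.log(counts + 0.5)
clr = x - x.mean(axis=1, keepdims=True)        # CLR per sample
corr = np.corrcoef(clr, rowvar=False)          # species x species correlations
print(np.round(corr, 2))
```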
References:
Kurtz ZD, Müller CL, Miraldi ER, Littman DR, Blaser MJ, Bonneau RA. Sparse and compositionally robust inference of microbial ecological networks. PLoS Comput Biol. 2015 May 7;11(5):e1004226. doi: 10.1371/journal.pcbi.1004226. PMID: 25950956; PMCID: PMC4423992.
The results of this analysis are for research purposes only. They are not intended to diagnose, treat, cure, or prevent any disease. Forsyth and FOMC
are not responsible for use of the information provided in this report outside the research area.