What’s in the portion of reads that don’t map to a reference?

One of the first steps in the analysis of most next generation sequencing datasets (unless you’re doing a novel genome or transcript assembly) is mapping to a reference genome. Mapping is the procedure that determines where in the genome each sequencing read came from. If you have good sequencing data, the aligner will successfully map most of the reads.
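
If you want to check that fraction for your own data, here’s a minimal sketch. It assumes the pysam library is installed, and “sample.bam” is a hypothetical file name (samtools flagstat reports the same numbers):

```python
# Minimal sketch: count mapped vs. unmapped reads in a BAM file.
# Assumes the pysam library; "sample.bam" is a hypothetical file name.
import pysam

mapped = unmapped = 0
with pysam.AlignmentFile("sample.bam", "rb") as bam:
    for read in bam:  # sequential read-through; no index required
        if read.is_unmapped:
            unmapped += 1
        else:
            mapped += 1

total = mapped + unmapped
print("{} of {} reads unmapped ({:.2f}%)".format(
    unmapped, total, 100.0 * unmapped / total))
```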

What about the small (usually <5%) portion of reads that fail to map, then? What can we learn from these reads? Can they be used for quality control or actual analyses?

As it turns out, a lot can be learned by analyzing unmapped reads. Let’s start by understanding how a read can fail to map to a reference genome.

  • Low quality or complexity: Some sequencing reads are filled with low quality base calls – either several ‘N’ bases or poor quality scores. These reads are usually eliminated by a filtering step before any downstream analysis. Low complexity reads – homopolymer runs and short tandem repeats, for example – are also difficult to align unambiguously. Neither kind of read encodes much useful information, but both are useful signals of sequencing library quality. Trimming the low quality bases (if they fall in a consistent position across the dataset) is one way to improve alignment; a quick filtering sketch follows this list.
  • Ambiguous alignment: Reads from repetitive parts of the genome may align equally well to more than one position. In humans, this can be a large portion of the sequencing data, since over half of the human genome is repetitive DNA. Depending on the aligner and parameters you choose, reads with ambiguous alignments may be reported at one position or fail to map. Bowtie 2, for example, reports a single alignment for ambiguous reads by default; it chooses pseudo-randomly among the best-scoring alignments.

    How can they be useful?

    Ambiguous reads can be used to learn about the repetitive part of the genome – what many scientists once called ‘junk DNA’. Repetitive sequences are actually important for genome structure and regulation: they include centromeres, telomeres and the transposable elements that help drive genome evolution.

  • Discordant alignment (paired end sequencing): Paired end reads should map to the genome a predictable distance apart (the insert size, plus or minus some standard deviation). This is because paired end protocols generate DNA fragments of roughly uniform length and sequence both ends of each fragment. Once again, how discordant alignments are reported differs with the program and parameters.

    What can you do with them?

    Discordant alignments can give information about genome rearrangements, such as deletions, insertions and duplications. For example, if there’s strong evidence for two mates aligning at a distance greater than the insert size, it’s possible some DNA between the two loci was deleted. The converse is also true: mates aligning at a distance less than the insert size can indicate novel insertions, such as retrotransposons (a minimal detection sketch follows this list). Peter Park’s lab at Harvard has been developing algorithms to detect these events in NGS data and has applied them to look at genome rearrangements in cancer.

  • The read came from another organism: A tissue sample isn’t always a pure culture of the cells you want to look at. Humans are host to a huge number of microbes, viruses and parasites that inevitably end up in a tissue sample. These organisms make up the microbiome, which has been increasingly studied and implicated in both health and disease. If other organisms are present in a tissue sample that’s being sequenced, some of their DNA will be sequenced as well, and these reads won’t map to the reference genome.

    What can they tell us?

    Sequencing reads from the microbiome can tell you a lot about the communities of bacteria, fungi and viruses living in a sample. Several studies have compared the microbiome of individuals using next generation sequencing data.
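
Here’s the filtering sketch promised in the first bullet: a toy FASTQ filter that drops reads with too many ‘N’ calls or a low mean quality. It’s pure Python with no dependencies; the file name and both thresholds are hypothetical.

```python
# Toy FASTQ filter: drop reads with too many Ns or low mean quality.
# "reads.fastq" and both thresholds are hypothetical.
MAX_N_FRACTION = 0.1
MIN_MEAN_QUALITY = 20

def fastq_records(path):
    """Yield (header, sequence, quality) tuples from a FASTQ file."""
    with open(path) as fh:
        while True:
            header = fh.readline().rstrip()
            if not header:
                break
            seq = fh.readline().rstrip()
            fh.readline()  # skip the '+' separator line
            qual = fh.readline().rstrip()
            yield header, seq, qual

def passes(seq, qual):
    n_fraction = seq.upper().count("N") / len(seq)
    # Phred+33 encoding: quality score = ASCII code - 33
    mean_quality = sum(ord(c) - 33 for c in qual) / len(qual)
    return n_fraction <= MAX_N_FRACTION and mean_quality >= MIN_MEAN_QUALITY

kept = [rec for rec in fastq_records("reads.fastq") if passes(rec[1], rec[2])]
```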
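
And here’s the discordant-pair sketch: flag read pairs whose observed insert size is more than a few standard deviations from the expected mean. Again assuming pysam; the insert size numbers are made up, and real structural variant callers do far more than this.

```python
# Toy discordant-pair finder: flag pairs whose insert size is far from
# the expected mean. pysam is assumed; the numbers are hypothetical.
import pysam

EXPECTED_INSERT = 350  # hypothetical library insert size
INSERT_SD = 50

def is_discordant(read, n_sd=3):
    if not read.is_paired or read.is_unmapped or read.mate_is_unmapped:
        return False
    # template_length (TLEN) is the observed insert size; its sign
    # depends on read orientation, so take the absolute value
    observed = abs(read.template_length)
    return abs(observed - EXPECTED_INSERT) > n_sd * INSERT_SD

with pysam.AlignmentFile("sample.bam", "rb") as bam:
    candidates = [read for read in bam if is_discordant(read)]
print(len(candidates), "reads in discordant pairs")
```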

Those are all the cases I can think of for why a read wouldn’t map to the reference, although it’s possible I missed some. In my next post I’ll talk about the analysis I’ve been doing on the unmapped portion of sequencing data and some interesting results!

Biotech and software companies at ISMB 2014

In my past three posts I talked about the highlights of the 2014 Intelligent Systems for Molecular Biology conference in Boston, MA. In addition to all the academic talks and events, there were a few industry presentations that stood out. I was pleased with the industry presence at the conference. As a student potentially looking for an industry job after I graduate, I enjoyed the chance to talk with some potential employers and see what kinds of positions are available for people with a bachelor’s degree in comp bio.

Good news: every company I talked with seemed willing to hire a programmer or data scientist with a bachelor’s degree. These positions typically weren’t advertised on their websites, so I get the feeling it takes some networking to actually get hired. It was definitely an encouraging experience, though!

A few of the industry partners gave presentations during the workshop sessions at ISMB. I attended two interesting talks:

Appistry and the “pipeline challenge”
Appistry (St. Louis, MO) develops high performance computing solutions and software for genomic analysis. Have you heard of the Genome Analysis Toolkit (GATK), the software developed by the Broad Institute for variant discovery and genotyping in next generation sequencing data? Well, the Broad chose Appistry as the commercial partner for the GATK.

The speaker first highlighted Ayrris, Appistry’s high performance computing platform. I didn’t get the technical details, but it sounds like Ayrris has built-in support for troubleshooting genomics pipelines (something I spend so much of my time doing).

He then talked about the Pipeline Challenge. Appistry is sponsoring a contest for the best genomic analysis pipeline ideas. The winner will receive $70k in bioinformatics software and computer hardware. The Neretti lab has developed a few pipelines and ideas that would fit this contest well… I’m going to look into submitting one! Perhaps the new work we’ve been doing on the human “virome” and its role in cancer and disease?

Seven Bridges Genomics on bioinformatic reproducibility
Seven Bridges Genomics (Cambridge, MA) also develops software and pipelines for bioinformatic analysis. The focus of their presentation wasn’t on the actual pipelines, though, but rather on the methods and software they’re developing to increase reproducibility in genomic analysis. The speaker made a good point early on: publications often describe a software pipeline with nothing more than a figure in the methods section. When other researchers try to replicate the results, either with their own software or with the code published along with the paper, the analyses often don’t line up (and sometimes fail entirely). This is a huge problem in computational biology and bioinformatics – Titus Brown frequently blogs about reproducibility, and much of the BOSC Special Interest Group meeting focused on it as well.

Seven Bridges is proposing a solution based on their software platform Rabix. They plan to use Docker images to distribute the software, along with any dependencies, used to do the analysis in a publication. Docker is a lightweight way to distribute software and ensure it will run in any environment – an alternative to the bulky virtual machines that are sometimes published in an effort to distribute code. According to Seven Bridges, “With Rabix, data, tools and pipelines can be published in open repositories which will enable the community to both host and reuse them on their own infrastructure. This way, we can share the analysis itself, show instead of tell, and create reproducible building blocks to further research.”


Highlights from ISMB – Day 3

Today was the third and final day of the main ISMB conference! I slept in until noon (attending these things is surprisingly tiring) so I missed some of the morning sessions, but it was still a good day. Some highlights:

Workshop on alternative methods of peer review
The talks in this workshop focused on open access in publishing and scientific reproducibility. An increasingly popular topic is open peer review, where all aspects of the peer review process are published. This means the names of the reviewers, their comments and the authors’ responses are all published with the online version of the article. In theory, this is a great idea. It increases openness, ensures readers are aware of problems with the article (both past and present) and lets authors know who is reviewing their work.

In practice, though, open peer review is difficult to implement. Some members of the audience brought up points of contention. For example, reviewers of a “big name” paper might be hesitant to criticize their superiors in the scientific community. Open peer review may also make it more difficult for editors to find reviewers for articles. The data say otherwise, though – since BMJ opened up its review process, only 2% of reviewers have declined to review an article because of the change in policy. Other journals like F1000Research also operate on the open peer review model and seem to be doing just fine. I think the “openness” trend has just started to gain momentum and acceptance within the scientific community – it’ll be interesting to see how both authors and publishers respond in the future.

Final keynote by Russ Altman
The Altman lab at Stanford is doing some excellent work using informatics approaches to understand drug response. The ultimate goal is true pharmacogenetics and personalized medicine – imagine a doctor genotyping you in the office and picking a specific drug and dosage known to work best with your specific genes. His lab is doing a lot of machine learning and data mining on FDA drug interaction data and other publicly available datasets. Along with some creative use of Amazon Mechanical Turk, they created a database of gene/drug relationships and ranked the side effects by severity.

The Altman lab is also working on predicting novel drug binding sites using protein structure. The method was complex and used a lot of interesting machine learning techniques (another reason I wish all these talks were online – I’d love to go back and review the methods). In the end, they could predict the small molecules most likely to interact with a protein’s active site and potentially inhibit it. These small molecules could be synthesized as part of a new drug, or an existing drug containing them could be repurposed.

The conference then concluded with some closing remarks by the ISCB board members and awards for various posters and presentations. Overall, I was very pleased with the past few days and happy I attended. The personal and professional connections I made will help me in my search for a job and/or grad program. I saw some inspiring and interesting research, learned of cutting edge methods in the field and got to meet scientists I’d only seen on paper before this week. A huge thanks to the ISCB members who organized this conference, as well as the Student Council for giving me the chance to present my research on Friday.

Highlights from ISMB – Day 2

I’ll be continuing my updates on the Intelligent Systems for Molecular Biology (ISMB) conference with some of the research that I saw today.

Tracking Cells in 4D by live cell imaging
Terumasa Tokunaga from the Institute of Statistical Mathematics in Japan presented a novel algorithm for tracking the positions of cells in 3D over time. He applied the algorithm to track neurons in C. elegans that had been fluorescently labeled and imaged by fluorescence microscopy. Briefly, the algorithm identifies cells as local maxima in density after smoothing with a kernel density function (a toy sketch of the detection step follows). A “repulsion” parameter aids in capturing local maxima rather than converging on the absolute maximum of the density. A maximum spanning tree is then built between cells and used to track their movement over time. This helps avoid merging the trackers of individual cells or having trackers switch from one cell to another.
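
To make the density idea concrete, here’s a toy 2D version of just the detection step. This is entirely my illustration, not Tokunaga’s code – the real method works in 3D and adds the repulsion term and tracking. numpy and scipy are assumed, and the data are simulated.

```python
# Toy 2D cell detection: smooth point detections with a kernel density
# estimate, then take local maxima of the density as cell positions.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(0)
# Fake data: two "cells" as clouds of fluorescence detections
points = np.vstack([
    rng.normal([2, 2], 0.2, size=(100, 2)),
    rng.normal([5, 4], 0.2, size=(100, 2)),
])

kde = gaussian_kde(points.T)
xs, ys = np.mgrid[0:7:200j, 0:7:200j]
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

# A grid point is a local maximum if it equals the max in its
# neighborhood (and clears a crude background threshold)
local_max = (density == maximum_filter(density, size=10)) & (density > density.mean())
cells = np.column_stack([xs[local_max], ys[local_max]])
print("estimated cell centers:\n", cells)
```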

This method has the potential to be applied to studying differentiation in more complicated organisms once the authors extend it to allow for cell division. They did the initial work in C. elegans because adults have a fixed number of cells, so they could assume no cell division.

ISCB and Student Council business meeting
I was interested in this meeting because I want to volunteer with the ISCB Student Council in planning the next conference (in Dublin, no less)! The student council also presented awards for the best oral presentation and the two best poster presentations to three very talented students whom I met on Friday.

Keynote by ISCB Overton Prize winner Dana Pe’er
This was, hands down, one of the most interesting scientific talks I’ve ever seen. Pe’er is answering questions about cell differentiation and heterogeneity through high-throughput single cell analysis methods. The concentrations of several biomarkers can be quantified at the single cell level through a technique known as mass cytometry. Measuring these quantities in single cells puts statisticians back in the environment they’re comfortable with – a small number of variables and many samples (as opposed to the big p, small n situation so common in genomics). She also introduced some new methods for multivariate statistical analysis and dimensionality reduction (notably the Wanderlust and DREMI algorithms) that deserve a blog post of their own!

Reception at MIT
The ISCB was nice enough to give the volunteers a ticket to a reception at the MIT Museum after the talks finished today. This was a great chance to socialize with the students and others in a cool environment, see the museum (it’s changed a lot since I was there 6 years ago!) and munch on some hors d’oeuvres. I could definitely get used to these kinds of events at conferences!

ISMB finishes tomorrow. Stay tuned for some more highlights and a post about the industry members I’ve been talking with.

Highlights from ISMB – Day 1

Today was the first day of the main ISMB conference in Boston. Between volunteering duties I had a chance to catch a few interesting talks. Here’s what I thought was interesting on Sunday, July 13.

Keynote by Eugene Myers
Eugene Myers was given the Senior Scientist Award by ISCB this year and gave an excellent keynote in the afternoon. The list of projects he’s contributed to is impressive – basically solving a huge problem in computer science or biology every 10 years. In 1990 he co-invented the suffix array data structure, in 2000 he was essential to the human genome assembly effort at Celera Genomics, and recently he’s been working on visualization and microscopy problems in neuroscience. His keynote focused on genome assembly – first the process used at Celera and the developments leading to those algorithms, then the quality of recently published assemblies.

His main point was that the quality of recent assemblies has been decreasing. The short reads produced by NGS technology are resulting in more contigs and more gaps in the assemblies. In order to do real comparative genomics and look for structural variation and gene duplication, high-quality contiguous genomes are necessary, according to Myers.

Luckily, new technology (PacBio et al.) is producing longer reads that should allow for better assemblies in the future. Myers also talked some about error rates – as long as errors are randomly distributed, they shouldn’t affect the quality of the resulting assembly at all. This is good news for PacBio, with its error rate of around 10%. Myers is also working on a new assembler for long reads called DAZZLER, which he didn’t get to describe in detail (and I haven’t had the time to look into the actual methods yet), but it seems interesting. Check out his blog here.

Compressive Genomics by Bonnie Berger’s lab
This presentation was in a session chaired by Michael Waterman and my teacher Sorin Istrail celebrating 20 years of the Journal of Computational Biology. The Berger lab is working on new algorithms for data compression and processing to cope with the massive amount of biological data being generated these days (which is growing faster than our capacity for data storage, I should add). First, they presented a pipeline that eliminates 95% of the quality scores in .fastq files before downstream processing. Only the quality values of bases that are called as mutations or are otherwise interesting are retained; the rest are transformed to the mean quality. This decreases the file size and can apparently increase downstream accuracy.
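
From memory, the core transformation is something like this toy sketch (my illustration, not their implementation): keep the quality characters only at positions flagged as interesting and set everything else to the mean, which makes the quality string highly repetitive and therefore very compressible.

```python
# Toy illustration (not the published implementation): replace quality
# characters at "uninteresting" positions with a single mean value, so
# the quality string compresses much better.
def quantize_qualities(qual, interesting_positions, mean_char):
    """Keep quality chars at interesting positions; flatten the rest."""
    return "".join(
        q if i in interesting_positions else mean_char
        for i, q in enumerate(qual)
    )

qual = "IIIIFFFF##IIII"   # hypothetical Phred+33 quality string
interesting = {8, 9}      # e.g., positions overlapping variant calls
print(quantize_qualities(qual, interesting, "F"))  # -> FFFFFFFF##FFFF
```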

Second, they presented a compressive BLAST algorithm that can speed up alignments against large databases. Their method first computes clusters of entries in the BLAST database and creates a representative sequence for each cluster. A query is then compared against these representative sequences, and only compared against the constituent subjects of the clusters nearest to it. This drastically shrinks the number of alignments performed and speeds up the BLAST search. There are apparently some problems with the math behind the clustering (the similarity measures aren’t true distances and don’t satisfy the triangle inequality), but since BLAST is an approximate algorithm anyway, it turns out this doesn’t matter!
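
The two-stage idea is easy to sketch. Here’s a toy version – entirely my illustration, not their method – where difflib’s SequenceMatcher ratio stands in for a real alignment score and the clustered database is made up:

```python
# Toy two-stage search in the spirit of compressive BLAST: rank clusters
# by their representative's similarity to the query, then only search
# members of the best clusters.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

# Hypothetical clustered database: {representative: [member sequences]}
clusters = {
    "ACGTACGTAC": ["ACGTACGTAC", "ACGTACGTTC", "ACGAACGTAC"],
    "TTTTGGGGCC": ["TTTTGGGGCC", "TTTTGGGACC"],
}

def compressive_search(query, clusters, top_k=1):
    # Coarse pass: rank clusters by representative similarity
    ranked = sorted(clusters, key=lambda rep: similarity(query, rep), reverse=True)
    # Fine pass: only score members of the top clusters
    hits = []
    for rep in ranked[:top_k]:
        hits += [(seq, similarity(query, seq)) for seq in clusters[rep]]
    return sorted(hits, key=lambda h: h[1], reverse=True)

print(compressive_search("ACGTACGTTC", clusters))
```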

These descriptions were done from memory, but there’s more information at the Berger Lab page.

Watching the World Cup with world-class scientists
There’s nothing better than watching scientists you look up to cheer for their favorite soccer team. ISMB was nice enough to set up a big projection screen and stream the World Cup for us:

[Photo: the crowd watching the World Cup on the big screen at ISMB, July 13, 2014]

The room exploded when Germany scored in extra time – I think there are a lot more people here from Europe than from South America!

ISMB 2014 continues tomorrow. Stay tuned for more updates!


ISCB Student Council Symposium 2014

I had an excellent opportunity to present my mycobacteriophage kmer usage research at the ISCB Student Council Symposium earlier today. I was one of 12 students from around the world who gave oral presentations, which spanned all walks of computational biology and bioinformatics. I thought the symposium was a huge success! Some highlights:

  • Great keynote speakers
    Dr. David Bartel (Whitehead/MIT/HHMI) gave a talk on developments in microRNA research and some really creative tech for sequencing poly(A)-tails. The technique uses a two-step imaging process on an Illumina sequencer to determine the length of the tail and the sequence of the microRNA.
    Dr. Ashlee Earl (Broad) discussed how her lab is using genomics to track pathogenicity and drug resistance in TB and other bugs. She also talked about Pilon, software developed by the Broad for improving assemblies of microbial genomes.
  • Scientific speed dating
    This was a novel concept – chat with a fellow scientist and try to describe your research in two minutes or less. The goal isn’t to find a relationship, but a new collaboration!
  • Networking opportunities
    Abhishek Pratap from Sage Bionetworks talked about Synapse, software they’ve been developing to make computational analysis of NGS data more open and better documented. The student council is also a fan of networking in social settings, and took us all out to a pub after the symposium finished.

Starting tomorrow, I’m volunteering at the main ISMB conference (what a great way to go to a conference when you don’t have grant support). Stay tuned for updates on interesting research that I see over the next few days!

Thoughts from the SEA-PHAGES symposium

What a weekend! The past two days have been filled with excellent student presentations, ample opportunities for networking and fruitful conversations about future research and teaching ideas. Chen and I presented our poster about alignment-free sequence analysis techniques applied to mycobacteriophage genomes on Saturday night. We must have done something right, because we came back to Janelia Farm this morning with a first place ribbon on our poster! Chen also gave his oral presentation this morning and absolutely knocked it out of the park – people have been coming up to us all day and asking how the animations were done.

We’re going to be putting up a web page summarizing our presentation, poster, results and methods in the next few days. For now, you can view the poster and check out our (unfinished) code at my GitHub. I’ll make another post here when everything is ready!

I was also very impressed with some of the research happening at other schools in the SEA-PHAGES program, and will be writing about some of those projects in the next few days. For now, check out some photos from Janelia:

SEA-PHAGES symposium 2014

This weekend I’m down at HHMI’s Janelia Farm Research Campus for the SEA-PHAGES undergraduate research symposium. The phage hunters class I TA is administered through HHMI and is taught at over 70 schools around the US and internationally. This symposium is a chance for undergraduates from all the schools to get together, present their research and be exposed to new ideas. Chen (one of the first year students) and I are presenting our research into tetranucleotide usage in mycobacteriophage genomes. We’ll have a poster at the session on Saturday night, and Chen will be giving an oral presentation on Sunday morning.

Janelia Farm is an inspiring place to visit – something about the beautiful architecture coupled with cutting edge research really sticks with you. I hope to come back to Providence with new connections, ideas and inspirations.

Check out the poster we’ll be presenting and feel free to leave a comment with any questions about the research, phage hunters, or the symposium in general.

Counting tetranucleotides in mycobacteriophages

As a teaching assistant in Brown’s first year seminar “Phage Hunters”, I led several freshman biology and computer science students in an independent bioinformatics research project. We began the semester looking for evidence of CRISPR protospacers in mycobacteriophage genomes. The idea was to use BLAST and other tools to introduce the students to the bioinformatics investigation process. We covered the basics of the CRISPR/Cas system, wrote a python script to download genome sequences from phagesdb.org, and made a local BLAST database on Brown’s computer cluster.

Things were going well with the project, but a few weeks in I was having doubts about how statistically valid our protospacer predictions were. Then I re-read a paper by one of the leaders in the field and discovered that a) they had already looked for protospacers, and b) found no conclusive evidence of them in mycobacteriophages. The author of the paper was also going to be at the SEA-PHAGES symposium where we were planning to present our class results, so that really spelled the end of the CRISPR project. We needed a new idea, though – the course instructors were counting on the bioinformatics team to generate some research we could bring to the symposium. My solution: frantic searching on Google Scholar for anything relevant to bioinformatics and bacteriophages.

Within a few minutes I came upon a paper (1) that looked at the usage of tetranucleotides in viral and bacterial genomes. The idea is that closely related genomes have similar tetranucleotide usage signatures, and this signal can be used to study relationships independently of alignment-based techniques. I had found a new idea for the project! This kind of analysis was also perfect for teaching bioinformatics. It introduces a lot of the concepts and language used in the field, like kmer counting and normalization. It’s fairly straightforward to program (see the sketch below), easy to apply to bacteriophage genomes, and doesn’t require complicated statistics for a first level investigation.
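
The core of the analysis really is easy to sketch: count all overlapping 4-mers in a genome and normalize to frequencies, so that genomes of different lengths can be compared (by correlation of the frequency vectors, for example). This is a minimal illustration of the idea, not our actual class code:

```python
# Minimal sketch: tetranucleotide frequencies of a genome, as a
# 256-element vector that can be compared across genomes (e.g., by
# correlation or Euclidean distance). Illustrative only.
from collections import Counter
from itertools import product

def tetranucleotide_frequencies(genome):
    genome = genome.upper()
    counts = Counter(genome[i:i + 4] for i in range(len(genome) - 3))
    # Fix a stable ordering over the 256 unambiguous 4-mers
    kmers = ["".join(p) for p in product("ACGT", repeat=4)]
    total = sum(counts[k] for k in kmers) or 1
    return [counts[k] / total for k in kmers]

freqs = tetranucleotide_frequencies("ACGTACGTTGCAACGTTGCA")
print(len(freqs), round(sum(freqs), 6))  # 256 frequencies summing to 1
```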

I ran with this idea for the bioinformatics project and the results were quite exciting. We found that tetranucleotide usage is well conserved within a mycobacteriophage cluster (a way to group phage based on pairwise nucleotide alignment and gene content comparisons) and divergent between clusters. We built phylogenetic trees that closely corresponded to published trees, looked for horizontal gene transfer, and were able to accurately cluster unknown phage – all based on the usage of 4-letter words within the genomes. For a more detailed overview of the work, check out the abstract I submitted for the International Society for Computational Biology Student Council conference.

One of the first year students, Chen Ye, and I are also going to be presenting this research at the SEA-PHAGES symposium at HHMI’s Janelia Farm this weekend. Check back for an update with our poster and other thoughts from the conference!

1. Pride, D.T., Wassenaar, T.M., Ghose, C., and Blaser, M.J. (2006). Evidence of host-virus co-evolution in tetranucleotide usage patterns of bacteriophages and eukaryotic viruses. BMC Genomics 7, 8.

ISCB student council 2014

I submitted some research I’ve been working on (as a byproduct of TAing a first year seminar and leading some students in an independent bioinformatics project) to the International Society for Computational Biology student council symposium. Yesterday I found out it was selected for an oral presentation! This is the first chance I’ve had to present independent research, so needless to say I’m pretty excited.

The talk is titled “Tetranucleotide usage in mycobacteriophage genomes: alignment-free methods to cluster phage and infer evolutionary relationships”. Read on for the full abstract.
