Genetic and transcriptional evolution alters cancer cell line drug response

Are your cell lines evolving right under your eyes?
Credit: Lauren Solomon and Susanna M. Hamilton, Broad Communications

As a scientific researcher, you expect experimental reagents to be delivered as ordered. 99.9% pure means 99.9% pure, and a cell line advertised with specific growth characteristics and genetic features should reflect just that. However, recently published work by Uri Ben-David, me, and a team of researchers shows this isn’t necessarily true.

Cancer cell lines – immortalized cells derived from a cancer patient that can theoretically proliferate indefinitely – are a workhorse of biomedical research because they serve as models for human tumors. Cell lines can be manipulated in vitro and easily screened for vulnerabilities to certain drugs. In the past, research involving cancer cell lines has been difficult to replicate: attempts to find drugs that selectively target cancer cell lines couldn’t be reproduced in different labs, for example, or didn’t translate to animal experiments.

Our team, led by Uri Ben-David and Todd Golub in the Cancer Program at the Broad Institute, thought that underlying genetic changes could be responsible for the failure of study replication. This isn’t necessarily a new hypothesis, and researchers have demonstrated genetic instability in cell lines before. However, we wanted to put the issue to rest forever.

We began by profiling 27 isolates of the breast cancer cell line MCF7 that came from different commercial vendors and different labs. Most were wild type, but some had undergone supposedly neutral genetic manipulations, such as the introduction of genes to produce fluorescence markers. First, we found significant and correlated changes in genetics (SNPs and copy number variants) and gene transcription levels. To test if these changes were important or just a curiosity, we subjected the 27 isolates to a panel of different drugs, some of which were expected to kill the cells and some of which should have had no effect. The results were striking – drug responses were so variable that MCF7 could have been called susceptible or entirely resistant to many of these drugs, simply by changing the source of the cell line. I hope you can appreciate how variability like this would throw a wrench in any drug discovery pipeline.

To check whether this was simply a feature of MCF7, we repeated many of the same experiments on the lung cancer cell line A549, and performed smaller-scale characterizations of 11 additional cell lines. We found similar levels of variation in every case tested. This is the largest and most detailed characterization of cell line variation to date, and it will serve as a resource for researchers working with these lines. We also designed a web-based tool called Cell STRAINER that allows researchers to compare the cell lines in their lab to references, revealing how much the lines have diverged from expectation.

Is it all bad news if you’re a researcher working with cancer cell lines? Definitely not. Now that we have a better idea of how cell lines diverge over time, there are a few steps you can take to minimize the effect:

  • Serial passaging and genetic manipulation cause the largest changes. Maintaining a stock in the freezer over many years has a much smaller effect.
  • Characterize any cell line you receive from a collaborator, or the same line periodically over time. Low-pass whole genome sequencing (and comparison with Cell STRAINER) is a cheap and effective method.
  • Recognize that inconsistencies in cell line-based experiments may be due to underlying variability, not flawed science.

There was even one positive finding – panels of these isogenic-like cell lines can be used to reveal the mechanism of action of new drugs better than established cell line panels.

The full paper is online now at Nature. The Broad Institute published a good summary of the work, and the research was picked up by Stat News (paywalled). This was a major team effort and collaboration, all orchestrated by Uri Ben-David. I can’t thank him and the other coauthors enough for their dedication to the project!

Joining the Bhatt lab

My third lab rotation in my first year at Stanford took a different path than most of my previous experience. I came to Stanford expecting to research chromatin structure – 3D conformation, gene expression, functional consequences. My post history clearly shows this interest, and people in my class even referred to me as the “Chromatin Structure Guy.” However, approaching my third-quarter lab rotation, I was looking for something a little different. Rotations are a great time to try something new, in a research area you’re not experienced in.

I decided to rotate in Dr. Ami Bhatt’s lab. She’s an MD/PhD broadly interested in the human microbiome and its influence on human health. With dual appointments in the departments of Genetics and Hematology, she has great clinical research projects as well. Plus, the lab does interesting method development on new sequencing technologies, DNA extraction protocols and bioinformatics techniques. The microbiome research area is rapidly expanding, as gut microbial composition has been shown to play a role in a huge range of human health conditions, from psychiatry to cancer immunotherapy response. “What a great chance to work on something new for a few months,” I told myself. “I can always go back to a chromatin lab after the rotation is over.”

I never thought I would find the research so interesting, and like the lab so much.

So, I joined a poop lab. I’ll let that one sink in. We work with stool samples so much that we have to make light of it. Stool jokes are commonplace, even encouraged, in lab meeting presentations. Everyone in the lab is required to make their own “poo-moji” after they join.

My poo-moji. What a likeness!

I did my inaugural microbial DNA extraction from stool samples last week. It didn’t smell nearly as bad as I had feared. Still, running this protocol always has me thinking about the potential for things to end very badly:

  1. Place frozen stool in buffer
  2. Heat to 85 degrees C
  3. Vortex violently for 1 minute
  4. ….

Yes, we have tubes full of liquid poo, heated to nearly boiling temperature, shaking about violently on the bench! You can bet I made sure those caps were on tight.

Jokes aside, my interest in this field continues to grow the more I read about the microbiome. As a start, here are some of the genomics and methods topics I find interesting at the moment:

  • Metagenomic binning. Metagenomics often means working with organisms without a reference genome – maybe the organism has never been sequenced before, or it has diverged so much from the reference that alignment is essentially useless. Without aligning to a reference sequence, how can we cluster contigs assembled from a metagenomic sequencing experiment such that each cluster likely represents a single organism?
  • Linked reads, which add long-range information to a typical short-read genome sequencing dataset. They can massively aid assembly and the recovery of complete genomes from a metagenome.
  • k-mer analysis. How can short sequences of DNA be used to quickly classify a sequencing read, or determine if a particular organism is present in a metagenomic sample? This harkens back to some research I did in undergrad on tetranucleotide usage in bacteriophage genomes (see the sketch after this list). Maybe this field isn’t too foreign after all!
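To make the tetranucleotide idea concrete, here’s a minimal Python sketch; the function name and normalization are my own choices for illustration, not from any particular tool:

```python
from collections import Counter
from itertools import product

def kmer_profile(seq, k=4):
    """Return normalized k-mer frequencies in a fixed ACGT ordering."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values()) or 1
    # A fixed ordering over all 4^k possible k-mers makes profiles
    # comparable across reads and contigs
    return [counts[''.join(p)] / total for p in product('ACGT', repeat=k)]

# Reads or contigs with similar profiles plausibly come from the same organism,
# so these vectors can feed a classifier or a binning/clustering step.
profile = kmer_profile('ACGTACGTGGCCTTAAACGT')
```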

On the biological side, there’s almost too much to list. It seems like the microbiome plays a role in every bodily process involving metabolism or the immune system. Yes, that’s basically everything. For a start:

  • Establishment of the microbiome. A newborn’s immune system has to tolerate microbes in the gut without mounting an immune overreaction, but also has to prevent pathogenic organisms from taking hold. The delicate interplay between these processes, and how the balance is maintained, is very interesting to me.
  • The microbiome’s role in cancer immunotherapy. Mice without a microbiome respond poorly to cancer immunotherapy, and the efficacy of treatment can reliably be altered with antibiotics. Although researchers have shown certain bacterial groups are associated with better or worse outcomes in patients, I’d really like to move this research beyond correlative analysis.
  • Fecal microbial transplants (FMT) for Clostridium difficile infection. FMT is one of the most effective ways to treat C. difficile, an infection typically acquired in hospitals and nursing homes that costs tens of thousands of lives per year. Transferring microbes from a healthy donor to an infected patient is one of the best treatments, but we’re not sure of the specifics of how it works. Which microbes are necessary and sufficient to displace C. diff? Attempts to engineer a curative community of bacteria by selecting individual strains have failed; can we do better by comparing simplified microbial communities from a stool donor?

Honestly, it feels great to be done with rotations and to have a decided lab “home.” With the first year of graduate school almost over, I can now spend my time in more focused research and avoid classes for the time being. More microbiome posts to come soon!

Deep learning to understand and predict single-cell chromatin structure

In my last post, I described how to simulate ensembles of structures representing the 3D conformation of chromatin inside the nucleus. Now, I’m going to describe some of my research to use deep learning methods, particularly an autoencoder/decoder, to do some interesting things with this data:

  • Cluster structures from individual cells. The autoencoder should be able to learn a reduced-dimensionality representation of the data that will allow better clustering.
  • Reduce noise in experimental data.
  • Predict missing points in experimental data.

Something I learned early on while rotating in the Kundaje lab at Stanford is that deep learning methods can seem domain-specific at first. However, if you can translate your data and question into a problem that has already been studied by other researchers, you can benefit from their work and expertise. For example, using deep learning directly on 3D chromatin structure data would be difficult, because few methods have been developed to work on point coordinates in 3D. The field of image processing, however, has a wealth of deep learning research. A 3D structure can easily be represented by a 2D distance or contact map – essentially a grayscale image. By translating a 3D structure problem into a 2D image problem, we can use many of the methods and techniques already developed for image processing.

Autoencoders and decoders

The primary model I’m going to use is a convolutional autoencoder. I’m not going into depth about the model here; see this post for an excellent review. Conceptually, an autoencoder learns a reduced representation of the input by passing it through (several) layers of convolutional filters. The reverse operation, decoding, attempts to reconstruct the original information from the reduced representation. The loss function is some measure of the difference between the input and reconstructed data, and training iteratively optimizes the weights of the model to minimize the loss.

In this simple example, an autoencoder and decoder can be thought of as squishing the input image down to a compressed encoding, then reconstructing it to the original size (decoding). The reconstruction will not be perfect, but the difference between the input and output is minimized in training. (Source)

Data processing

In this post I’m going to be using exclusively simulated 3D structures. Each structure starts as 64 ordered 3D points, a 64×3 matrix of x,y,z coordinates. Calculating the pairwise distance between all points gives a 64×64 distance matrix, which is normalized to lie in [0,1] and, by definition, is symmetric with a diagonal of zero. I used ten thousand structures simulated with the molecular dynamics pipeline, attempting to pick independent draws from the MD simulation. The data was split 80/20 between training and testing.
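Here’s a minimal sketch of that preprocessing, assuming NumPy/SciPy and normalizing each map by its own maximum (my reading of the [0,1] scaling above):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def structure_to_map(coords):
    """Turn an (N, 3) array of x,y,z points into a normalized N x N distance map."""
    dmat = squareform(pdist(coords))  # symmetric with a zero diagonal by construction
    return dmat / dmat.max()          # scale into [0, 1]

# Stand-in for the MD simulation output: 10,000 structures of 64 points each
structures = np.random.rand(10000, 64, 3)
maps = np.stack([structure_to_map(s) for s in structures])
train, test = maps[:8000], maps[8000:]  # the 80/20 split described above
```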

Model architecture

For the clustering autoencoder, the goal is to reduce dimensionality as much as possible while still retaining good input information. We will accept modest loss for a significant reduction in dimensionality. I used 4 convolutional layers with 2×2 max pooling between layers. The final encoding layer was a dense layer. The decoder is essentially the inverse, with upsampling layers instead of max pooling. I implemented this model in Python using Keras with the Theano backend. A sketch of this architecture is below.
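Here’s a rough sketch of that architecture in Keras (using the `train` array from the preprocessing sketch above). The filter counts, kernel sizes, and bottleneck width are placeholders I chose for illustration, not the hyperparameters actually used:

```python
from keras.models import Model
from keras.layers import (Input, Conv2D, MaxPooling2D, UpSampling2D,
                          Dense, Flatten, Reshape)

inp = Input(shape=(64, 64, 1))

# Encoder: 4 convolutional layers with 2x2 max pooling in between
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inp)
x = MaxPooling2D((2, 2))(x)   # 32x32
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2))(x)   # 16x16
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2))(x)   # 8x8
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2))(x)   # 4x4
encoded = Dense(32, activation='relu')(Flatten()(x))  # dense bottleneck

# Decoder: mirror of the encoder, with upsampling instead of pooling
x = Dense(4 * 4 * 8, activation='relu')(encoded)
x = Reshape((4, 4, 8))(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
out = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer='adam', loss='mse')
# Training minimizes reconstruction error between input and output maps:
# autoencoder.fit(train[..., None], train[..., None], epochs=50, batch_size=64)
```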

Dealing with distance map properties

The 2D distance maps I’m working with are symmetric and have a diagonal of zero. First, I tried to make the model learn these properties through a custom regression loss function, for example minimizing the difference between a point i,j and its pair j,i. This proved too cumbersome, so I simply freed the model from learning these properties by using custom layers. Details of the implementation are below, because they took me a while to figure out! One custom layer sets the diagonal to zero at the end of the decoding step; the other averages the upper and lower triangles of the matrix to enforce symmetry.
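Here’s roughly how those two layers could look as Keras Lambda layers, continuing from the decoder output `out` in the sketch above. This is a sketch assuming a (batch, 64, 64, 1) tensor layout, not the project’s exact code:

```python
from keras import backend as K
from keras.layers import Lambda

def symmetrize(x):
    """Average the matrix with its transpose so the output is exactly symmetric."""
    xt = K.permute_dimensions(x, (0, 2, 1, 3))  # swap the two spatial axes
    return (x + xt) / 2.0

def zero_diagonal(x):
    """Zero out the diagonal with a fixed mask (1 everywhere except the diagonal)."""
    n = K.int_shape(x)[1]
    mask = 1.0 - K.eye(n)
    return x * K.reshape(mask, (1, n, n, 1))

# Appended after the final decoder layer, so the network never has to
# learn symmetry or the zero diagonal itself:
out = Lambda(symmetrize)(out)
out = Lambda(zero_diagonal)(out)
```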

Clustering single-cell chromatin structure data

No real clustering here…

In the past I’ve attempted to visualize and cluster single-cell chromatin structure data. Pretty much every approach I tried, on both simulated and true experimental data, resulted in the “cloud” – no real variation captured by the axes. In this t-SNE plot of simulated 3D structures collapsed to 2D maps, you can see some regions of higher density, but no true clusters emerging. The output layer of the autoencoder ideally contains much of the information in the original image at a much reduced size. By clustering this output, we will hopefully capture more meaningful variation and better discrete grouping.


Groupings of similar folding in the 3D structure!

Here are the results of clustering the reduced-dimensionality representations learned by the autoencoder. I’m using the PHATE method here, which seems especially applicable if chromatin is thought to diffuse through a set of states. Each point in this map is represented by its decoded output. You can see images with similar structure, blocks that look like topologically associating domains, start to group together, indicating similarities in the input. There’s still much work to be done here, and I don’t think clear clusters would emerge even with perfect data – the space of 3D structures is just too continuous.
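For concreteness, the embedding step might look like this with the phate package; `inp`, `encoded`, and `maps` refer to the illustrative sketches above:

```python
import phate
from keras.models import Model

# The encoder half of the trained autoencoder maps each distance map
# to its reduced representation
encoder = Model(inp, encoded)
latent = encoder.predict(maps[..., None])

# Embed the latent vectors with PHATE for visualization and clustering
embedding = phate.PHATE(n_components=2).fit_transform(latent)
```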

Denoising and inpainting

I am particularly surprised and impressed by the use of deep learning for image super-resolution and image inpainting. The results of some state-of-the-art research are shocking – the network is able to increase the resolution of a blurred image almost to original quality, or fill in pixels that match a scene when the information is totally absent.

With these ideas in mind, I thought I could use a similar approach to reduce noise and “inpaint” missing data in simulated chromatin structures. These tasks also use an autoencoder/decoder architecture, but I don’t care about the size of the latent space representation, so the model can be much larger. In experimental data obtained by high-powered fluorescence microscopy, some points are outliers: they appear far away from the rest of the structure, indicating something went wrong with hybridization of the fluorescence probes to chromatin or with the spot-fitting algorithm. Other points are missed entirely; when condensed to a 2D map, these show up as whole rows and columns of missing data.
To train a model to solve these tasks, I artificially created noise and missing data in the simulated structures (a sketch of this perturbation is below). Then the autoencoder/decoder was trained to predict the original, unperturbed distance matrix.
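A sketch of that perturbation step; the outlier magnitudes and counts are arbitrary choices for illustration:

```python
import numpy as np

def corrupt(dmap, n_outliers=3, n_missing=2, rng=np.random):
    """Perturb a clean distance map with outlier points and missing rows/columns."""
    noisy = dmap.copy()
    n = noisy.shape[0]
    # An outlier spot sits far from the rest of the structure, inflating
    # one row and its matching column
    for i in rng.choice(n, n_outliers, replace=False):
        shifted = np.clip(noisy[i, :] + rng.uniform(0.3, 0.6), 0.0, 1.0)
        noisy[i, :] = shifted
        noisy[:, i] = shifted
    # A missed probe wipes out an entire row and column
    for i in rng.choice(n, n_missing, replace=False):
        noisy[i, :] = 0.0
        noisy[:, i] = 0.0
    np.fill_diagonal(noisy, 0.0)
    return noisy

# Train on (corrupted, clean) pairs so the network learns to undo the damage:
# autoencoder.fit(corrupted[..., None], maps[..., None], epochs=50, batch_size=64)
corrupted = np.stack([corrupt(m) for m in maps])
```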

Here’s an example result. As you can see, the large-scale features of the distance map are recovered, but the map remains noisy and undefined. Clearly the model is learning something, but it can’t perfectly reconstruct the input distance map.

Conclusions

By translating a problem of 3D points into a problem of 2D distance matrices, I was able to use established deep learning techniques on single-cell chromatin structure data. Here I only showed simulated structures, because the experimental data is still unpublished! Using an autoencoder/decoder model, we were able to better cluster distance maps into groups that represent shared 3D structures. We were also able to achieve moderate denoising and inpainting with an autoencoder.

If I were to continue this work, there are a few areas I would focus on:

  • Deep learning on 3D structures themselves. This has been used in protein structure prediction [ref]. You can also use a voxel representation, where each voxel can be occupied or unoccupied by a point. My friend Eli Draizen is working on a similar problem.
  • Can you train a model on simulated data, where you have effectively infinite sample size and can control properties like noise, and then apply it to real-world experimental data?
  • By working exclusively with 2D images, we lose some information about the input structure. For example, the output distance maps don’t have to obey the triangle inequality. We could use a method like multidimensional scaling (MDS) to recover a 3D structure from an output 2D distance map, recompute the distances, and use those in the loss function (see the sketch after this list).
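That last idea could be prototyped with scikit-learn’s MDS on a precomputed dissimilarity matrix; `decoded_map` below is a hypothetical network output:

```python
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

# decoded_map: hypothetical (64, 64) distance map produced by the decoder.
# Embed it into 3D even though it may violate the triangle inequality...
mds = MDS(n_components=3, dissimilarity='precomputed')
coords = mds.fit_transform(decoded_map)

# ...then re-derive distances from the coordinates. This map is geometrically
# consistent and could be compared to the target inside the loss function.
consistent_map = squareform(pdist(coords))
```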

Overall, though, this was an interesting project and a great way to learn about implementing a deep learning model in Keras!


Cardiff

After the short time in London, I was on a bus to Cardiff. This was my first time in Wales, and it was nice to be in a calmer place. I was staying with two friends from the ISCB Student Council, who showed me around downtown Cardiff and the harbor. It was raining when I got there, which I found to be quite common in Wales. I was actually happy for the cool weather and rain – after Istanbul’s constant 33°C sun, this was the first time I had to wear the sweater and rain jacket I brought!

The next morning I had an early flight across the way to Dublin. Last city in Europe!

Arriving in Assos

How lucky are we? Selen’s family runs a vineyard in Assos, on the Turkish coastline near the Greek island of Lesvos. We piled in the car — six hours of driving and a ferry ride later, we arrived at the vineyard. And just in time! We welcomed a thunderstorm rolling in from the Aegean. We all huddled under the porch to watch the lightning. Soon enough it was hailing fairly large pellets — the first time that’s happened in Assos since Selen’s family has been here.

Lightning over the Aegean

After the storm – ancient Assos in the background


The next day we explored Assos, the ancient village built on a hill over the sea. The construction dates back to 530 BC, and much of it is still standing. There’s no mortar holding the walls and tower together, only perfectly carved stone blocks interlocking and supporting each other. At the top of the hill was a large temple to Athena, the goddess of war and wisdom. Only a few column pieces remained, reassembled here with some modern cast sections. Stand among the columns and imagine the Greeks using this temple, right where you are, 2,500 years ago. And then snap out of it and pose for a photo.


Further down the hill is an amphitheater, still standing thanks to the same time-proof construction.

We walked down to the old harbor town of Assos afterwards. It’s now a popular tourist destination, filled with hotels, restaurants and places to get ice cream by the water. We saw some locals serving “fish bread” to tourists from their boat. Doesn’t get fresher than this!

No trickery from this ice cream man!

Thanks for telling us you are taking a photo, Lauren!

Olive trees grow like weeds around here. Unfortunately they are nowhere near ripe. Still, we got a great sunset over the ocean and a perfect end to the day.

Not the best idea I’ve had this trip.

Harbor at sundown

Cycling around Büyükada

After so much city time, Lauren and I wanted something quieter and more active for today. Selen recommended Büyükada, the largest of the Princes’ Islands in the Sea of Marmara. The day started with another boat ride, this one much longer than the last.

Cruising down the Bosphorus

We rented bikes from a local shop and set out to explore the island. There are two peaks, one topped by an old Greek orphanage that is now in decay. Fun fact: this is (or was, before it started falling apart) the largest wooden building in Europe and the second largest in the world! Now, it’s all blocked off.

View of the orphanage from the higher peak

The highest point on the island was a major challenge to cycle up on the poorly maintained rentals, so we had to walk most of the way. The views of the Sea of Marmara and the Turkish coastline were worth it!



On the way down we found a nice secluded swimming hole. A dip, a nap and a snack later we were feeling refreshed!

The Sea of Marmara was cool and clear

Later, in town, we got dinner at a seaside restaurant. They let us choose which freshly caught fish we wanted cooked up.

One freshly caught sea bass, please!

The winding route we took around the island


On the boat ride home, we had an exceptional sunset over the old city. Aya Sofia and the Blue Mosque looking regal here.

Istanbul’s Old City

Lauren and I were up early Monday morning for a full day in Istanbul’s old city. We caught a boat from a terminal near Selen’s house and rode it all the way down the Bosphorus. Getting around by boat is common and efficient here. The terminal runs just like a metro stop – swipe your Istanbulkart to get in – and we noticed many local commuters boarding at stops on both the European and Asian sides of the Bosphorus.

Simit for breakfast!

I caught Lauren having tea and contemplating the spirit of travel

Morning fog over Anatolia

Today was the day to be tourists in the old city. First up was Hagia Sophia (pronounced Aya Sofia here), the towering monument to religion constructed by the Byzantine Emperor Justinian in 537 AD. It was a Greek Orthodox church back then, and many mosaics and symbols of Christianity are still visible inside the building. With the conquest of Constantinople (Istanbul) by the Ottoman Empire in 1453, out went Christianity, and Aya Sofia was converted into a mosque. It remained a place of active worship until the 1930s, when it was opened to the public as a museum.

Notice the massive calligraphic panels (recent additions) bearing the names of Allah, Muhammad and other important figures in Islam.

Across the way is the Blue Mosque, a towering building with a distinctive red carpet that is still actively used as a place of worship.

Men still kneel on the red carpet to pray at the Blue Mosque today. Lauren had to cover up to go inside!

In desperate need of snacks after only simit for breakfast, we searched the Hippodrome (a circus and chariot-racing pavilion in the times of Constantinople) for some street food. Luckily a vendor selling freshly roasted chestnuts and corn was more than happy to oblige. Afterwards we took a trip through the Grand Bazaar, one of the oldest covered markets still in operation today. We stopped to peruse the hand-made silk rugs, gasp at the gold salesmen and sample Turkish delight from vendors eager to take our lira.

Service with a smile! I think we were his first customers of the day.

Wearing your wealth on your sleeve is a thing here.


Afterwards, we toured Topkapı Palace, where Ottoman sultans lived and ruled from the 15th century onward. Most interesting was the Harem, the private quarters of the Sultan and his family. The Sultan’s mother (the Queen Mother) had a ton of power back then! She dictated which women could be part of the Harem, and actually had a room between the Sultan’s quarters and his wives’ (no doubt to keep tabs on their comings and goings).

Only the Sultan’s family and servants were allowed inside. A palace within the palace.

Imperial hall of the Harem.


Finally, we had to try a Turkish bath (hamam). Baths have been a staple of Istanbul’s culture since the 16th century, and Çemberlitaş Hamam, one of the oldest, was the obvious choice. Walk into the bath room wearing only a towel. Lie down on the hot marble slab and wait 15 minutes for your skin to soften. When your masseuse enters, roll over and try to convince yourself it’s fun as they scrape your body of its outer layers with what is basically a Brillo pad. Enjoy a hot massage and soap bath afterwards, and end with a cold shower. Wow.

You will be scrubbed clean of every single dead skin cell while lying on a marble slab from 1584. (Source)


Tired, but cleaner than we’ve been in days, we rode back up the Bosphorus in another commuter boat.

What’s next: Stanford Genetics!

After a long process of PhD applications, interviews and waiting for results, I’m happy to announce that I’ve settled on a home for the next 5 years. Stanford’s PhD program in genetics was exactly the right fit for my scientific, career and lifestyle interests.

I chose the genetics program over the biomedical informatics program for a few reasons. First, the BMI program is explicitly focused on algorithm development and expects students to draw their main motivation from algorithms. Although I find algorithm development interesting, for me it has to be motivated by an underlying biological problem. The genetics program will allow me to work on biological problems that excite me (probably related to chromatin structure and conformation) from a computational angle. Second, when I searched for faculty I wanted to work with, they were most commonly in genetics or other biology-focused departments. That being said, I plan to do an entirely computational PhD if I can manage it. That’s where my interests and expertise lie.

Students typically complete 3 rotations their first year. At the top of my list are chromatin biologists William Greenleaf and Alistair Boettiger, and Computer Science/Genetics expert Anshul Kundaje.

A few months to finish up at the Broad, a few months of travel and a big move out west are in my future. It’s an exciting time.

Vineyard 70.3 – 4th place!

After months of training for the Vineyard 70.3 triathlon, the weekend of the race was finally here! I was slightly nervous, but mostly excited to test my training against the course and the other competitors. I took the ferry from Woods Hole to the Vineyard Saturday afternoon, had a carb-y dinner with my friend Jordan who was also competing, and got to sleep very early. Up at 5:30 on Sunday and ready to race!

The morning was cold and windy – whitecaps lapped against the beach as the sun came up. Shortly after 7:00 the race started and we were off! The swim was tough – the choppy water made it hard to sight, and it took me a few minutes to settle into a comfortable rhythm. The bike section was much better; I passed a ton of people while maintaining a reasonable level of effort. Finally, it was time to run, and I was still feeling strong. I kept passing people while only being passed by one person (who went on to take first place for the women, so I’m not even mad). I really had to push through the pain in the last few miles, especially running along the beach into a headwind.

When I crossed the line I was shocked at my time – 5:15! I thought 6 hours was a realistic goal for the race… I blew that out of the water! I ended up placing 4th overall and 1st in the 20-24 age group. I was thrilled with the results and enjoyed a persistent runner’s high for a while after crossing the line.

Here’s a race report on what I did well and what I can improve for future races.

  • Swim: 29 min. The course was short – my Garmin showed it as 1,600 yd.
  • Bike: 2:53 (19.3 mph)
  • Run: 1:49 (8:23 min/mi)
  • Total: 5:15
  • What I did well: Biked hard but didn’t overdo it. Hydrated and ate frequently and regularly. Ran at a consistent pace and pushed through a crushing mile 10-12 into a headwind. Transitions were smooth, especially T2.
  • To improve: More time in the pool, and ocean swims in choppy water, would have made a difference in my swim time – I was in the bottom half of swim finishers. I ride a Specialized Allez roadie I bought used and also commute on; moving to a tri bike would be a big improvement.
  • Training: Averaged 8-10 hours a week for the past 5 months, typically with 1 swim/wk. Two-week vacation at the start of August with only 4 runs for workouts.

What’s next? Not sure. For now, some R&R and big meals will do me well. I’m considering pursuing triathlon seriously depending on where I end up for grad school, but that remains to be seen. A huge thanks to everyone who supported my training for this race, especially my parents, who came to the Vineyard and cheered me on!


4 things I learned from keynote lectures at SCS2016

SCS2016 featured keynote lectures by two senior scientists in computational biology. John Quackenbush and Janet Thornton each shared scientific findings and expert advice to the students listening eagerly in the audience. I came away from the lectures with a few ideas I will keep in mind for both my daily research work and future career planning:

  1. “All models are wrong, but some are useful.” John Quackenbush opened his talk with this well-known quote attributed to George Box. He explained that many of the network models researchers in his group create are inherently flawed — and that’s OK! No model is perfect, but good ones can solve the problem at hand.
    This is definitely something I can apply to my research — It’s easy to get bogged down thinking about the small flaws in models I come up with or methods I use. It’s better to ask “is this useful?” than “is this perfect?”
  2. “You have to work hard, and it only gets harder.” Tough words to hear from Janet Thornton, who described her path from student, to PI, to eventual director of the European Bioinformatics Institute. Janet described how she thought each career advancement would bring a decrease in the amount of work required for success. Just the opposite, she discovered: each transition brought more work and more responsibility, but the increase was balanced by an increase in excitement. Higher-level responsibilities and mentoring younger students made the increased workload worth it.
  3. “Every revolution in science has been driven by one and only one thing: access to data.” John Quackenbush described how data used to be siloed away in the towers of the elite. Access was hard to obtain — due to both policy and logistical constraints — and science moved slowly as a result. We are slowly entering a culture of data sharing, driven by the obvious results of collaboration and the means to be able to share globally and instantaneously. John was excited about the potential for science in the future as data sharing becomes even more common, and I am too!
  4. “Communication is the hardest part, and the most important.” Janet Thornton selected communication as the single most difficult part of her career. She named it the sole factor that could make or break a project, collaboration or organization. I thoroughly agree with this statement (enough to make it the theme of my blog on organizing SCS2016) but was surprised that she still considers it a challenge. Effective communication takes practice, but can be very rewarding — whether it’s a Nature paper, innovative collaborative project or a successful international symposium!

The keynote lectures of SCS2016 were special. Senior scientists gave us a view not only into their thoughts on research, but also their ideas about careers, communication and the scientific process as a whole. Students have a lot to learn from mentors like John Quackenbush and Janet Thornton, and these lessons will stick with me for a long time.

This post was originally published on the PLOS Computational Biology Field Reports Blog.