Deploying redun to AWS Batch – troubleshooting

I recently went down the rabbit hole trying out the newest bioinformatics workflow manager, redun. While installation and running workflows locally went off without a hitch, I experienced some trouble getting jobs deployed to AWS Batch. Here’s a list of my troubleshooting steps, in case you experience the same issues. To start, I followed the instructions for the “05_aws_batch” example workflow.

I was deploying the workflow on my AWS account at Loyal. Your mileage may vary if you’re using a new AWS account or have different security policies in place.

Building Docker images

The Docker daemon socket is owned by root by default, so building and pushing images often means prefixing every command with “sudo”. A quick (if insecure) fix is sudo chmod 666 /var/run/docker.sock

Or see the longer-term fix in this Stack Overflow post.

Submitting jobs to AWS Batch

I experienced the following error when submitting jobs to AWS Batch:

upload failed: - to s3://MY-BUCKET/redun/jobs/ca27a7f20526225015b01b231bd0f1eeb0e6c7d8/status
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden

I suspected an error in the “role” setting, and that turned out to be correct. I first tried using the generic service-linked role

arn:aws:iam::ACCOUNT-ID:role/aws-service-role/batch.amazonaws.com/AWSServiceRoleForBatch

but that didn’t work.

I then created a custom IAM role with S3, EC2, ECS, and Batch permissions, and attached the following trust policy so that ECS tasks could assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

And then everything worked as expected.
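
For context, redun’s AWS Batch executor is configured in the .redun/redun.ini file. A rough sketch of the relevant section is below – the option names follow the 05_aws_batch example, but the values here are placeholders rather than my actual account details:

[executors.batch]
type = aws_batch
image = ACCOUNT-ID.dkr.ecr.REGION.amazonaws.com/redun_example
queue = YOUR-BATCH-QUEUE
s3_scratch = s3://MY-BUCKET/redun/
role = arn:aws:iam::ACCOUNT-ID:role/YOUR-CUSTOM-ROLE

The role line is where the custom IAM role described above gets plugged in.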

ECS unable to assume role

I heard from someone else trying redun for the first time that they were able to get Batch submission working by following the (similar) instructions in this Stack Overflow post.

I hope this helps anyone trying to deploy redun to AWS Batch for the first time!

Trying out redun – the newest workflow manager on the block

Workflow managers form the cornerstone of a modern bioinformatics stack. By enabling data provenance, portability, scalability, and re-entrancy, workflow managers accelerate the discovery process in any computational biology task. There are many workflow managers available to choose from (a community-sourced list holds over 300): Snakemake, Nextflow, WDL… each has its relative strengths and drawbacks.

The engineering team at Insitro saw all the existing workflow managers, and then decided to invest in building their own: redun. Why? The motivation and influences docs pages lay out many of the reasons. In short, the team wanted a workflow manager written in Python that didn’t require expressing pipelines as dataflows.

I spent a few days trying out redun – working through the examples and writing some small workflows of my own. I really like the project and the energy of open source development behind it. I’m not at the point where I’m going to re-write all of my Nextflow pipelines in redun, but I’m starting to consider the benefits of doing so.
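
To give a flavor of what redun code looks like, here’s a minimal sketch in the spirit of the tutorial examples (the file name and task logic are my own invention, not taken from a specific example):

# workflow.py – a minimal redun workflow sketch
from redun import task, File

redun_namespace = "demo"

@task()
def count_lines(infile: File) -> int:
    # File is redun's file abstraction; reading through it lets redun track provenance
    with infile.open() as f:
        return sum(1 for _ in f)

@task()
def main(path: str = "data.txt") -> int:
    return count_lines(File(path))

Assuming the CLI behaves as in the tutorial, this would be run with something like redun run workflow.py main --path data.txt.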

The positives I immediately noticed about redun include:

  • redun is Python. Not having to learn a domain-specific language is a huge advantage.
  • The ability to execute sub-workflows with a single command. This is helpful if you want to enter a workflow with an intermediate file type.
  • I can see redun working as a centralized way to track workflow execution and file provenance within a lab or company.
  • There are several options for the execution backend, and redun is easy to deploy to AWS Batch (with some tweaks).
  • The tutorial and example workflows were helpful for demonstrating the key concepts.

A few drawbacks, as well:

  • There hasn’t been much investment in observability or execution tracking. Compared to Nextflow Tower and other tools, redun is in the last century.
  • Similarly, there isn’t yet much community investment in redun, like there is in nf-core.
  • While redun is extremely flexible, I bet it will be more challenging for scientists to learn than Snakemake.

There will certainly be other items to add to these lists as I get more familiar with redun. For now, it’s fair to say I’m impressed, and I want to write more pipelines in redun!

Rare transmission of commensal and pathogenic bacteria in the gut microbiome of hospitalized adults (2)

When we last left off, I was peering into the -80 freezer at the hundreds of stool samples I would need to analyze. In reality, a lot of experimental design work went into this project before I ever opened up the freezer!

Designing a good experiment was one of the most important things I learned in grad school. Science is already hard enough – you need to set yourself up for success from the beginning, whether the work is wet lab or computational. I like to think about what success on a project would look like, then work backwards to understand the data I need to collect.

To convincingly prove that a bacterium had transmitted from the microbiome of one patient to the microbiome of another, I needed the following pieces of evidence:

  1. At a given point in time, the bacterial genome was present in the microbiome of the source patient and undetectable in the microbiome of the recipient.
  2. At a future point in time, the bacterial genome was present in the microbiome of the recipient patient, and ideally persisted for multiple future time points.

Through Stanford Hospital, I also had access to a dataset of each patient’s room history. From this, I could find when two patients were roommates. Mapping the overlapping intervals, combined with the list of samples biobanked from each patient, was a challenging data science problem. It took me about a month of work to design an experiment that would give me the best chance of observing patient-patient microbiome transmission, if it was happening.
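
For a flavor of the interval logic involved, here’s a minimal pandas sketch (the file and column names are hypothetical – the real dataset and analysis were more involved):

import pandas as pd

# Hypothetical schema: one row per room stay, with columns
# patient_id, room, start, end (datetimes for that stay).
stays = pd.read_csv("room_history.csv", parse_dates=["start", "end"])

# Self-join on room, keep distinct patient pairs, then test for overlapping stays.
pairs = stays.merge(stays, on="room", suffixes=("_a", "_b"))
pairs = pairs[pairs["patient_id_a"] < pairs["patient_id_b"]]
overlap_start = pairs[["start_a", "start_b"]].max(axis=1)
overlap_end = pairs[["end_a", "end_b"]].min(axis=1)
roommates = pairs[overlap_start < overlap_end].assign(
    overlap_days=(overlap_end - overlap_start).dt.days
)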

The wet lab work for this project was long and monotonous. You can read about it in the methods section of the paper, but we did DNA extraction and 10X Genomics linked read sequencing on all of the new samples.

When the new data came back, it was time to get cracking! The processing pipeline and data analysis I had planned would take too long to run on Stanford’s HPC cluster, so I turned to Google Cloud to get everything done with quick parallelization. The process of getting our workflows to run at scale in the cloud was certainly a learning experience, and I wrote a blog post about the effort (two years ago).

After assembling bacterial genomes from hundreds of microbiome samples, comparing strain-level populations with inStrain, and generating massive matrices comparing all sets of genomes in my samples, the true data analysis began. A few key lessons from the data analysis and writing experience have stuck with me, and the challenges made me a better scientist.

  1. Scrutinize your results! When I initially looked for identical bacterial genomes in samples from different patients, I found many “transmission events” that were simply the result of barcode swapping (a small degree of cross-contamination between samples sequenced on the same Illumina run). I was prepared for this outcome, and developed a method to quantify when identical genomes were likely the result of barcode swapping in the linked read data.
  2. Carefully evaluate negative findings. After eliminating the likely false positives, I found very few identical genomes shared between patients, and especially few antibiotic-resistant pathogens. At first, this was an upsetting result – I was really hoping to find lots of transmission between patients who were roommates! However, the lack of pathogen transmission allowed me to focus on the potentially more interesting cases of commensal bacteria transmitted between patients. The “negative” finding turned out to make for a more interesting story.


Rare transmission of commensal and pathogenic bacteria in the gut microbiome of hospitalized adults (1)

My final project with the Bhatt Lab is now published! You can find the open access text at Nature Communications. I’m excited to bring this chapter of my research career to a close. The paper contains the full scientific results; here I’ll detail some of the journey and challenges along the way.

Hot off the success of my previous work studying mother-infant transmission of phages in the microbiome, I was eager to characterize other examples of transmission between human microbiomes. While mother-infant transmission of both bacteria and phages was now well understood, microbiome transmission between adults was less clear. There were some hints of it happening in the literature, but nobody had characterized the phenomenon at a level of genomic detail that I found convincing. I’m also not counting FMT as transmission here – while it certainly results in the transfer of microbiome components from donor to recipient, I was more interested in how this phenomenon happens naturally.

In our lab, we have a stool sample biobank from patients undergoing hematopoietic cell transplantation (HCT). We’ve been collecting weekly stool samples from patients undergoing transplant at Stanford Hospital, and to date we have thousands of samples from about one thousand patients. HCT patients are prime candidates to study gut-gut bacterial transmission, due to a few key factors:

  1. Long hospital stays. The conditioning, transplant and recovery process can leave a patient hospitalized for up to months at a time. The long stays provide many opportunities for transmission to occur and many longitudinal samples for us to analyze.
  2. Roommates when recovering from transplant. At Stanford Hospital, patients were placed in double occupancy rooms when there were not active contact precautions. These periods of roommate overlap could provide an increased chance for patient-patient transmission.
  3. Frequent antibiotic use. HCT patients are prescribed antibiotics both prophylactically and in response to infection. These antibiotics kill the natural colonizers of the gut microbiome, allowing antibiotic resistant pathogens to dominate, which may be more likely to be transmitted between patients. Antibiotic use may also empty the niche occupied by certain bacteria and make it more likely for new colonizers to engraft long-term.
  4. High burden of infection. HCT patients frequently have potentially life-threatening infections, and the causal bacteria can originate in the gut microbiome. However, it’s currently unknown where these antibiotic resistant bacteria originate from in the first place. Could transmission from another patient be responsible?

As we thought more about the cases of infection that were caused by gut-bloodstream transmission, we identified three possibilities:

  1. The microbes existed in the patient’s microbiome prior to entering the hospital for HCT. Then, due to antibiotic use and chemotherapy, these microbes could come to dominate the gut community.
  2. Patients acquired the microbe from the hospital environment. Many of the pathogens we’re interested in cause hospital-acquired infections (HAIs) and are known to persist for long periods of time on hospital surfaces, in sinks, etc.
  3. Patients acquired the microbe via transmission from another patient. This was the most interesting possibility to us, as it would indicate direct gut-gut transmission.

While it’s likely that all three are responsible to some degree, finding evidence for (3) would have been the most interesting to us. Identifying patient-patient microbiome transmission would be both a slam dunk for my research, and would potentially help prevent infections in this patient population. With the clear goal in mind, I opened the door of the -80 freezer to pull out the hundreds of stool samples I would need to analyze…

More to come in part 2!


Moving into aging research – in dogs!

P – H – Done

As I finish up my PhD at Stanford and consider my next career moves, I’m positive I want to work at a small, rapidly growing biotech startup. After many interviews and some serious introspection, I settled on working at Loyal, a biotech company dedicated to extending the lifespan of dogs by developing therapeutics. It may seem like a crazy idea at first, but the core thesis of doing aging research in companion canines makes a lot of sense.

I believe the aging field is at an inflection point – it’s where microbiome research was 10 years ago. Back then, 16S rRNA sequencing was the state of the art, and the only question researchers were commonly asking of microbial communities was “who’s there.” We’ve since come to appreciate the ecological complexity of the microbiome, developed new genomic ways to study the identities and function of its members, and engineered microbiome therapeutics that are starting to show signs of efficacy.

At the core of the aging thesis is the idea that aging is a disease. After all, age is the largest risk factor for death, cancer, dementia, etc. Re-framing aging as a disease allows for completely new investigations, but will not be easy from a regulatory perspective.

Lifespan vs healthspan

“Why would you want to extend the number of years someone is sick at the end of their life?”

This question is frequently asked by those unfamiliar with aging research. However, I don’t believe many in the field have a desire to prolong an unhealthy end of life. Extension of lifespan is not valuable if the extra years are not lived well. Many researchers are interested in healthspan, the number of years lived in a good state of health. One way to picture this is to imagine a “rectangularization” of the survival curve. A drug that prolongs the number of years lived in good health would be very valuable, even if it had no impact on life expectancy.

Rectangularization of the survival curve – The lines should both be the same height to start, but you get the idea.

What about the ethical implications?

News about advances in aging research is often accompanied by fear: “won’t this just make rich people live longer?” After all, immortality has been a quest for millennia. I don’t buy into many of these criticisms, for a few reasons. First, lifespan is already very stratified by income, and the wealthiest individuals already have access to advanced therapies and care that others lack. Second, advances in lifespan and healthspan are likely to be slow. No immortality drug will be developed overnight. Third, many researchers are working to develop drugs for aging that are cheap and commoditized. The CEO of Loyal, Celine Halioua, has written about this at length.

I’m not new to the aging field!

Back in my undergrad research at Brown, I worked in Nicola Neretti’s lab, which was focused on the genetic and epigenetic pathways of aging. The main paper I contributed to in undergrad studied the chromatin organization of cells as they progressed into senescence – a cellular analog of aging in which cells permanently stop dividing. It’s great to be back!

What’s going on at Loyal?

I’ll be working on everything related to genomics and bioinformatics in dogs. This means sequencing blood and saliva samples from our laboratory and companion animals, quantifying aging at the genetic and epigenetic level, building better epigenetic clocks, and researching the breed-specific epigenetic changes that accompany aging in certain dogs. It’s exciting and fast-paced. And we’re hiring more! Whether your background is in aging science, vet med, computer science, or business operations, we need talented people. Drop me a line if you want to talk more.

Tail risk hedging – replication of the VXTH index

In my last post about hedging a portfolio with options, I looked at how a complicated 4-option spread could replicate the VIX index and hedge against market volatility. Now, we’re going to look at a simpler, explicit “tail risk” hedge using VIX calls. This strategy is based on the VXTH index (VIX Tail Hedge), which buys 30 delta VIX calls with 1% of the portfolio when volatility is low, and allocates the rest into the SPX index. Looking at the performance of the index below, three things are immediately clear:

  1. VXTH did well, but not stellar, in 2008-2009
  2. VXTH slightly underperformed the benchmark during the bull market of 2010-2020
  3. VXTH absolutely skyrocketed during the COVID crash of 2020. I think this played right into the strengths of the hedging program: a rapid VIX spike, followed by quick recovery of SPX.

We’re going to look at replicating the VXTH index and extending the methodology to other portfolios, including a leveraged ETF portfolio holding UPRO and TMF.


Equity curves of VXTH (green) compared to SPX (black) from 2006-2020.

How does VXTH work?

The methodology is simple. Each month, look at the front-month VIX futures contract and decide how much to allocate to the hedge (table below). With the specified fraction of the portfolio, buy 30 delta VIX calls with one month to expiration.

VIX future value   Portfolio allocation
X <= 15            0%
15 < X <= 30       1%
30 < X <= 50       0.5%
X > 50             0%

N.B. The phrase “forward value of VIX” on the CBOE website is strange and doesn’t have an explicit meaning (at least to me). I confirmed the index is looking at the front month VIX future rather than spot VIX by examining the trade log on the CBOE website.
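
Translating the table into code, the allocation rule is just a few branches (a sketch for clarity, not the code from my backtest):

def vxth_allocation(front_month_vix: float) -> float:
    """Fraction of the portfolio spent on 30 delta VIX calls at the monthly roll."""
    if front_month_vix <= 15:
        return 0.0    # no hedge when vol is very low
    elif front_month_vix <= 30:
        return 0.01   # normal regime: 1% of the portfolio
    elif front_month_vix <= 50:
        return 0.005  # elevated vol: calls are expensive, allocation is halved
    else:
        return 0.0    # extreme vol: no hedge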

Why hedge with VIX calls?

I think the main reason for using VIX calls as a tail risk hedge is the convexity embedded in the option. In times of low vol, the calls are cheap, and a 1% allocation can buy your portfolio many many OTM calls. But when tail risks come to fruition and VIX spikes like it did in March 2020, the value of the options goes parabolic. If you have the hedge on before everyone else in the market is trying to hedge, you’re in a great position. VIX options are also very liquid in a crisis, in times when other instruments can be illiquid and difficult to unwind for big positions.

Replicating the VXTH index

Similar to the last post, I obtained VIX option data from IvyDB and historicaloptiondata.com. /VX prices were obtained from the Quandl continuous futures dataset. Backtesting was done with a custom R program. Option transactions occur at the midpoint of the bid/ask spread and have no transaction costs (big caveat here!). I first replicated VXTH, and equity curves are below. However, I’m still experiencing some tracking error compared to the benchmark, especially in 2020. I think this could be due to differences in my price data or timing luck (see the future directions section). Still, the VXTH replication captures most of the movement of the benchmark and has no drawdown in March 2020.

Equity curves for my replicated VXTH (red) compared to the benchmarks.

Extension to a UPRO/TMF portfolio

How does adding a VIX call hedge deal with the added volatility of a leveraged portfolio? Quite well! Using the same parameters and a portfolio of 55% UPRO, 45% TMF, the equity curves are below. The outperformance in 2020 isn’t very visible on the log scale, but the VIX call hedged portfolio ends the backtest with a 30% higher balance. The stats on the hedged portfolio are also excellent – improved total and risk-adjusted return, and a comparable drawdown to holding SPX alone. So far, this looks really good!

Equity curves of hedged UPRO/TMF portfolio compared to benchmarks

                            SPX     VXTH (benchmark)   VXTH (replicated)   UPRO/TMF   UPRO/TMF + VXTH
CAGR                        7.49    12.2               8.8                 17.1       19.2
Sharpe ratio (Annualized)   0.49    0.67               0.66                0.70       0.87
StdDev (Annualized)         15.2    18.3               13.5                25.0       22.4
Worst drawdown              52.5%   37.4%              35.1%               70.9%      57.2%

Conclusions

Adding a small, constant allocation to VIX calls can improve the absolute and risk-adjusted returns of a portfolio of stocks or leveraged stocks/bonds, at least in the period I backtested. This method is relatively simple compared to the 4 option method I tested in the last post, and only requires management once per month, which can coincide with a monthly portfolio rebalance. There are a few optimizations I want to test before running this method live. I also need to include transaction costs and slippage into my model.

Future directions

I noticed some timing luck in replicating VXTH, specifically around the COVID crash. Slightly changing the days to expiration of the calls would result in very different outcomes, because the VIX calls could be held through the entire crash instead of sold at the “right” time. I think that’s part of why VXTH did so well in March – the VIX peak was right at an option expiration, so the position was exited at just the right time. Ideally we’d strive to eliminate this timing luck from a portfolio. I can see a few ways to do this, which I’ll think about implementing in my backtests:

  1. Instead of holding to expiration, positions should be dynamically opened or closed when VIX crosses one of the allocation thresholds.
  2. Holding a “ladder” of calls with different expirations to reduce the effect of timing.
  3. Daily rebalancing (probably not a good idea in practice because of transaction costs).

I want to optimize some other parameters, while being wary of the possibility of overfitting to the relatively few “tail risk” events that have happened in my dataset.

  1. Allocation amounts (probably more hedge is better with the leveraged portfolio)
  2. Hedge thresholds. Analyzing the transition matrix from one VIX state to the next may help with this.
  3. Option delta. Lower delta options will give you more convexity when the rare crashes happen, but you may not benefit from small VIX spikes.


Volatility as an asset class – replication of Doran (2020) and extension to a leveraged risk-parity portfolio

Introduction

This post is going to be a departure from the usual genomics tilt of this blog. I’ve recently been interested in the science (art?) of hedging a stock portfolio against market downturns. Hedging is difficult and involves selecting the right asset class, the right allocation (hold too much of the hedge and you underperform in all markets), and the right time to remove the hedge (ideally at the bottom of a correction). If the VIX (CBOE Volatility Index) were directly investable, holding it as an asset in a portfolio would provide a significant edge. However, you cannot directly “buy” the VIX, and tradable VIX products (like VXX, UVXY, etc.) have notable underperformance when used as a hedge (Bašta and Molnár, 2019).

A paper by James Doran (2020) proposed that a portfolio of SPX options that is highly correlated to the VIX could be held as a long-term hedge. The portfolio buys an ITM-OTM put spread and sells an ATM-OTM call spread when the VIX is at normal values, and does not hedge when the VIX is above the mean plus one standard deviation. In this way the portfolio systematically removes the hedge when vol is the most expensive and therefore more likely to revert to the mean. For example, if SPX was at 3800 and VIX was at normal levels, the portfolio would allocate 1% to the following option spreads with one month expiration. The payoff with SPX at various levels at expiration is shown below.  Importantly, this spread has positive theta, and only begins to lose if SPX closes above 3850.

        ITM/OTM %   Put/Call   Strike
Buy     5% ITM      Put        3990
Sell    5% OTM      Put        3610
Sell    ATM         Call       3800
Buy     5% OTM      Call       3990

P/L of the option spread at expiration. Cost = 8710, max gain = 29290, max loss = 27710.
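
To make the payoff concrete, here’s a small sketch that reproduces the numbers in the caption (strikes from the table above; the standard $100 SPX multiplier is assumed):

def spread_pnl_at_expiration(spx: float, cost: float = 8710.0) -> float:
    """P/L of the example hedge: long 3990/3610 put spread, short 3800/3990 call spread."""
    multiplier = 100  # standard SPX option multiplier
    long_put   = max(3990 - spx, 0.0)
    short_put  = -max(3610 - spx, 0.0)
    short_call = -max(spx - 3800, 0.0)
    long_call  = max(spx - 3990, 0.0)
    return multiplier * (long_put + short_put + short_call + long_call) - cost

# Endpoints match the caption: max gain 29290 with SPX below 3610, max loss 27710 above 3990,
# and the position first goes negative with SPX a little above 3850.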

I was interested in replicating the results of this paper, extending the findings to the end of 2020 (the paper stops in 2017), and finding if the option portfolio would hedge a leveraged stock portfolio holding UPRO (3X leveraged S&P500).

Step 0: Obtain data, write backtest code

Option data: I obtained end of day option prices for the SPX index from Stanford’s subscription to OptionMetrics for 1996-2019. 2020 data were purchased from historicaloptiondata.com.

Extended UPRO and TMF data: These products began trading in 2009, but we definitely want to include the early 2000s dotcom crash and 2008 financial crisis in our backtests. Someone on the Bogleheads forum simulated the funds going back to 1986, and the data are available here.

Backtesting: I wrote a simple program to backtest an option portfolio in R. This program buys a 30 DTE spread as described above and typically holds to expiration. When VIX is low, a fixed percentage of the portfolio value is placed into the option portion during each rebalance, which occurs when the options expire. When VIX is high (above mean plus one standard deviation), the portfolio only holds the base asset class. If VIX transitions from low to high, the hedge is immediately abandoned, and if VIX transitions from high to low, the hedge is repurchased.
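
The hedge on/off decision at each rebalance boils down to a rule like the following (a sketch of the logic described above – the actual backtest was written in R):

def hedge_fraction(vix: float, vix_mean: float, vix_sd: float,
                   base_fraction: float = 0.05) -> float:
    """Fraction of the portfolio placed in the option spread at each rebalance."""
    if vix > vix_mean + vix_sd:
        return 0.0          # high-vol regime: hold only the base asset
    return base_fraction    # normal regime: buy the 30 DTE spread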

Step 1: replicate the results of Doran (2020) with the SPX index

To ensure our option backtest works as expected, I first replicated the results from the Doran paper using the SPX index. I allocated a fixed 5% to the hedge. I found performance was improved by using options 10% ITM or OTM, so these were used in all backtests. Below are the returns of these portfolios from 1996-2020, starting with $100,000. Although the hedge does well in negative markets, the under performance in the bull market of the last 10 years is quite apparent. The hedge also didn’t protect much against the rapid COVID crash in March 2020 – I think because VIX spiked very quickly and the portfolio wasn’t hedged for much of the crash. My results don’t exactly match those in the paper (even using a 5% spread width). I think differences in the option prices, especially early in the dataset, are playing a role in this.

Equity curves for option hedged SPX portfolios. SPX = un-hedged. OPT: always hedged 5%. OPTsd: hedged 5% when VIX is below the mean plus one standard deviation.

                            SPX     OPT     OPTsd
CAGR                        7.49    2.91    7.08
Sharpe ratio (Annualized)   0.48    0.39    0.64
StdDev (Annualized)         15.3    7.71    11.23
Worst drawdown              52.5    35.2    41.2

Step 2: extend the option hedge to a portfolio holding UPRO

How does the hedge work with the 3X leveraged fund UPRO? I conducted the same backtest and found that allocating 10% to the hedge works better. This makes sense – you need something with higher volatility to balance out the extreme swings in UPRO. Hedged performance is definitely better than holding UPRO alone, which has pathetic stats over this time period. The hedged portfolio has better returns than holding SPX alone, but more variance and an equivalent Sharpe ratio. Holding the VIX as an asset is still the winner here.

Equity curves for option hedged UPRO portfolios. SPX: un-hedged, UPRO: un-hedged, UPROvixsd: holding VIX as hedge when VIX is low, UPROoptsd: holding option hedge when VIX is low.

                            SPX     UPRO    UPROoptsd   UPROvixsd
CAGR                        7.49    9.71    15.1        21.6
Sharpe ratio (Annualized)   0.48    0.20    0.49        0.53
StdDev (Annualized)         15.3    46.8    31.6        40.5
Worst drawdown              52.5    97.4    87.7        91.7

Comparison to a UPRO/TMF portfolio

The option-hedged portfolio needs to outperform a 55/45% UPRO/TMF portfolio for me to consider running it for real. I used portfoliovisualizer.com to easily compare these portfolios with monthly rebalancing.

Portfolio 1 (blue): UPROoptsd. Portfolio 2 (red): UPRO/TMF 55/45. Portfolio 3 (yellow): UPRO/VIX 70/30.

The returns with TMF have less variance than the option hedged portfolio and end up almost exactly equal at the end of this time period. However, in 1996-2008, the option portfolio definitely outperformed. Holding VIX is again the clear winner in both absolute and risk-adjusted returns, but still suffers severe drawdowns.

Conclusions

I don’t think holding this portfolio will provide a significant advantage compared to a UPRO/TMF portfolio. Given the limitations below and no significant advantage in the backtest, I won’t be voting with my wallet. The option hedge portfolio did provide significant advantages in the 1996-2008 period, where it outperformed all other portfolios (even the optimal 70/30 UPRO/VIX!) with a Sharpe ratio of 1.01 and max drawdown of 47% in the dotcom crash. I may paper-trade this strategy to get a feel for position sizing, slippage and fills on these spreads, though.

Limitations: Why I won’t be hedging with this method

  1. This model assumes all transactions occur at the midpoint of the bid-ask spread and does not take into account transaction costs. While transaction costs are relatively small, SPX and XSP can have relatively wide bid-ask spreads, much wider than SPY.
  2. Options can be illiquid, can only be purchased in whole contracts, and are difficult to adjust. Today, with SPX at 3750, buying one SPX 30d 5% ITM-OTM put spread costs $16,100. Adding the call spread brings the cost down to $9,340 but brings the max loss of the position to $27,340! Trading XSP instead brings the cost down by a factor of 10. With a 1% hedge, this method is only practical for portfolios >$100k; as a 5% hedge it can be used on a portfolio as small as $20k. Still, what do you do when the optimal amount of hedge is 1.5 XSP contracts?
  3. It’s more complicated than simply rebalancing between UPRO and TMF, requiring more active management time.
  4. The option hedge didn’t even outperform UPRO/TMF in some regards!
  5. Backtests are only backward-looking and easy to overfit to your problem.

Future directions to explore

  1. Optimal hedge amount – this was not optimized systematically; I just tried a few values and decided based on returns and Sharpe ratio.
  2. Different DTE for position opening and closing. 30 days and holding to expiration may not be optimal.
  3. Selecting strikes based on Delta instead of fixed percentage ITM/OTM. This would result in different strikes selected in times of low and high vol, but probably has a minimal impact.
  4. The max loss of these spreads can be quite high compared to the cost to enter the trade – maybe the hedge amount should be scaled based on the max loss of the position (with the remaining invested in the base asset or held in cash).

Questions? Other ideas to test? Let me know! I’ll also happily release returns or code (it’s not pretty) if you are interested.

References:
1. Doran, J. S. Volatility as an asset class: Holding VIX in a portfolio. Journal of Futures Markets 40, 841–859 (2020).
2. Ayres, I. & Nalebuff, B. J. Life-Cycle Investing and Leverage: Buying Stock on Margin Can Reduce Retirement Risk. https://papers.ssrn.com/abstract=1149340 (2008).
3. Ayres, I. & Nalebuff, B. J. Diversification Across Time. https://papers.ssrn.com/abstract=1687272 (2010).
4. Bašta, M. & Molnár, P. Long-term dynamics of the VIX index and its tradable counterpart VXX. Journal of Futures Markets 39, 322–341 (2019).

Leveraged portfolio background

The leveraged portfolio idea comes from the famous “HEDGEFUNDIE’s excellent adventure” thread on the Bogleheads forum (thread 1, thread 2), with ideas going back to “lifecycle investing” and “diversification across time” from Ayres and Nalebuff (2008, 2010). Basically, it makes sense to use leverage to obtain higher investment returns when you’re young and expect to have higher earnings in the future. You can do this with margin, futures, LEAPS options, or leveraged index funds. The leveraged funds appear to be the easiest way to obtain consistent and cheap leverage without risk of a margin call. The portfolio holds 55% UPRO and 45% TMF (3X bonds) and typically rebalances monthly. I’ve also thrown some TQQQ (3X leveraged NASDAQ) into the mix. These portfolios outperform a 100% stock portfolio or an unleveraged 60/40 portfolio on BOTH an absolute and risk-adjusted return basis. However, if you could hold VIX as an asset to rebalance out of, performance would be even better. Hence my interest in replicating a VIX hedge with options.

Large-scale bioinformatics in the cloud with GCP, Kubernetes and Snakemake

I recently finished a large metagenomics sequencing experiment – 96 10X Genomics linked read libraries sequenced across 25 lanes on a HiSeq4000. This was around 2TB of raw data (compressed fastqs). I’ll go into more detail about the project and data in another post, but here I’m just going to talk about processing the raw data.

We’re lucky to have a large compute cluster at Stanford for our everyday work. It’s shared with other labs and has a priority system for managing compute resources. It’s fine for most tasks, but not up to the scope of this project. 2TB of raw data may not be “big” on the scale of what places like the Broad deal with daily, but it’s definitely the largest single sequencing experiment our lab has done. To solve this, we had to move… TO THE CLOUD!

By utilizing cloud compute, I can easily scale the compute resources to the problem at hand. Total cost is the same if you use 1 cpu for 100 hours or 100 cpus for 1 hour… so I will parallelize this as much as possible to minimize the time taken to process the data. We use Google Cloud Platform (GCP) for bioinformatics, but you can do something similar with Amazon’s or Microsoft’s cloud compute, too. I used ideas from this blog post to port the Bhatt lab metagenomics workflows to GCP.

Step 0: Install the GCP SDK and configure a storage bucket.

Install the GCP SDK to manage your instances and connect to them from the command line. Create a storage bucket for data from this project – this can be done from the GCP console on the web. Then, set up authentication as described here.

Step 1: Download the raw data

Our sequencing provider delivers raw data via an FTP server. I downloaded all the data and uploaded it to the storage bucket using the gsutil rsync command. Note that any reference databases (the human genome for removing human reads, for example) need to be in the bucket too.

Step 2: Configure your workflow.

I’m going to assume you already have a snakemake workflow that works with local compute. Here, I’ll show how to transform it to work with cloud compute. I’ll use the workflow to run the 10X Genomics longranger basic program and deinterleave reads as an example. This takes in a number of samples with forward and reverse paired end reads, and outputs the processed reads as gzipped files.

The first lines import the cloud compute packages, define your storage bucket, and search for all samples matching a specific name on the cloud.

from os.path import join
from snakemake.remote.GS import RemoteProvider as GSRemoteProvider
GS = GSRemoteProvider()
GS_PREFIX = "YOUR_BUCKET_HERE"
samples, *_ = GS.glob_wildcards(GS_PREFIX + '/raw_data_renamed/{sample}_S1_L001_R1_001.fastq.gz')
print(samples)

The rest of the workflow just has a few modifications. Note that Snakemake automatically takes care of remote input and output file locations. However, you need to add the ‘GS_PREFIX’ when specifying folders as parameters. Also, if output files aren’t explicitly specified, they don’t get uploaded to remote storage. Note the use of a singularity image for the longranger rule, which automatically gets pulled on the compute node and has the longranger program in it. pigz isn’t available on the cloud compute nodes by default, so the deinterleave rule has a simple conda environment that specifies installing pigz. The full pipeline (and others) can be found at the Bhatt lab github.

rule all:
    input:
        expand('barcoded_fastq_deinterleaved/{sample}_1.fq.gz', sample=samples)

rule longranger:
    input: 
        r1 = 'raw_data_renamed/{sample}_S1_L001_R1_001.fastq.gz',
        r2 = 'raw_data_renamed/{sample}_S1_L001_R2_001.fastq.gz'
    output: 'barcoded_fastq/{sample}_barcoded.fastq.gz'
    singularity: "docker://biocontainers/longranger:v2.2.2_cv2"
    threads: 15
    resources:
        mem=30,
        time=12
    params:
        fq_dir = join(GS_PREFIX, 'raw_data_renamed'),
        outdir = join(GS_PREFIX, '{sample}'),
    shell: """
        longranger basic --fastqs {params.fq_dir} --id {wildcards.sample} \
            --sample {wildcards.sample} --disable-ui --localcores={threads}
        mv {wildcards.sample}/outs/barcoded.fastq.gz {output}
    """

rule deinterleave:
    input:
        rules.longranger.output
    output:
        r1 = 'barcoded_fastq_deinterleaved/{sample}_1.fq.gz',
        r2 = 'barcoded_fastq_deinterleaved/{sample}_2.fq.gz'
    conda: "envs/pigz.yaml"
    threads: 7
    resources: 
        mem=8,
        time=12
    shell: """
        # code inspired by https://gist.github.com/3521724
        zcat {input} | paste - - - - - - - -  | tee >(cut -f 1-4 | tr "\t" "\n" |
            pigz --best --processes {threads} > {output.r1}) | \
            cut -f 5-8 | tr "\t" "\n" | pigz --best --processes {threads} > {output.r2}
    """

Now that the input files and workflow are ready to go, we need to set up our compute cluster. Here I use a Kubernetes cluster which has several attractive features, such as autoscaling of compute resources to demand.

A few points of terminology that will be useful:

  • A cluster contains (potentially multiple) node pools.
  • A node pool contains multiple nodes of the same type
  • A node is the basic compute unit, that can contain multiple cpus
  • A pod (as in a pod of whales) is the unit or job of deployed compute on a node

To start a cluster, run a command like this. Change the parameters to the type of machine that you need. The last line gets credentials for job submission. This starts with a single node, and enables autoscaling up to 96 nodes.

export CLUSTER_NAME="snakemake-cluster-big"
export ZONE="us-west1-b"
gcloud container clusters create $CLUSTER_NAME \
    --zone=$ZONE --num-nodes=1 \
    --machine-type="n1-standard-8" \
    --scopes storage-rw \
    --image-type=UBUNTU \
    --disk-size=500GB \
    --enable-autoscaling \
    --max-nodes=96 \
    --min-nodes=0
gcloud container clusters get-credentials --zone=$ZONE $CLUSTER_NAME

For jobs with different compute needs, you can add a new node pool like so. I used two different node pools, with 8 core nodes for preprocessing the sequencing data and aligning against the human genome, and 16 core nodes for assembly. You could also create additional high memory pools, GPU pools, etc depending on your needs. Ensure new node pools are set with --scopes storage-rw to allow writing to buckets!

gcloud container node-pools create pool2 \
    --cluster $CLUSTER_NAME \
    --zone=$ZONE --num-nodes=1 \
    --machine-type="n1-standard-16" \
    --scopes storage-rw \
    --image-type=UBUNTU \
    --disk-size=500GB \
    --enable-autoscaling \
    --max-nodes=96 \
    --min-nodes=0

When you are finished with the workflow, shut down the cluster with this command. Or let autoscaling slowly move the number of machines down to zero.

gcloud container clusters delete --zone $ZONE $CLUSTER_NAME

To run the snakemake pipeline and submit jobs to the Kubernetes cluster, use a command like this:

snakemake -s 10x_longranger.snakefile --default-remote-provider GS \
    --default-remote-prefix YOUR_BUCKET_HERE --use-singularity \
    -j 99999 --use-conda --nolock --kubernetes

Add the name of your bucket prefix. The ‘-j’ here allows (mostly) unlimited jobs to be scheduled simultaneously.

Each job will be assigned to a node with available resources. You can monitor the progress and logs with the commands shown as output. Kubernetes autoscaling takes care of provisioning new nodes when more capacity is needed, and removes nodes from the pool when they’re not needed any more. There is some lag for removing nodes, so beware of the extra costs.

While the cluster is running, you can view the number of nodes allocated and the available resources all within the browser. Clicking on an individual node or pod will give an overview of the resource usage over time.

Useful things I learned while working on this project

  • Use docker and singularity images where possible. In cases where multiple tools were needed, a simple conda environment does the trick.
  • The container image type must be set to Ubuntu (see above) for singularity images to correctly work on the cluster.
  • It’s important to define memory requirements much more rigorously when working on the cloud. Compared to our local cluster, standard GCP nodes have much less memory. I had to go through each pipeline and define an appropriate amount of memory for each job, otherwise they wouldn’t schedule from Kubernetes, or would be killed when they exceeded the limit.
  • You can only reliably use n-1 cores on each node in a Kubernetes cluster. There are always some processes running in the background on a node, and Kubernetes will not schedule pods requesting more than 100% of a node’s CPU. The threads parameter in Snakemake is an integer. Combine these two things and you can only really use 7 cores on an 8-core machine. If anyone has a way around this, please let me know!
  • When defining input and output files, you need to be much more specific. When working on the cluster, I would just specify a single output file out of many for a program, and could trust that the others would be there when I needed them. But when working with remote files, the outputs need to be specified explicitly to get uploaded to the bucket. Maybe this could be fixed with a call to directory() in the output files, but I haven’t tried that yet.
  • Snakemake automatically takes care of remote files in inputs and outputs, but paths specified in the params: section do not automatically get converted. I use paths here for specifying an output directory when a program asks for it. You need to add the GS_PREFIX to paths to ensure they’re remote. Again, might be fixed with a directory() call in the output files.
  • I haven’t been able to get configuration yaml files to work well in the cloud. I’ve just been specifying configuration parameters in the snakefile or on the command line.

Transmission of crAssphage in the microbiome

Big questions in the microbiome field surround the topic of microbiome acquisition. Where do we get our first microbes from? What determines the microbes that colonize our guts from birth, and how do they change over time? What short- and long-term impacts do these microbes have on the immune system, allergies, or diseases? What impact do delivery mode and breastfeeding have on the infant microbiome?

A key finding from the work was that mothers and infants often share identical or nearly identical crAssphage sequences, suggesting direct vertical transmission. Also, I love heatmaps.

As you might expect, a major source of microbes colonizing the infant gut is immediate family members, with the mother thought to be the most important. Thanks to foundational studies by Bäckhed, Ferretti, Yassour and others (references below), we now know that infants often acquire their primary bacterial strains from the mother’s microbiome. These microbes can have beneficial capabilities for the infant, such as the ability to digest human milk oligosaccharides, a key source of nutrients in breast milk.

The microbiome isn’t just bacteria – phages (along with fungi and archaea, to a smaller extent) play key roles. Phages are viruses that prey on bacteria, depleting certain populations and exchanging genes among the bacteria they infect. Interestingly, phages were previously shown to display different inheritance patterns than bacteria, remaining individual-specific between family members and even twins (Reyes et al. 2010). CrAss-like phages are the most prevalent and abundant group of phages colonizing the human gut, and our lab was interested in the inheritance patterns of these phages.

We examined publicly available shotgun gut metagenomic datasets from two studies (Yassour et al. 2018, Bäckhed et al. 2015), containing 134 mother-infant pairs sampled extensively through the first year of life. In contrast to what has been observed for other members of the gut virome, we observed many putative transmission events of a crAss-like phage from mother to infant. The key takeaways from our research are summarized below. You can also refer to my poster from the Cold Spring Harbor Microbiome meeting for the figures supporting these points. We hope to have a new preprint (and hopefully a publication) on this research out soon!

  1. CrAssphage is not detected in infant microbiomes at birth, increases in prevalence with age, but doesn’t reach the level of adults by 12 months of age.
  2. Mothers and infants share nearly identical crAssphage genomes in 40% of cases, suggesting vertical transmission.
  3. Infants have reduced crAssphage strain diversity and typically acquire the mother’s dominant strain upon transmission.
  4. Strain diversity is mostly the result of neutral genetic variation, but infants have more nonsynonymous multiallelic sites than mothers.
  5. Strain diversity varies across the genome, and tail fiber genes are enriched for strain diversity with nonsynonymous variants.
  6. These findings extend to crAss-like phages. Vaginally born infants are more likely to have crAss-like phages than those born via C-section.

References
1. Reyes, A. et al. Viruses in the faecal microbiota of monozygotic twins and their mothers. Nature 466, 334–338 (2010).
2. Yassour, M. et al. Strain-Level Analysis of Mother-to-Child Bacterial Transmission during the First Few Months of Life. Cell Host & Microbe 24, 146-154.e4 (2018).
3. Bäckhed, F. et al. Dynamics and Stabilization of the Human Gut Microbiome during the First Year of Life. Cell Host & Microbe 17, 690–703 (2015).
4. Ferretti, P. et al. Mother-to-Infant Microbial Transmission from Different Body Sites Shapes the Developing Infant Gut Microbiome. Cell Host & Microbe 24, 133-145.e5 (2018).

What is crAssphage?

CrAssphage is like a mystery novel full of surprises. First described in 2014 by Dutilh et al., crAssphage acquired its (rather unfortunate, given that it colonizes the human intestine) name from the “cross-assembly” bioinformatics method used to characterize it. CrAssphage interests me because it’s present in up to 70% of human gut microbiomes and can make up the majority of viral sequencing reads in a metagenomics experiment. This arguably makes it the most successful single entity colonizing human microbiomes. However, no health impacts have been demonstrated from having crAssphage in your gut – several studies (Edwards et al. 2019) have turned up negative.

Electron micrograph of a representative crAssphage, from Shkoporov et al. (2018). This phage is a member of the Podoviridae family and infects Bacteroides intestinalis.

CrAssphage was long suspected to prey on species of the Bacteroides genus, based on evidence from abundance correlations and CRISPR spacers. However, the phage proved difficult to isolate and culture. It wasn’t until recently that a crAssphage was confirmed to infect Bacteroides intestinalis (Shkoporov et al. 2018). They also got a great TEM image of the phage! With crAssphage successfully cultured in the lab, scientists have begun to answer fundamental questions about its biology. The phage appears to have a narrow host range, infecting a single B. intestinalis strain and not other strains or species. The life cycle of the phage was puzzling:

“We can conclude that the virus probably causes a successful lytic infection with a size of progeny per capita higher than 2.5 in a subset of infected cells (giving rise to a false overall burst size of ~2.5), and also enters an alternative interaction (pseudolysogeny, dormant, carrier state, etc.) with some or all of the remaining cells. Overall, this allows both bacteriophage and host to co-exist in a stable interaction over prolonged passages. The nature of this interaction warrants further investigation.” (Shkoporov et al. 2018)

Further investigation showed that crAssphage is one member of an extensive family of “crAss-like” phages colonizing the human gut. Guerin et al. (2018) proposed a classification system for these phages, which contains 4 subfamilies (Alpha, Beta, Delta and Gamma) and 10 clusters. The first described “prototypical crAssphage” belongs to the Alpha subfamily, cluster 1. It struck me how diverse these phages are – different families are less than 20% identical at the protein level! When all crAss-like phages are considered, it’s estimated that up to 100% of individuals carry at least one crAss-like phage, and most people carry more than one.

Given the high prevalence of crAss-like phages and their specificity for the human gut, they do have an interesting use as a tracking device for human sewage. DNA from crAss-like phages can be used to track waste contamination into water, for example (Stachler et al. 2018). In a similar vein, our lab has used crAss-like phages to better understand how microbes are transmitted from mothers to newborn infants. The small genome sizes (around 100kb) and high prevalence/abundance make these phages good tools for doing strain-resolved metagenomics. Trust me, you’d much rather do genomic assembly and variant calling on a 100kb phage genome than a 3Mb bacterial genome!

Research into crAss-like phages is just beginning, and I’m excited to see what’s uncovered in the future. What are the hosts of the various phage clusters? How do these phages influence gut bacterial communities? Do crAss-like phages exclude other closely related phages from colonizing their niches, leading to the low strain diversity we observe? Can crAss-like phages be used to engineer bacteria in the microbiome, delivering precise genetic payloads? This final question is the most interesting to me, given that crAss-like phages seem relatively benign to humans, yet incredibly capable of infecting our microbes.

References
1. Dutilh, B. E. et al. A highly abundant bacteriophage discovered in the unknown sequences of human faecal metagenomes. Nature Communications 5, 4498 (2014).
2. Edwards, R. A. et al. Global phylogeography and ancient evolution of the widespread human gut virus crAssphage. Nature Microbiology (2019). doi:10.1038/s41564-019-0494-6
3. Guerin, E. et al. Biology and Taxonomy of crAss-like Bacteriophages, the Most Abundant Virus in the Human Gut. Cell Host & Microbe (2018).
4. Shkoporov, A. N. et al. ΦCrAss001 represents the most abundant bacteriophage family in the human gut and infects Bacteroides intestinalis. Nature Communications 9, 4781 (2018).
5. Stachler, E., Akyon, B., de Carvalho, N. A., Ference, C. & Bibby, K. Correlation of crAssphage qPCR Markers with Culturable and Molecular Indicators of Human Fecal Pollution in an Impacted Urban Watershed. Environ. Sci. Technol. 52, 7505–7512 (2018).