Optimizing CRISPR: Sub-second searches on a 3 billion base genome

Vineet Gopal · Jul 22, 2015

CRISPR is a genome-editing technique that has rapidly gained popularity in the world of synthetic biology over the last two years.

“CRISPR allows scientists to edit genomes with unprecedented precision, efficiency, and flexibility.” (Gizmodo)

It is completely transforming how medical research is done — scientists are already applying CRISPR towards curing genetic disorders, creating disease models, accelerating drug development, and more¹. We believe software will play a crucial role in these advancements, and we’ve released the Benchling CRISPR tool to make working with CRISPR faster and more accurate.

This post focuses on how we’ve improved 15-minute batch jobs to sub-second searches on a 3-billion base genome. Faster analysis means faster feedback, and faster feedback means faster iteration. Without going into too much detail about the biology of CRISPR (see Andrew Gibiansky’s post for a great primer), let’s go over a few basics to first understand the motivation. Then we’ll dive into the code.

CRISPR Basics

We model DNA as a string of base molecules, where each base is either adenine (A), thymine (T), cytosine (C), or guanine (G). So the human genome is a long string of ACGT characters - that's it².

At its core, working with CRISPR means designing a 20-base string of DNA called a “guide.” This guide is integrated into a special protein called Cas9, which then cuts the genome at all occurrences of the guide. Cutting the genome at a position means actually breaking the DNA strands apart. This is where CRISPR gets its powers — by designing the right 20 bases, you can break apart specific parts of the genome.

For example, to break a gene:

1. Find a 20 base guide that occurs near the beginning of the gene
2. Use the CRISPR/Cas9 system to cut at that location
3. The cell’s natural repair mechanisms will attempt to repair the cut. Since it’s an error-prone process, continual cutting by Cas9 eventually results in an incorrect repair.

The gene is essentially “corrupted” around that region, and the cell is no longer able to express that gene as intended.

However, this is also the hard part of CRISPR — given a 20 base guide, it’s possible for it to occur elsewhere in the genome. You’d like to pick your guides to target a certain gene but avoid breaking other things — while CRISPR makes this much easier, it still has to be carefully done. Additionally, a perfect match isn’t necessary for CRISPR to cut at a location — empirically, up to 4 mismatches can occur and still result in cutting.

So we want to find the optimal guide that matches near a gene of interest, without having too many false positives. This sounds like something a computer would be great at doing, and it’s one of our core CRISPR features.

We help scientists find a set of guides and, for each guide, determine where else it binds in a 3 billion base genome.

Our Problem

Let’s define a function match(guide, genome_region) which returns true if guide and genome_region are close enough that Cas9 would successfully cut at genome_region. In other words, it will return true if they are 1) the same length, and 2) differ in at most 4 characters.

match('AAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAAATTTT') # True
match('AAAAAAAAAAAAAAAAAAAA', 'AAAAAAAAAAAAAAATTTTT') # False
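In the same Python pseudocode style, match can be sketched like this (a hypothetical implementation for illustration; the production code is C++):

```python
def match(guide, genome_region, max_mismatches=4):
    """True if Cas9 would cut here: same length, at most 4 mismatched bases."""
    if len(guide) != len(genome_region):
        return False
    mismatches = sum(a != b for a, b in zip(guide, genome_region))
    return mismatches <= max_mismatches
```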

Given a genome G and a set of guides S (all with the same length), we'd like to find all locations in G that match at least one guide in S ³.

Let’s optimize our algorithm for the following constraints:

A genome length of 3 billion bases (the length of the human genome)

A guide length of 20

200 guides in S

Our algorithm should be both parallelizable and cost efficient. We’d like it to run, parallelized, in under 1 second.

(For readability, I’ve used Python “pseudocode” for any code snippets — all CPU times shown here are based on C++ implementations.)

Naive Solution

Let’s start with a simple solution as a baseline. We can iterate over the entire genome, and check each index to see if any guide is a match.
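A sketch of the baseline, assuming the match check described above (find_matches_naive is an illustrative name, not the actual implementation):

```python
def match(guide, genome_region, max_mismatches=4):
    """Same length and at most 4 mismatched bases, as defined earlier."""
    if len(guide) != len(genome_region):
        return False
    return sum(a != b for a, b in zip(guide, genome_region)) <= max_mismatches

def find_matches_naive(genome, guides):
    """Slide a window across the genome, testing every guide at every index."""
    guide_len = len(guides[0])
    hits = []
    for i in range(len(genome) - guide_len + 1):
        region = genome[i:i + guide_len]
        if any(match(g, region) for g in guides):
            hits.append(i)
    return hits
```

This is O(genome length × number of guides × guide length) — roughly 3 billion × 200 × 20 character comparisons.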

CPU Time: 3.25 hours

Ouch. Now, what are the constraints of our problem, and how can we take advantage of them?

We have the genome in advance. Each species has only one or two reference genomes that scientists use.

DNA strings are made up of only 4 different characters (A, C, G, and T).

The length of a guide is 20 bases.

The maximum edit distance (substitutions only) is 4.

Let’s start with the first constraint.

Solution #1: Index the Genome

Since we have the genome beforehand, why don’t we just index the entire genome? We can create a hash_table that maps every 20-character string in the genome to the list of locations where it occurs.

For a 23 base genome AAAAAAAAAAAAAAAAAAAAATG , hash_table would be:

'AAAAAAAAAAAAAAAAAAAA': [0, 1]

'AAAAAAAAAAAAAAAAAAAT': [2]

'AAAAAAAAAAAAAAAAAATG': [3]
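Building this index can be sketched as follows (build_index is an illustrative name):

```python
from collections import defaultdict

def build_index(genome, guide_len=20):
    """Map each guide_len-base substring of the genome to every position it occurs at."""
    hash_table = defaultdict(list)
    for i in range(len(genome) - guide_len + 1):
        hash_table[genome[i:i + guide_len]].append(i)
    return dict(hash_table)
```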

Since there are only 4 possible characters, we can represent each one with just two bits, using a simple encoding: A => 0b00 , C => 0b01 , G => 0b10 , and T => 0b11 . For example, TGGA would be represented as 0b11101000. This means that a 20 base string can fit into one 64-bit int. That changes hash_table to:
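The encoding itself is a simple bit-packing; a minimal sketch:

```python
ENCODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}

def encode(seq):
    """Pack a DNA string into an integer, two bits per base (20 bases -> 40 bits)."""
    value = 0
    for base in seq:
        value = (value << 2) | ENCODE[base]
    return value
```

encode('TGGA') gives 0b11101000, and any 20-base string fits comfortably in a 64-bit int.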

0b0000000000000000000000000000000000000000: [0, 1]
0b0000000000000000000000000000000000000011: [2]
0b0000000000000000000000000000000000001110: [3]

Given this map, how do we find all of the matches for a given guide? We have to allow an edit distance of up to 4, so we can’t just do a single lookup for the guide. For a given guide, there are 424,996 strings⁴ within edit distance 4, so we could just search for each one…
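A sketch of that lookup, assuming the hash_table built above (neighbors and find_matches are illustrative names):

```python
from itertools import combinations, product

BASES = 'ACGT'

def neighbors(guide, max_mismatches=4):
    """Yield every string within max_mismatches substitutions of the guide.

    For a 20-base guide and 4 mismatches, that is
    sum(C(20, k) * 3**k for k in 0..4) = 424,996 strings.
    """
    for k in range(max_mismatches + 1):
        for positions in combinations(range(len(guide)), k):
            # At each chosen position, try the 3 bases that differ from the guide's.
            options = [[b for b in BASES if b != guide[p]] for p in positions]
            for substitution in product(*options):
                candidate = list(guide)
                for p, base in zip(positions, substitution):
                    candidate[p] = base
                yield ''.join(candidate)

def find_matches(hash_table, guide):
    """Look up every near-miss of the guide in the precomputed index."""
    hits = []
    for candidate in neighbors(guide):
        hits.extend(hash_table.get(candidate, []))
    return hits
```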

CPU Time: 400 seconds

Unfortunately, the pseudocode hides a very important detail — each lookup in hash_table is actually a disk seek. The table is too large to store in memory (around 36GB). Doing 200 * 424996 disk reads takes quite a long time, even on an SSD.

Solution #2: Preprocess everything

If the problem is the number of disk reads, maybe we should try preprocessing some more information from the genome.

One idea would be to store the matches for every possible guide — essentially, a full map from all possible inputs to their corresponding answer. That way, given a guide, we could do a single query into the hash table to find all of its matches.

CPU Time: Unknown

Assuming the genome is random, there should be about 1000 (32-bit int) matches for each guide. This would result in a storage requirement of at least 4²⁰ * 1000 * 4 bytes/match, or more than 4 petabytes!
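The back-of-the-envelope arithmetic behind that estimate:

```python
num_guides = 4 ** 20           # every possible 20-base guide
matches_per_guide = 1000       # expected matches in a random 3-billion base genome
bytes_per_match = 4            # one 32-bit int per location
total_bytes = num_guides * matches_per_guide * bytes_per_match
print(total_bytes / 10 ** 15)  # ~4.4 petabytes
```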

Even after playing around with this for a while, it was hard to come up with a solution that was both fast and used a reasonable amount of storage.