By Rasmus Kragh Jakobsen
(Interview is condensed and edited for clarity)
- Thank you very much for taking the time to talk to us. From your perspective, what do you think are the greatest challenges in the CRISPR field?
We are at the beginning of a genome editing revolution. Things are moving so quickly and that is both a challenge and a really great opportunity.
There is a lot of innovation, which is very exciting, but it makes it difficult to know where you are in the development of this technology. What you did today may completely change tomorrow because somebody has published a new innovation that makes it outdated.
Setting standards begins with understanding practice
- When the field is moving so fast how should you implement best practices and standards?
We have to think a little bit differently about standards than we would in a field that is very established.
Right now we have to think about standards in terms of understanding the practices and we are still in that stage of understanding what we are doing.
What we have found, particularly for CRISPR applications, is that even something as basic as how much Cas9 and guide RNA somebody is using in their study varies a lot from lab to lab, and it is not always clearly reported.
So I think we can begin to standardize by recording metadata about our studies. That is the only way we can even begin to get to what could be a best practice.
Metadata: Opportunity for the community
- You talk about establishing a database for metadata?
Yes. Being able to go back to look at your data and see exactly what you did is really important, as is being able to share that in a way that is normalized. Something as simple as a fillable form: for these steps I used this reagent, at this level, for this amount of time, and then I looked at it in this way.
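A "fillable form" like that could be captured as a simple structured record that any lab can fill in and serialize. The Python sketch below shows one hypothetical shape for such a record; every field name here is invented for illustration, since no community schema of this kind has been published:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class EditingStepRecord:
    """One step of a genome-editing experiment, captured as shareable metadata.
    All fields are hypothetical -- a real schema would come from the community."""
    step: str          # e.g. "RNP delivery"
    reagent: str       # e.g. "SpCas9 + sgRNA"
    amount: float      # quantity used
    unit: str          # e.g. "pmol"
    duration_h: float  # how long the step ran, in hours
    readout: str       # how the result was measured

# A filled-in "form" for a single step, serialized so another lab can read it.
record = EditingStepRecord(
    step="RNP delivery",
    reagent="SpCas9 + sgRNA",
    amount=30.0,
    unit="pmol",
    duration_h=48.0,
    readout="amplicon deep sequencing",
)
print(json.dumps(asdict(record), indent=2))
```

Because the record is machine-readable, normalized metadata from many labs could later be pooled and compared, which is exactly what free-text methods sections make difficult.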
There is no infrastructure for this community to communicate in that way, and in fact it is not that common for other communities either. But I think there is a really great opportunity here to push the envelope on how we as a field can share information, and it could be really powerful for helping understand and replicate science.
- To make data from different labs comparable?
To make them comparable and also to know whether it's reasonable to expect they would be comparable.
You may find out once you look at the studies that 'oh, they're using different cell lines', and I would reasonably expect that different cell lines might give different results. Or maybe they are using reagents that are formulated slightly differently, and it is reasonable that the formulation could change the result.
But as it is today we can't even begin to tease out those things to know if the studies are comparable.
- How do you plan to launch this database?
Well, it's meant to be a global resource and the first thing is getting the thought process out there.
What we're doing is meeting people and talking at conferences, and then we plan to develop a white paper that would describe the vision of how we might all think about data and about metadata and how we might be able to come together as a field and leverage each other in a more effective way.
Samantha Maragh leads the genome editing program at the US National Institute of Standards and Technology (NIST). The primary focus is on measurements and standards that can increase global confidence in using genome editing technologies for research as well as for making commercial products.
In this interview she discusses how the CRISPR genome editing revolution is both a challenge, because of the speed of innovation, and an opportunity to establish a way for the community to share information that will strengthen the field.
Safety at the DNA level
- What would you say are the biggest safety issues with implementing CRISPR medicine?
In the US, the FDA has publicly stated that at least a part of the safety is at the DNA sequence level.
So people would need to be able to report, for 'on-target' locations, what sequence change happened and at what frequency; and for 'off-target' DNA sequence changes, where in the genome they happened, what variant occurred, and at what frequency.
Just at that DNA safety level, off-target detection is a technical challenge: being able to reliably know, within the entire genome, where something happened that you didn't intend, and at what frequency.
A challenge to detect DNA off-targets
- And how can this challenge be addressed?
There are really two approaches. One approach is to sequence the whole genome and see what you find. But that approach has limitations on its sensitivity, and in general the community has said that it doesn't give a low enough limit of detection for their comfort level.
The other option is to use some tool to decide where to go looking in the genome instead of looking at everything. That can be a bioinformatic tool that says 'here's the genome that I'm trying to edit and here's the editing molecule I'm using - predict for me where I should be concerned'.
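The core idea behind such bioinformatic prediction can be sketched as a mismatch scan: slide the guide sequence along the genome and flag sites that differ from the guide by only a few bases. The toy Python version below illustrates only that core idea; real prediction tools also model PAM requirements, bulges, and the reverse strand, all of which are ignored here:

```python
def count_mismatches(a: str, b: str) -> int:
    """Hamming distance between two equal-length DNA strings."""
    return sum(x != y for x, y in zip(a, b))

def predict_off_targets(genome: str, guide: str, max_mismatches: int = 3):
    """Slide the guide along the genome and report every site whose sequence
    differs from the guide by at most `max_mismatches` bases.
    Returns (position, site sequence, mismatch count) tuples."""
    hits = []
    for pos in range(len(genome) - len(guide) + 1):
        window = genome[pos:pos + len(guide)]
        mm = count_mismatches(window, guide)
        if mm <= max_mismatches:
            hits.append((pos, window, mm))
    return hits

# Toy sequences: one perfect on-target match and one 1-mismatch near-match.
genome = "AAACCCGGGTTTACCCAGGTA"
guide = "CCCGGGT"
print(predict_off_targets(genome, guide, max_mismatches=1))
# -> [(3, 'CCCGGGT', 0), (13, 'CCCAGGT', 1)]
```

The candidate sites such a scan produces are exactly what would then be interrogated experimentally, which is why the accuracy of the prediction step matters so much for safety claims.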
And then there are physical measurements that try to identify where particularly the editing molecules are breaking or damaging the DNA and then say 'ah show me where the damage is happening' and then I'll go looking for editing at those locations.
The challenge is that the CRISPR tools are so new and the assays to look for off-targets are even newer and it's not clear how well they work. They're not well characterized in the way that we normally think of a tool that's being used for safety.
Standard control samples for genome editing
- How can NIST help in this safety space?
NIST doesn't say we should use a specific tool. We want people to be open to choose the tool that they think is right for their uses.
A goal would be to come up with some physical control samples for off-target assays that people can get and run in their own hands. A lot of times when people get kits for all sorts of aspects of biology, there is that vial that's the positive control to run. This would be the idea of positive controls for genome editing assays: things with properties you can check, both on-target and off-target.
If you had these samples you could ask yourself, 'I know this is the on-target site; do I find it?' You could also take that same sample and ask: if I didn't tell myself this was an off-target, and ran it just like I would run off-target detection without knowing where in the genome it was, would I find it?
We're hopeful that some sort of sample could be developed that could help those prediction assays.
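One way to quantify such a blinded check is recall: of the edit sites known to be present in the control sample, how many does the assay actually recover? A minimal sketch, using hypothetical site labels purely for illustration:

```python
def assay_recall(known_sites, detected_sites):
    """Fraction of known edit sites in a control sample that a detection
    assay recovered when run blind (i.e. without being told the sites)."""
    known = set(known_sites)
    found = known & set(detected_sites)
    return len(found) / len(known)

# Hypothetical control sample with three characterized edit sites.
known = ["chr1:1000", "chr4:2500", "chrX:900"]
# Hypothetical blinded assay output: one known site missed, one extra call.
detected = ["chr1:1000", "chrX:900", "chr7:50"]
print(assay_recall(known, detected))  # 2 of 3 known sites recovered
```

A well-characterized control sample is what makes a number like this meaningful: without ground truth, an assay's sensitivity cannot be measured at all.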
Automation a way forward
- OK, we are running out of time. Is there something you would like to finish up on?
Yes: automation of the CRISPR-Cas9 editing workflow. I could envision pretty much the entire genome editing pipeline being automated, and in fact that might be a really good way of pushing things forward in terms of gaining reliability and confidence that your product is not changing over time.
Automation can be a fantastic way of both lowering variability and gaining more confidence that your process is reproducible. You know, people tend to be the biggest variable in a process, and so I actually see automation as a very good way to help push this field forward.
- Great, thank you very much.