
How Might AI-Native Drug Discovery Change Early R&D?

Dr Elizabeth Wood, CEO and Founder of Jura Bio, talks through how AI-native drug discovery bridges computational design and physical synthesis, and what that means for early-phase drug discovery.

January 12, 2026

What is Jura Bio doing, and where does it hope to have the biggest impact in immuno-oncology (IO)?  

We build sovereign AI systems, which are systems that plan their own experiments, create their own wet lab-validated data, and then train models off those to get smarter and smarter over time. What we hope to achieve is giving people the best shot at building the best possible therapeutic that they can – whether that be antibodies or next-generation biologics, TCR-Ts, etc – against targets that we thought we couldn't reliably drug before. 

We do this through variational synthesis, which is an AI-controlled gene synthesis procedure. It allows you to take a generative AI application and execute it on a gene synthesis chip, literally carrying out those computations with atoms that wind up as your designed DNA, in the same way that Microsoft Word is an application in which letters become words.  

"I don’t think machine learning is necessary to develop a blockbuster, but in the field of personalized and precision medicine, I don’t think we can develop those without machine learning."


What does this mean for actual drug discovery?  

The work that we, and others, are doing in de novo design is making early designs cheaper and more effective. As a result, I think the field will shift a lot towards those who have the ability to fundraise to get something through the clinic and commercialization, and towards those who have some way to bring clinical inputs forward into the design process and into these scaled in vivo methods, which variational synthesis also allows for. 

There is a broader shifting landscape in IO, whether it comes from regulatory uncertainty or regulatory acceleration, changing geographic focus on where first-in-human data is coming from, or even where R&D is being done. I see a business model crunch where early discovery and development gets almost commoditized, as sovereign AI systems build proprietary data stacks and train on their own data. I think one of the downstream effects could be that, by proving out certain aspects, there won’t be the same need to validate them in trials, and there will be reassurance that variational synthesis isn’t some sort of hallucination and can be trusted.  


What do you feel is your overarching goal for Jura?  

To get us to outlive the era of blockbusters. I don’t think machine learning is necessary to develop a blockbuster, but in the field of personalized and precision medicine, I don’t think we can develop those without machine learning. And so every decision I make about what we’re studying and how we’re trying to scale is to drive generational change and get the kind of precision medicine that we, as patients, deserve.  

"Every decision I make about what we’re studying and how we’re trying to scale is to drive generational change and get the kind of precision medicine that we, as patients, deserve."


What does AI-native drug discovery look like in practice?  

I can give an example with CD3. The issue with CD3 is not that it's hard to make a binder; it's that it's hard to make a binder that activates, and does so in a way that doesn't leave the T cell exhausted afterwards. So what that looks like for us is using AI to design a bunch of candidates, with clinically informed data as our starting point, whether that be antibodies observed in healthy humans or human T cells.  

We like to start with N=1 safety. Tolerance in one person isn't equal to tolerance in everyone, but it's still a lot better than not knowing anything. Starting with that constraint, we use our sovereign AI to design a bunch of CD3 binders and put them into a functional assay that has a throughput of ~1-10 million candidates. Our candidate is expressed on a CFE CAR, without the internal domains that would activate it, and on the other side you have the T cell. Using variational synthesis, we get to make our 10 million candidates, screen them directly, and sort them for sequencing on the basis of which ones activate and which T cells light up. We then get RNA-seq on the sorted T cells and look for the ones with the exhaustion profile we’re looking for.  

That’s how we go from 10 million AI-designed candidates to 2,200 wet lab-validated candidates in the CD3 space, rank-ordered by exhaustion profile.  
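
For readers who want a concrete picture of that flow, here is a minimal schematic sketch in Python of the design-screen-rank loop described above. It is not Jura Bio's code: the design, activation, and exhaustion callables are hypothetical placeholders standing in for the sovereign AI design model, the CAR-based activation assay, and the RNA-seq readout.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    sequence: str                           # AI-designed binder sequence
    activated: bool = False                 # did the paired T cell light up in the assay?
    exhaustion_score: float = float("inf")  # lower = healthier (less exhausted) RNA-seq profile

def run_campaign(design: Callable[[], str],
                 activates: Callable[[str], bool],
                 exhaustion: Callable[[str], float],
                 n: int) -> List[Candidate]:
    """Design n candidates, keep the ones that activate, rank the hits by exhaustion profile."""
    pool = [Candidate(sequence=design()) for _ in range(n)]  # e.g. ~10 million designs via variational synthesis
    for c in pool:
        c.activated = activates(c.sequence)                  # functional activation screen
    hits = [c for c in pool if c.activated]                  # selected for sequencing
    for c in hits:
        c.exhaustion_score = exhaustion(c.sequence)          # RNA-seq based exhaustion readout
    return sorted(hits, key=lambda c: c.exhaustion_score)    # rank-ordered, wet lab-validated candidates

In practice each of those callables is an entire experimental or computational system; the sketch only shows how the output of one stage constrains the next.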


Are you limited by the human-generated data you’re inputting?  

It’s a starting point we like to bring to bear, because there’s so much we don’t know about how molecules are going to behave when they reach the clinic. However, variational synthesis as a technology can make whatever you want. If you wanted to, for example, print out the output of a completely de novo design system that was uncorrelated and never seen in humans, you could do that just as easily.  

Even giving ourselves that limitation of human tolerance, we haven't yet failed to drug one of the targets we've gone after. We've gone after more than 150 pHLA targets, like intracellular cancer oncogenes expressed on the surface through HLAs. There are certainly areas where it might not apply, like AAV design, where there is no human repertoire to draw from, but right now starting with human-generated data is a design choice rather than a limitation.  


How is this different from other approaches to AI-driven drug discovery?  

We’re not focusing on developability scores, solubility, affinity, and so on. If we have a sense of what the clinical outcome will be, we engineer first for safety or for some picture of efficacy.  

SARS-CoV-2 is a good example, where we suddenly had a huge amount of sequencing information about people with different outcomes when they encountered the disease and different outcomes post-disease. We looked at people who ended up in the ICU versus those who did not, and used causal machine learning models to pick out which T cells were deleterious and which were protective.  

That’s the approach we’ve brought to other challenges. There is a whole world of TCRs or antibodies that can be made with variational synthesis. We can print them all, subtract the ones that are deleterious, enrich the areas that are therapeutically relevant, and add in constraints like manufacturability, to end up with a pool of candidates that you can then screen for the properties that are easy to screen for.  
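
As a rough illustration of that pool logic, and nothing more, the sequence of operations is essentially set arithmetic. The sets and the manufacturability predicate below are hypothetical placeholders, not Jura Bio's actual pipeline.

def build_pool(printable, deleterious, relevant, is_manufacturable):
    """printable, deleterious, relevant: sets of candidate sequences;
    is_manufacturable: a predicate encoding a practical constraint."""
    pool = printable - deleterious                    # subtract sequences flagged as deleterious
    pool = pool & relevant                            # enrich for the therapeutically relevant region
    return {s for s in pool if is_manufacturable(s)}  # keep only what can realistically be made

What remains after those filters is the pool you then screen for the properties that are easy to screen for.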


Where is the power of AI overstated, and where is it underappreciated?  

When something is in distribution, meaning it was in the training data set, machine learning is extremely good at matching that training data distribution. If you are asking it to solve something it was trained on, it should be really good at it. That's why we see a lot of success.  

What I don't want us to do is say, “Now we can move past the problem of binder design,” after seeing some success, because as soon as we get to rare disease populations, we hit a wall. We don’t have data about those populations, none of the tools we’ve built can solve those problems, and none of them can tell you what data they would need in order to solve them, or plan the data acquisition path to get it. That’s called model criticism, and it’s in the edge cases that the most interesting work is happening and where the true value and limitations of a model are exposed.  
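
To make the “in distribution” point concrete, here is one crude proxy, offered only as a sketch: measure how far a new query sits from the training data in some embedding space and treat large distances as a warning that the model is extrapolating. The names and the threshold are hypothetical, and real model criticism goes much further than this.

import numpy as np

def looks_in_distribution(query, train_embeddings, k=10, threshold=1.0):
    """query: (d,) embedding of the new case; train_embeddings: (N, d) array.
    Returns True only if the query resembles the training data."""
    dists = np.linalg.norm(train_embeddings - query, axis=1)  # distance to every training example
    nearest = np.sort(dists)[:k]                              # k closest training examples
    return nearest.mean() <= threshold                        # above the calibrated threshold = extrapolating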

Our field should be so excited about machine learning, but we need to not overextrapolate from certain worked examples. For example, these models should not yet be deployed on patients, because using ML to guide patient outcomes is still very understudied. I’m not saying that it won’t eventually be a good idea, but it needs to be tested much more.  

"Our field should be so excited about machine learning, but we need to not overextrapolate from certain worked examples."


What advice do you have for a young professional entering a life sciences career?  

I had an incredible mentor, Dr Dudley Herschbach, who told me something that changed my life. The beautiful thing about science is that it waits. If you’re having trouble figuring out how to solve a problem, the science is fixed and unchanging. The therapeutic modalities we use to attack and engineer them will change all the time and get better, but the underlying ground truth is waiting for you to figure it out.  

This is helpful, especially when you’re feeling anxious about how many new modalities are coming out, because it gives you the grounding to say, “Let me pick one and see how far I can get.” And if you get scooped, it can be a blessing because they did the work. Go find the work that only you can do. 


