DMPK Insights #15: Pharmacokinetics-Pharmacodynamics: Driving decision making with PKPD throughout Drug Discovery
About this Podcast on PKPD in Drug Discovery
In this episode of the Pharmaron DMPK Insights Podcast Series, Simon Taylor and Dr. Emile Chen discuss the relationship between drug concentration and effect (PKPD). PKPD is critical to decision making from initial modality selection, through molecule optimization, translational science and clinical dosage planning. Simon and Emile discuss the concepts and practical applications of PKPD and explore the concept of model-based target pharmacology assessment in combination with physiologically based pharmacokinetic-pharmacodynamic (PBPK-PD) modelling to improve decision making.*
* Model-based Target Pharmacology Assessment (mTPA): An Approach Using PBPK/PD Modeling and Machine Learning to Design Medicinal Chemistry and DMPK Strategies in Early Drug Discovery. Journal of Medicinal Chemistry, 2021, 64(6), pp.3185-3196.
* Artificial neural networks as a novel approach to integrated pharmacokinetic-pharmacodynamic analysis. Journal of Pharmaceutical Sciences, 1996, 85(5), pp.505-510.
* Model-based virtual PK/PD exploration and machine learning approach to define PK drivers in early drug discovery. Journal of Medicinal Chemistry, 2024, 67(5), pp.3727-3740.
* Applications of model-based target pharmacology assessment in defining drug design and DMPK strategies: GSK experiences. Journal of Medicinal Chemistry, 2022, 65(9), pp.6926-6939.
* Putting Pharmacokinetics and Pharmacodynamics to Work in Drug Discovery: A Practical Guide for Pharmaceutical Scientists. John Wiley & Sons, 2025.
We will address the following points:
- What PKPD is and why it matters throughout a discovery project lifecycle
- How PKPD can be used to guide decision making, including case study examples
- How a combination of physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) modeling and machine learning (ML) can be used to elucidate the optimal combination of properties for the targeted pharmacology

Introduction
Simon Taylor: Hello and welcome to this Pharmaron podcast, part of our DMPK Insights podcast series. My name is Simon Taylor and I’m responsible for DMPK strategic input to projects from discovery to IND in Pharmaron’s integrated drug discovery group. Today I’m delighted to be joined by Dr. Emile Chen.
Dr. Emile Chen: Well, thank you, Simon. I’m very happy to be here.
Simon Taylor: Dr. Chen has 30 years of industrial experience across early discovery to late stage development, including authoring and reviewing regulatory documentation and NDA submissions. Up until March 2024, he was director in the systems modeling and translational biology group at GSK where he applied PBPK, mechanistic PKPD modeling, quantitative systems pharmacology and machine learning techniques to solve project questions.
Emile now works as a consultant. He has an undergraduate degree from the University of California and a PhD from Northwestern University in the field of biomedical engineering. He began his pharmaceutical career at Hoffmann-La Roche in 1993 and then joined what has become GSK in 1996, where he’s led ADME and PK groups supporting early discovery through to late development DMPK projects.
More recently, Emile has focused on the use of innovative mathematical modeling and simulation methods to reduce attrition whilst enhancing the ability to predict efficacy and safety in humans. And for the past 10 years, he has also designed and tutored a series of interactive workshops promoting the use of kinetic thinking and mathematical modeling to integrate preclinical and clinical information.
So Emile is with me today to discuss the concepts and strategies of pharmacokinetic-pharmacodynamic, or PKPD, modeling applied to compound optimization in drug discovery and preclinical development, with a particular focus on applications up to first time in human. So, Emile is clearly very well qualified to be talking about these topics with us today.
Understanding PKPD and Its Importance
Simon Taylor: So let’s begin our discussion and maybe we can start and really introduce the basics. So what is PKPD and why is it important?
Dr. Emile Chen: Well, there are many good books and review articles written about PKPD, so I'll stick to a high-level, brief introduction. PKPD basically describes the relationship between drug concentration, usually in the systemic circulation, and the pharmacological response that it elicits. That response is usually somewhere along the pathway between the target and the ultimate clinical endpoint, and it is usually quantified by a biomarker, often in an animal model.
Not always, though. Because PD is often not characterized until late discovery, in early discovery we often use PK as a surrogate. The thing is, PK is at best an imperfect surrogate, because of the famous three-pillars concept that people have written extensively about: the drug has to get from the systemic circulation into the target tissue, it has to engage the target there, and it has to elicit the downstream pharmacology.
Now, a lot of biological processes are going to modify that. The timecourse in the target tissue will be different. There are many novel modalities, like PROTACs, covalent inhibitors and biologics, that make target engagement very complicated, and the downstream biology can be quite messy, with feedback, feedforward and redundancy.
So that makes PK a rather imperfect surrogate for the response we can expect. And that's why it is very important for us to understand the relationship between PK and PD: we can view it as a connector between dose and clinical outcome, and use it to design the dose for a clinical study. But we can use it even earlier, to decide what kind of compound to optimize, or even whether a target should be committed to at all. So yes, I think it's extremely important, and we should look at it quite early.
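The concentration-effect link Emile describes can be sketched with a minimal one-compartment oral PK model driving a direct Emax response. All parameter values below are purely illustrative, not from any project discussed here:

```python
import numpy as np

def pk_concentration(t, dose=100.0, f=0.8, ka=1.0, ke=0.1, v=50.0):
    """Oral one-compartment concentration (mg/L) at time t (h)."""
    return (f * dose * ka) / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def emax_effect(c, emax=100.0, ec50=0.5):
    """Direct Emax pharmacodynamic response (% of maximum)."""
    return emax * c / (ec50 + c)

t = np.linspace(0, 24, 241)  # 24 h at 0.1 h resolution
c = pk_concentration(t)
e = emax_effect(c)

# In this direct-effect model PD peaks with concentration; in indirect-response
# or turnover models (discussed later) the effect lags the concentration.
print(f"Cmax = {c.max():.2f} mg/L at t = {t[c.argmax()]:.1f} h")
print(f"Peak effect = {e.max():.1f}% of Emax")
```

Even this toy version shows why PK alone is an imperfect surrogate: the shape of the effect curve depends on where the concentrations sit relative to EC50, not just on exposure.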
Early Implementation of PKPD Thinking
Simon Taylor: Thanks Emile. So if we think about this kind of relationship then between drug concentration, I guess drug effect and also the effect of time, you mentioned a number of different concepts there around how this can feed into the discovery and development process. And so when really should project teams begin to think about these relationships between the pharmacokinetics and the biology?
Dr. Emile Chen: This is actually an area where I'm slightly unconventional. Usually people start thinking about the PKPD relationship when we have enough in vivo animal model data to describe it, and that usually happens very close to the human trial, late in the discovery stage, around candidate selection.
But what we had been advocating in my previous company, and actually successfully implemented for quite a few years, is to take that process way back into early discovery. We want to use it to guide even the target commitment decision, if we can do it that early, and definitely before lead optimization, to help medicinal chemistry best deploy their resources.
Simon Taylor: Oh, that's interesting. And we'll come onto that, and some of the nice publications that you and colleagues have written over the last few years on that topic. I guess you're talking here about considering the mechanism, the pharmacodynamic part of this, before the pharmacokinetic optimization. What do you mean by this, and how does that change the thinking of an optimization project?
Dr. Emile Chen: Thank you for asking that. This is something I have thought long and hard about. I used to lead a PK team supporting drug discovery, mostly during hit-to-lead and lead optimization, and I focused primarily on delivering PK results very efficiently for the 20 or 30 new compounds coming down the pipeline every week. Each week we would then sit with the medicinal chemists and biologists around the meeting table and ask, well, which ones should we progress?
To answer that, we would be asking: is this biology driven by Cmin or by AUC? And either way, how far above the potency does the concentration have to stay? Sometimes we would ask, should it be IC50 or IC90, which is another way of asking the same question.
It was not until I had done that for maybe five or ten years that I realized those questions depend on the biology. Some biology is Cmin driven, some is more AUC driven. Those are the obvious ones, but for a lot of novel targets we don't know, and so we end up sitting there guessing. And if we get it wrong, there are consequences.
That is when I came to realize that we really had to do PD much earlier. We had to characterize the PKPD relationship much earlier, so that when we sit around the table, we actually have the answers to those questions.
Theoretical vs. Experimental Approaches
Simon Taylor: And is this just about experimental work? Because at an early stage a project may not be able to characterize all the experimental endpoints. Or are you talking about theoretical considerations around the mechanism as well, perhaps thinking about related molecules and related mechanisms, where knowledge can be transferred from previous projects or even from the clinic? I think the question really is: you're suggesting it doesn't have to be experimental data in all cases. It can be theoretical, and then we start to move into applying modeling to fill in the gaps.
Dr. Emile Chen: Yes, that is another wonderful question. Traditionally we view PKPD modelling as data driven, particularly in vivo data driven, using animal models. If we do that, then obviously we are limited by the availability of the animal model. As we take on novel targets, we may not be quick enough in developing a new animal model for every target we commit to. That's always a challenge, and it's also resource intensive.
I think the resource intensity is not as big an issue, because you only need to do the study once, not for every compound. Once you get the concentration-response relationship characterized, it is a good investment, and we can do it with a tool compound if the model exists. But a lot of the time the model doesn't exist, and that's where the challenge is. That is where you need to come in with a different mindset and a different strategy.
Instead of being data driven, particularly in vivo data driven, we need to leverage information from all kinds of sources, including knowledge in the literature about the physiological processes involved. There is a lot of knowledge around the target, the time course of the biology, whether there is feedback, whether there are redundant pathways. We can borrow a lot of that from the literature, and then bridge the gaps with a few key in vivo and in vitro studies. Once we start building the model, we find out where the key gaps are, and then we can execute a very well designed, well focused study to fill those gaps.
So essentially you take information from many, many sources and build the model. In a sense it's similar to QSP modeling, except we don't build models that complicated, because that would take too long.
Traditional vs. mTPA Approach
Simon Taylor: So, you've mentioned that the thinking here is perhaps unconventional. Traditionally in a project, medicinal chemists would look to optimize potency first. We would then move into tier-one ADME screening, looking to lower intrinsic clearance, improve half-life and improve absorption potential, with, as you say, a fairly generic approach at that phase of the project.
And you are suggesting that much deeper thinking is put into the dynamics and the interactions of the biological processes, to define what is actually needed for the best opportunity to achieve efficacy, and clearly at a reasonable dose. Now, over the last few years, you and coworkers have authored a series of extremely interesting articles which explore these concepts in more detail.
You referred to it as model-based target pharmacology assessment, mTPA, and you described how traditional approaches in discovery focus on optimization of PK properties in advance of gaining this deeper understanding of the PKPD relationships. That often delivers more favorable generic PK properties before the PD has actually been identified or evaluated.
So how does this mTPA approach differ, and what advantages can it have in a drug discovery environment and for project progression?
Dr. Emile Chen: Let me take a small step back and answer the question: what if we don't? What's wrong with using generic ADME criteria and potency criteria to optimize our molecule, and then looking at PD afterwards? Sometimes it's perfectly fine, and often there are clear cases where we can do that. But there are also many cases where, when we have done that, we either progressed a compound that turned out to be ineffective, or we had a perfectly effective compound that we didn't realize we had, and kept throwing candidates away looking for a better one when we didn't need to.
The two most extreme examples concern whether the effect is trough driven, that is Cmin driven, or average concentration driven, that is AUC driven. If it's actually an average-concentration-driven process and you mistakenly think it is Cmin driven, then you might go after a very long half-life compound, long enough to keep Cmin above a certain level, when you don't need to. You could have a compound that is already working perfectly, but you don't realize it, and you keep saying, I need a better compound with a longer half-life. But you don't.
And the medicinal chemistry strategy for a long half-life compound is obviously very different from one that doesn't need it, right? On the other hand, if you have the opposite expectation, you think it's Caverage driven when it's actually Cmin driven, then you could say, oh, this one is good enough, let's progress it, and it turns out not to be. The worst-case scenario is you find out too late, in phase two, and then you have to redo a lot of work. That's certainly not desirable.
I think this question needs to be answered as early as possible, certainly before significant medicinal chemistry resources are invested, so that we don't waste a lot of time. That is what we have been advocating. And you don't need to do this for every project. Sometimes the biology is very obvious, and the team can make a triaging decision: this target is clearly driven one way, that one the other way, and you don't need the full model-based target pharmacology assessment.
But some novel targets have complex biology, and those are where this is most beneficial. The approach we take borrows a page from what the antibiotic and antiviral therapeutic areas have done for years, called dose fractionation experiments: you take one compound and dose it many different ways to see which schedule gives you more response, and from that decide which PK endpoint is driving efficacy.
What we have been doing is similar to that, except we don't do all the experimental work. Instead of running thousands of experiments, we run a few key ones. They could be in vitro, or we could leverage literature information, as I said before. Once we've done that, we build a PKPD model, mostly a mechanistic PKPD model, and we use the model to do the exploration. That's why we call it virtual model-based exploration: we use the model to run those dose fractionation experiments and figure out what is driving efficacy.
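The dose fractionation idea can be sketched virtually with a toy one-compartment model, all values hypothetical: split the same total daily dose into one, two, or four administrations and compare two candidate PK drivers, daily AUC and time above a potency threshold.

```python
import numpy as np

def daily_profile(n_doses, total_dose=120.0, v=40.0, ke=0.2, dt=0.05):
    """Concentration over one 24 h day when total_dose (mg) is split into
    n_doses equally spaced IV boluses (one-compartment, illustrative values)."""
    t = np.arange(0.0, 24.0, dt)
    c = np.zeros_like(t)
    per_dose = total_dose / n_doses
    for i in range(n_doses):
        t_dose = i * 24.0 / n_doses
        mask = t >= t_dose
        c[mask] += (per_dose / v) * np.exp(-ke * (t[mask] - t_dose))
    return t, c

ec50 = 0.5  # hypothetical potency threshold, mg/L
results = {}
for n in (1, 2, 4):
    t, c = daily_profile(n)
    auc = c.mean() * 24.0                # mg*h/L; same dose, so similar AUC
    t_above = (c > ec50).mean() * 24.0   # hours per day above the threshold
    results[n] = (auc, t_above)
    print(f"{n}x daily: AUC = {auc:.1f} mg*h/L, time > EC50 = {t_above:.1f} h")
```

With these particular values, splitting the dose buys several extra hours above EC50 at essentially the same AUC; with a threshold sitting near the fractionated peaks, the ordering can flip. That sensitivity is exactly why the PK-driver question has to be settled per target.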
Virtual Library Design and Exploration
Simon Taylor: I understand. So you generated a virtual library of thousands of molecules, all with differing properties, allowing you to explore the impact of these different ADME and PK properties on dose, based on certain PKPD models. Could you describe that process in a little more detail? How did you design that compound library, and what types of PKPD models did you use to explore the different biologies?
Dr. Emile Chen: There are three different ways we do that, depending on the question we're trying to ask. If we're trying to decide whether the target is worth committing to, we don't even need to design a library of compounds. We simply come up with many combinations of target engagement scenarios, where each scenario varies how much target is engaged and for how long.
So we run several thousand, or tens of thousands, of variations of target engagement intensity and duration, and we use those to probe the PD model; we don't even need a PK model. We then see which scenarios require a degree and duration of engagement that we believe is achievable and that will give us the effect we are looking for. That's the simplest way of doing it.
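One way to picture this scenario probing, as a toy sketch rather than the models from the publications: take a turnover biomarker with an assumed slow turnover (half-life around 14 h), inhibit its production by a given fractional engagement for a given number of hours per day, and grade each scenario against an assumed 80% suppression goal.

```python
def peak_suppression(engagement, duration_h, kout=0.05, days=10, dt=0.02):
    """Peak steady-state suppression of a slow-turnover biomarker when the
    target is inhibited by `engagement` (fractional, 0-1) for `duration_h`
    hours per day. Turnover model: dR/dt = kin*(1 - inhibition) - kout*R,
    with kin = kout so baseline R0 = 1. All values are hypothetical."""
    kin = kout
    r = 1.0
    min_r = 1.0
    steps = int(24 / dt)
    for day in range(days):
        for i in range(steps):
            inhibition = engagement if (i * dt) < duration_h else 0.0
            r += (kin * (1.0 - inhibition) - kout * r) * dt  # Euler step
            if day == days - 1:
                min_r = min(min_r, r)  # track the last (steady-state) day
    return 1.0 - min_r

goal = 0.80  # hypothetical program goal: >=80% peak biomarker suppression
feasible = [(e, d) for e in (0.5, 0.7, 0.9) for d in (8, 16, 24)
            if peak_suppression(e, d) >= goal]
print("scenarios meeting the goal (engagement, hours/day):", feasible)
```

With these assumed kinetics, only the 90% engagement sustained 24 hours a day scenario clears the goal, which is the kind of answer that can rule a target in or out before any compound exists.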
The second way, again, does not yet require us to design a virtual compound library. If the question is what ADME or PK properties are needed to elicit the desired response, we use Monte Carlo simulation to generate tens of thousands of combinations of ADME properties. These properties do not need to be rooted in actual compound structures.
Big pharma has been running PK studies for decades, and we have tens of thousands of compounds or more in our libraries, so we pretty much know the range of ADME properties a compound can have. We explore that property space thoroughly, use it to generate PK profiles and PK responses, and then, once we have these tens of thousands of virtual property combinations, we ask retrospectively which combinations of properties are optimal for the response we need.
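A minimal sketch of that Monte Carlo step, where the sampling ranges and the success criterion are hypothetical placeholders rather than the published ones: sample clearance, volume and bioavailability, compute the once-daily steady-state trough, and label each virtual property combination pass or fail.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical ranges loosely spanning historical small-molecule property space
cl = rng.lognormal(mean=np.log(20.0), sigma=1.0, size=n)   # clearance, L/h
v = rng.lognormal(mean=np.log(100.0), sigma=0.8, size=n)   # volume, L
f = rng.uniform(0.1, 1.0, size=n)                          # oral bioavailability

dose, tau, ec50 = 200.0, 24.0, 0.05    # mg once daily; potency threshold mg/L
ke = cl / v                            # elimination rate constant, 1/h
ket = np.minimum(ke * tau, 50.0)       # clip to avoid exp overflow
cmin = f * dose / (v * np.expm1(ket))  # steady-state trough, IV-bolus approx.

success = cmin > ec50                  # criterion: trough above potency
print(f"{success.mean():.1%} of virtual property combinations succeed")
```

The retrospective question, which regions of property space succeed, then becomes a pattern-recognition problem over the `success` labels.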
Simon Taylor: What types of pharmacodynamic responses did you explore there? Obviously indirect responses, fast and slow turnover targets, and so on. What do you find when you look at these different pharmacodynamic scenarios in terms of the impact of the PK properties?
Dr. Emile Chen: We actually built a mechanistic PKPD model for every project that we supported using this paradigm, and the real biology usually goes beyond the literature library of PD models, the indirect response models and so on. Fortunately, it can usually be described by piecing together two, three or four of these well studied models.
If you look at my publications, in particular the second one, I describe several biologies that we looked at, and you can see that even though they are more complicated than the traditional PKPD models, they are actually combinations of traditional PKPD models put together. That is the time-consuming part of the process: being able to build a model quickly enough to support the decision making early in discovery.
Machine Learning Integration
Simon Taylor: Thanks, Emile. Are there any particular examples you can cite about how you actually deployed this? Within confidentiality restrictions of course, and speaking generically around the concepts, maybe how this was used in an early project environment.
Dr. Emile Chen: Yes, absolutely, I'd be happy to. Before I get to that, I forgot to mention one thing. After we run the ten thousand combinations of properties, we characterize each one: which met the response criteria and which did not. But from that point, we need to work out what region of compound space gives us a satisfactory response.
That is a huge amount of work if we do it manually, and this is where we leverage machine learning to help us efficiently sort out the important attributes and combinations, and to help us visualize them. So machine learning turned out to be a very big part of this process.
Simon Taylor: And was that machine learning used to set decision-making criteria and decision trees, to help understand how it could be used within the screening approach?
Dr. Emile Chen: Indeed, in a sense it is. Basically we are leveraging machine learning's ability to recognize patterns: it can figure out what the effective compound properties have in common, which combinations are good and which are bad. A decision tree is certainly one way to do that. Another is to look at which compound properties matter most, a feature importance analysis. This all falls under the realm of interpretable machine learning.
That's a very interesting field that is getting a lot of attention recently. A decision tree is definitely one way to do it, but we also use visualization techniques. One thing I want to add is that each of these machine learning algorithms, while very effective, can potentially mislead. People who have looked into ChatGPT and the like will have seen that these tools can give very reasonable-sounding answers that, if you look deeper, are actually misleading; it is sometimes called hallucination.
The point is that we don't rely on any single machine learning algorithm. We use several different algorithms to provide feedback, and then we use human intelligence to look at the results and draw our own conclusions.
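As a dependency-free illustration of this interpretable-ML step, here a single-split "stump" gain stands in for tree-based feature importance, applied to a synthetic virtual library where the true drivers are known by construction (this is a sketch of the idea, not the algorithms used in the publications):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Synthetic virtual library: success is driven by half-life and potency,
# while logP is irrelevant by construction
half_life = rng.lognormal(np.log(8.0), 0.7, n)   # h
potency = rng.lognormal(np.log(50.0), 0.9, n)    # nM (lower is better)
logp = rng.normal(2.5, 1.0, n)
success = (half_life > 10.0) & (potency < 40.0)

features = {"half_life": half_life, "potency": potency, "logp": logp}

def stump_gain(x, y, n_thresholds=50):
    """Best Gini-impurity reduction from a single split on feature x:
    a crude stand-in for decision-tree feature importance."""
    gini = lambda p: 2.0 * p * (1.0 - p)
    parent = gini(y.mean())
    best = 0.0
    for thr in np.quantile(x, np.linspace(0.02, 0.98, n_thresholds)):
        left, right = y[x <= thr], y[x > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        child = (len(left) * gini(left.mean())
                 + len(right) * gini(right.mean())) / len(y)
        best = max(best, parent - child)
    return best

ranking = sorted(features, key=lambda k: stump_gain(features[k], success),
                 reverse=True)
print("feature importance ranking:", ranking)
```

The irrelevant property lands at the bottom of the ranking, which is the kind of feedback that, cross-checked across several algorithms as Emile describes, tells the team which properties actually shape the viable space.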
Practical Examples and Applications
Simon Taylor: Well, let's return to something from a little earlier. We talked about examples; maybe you could use one to illustrate how this has worked for you, or been applied on a project.
Dr. Emile Chen: I can give three examples, and depending on time I can be briefer on some of them; just let me know how far to go into each one. The first one is from very early on, at the target commitment stage. The question was: there were two targets in the pharmacological pathway that could give us the same biological outcome. Which one should we go after? Which one had the higher probability of success?
So we built a very quick model. We even leveraged clinical data on another marketed compound that had mathematically described the downstream biology, and combined that with in vitro data we had for the earlier part of the pathway. We did the exploration and found that for one of the targets it was much easier to achieve the effect we wanted than for the other.
At the time the team already had some in vitro data suggesting how much of each target could be engaged, and for how long. When the analysis came through, it helped the team prioritize one target over the other.
Simon Taylor: So Emile, just before we move on: the success criterion here was essentially whether feasible compounds existed in an ADME space that would allow you to deliver a certain extent of efficacy, is that correct? And what was the probability of achieving that with one mechanism versus the other?
Dr. Emile Chen: Excellent question. In that particular example we didn't actually even get to the ADME part; we simply looked at how much target engagement was needed and for how long. For one mechanism it turned out we needed 90% or more target engagement for 24 hours, and the team already knew from their in vitro experience that 90% engagement was not achievable. The other target only required 50% target engagement for 16 hours, which we felt was certainly possible. So that decided it.
But probability of success is a very important concept; maybe I can get into that with my next example.
Simon Taylor: Yeah, let’s move to that.
Dr. Emile Chen: On target engagement, another very quick example: when we did the analysis, it turned out the target was redundant within the pathway, so that even at one hundred percent target engagement you would never get the clinical effect you wanted. That simply told the program team not to pursue the target at all. I thought that was a very useful exercise too. It was a negative result, and people are not necessarily happy about that, but it certainly saved a lot of resource, time and money.
Simon Taylor: Can you also apply that kind of thinking then to explore perhaps the opportunity to use a different modality to explore the same target? Can you apply this to that type of thinking?
Dr. Emile Chen: Absolutely, and indeed we have. We have looked at different ways of hitting the same target, for example with an antibody versus a small molecule. Yes, we did.
Simon Taylor: Thank you. And you mentioned you have a final example.
Dr. Emile Chen: Yes. This example applied to lead optimization. This time we built the model to define the viable ADME space, and that space turned out to be quite complex. It is not simply "this region is good and that region is not"; it actually has peaks and valleys, and it is hard to describe without the visualization. The space is shown in my publication, so if you're interested you can look at that.
Once you've done that, you can place your compounds, or examples from your compound series, on that map, and it shows you whether you are sitting in a high-probability-of-success region or a low-probability region. This is where probability of success comes in.
Probability of Success and Uncertainty Management
Dr. Emile Chen: We recognize that every measured property that goes into the modeling is uncertain; there may be as much as two- to threefold variability in each measurement. So the question we ask with the model is not what response a perfectly certain measurement would give, but rather: given the uncertainty in each measurement, what is the probability of generating the response we want at the dose we are willing to give?
So we usually require the program team to think hard about what they consider an acceptable dose regimen. Is it 1000 milligrams once a day? Does it have to be 200 milligrams? Can you dose five times a day, or do you want to dose once a week? For each of those scenarios we build one of these probability-of-success maps, based on the uncertainty in the parameters that feed into the model.
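A sketch of such a probability-of-success calculation under measurement uncertainty, where the parameter estimates, the twofold uncertainty and the trough-above-potency criterion are all illustrative:

```python
import numpy as np

def prob_success(dose_mg, n=20_000, seed=42):
    """Fraction of uncertainty samples in which the steady-state trough
    exceeds potency for once-daily dosing. Clearance and potency carry
    lognormal uncertainty (sigma = ln 2, i.e. ~68% of samples within
    twofold of the point estimate). All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    cl = 10.0 * rng.lognormal(0.0, np.log(2.0), n)   # L/h around estimate 10
    ec50 = 0.1 * rng.lognormal(0.0, np.log(2.0), n)  # mg/L around estimate 0.1
    v, f, tau = 150.0, 0.5, 24.0
    ke = cl / v
    ket = np.minimum(ke * tau, 50.0)                 # clip to avoid overflow
    cmin = f * dose_mg / (v * np.expm1(ket))         # trough, IV-bolus approx.
    return float((cmin > ec50).mean())

for dose in (100, 300, 1000):
    print(f"{dose:>5} mg once daily: P(success) = {prob_success(dose):.0%}")
```

Each candidate regimen gets its own map; rerunning at a different dose is just another function call, which matches the "rerun it in five minutes" point below.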
Simon Taylor: How much flexibility is there on those types of targets? You mentioned dose, and obviously early in a project there are generic targets like a hundred milligrams once a day. But at an early stage you may want to retain enough chemistry space and enough opportunity to explore other molecules, so you might relax that to 500 milligrams once or twice a day, just as an illustrative example. How do you manage that in the models?
Dr. Emile Chen: You're absolutely right. Sometimes you do want to open up the space much wider to allow yourself flexibility. All I'm saying is that we should have that conversation deliberately, early in discovery. Usually we say, oh, we don't know, so we wait until later, but we really need to think hard about that question early. If it's an anticancer drug, 1000 milligrams may be just fine, but there is also a practical limitation on how high you can go; you probably don't want to go to 5000 milligrams, which could be quite difficult.
So the point is to have that conversation as early as possible and make a decision. That decision doesn't lock you into the space; you can change your mind halfway through. Once the model is built, the analysis can be rerun in five minutes, so it's not a big deal. If you say, I told you 200 milligrams once a day, but now I can do a thousand milligrams twice a day, we can rerun it and redefine the space right on the spot.
The other thing we often don't think about early, but which is actually very important, is how big a response you want. We measure response in terms of a biomarker, but the biomarker is connected to the clinical outcome. So the question is: do you need your biomarker to change twofold, threefold, a hundredfold to get the clinical response you want?
And equally important is the duration of the biomarker response. Do you need it 24/7, or is five hours a day enough to get your clinical outcome? To give a very brief example, and I say this half jokingly because I don't know the right answer: for an appetite suppression drug, do you need to suppress appetite for 24 hours, or is it enough to cover the waking hours when you would normally eat?
That's the kind of conversation you need to have as early as possible. You may not have the right answer, but you need to ask the question early, because all of it impacts the optimal PK profile you are going to optimize toward.
Adoption Challenges and Model Validation
Simon Taylor: Yes, it's interesting: always keep the patient in mind, always keep the end game in mind, no matter how early you are in a project. That's always a good message. I'm thinking about adoption of this type of approach within projects. It wouldn't surprise me if it met with some resistance at times. Some project teams prefer to gather real experimental data, whereas this uses models to set the direction of a program, which I can see is incredibly useful.
Have you ever come across situations where there's an expectation to provide some kind of qualification of these approaches? I guess the question is: how was this approach received when it was implemented?
Dr. Emile Chen: Another excellent question and I’m so glad you asked that. Yes I often get the question about validation, and that’s an extremely important question. But also surprisingly find that many aspect. A lot of medicinal chemists, they are very happy to accept certain degree of uncertainty, part of the process moving forward. And the validation requirement usually come from my fellow modelers.
It is a very important aspect. The problem is that there's no easy way to validate these models in the sense we hope to achieve in late-stage development. In late-stage development, we not only have in vivo data, we have human data, right? So we can validate the model to the extreme, and that's perfectly fine. In early discovery, not only do we lack project-specific data, our knowledge of the biology is also still evolving. So how do you deal with that in such an environment?
Now, first of all, the way that we build a model is by piecewise optimization, drawing information from different sources. If the downstream biology is shared with a marketed compound that has already been modeled PKPD-wise and published, we follow the downstream process from that model, because it tells us how the biomarker is related to the clinical outcome. Then, earlier in the pathway, if we have in vivo data we use it, and if we have in vitro data we use that too.
That means we have to validate the model in pieces rather than in totality, and we certainly do validate it in pieces. But what we're not able to say for sure is that, if we piece it all together, the model will predict the human dose and human response. There's no way for us to do that, and if we could, we would no longer need a model, right?
So we take another view: we see the model not as a way to accurately depict the true biological process that is actually happening. What we're trying to do is build a mathematical model that accurately represents the biological hypothesis our project team holds, and use that to progress compounds.
Every discovery project team has a working hypothesis about why the compound works, and we drive our decisions based on that hypothesis. What we do is build a mathematical model to quantify that biological hypothesis so that we can use it most efficiently, with the model guiding our decisions. That is what we aim to do.
We also recognize that the hypothesis may not be accurate, so we need to evolve the model, and the decisions we make with it, continuously as data and knowledge accumulate through the discovery process.
Communication and Team Integration
Simon Taylor: Absolutely. And it's interesting that you mentioned communication earlier. The modeling piece should really be embedded as part of the project team, shouldn't it? Communication between the modelers, the biology experts, and those involved towards the clinical end needs to be good, so that you have maximum confidence in the model being brought forward.
Dr. Emile Chen: Can I add one more thing to that? The decisions we're trying to make in early discovery are very different from those in late discovery or early clinical development. In late stage, the decision needs to be very precise: do I dose the drug at a hundred milligrams or 150 milligrams, once a day or twice a day, that kind of thing.
In early discovery, we're asking very broad, categorical questions: go or no-go? Do we turn left or turn right? Do we kill this compound series or not? So we can tune the required probability of success based on our tolerance for risk. You can set a very low bar. For example, you're not going to say, "I need an 80% probability of success before we progress this compound"; you can set it at 20%. That allows leeway for errors.
That threshold can be dialed up and down as the project team deems necessary or feasible.
Simon Taylor: So this is around managing parameter uncertainty, for example, and exploring its impact.
Dr. Emile Chen: Exactly. And the other thing is that we don't ask the project team to trust us a hundred percent; we always know we could be wrong. There's a paradigm called explore versus exploit, which in a sense means that you trust me, the model prediction, 70% of the time and mistrust me 30% of the time as you progress compounds.
This is an iterative process. Each cycle, you spend 70% of your resources believing the model to be right, and you still invest 30% of your resources hedging against the possibility that the model is wrong. You always progress some compounds in both categories, and you generate data for each.
And each time you iterate, you test and revise the model based on that data. This is called explore versus exploit, and it means we don't accidentally get trapped in a corner because we overly trusted the model. I wouldn't trust even my own model a hundred percent, for sure.
And if you have multiple competing biological hypotheses, then we build multiple models, and when humans make decisions on top of them, you hedge between the different scenarios, between each hypothesis being right or wrong.
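The 70/30 explore-versus-exploit split Emile describes can be sketched as a simple compound-selection rule. The compound names, scores, and function below are hypothetical illustrations, not an actual project workflow: most of the progression budget goes to the model's top-ranked compounds (exploit), while the remainder is drawn at random from the rest to hedge against the model being wrong (explore).

```python
import random

def select_compounds(candidates, model_score, n_progress, exploit_frac=0.7, seed=0):
    """Split a progression budget: exploit the model's top-ranked compounds,
    but explore a random sample of the rest to hedge against model error."""
    rng = random.Random(seed)
    n_exploit = round(n_progress * exploit_frac)   # trust the model ~70% of the time
    ranked = sorted(candidates, key=model_score, reverse=True)
    exploit = ranked[:n_exploit]                   # model says "progress these"
    # Hedge: random picks from the compounds the model ranked lower
    explore = rng.sample(ranked[n_exploit:], n_progress - n_exploit)
    return exploit, explore

# Hypothetical example: ten compounds with made-up model scores in [0, 1).
compounds = [f"CMP-{i:03d}" for i in range(10)]
scores = {c: random.Random(i).random() for i, c in enumerate(compounds)}
exploit, explore = select_compounds(compounds, scores.get, n_progress=4)
```

Each iteration, both groups generate data; the model is then revised against those results, and the split can be re-tuned as the team's risk tolerance changes.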
Dr. Chen’s Upcoming Book
Simon Taylor: Thank you, Emile. Well, we've covered a lot of ground today on how this approach, considering the dynamics of the biology much earlier in a project, can open up opportunities for optimization and progression and provide strategic direction to the project. It's been fascinating to explore this with you in more detail.
I also know that since leaving GSK you've been hard at work authoring a book, so we may as well give it some promotion. I understand it becomes available around the summer of this year: a practical guide for pharmaceutical scientists on the use of PK and PD. Maybe you can tell us a little more about your approach to writing this book and how it differs from other books on a similar subject.
Dr. Emile Chen: Well, thank you. First of all, I can just say I'm so glad I'm finished. It took me six years to get to this point, and I'm just so glad I don't have to wake up every morning and say, "Oh, I've got to go write a book." There are two reasons I ended up writing it, because there are certainly a lot of PKPD books out there on the market already.
First, if you scan the bookshelves, most of them are written for clinical or late-discovery applications; very few are written for early drug discovery as I've described it with you today in our podcast: how do you apply PKPD in early lead optimization and hit-to-lead, and what do you have to think about when you select a target, that kind of thing. So one reason is that I feel drug discovery could use a good PKPD book.
Second, a lot of PKPD books focus on the principles of PKPD science, which is of course extremely important, but what you see less often is how to apply those principles to everyday decisions, particularly in early discovery. Yes, there are examples of that, but the question is: how do you use these concepts to solve the problem I'm facing today, choosing which of two compounds to progress?
So what I try to do is introduce the PKPD concepts briefly, referencing other wonderful books and review articles on them, but focus more on the everyday problems a discovery scientist may face, using examples to showcase how those questions get solved. Those, I believe, are the two most important ways my book differs from others. A last point, which may not be a distinction, is that I wrote it with the view that PKPD should be shared knowledge across every discipline in discovery: medicinal chemistry, biology, pharmacology, toxicology.
So it's not just for modelers, not just for pharmacokineticists, not just for DMPK or clinical PK people. We discover a new drug as a village, as a team, and everybody needs to be able to speak a common language. You don't need to be an expert in everybody's field, but you need to know enough about each other's fields, and I try to serve that purpose from the PKPD point of view.
Closing Remarks
Simon Taylor: Oh, thanks, Emile. I look forward to seeing it later this year. Many thanks for joining me today; we'll close the podcast now. It's really been a fascinating and insightful discussion on PKPD applied to early drug discovery.
Dr. Emile Chen: Thank you for the opportunity. I always enjoy talking about it.
Simon Taylor: Thank you all for listening to this episode of Pharmaron's DMPK Insights series. We'd like to remind you that our DMPK webinar series is also available on demand, and it covers a variety of key questions related to DMPK science in drug discovery and development. Stay tuned for more podcasts in our Pharmaron DMPK Insights series. Thank you and bye for now.
Our Moderator:
Simon Taylor – Vice President, Drug Discovery at Pharmaron
Simon Taylor is Vice President of Drug Discovery and is based in Hoddesdon, UK. With over 27 years of industry experience, he is responsible for DMPK/ADME and PKPD strategy, including human extrapolation and PBPK modelling and simulation, for Pharmaron's integrated drug discovery projects from early discovery through to IND submission. Before Pharmaron, Simon worked at GSK for 20 years, leading DMPK and Quantitative Pharmacology teams and projects from the Hit Identification stage through to the clinic. He has worked across respiratory, inflammation, oncology, and cardiovascular therapy areas with drugs of varying routes of administration.
Simon has a BSc in Pharmacology from the University of Leeds and an MSc in Model Based Drug Development from the University of Manchester. He has co-authored over 30 scientific publications in the literature.
Our Speakers:
Dr. Emile Chen – Formerly Director, Modeling and Translational Biology at GlaxoSmithKline
Dr. Emile Chen has thirty years of industrial experience divided between early discovery, involving lead optimization and candidate selection, and late-stage development, including authoring and reviewing regulatory documentation and NDA submissions. Until March 2024, he was in the System Modeling and Translational Biology group, using PBPK, mechanistic PKPD modeling, QSP, and machine learning techniques to solve project questions and thereby enhance scientific productivity.
Emile received his undergraduate degree from the University of California, Los Angeles, and his PhD from Northwestern University in the field of Biomedical Engineering, specializing in the development of mathematical models for information processing in the brain. He began his pharmaceutical career at Hoffmann-La Roche in 1993, following a postdoctoral fellowship at the University of California, San Francisco. He joined GlaxoSmithKline in 1996. Over the years, he has led ADME and PK groups at various times, supporting both early discovery and late development DMPK efforts. More recently, recognizing the current challenge of improving R&D productivity in the pharmaceutical industry, Emile is focused on leading efforts to utilize innovative mathematical modeling and simulation methods to help reduce attrition while enhancing the ability to predict efficacy and safety in humans and support portfolio investment decisions. For the past 10+ years, he has also designed and taught a series of interactive workshops that promote the use of kinetic thinking and mathematical modeling to integrate preclinical and clinical information to aid decision-making during drug discovery and development. The workshops are offered several times each year, both internally and externally.