Daphne Koller is an AI pioneer, MacArthur fellow, member of the National Academy of Sciences and the founder and CEO of the drug discovery and development company insitro. She'll talk about how attitudes surrounding AI have evolved over her multi-decade career and what's ahead - including how technology is reshaping drug discovery, paving the way for more targeted treatments for the patients who can benefit most. But maximizing AI-powered innovation will depend on better investments in data collection, aggregation and quality - and on navigating hype cycles that can distract from real impact.
This academic-turned-entrepreneur will also share how founding insitro (and a previous company, Coursera) helped her expand her leadership and management skills, all while driving home the importance of shaping a company culture. At insitro, this focus on building a culture that works for the company's unique needs led to a special 'helix'-inspired structure that helps discovery biologists, automation engineers and others on the company's cross-functional teams keep communication flowing, solve problems and prevent the silos that can hold true innovation back.
Podcast transcript
Linda Lacina, Meet The Leader Welcome to Meet the Leader, a podcast where top leaders share how they're tackling the world's toughest challenges.
In today's episode, we talk to Daphne Koller. She is an AI pioneer who is also the founder and CEO of insitro.
She'll talk about how AI is driving drug discovery and development there and the big risks and opportunities AI will bring that we should pay attention to right now.
Subscribe to Meet the Leader on Apple, Spotify and wherever you get your favorite podcasts. And don't forget to rate and review us. I'm Linda Lacina with the World Economic Forum and this is Meet the Leader.
Daphne Koller, insitro We're in the midst of another AI hype cycle. And in this case, I think we're in a different situation because the field has actually delivered tremendous amounts of value, tremendous amounts of impact.
But I still worry that the hype sometimes still exceeds even the incredible contribution that has happened. And if we're not careful, we will find ourselves in another AI winter situation.
Linda Lacina, Meet The Leader Daphne Koller is a computer scientist who has been working with AI for a staggering three decades. In fact, she was the first machine learning hire at Stanford University's computer science department. This was a time when the AI community was tiny, just a few hundred and then just a few thousand people strong.
Her research dug into things few people were studying at the time, things like computational medicine and computational biology. She'd go on to win a MacArthur genius grant at just 36.
Since those early days, she has seen machine learning do things she didn't think she'd see in her lifetime. Data sets in biology and health have also improved, giving technologists more to work with.
Daphne founded insitro in 2018, all to help harness these new capabilities in machine learning for the life sciences, to discover and develop drugs faster and in a more targeted way than has ever been possible before.
AI for Scientific Discovery was named one of our top ten technologies in this year's flagship annual report, and I talked with her about the risks and opportunities it can bring.
We also talked about how she has changed as a leader. insitro isn't the first company she's founded. She also founded online learning platform Coursera, and she shared what she needed to learn as she transitioned from academia.
It's a wide-ranging conversation with one of our day's pioneering technologists. We get started with the opportunity that data-driven medicine presents.
Daphne Koller, insitro One of the big impetuses for the development of the AI technology that we see today is the incredible amount of data that are available in terms of language and images and so on. That data is not really available very much in the life sciences, which is why we've made an investment in generating data at scale. But with that investment, I think the opportunities are actually unlimited. In our work, we see AI being able to identify patterns in what's called phenotypic data, which is when you measure a human in different ways using different types of assays. For example, you can do imaging on a person. You can take blood and measure what's in the blood. And there's a tremendous amount of information content in those data that a human eye just simply cannot perceive.
And we find that with the AI technology that we have available to us today, when you feed it that data, it can see things and understand underlying patterns of disease, underlying causal mechanisms of disease and so on, and really uncover novel ways in which one can intervene in disease and potentially slow it or even revert it.
Linda Lacina, Meet The Leader And with this, if it were scaled further, maybe in five years, what would be different? What would be the marked change that we would see?
Daphne Koller, insitro So drug discovery is a slow business, partly because it's a regulated industry and for good reason. Because when you put a medicine in a person, you want to make sure you don't kill them. First, do no harm. And so the process by which you first make sure that a medicine is safe and then you figure out the appropriate dosage, and so on, it takes a little bit of time, which is why five years is a little bit short. But I would say that in five years we will certainly see medicines that have a significant AI contribution to the discovery of the medicine being deployed, certainly in clinical development and possibly even in people. I would expect that to happen.
I think in ten years -- which is a more interesting timeframe -- we will see much more of the advent of what I like to think of as data-driven medicine, in which currently you have a patient coming in and a doctor basically either deploys some intuition about what drug the person should get, or there is like a very rigid protocol. "This procedure does X, and you do Y" with relatively little dependence on the individual factors that are specific to that patient. And I think that in a decade, we will have quantified enough patients and really been able to interrogate the data from those patients to the point that we are able to guide the treatment course of patients and develop new treatments for specific patient populations in a very deliberate way.
Linda Lacina, Meet The Leader Are there any specific case studies at insitro you can speak about? Examples of the potential and what you guys are digging in on?
Daphne Koller, insitro So let me give one that I think is perhaps most interpretable, which is the work that we're doing in cancer. So every patient who is diagnosed with a solid tumor gets a biopsy sample. The biopsy sample is stained and put under a microscope. And those are called H&E biopsy samples. This is probably the most abundantly collected data modality in clinical care, other than things like, you know, EKGs and such. And currently you give what is a multi-billion pixel image to a pathologist. And the pathologist summarizes it using three ordinal numbers on a scale of 1 to 3. That is a ridiculous amount of information that is left lying on the table.
In the work that we've been able to do, we've been able to show that you could look at one of these images and basically infer, for multiple genes in our genome, whether they're mutated and what the mutation is, whether they are overexpressed beyond their normal level of activity or under-expressed below their normal level of activity -- all of which gives you a tremendous amount of information about the mechanisms of the pathways that drive the cancer. And so right now we're able to tell you, for example, for a drug that is targeting a particular protein, whether that protein is even active in that patient. And you know what? If it's not, you probably should not be giving that drug to this patient. You probably should be giving them something else.
And so right now this information is not accessible to clinicians. And so they're prescribing blindly. And furthermore, because you don't have this information at scale for a large number of patients, you're not able to design drugs in an engineered way, in a systematic way that says these are the subpopulations that we see, and therefore we should be making this drug and not that drug, because it will benefit the largest number of patients.
And so that's something that we've been able to show in our work in oncology that I think is truly exciting and could potentially change the way in which oncology drugs are delivered to patients and avoid what is sometimes a multi-month, tortuous journey to find the right drug for a group of patients that do not have months.
Linda Lacina, Meet The Leader You talked a little bit about the opportunity of data-driven medicine. What are some of the risks that we should also keep in mind?
Daphne Koller, insitro I think that one of the risks is that the data that one collects in clinical care is often quite noisy and quite dirty. And so garbage in, garbage out is definitely a thing that is still true even in the day of, you know, modern machine learning. In fact, one might say it's even more pronounced, because machine learning that is more powerful can identify and home in on subtle artifacts just as well as it can on subtle signals. So I think that's a really important thing to think about.
I think that it's also important to realize that people are not all the same, and there are biases in how data are collected and recorded that can often be quite pernicious and, if they're not appropriately accounted for, can skew the way in which we deliver medicines to patients. We might be thinking that we're doing things in a data-driven way, but if we haven't appropriately accounted for all of the covariates, then we can make real mistakes.
I'll give you an example: you know, 50% of the world's population are women. Most of the data that are collected, and most of clinical development, is still very male-focused, because clinical development that involves women is harder: the menstrual cycle causes fluctuations and variations that have to be accounted for. So people follow the easy path. And as a consequence, a lot of drugs are just considerably less effective for women than they are for men. And so I think if you don't appropriately account and stratify for that, then you can make decisions that will actually be detrimental to patients.
Linda Lacina, Meet The Leader Do you see that there are certain blind spots for leaders of any stripe when it comes to just AI in general?
Daphne Koller, insitro I think probably the biggest blind spot is thinking that AI is this magic wand that you wave over a pile of data and magic will come out. And unfortunately, the input needs to be of high enough quality that the magic wand actually produces something of value. And so I think people are not investing nearly as much as they should in data collection, data aggregation, data quality in order to make sure that the AI has the right input to work on.
“People are not investing nearly as much as they should in data collection, data aggregation, data quality in order to make sure that the AI has the right input to work on.”
Linda Lacina, Meet The Leader And are there certain questions that businesses and leaders should be asking themselves to kind of make sure that they're on the right path, that they're in the right mindset, when they're going forth with that data?
Daphne Koller, insitro So the first thing that we do when we look at a pile of data is we do some simple statistical correlations just to see whether things that shouldn't be correlated actually are correlated with each other.
So for example, if data are collected in multiple batches and on multiple days, how correlated are certain outcome measurements with the day on which the data were collected, or the machine on which they were collected? A simple statistical test like that often reveals significant artifacts that one needs to correct for. And then maybe on the other side, demographics of patients, to basically see whether certain outcomes are correlated with race and age and gender, which they inevitably are. And so just making sure that you're appropriately accounting for those covariates -- those are some things that we do. So that's on the input side.
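To make the kind of check Koller describes concrete, here is a minimal sketch in Python of testing whether an outcome measurement is correlated with technical covariates such as collection day or instrument. It is only an illustration under assumed names: the file measurements.csv and the columns outcome, batch_day and instrument are hypothetical, not anything insitro uses.

```python
import pandas as pd
from scipy import stats

# Hypothetical dataset; assumed columns: outcome, batch_day, instrument.
df = pd.read_csv("measurements.csv")

# For each technical covariate, test whether the outcome differs across its groups.
for covariate in ["batch_day", "instrument"]:
    groups = [g["outcome"].values for _, g in df.groupby(covariate)]
    f_stat, p_value = stats.f_oneway(*groups)  # one-way ANOVA across days / machines
    print(f"{covariate}: F = {f_stat:.2f}, p = {p_value:.3g}")
    # A very small p-value means the measurement tracks how or when the data were
    # collected -- a batch effect to correct for before any model sees the data.
```

The same pattern can be repeated for demographic covariates such as age, sex or race to see which associations need to be accounted for rather than learned as shortcuts.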
On the output side, once you've trained the model, it's really important to make sure that your model generalizes in the right way. So oftentimes people just test the model on the data that they used to train the model. That is an absolute no-no in machine learning. And the less sophisticated people don't always think about keeping your test data entirely separate, and ideally from an entirely separate cohort. So, for example, if you're doing something for clinical care, you want to train potentially on the data from some number of hospitals and test on a completely different hospital, which will ensure that you have really come up with something that is truly general, as opposed to focusing on the specifics of the population that was used for training. Those are some of the metrics that I think people should be applying. And it's not just in healthcare, it's in every challenging AI application.
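As a sketch of the cohort-level hold-out she describes, the following hypothetical Python example trains on all hospitals except one and evaluates on the hospital the model never saw. The file patients.csv, the hospital and label columns, and the logistic-regression model are all assumptions chosen for illustration, not a description of any actual pipeline.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical dataset; assumed columns: hospital, label, plus numeric features.
df = pd.read_csv("patients.csv")
X = df.drop(columns=["hospital", "label"])
y = df["label"]
groups = df["hospital"]  # one group per hospital / cohort

# Train on all hospitals but one, then evaluate on the held-out hospital.
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    model = LogisticRegression(max_iter=1000).fit(X.iloc[train_idx], y.iloc[train_idx])
    auc = roc_auc_score(y.iloc[test_idx], model.predict_proba(X.iloc[test_idx])[:, 1])
    print(f"Held-out hospital {groups.iloc[test_idx].iloc[0]}: AUC = {auc:.3f}")
    # Performance that collapses on an unseen hospital suggests the model learned
    # site-specific artifacts rather than something that generalizes.
```

Grouping the split by cohort rather than by individual rows is what keeps the test truly separate, which is the point being made in the interview.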
Linda Lacina, Meet The Leader What keeps you up at night when it comes to knowing that this is the big buzzword? Everyone's talking about AI, regardless of how much they've read up on it, right? There's a level of sophistication you need to have about where you're getting this stuff and how you're using it, and that can upend the whole thing, especially when it comes to whether hospitals, patients and doctors trust these tools. Tell me a little bit about that.
Daphne Koller, insitro So in general, I have a deep aversion to hype cycles. And I think hype cycles, which have happened multiple times in the journey of this field, have been incredibly detrimental and caused several of what are called AI winters, when the field made extravagant promises that did not come true in the timelines that people made them for. And what that caused is disillusionment: "oh, this is all just a bunch of hooey, and I'm just going to go and put my money or my time into something else."
When I graduated from my PhD at Stanford, you couldn't say you were doing AI. It was considered too fringe, or as my teenage daughter likes to call it: "suss." And, you know, that was because we were just coming out of an AI winter that had been created by exactly that. You used to have to say you were doing cognitive computing or computational learning theory or anything that was not AI. Now, of course, we're in the midst of another AI hype cycle. And in this case, I think we're in a different situation because the field has actually delivered tremendous amounts of value, tremendous amounts of impact. But I still worry that the hype sometimes exceeds even the incredible contribution that has happened. And if we're not careful, we will find ourselves in another sort of AI winter situation. And that's going to be hugely detrimental to the field again. So I think people should be balanced in how they talk about the technology. Being optimistic, but still being balanced about where we are relative to the challenges.
“We're in the midst of another AI hype cycle. And in this case, I think we're in a different situation because the field has actually delivered tremendous amounts of value, tremendous amounts of impact. But I still worry that the hype sometimes still exceeds even the incredible contribution that has happened.”
Linda Lacina, Meet The Leader Are there tells that you're in an AI hype cycle, a way that people can recognize, "oh no, I think I've been taken over by a trend, by a fever," as opposed to maybe solving an actual problem?
Daphne Koller, insitro Yeah. So I think there are a number of those. So first of all, it's when everyone who last week was a data scientist is now an AI expert. They are not. It's a skill that one needs to learn. So you need to be watchful of that. It's also when the valuation of a company goes up by 30 or 40% when they're fundraising, simply by having an AI slide as slide four of their deck. Both of these are, I think, important tells. And I think it's important, as one encounters individuals or companies that are claiming to do AI and solve AI problems, to actually ask them: where does your data come from? What AI technologies do you deploy? What is the skill set and the experience in this space that your team has? You know, how long have they been working on this?
And you will find that there are a lot of organizations where, once you start kind of poking at it, there is very little substance underneath, whereas there are many that do have it. And so I think it's important to kind of winnow the wheat from the chaff as you're talking to organizations, because it's a bandwagon, and everyone's jumping on it.
Linda Lacina, Meet The Leader How do you think that you have changed as a leader in the course of your career? Is there something that you do now that you would not have done maybe at the beginning?
Daphne Koller, insitro I think the first one is recognizing that leadership and management are skills -- actually two different skills -- that you need to learn. Like many tech people, STEM people tend to grow up valuing the hard sciences, the hard technology, as the skills that are important. And then when you go into business for the first time, as I did when I left Stanford to found Coursera, you don't really appreciate the quote-unquote soft skills and just how important those are, and that you need to learn them and you get better at doing them by deliberate practice, by learning from others. And so I think that, to me, was an important realization: that it wasn't just about the hard skills. It was at least as much about the other skills: how to lead, how to manage, how to coach, how to grow people, how to give feedback, things like that.
I think the second one that was really profound to me was the importance of culture. When I started Coursera, I had just come out of academia. I'd been an academic my entire life. I'd never been at a company, much less built a company, and a relatively senior potential recruit asked me, you know, as the Coursera co-founder and co-CEO, what would you like the culture here to be? And I kind of internally scratched my head, saying, what is culture and why do you need it? You come to work, you do your work, you go home. I mean, what's culture? I think I learned how important that was in the Coursera days because of some things that we accidentally did right, and other things that we, by neglect, didn't do right, and that was quite profound.
And so at insitro, I took a very deliberate, very thoughtful view on building culture. And I think that has served us incredibly well, especially in a time of great change and great potential turmoil that results from the change, including the rapid growth of a company. Having a very deliberate mindset on culture is absolutely critical.
“Having a very deliberate mindset on culture is absolutely critical.”
Linda Lacina, Meet The Leader And within that culture, what are one or two things that you're really proud that you built in, so other people can learn? What are those things where you think, gosh, those are key?
Daphne Koller, insitro So, insitro is very much a cross-disciplinary company. We have everything from, you know, the life sciences -- we have discovery biologists, we have automation engineers, we have machine learning scientists, we have software engineers. We have such a huge breadth of disciplines, of people who normally do not talk to each other, cannot communicate with each other effectively -- not because either of them is a bad person. It's simply that the language, the jargon, is different, but also the mindsets are different. The mindset of a discovery scientist versus the mindset of an engineer: they just think about the world in very different ways.
And one of the things that we put in place from the very beginning is a set of expectations about how people interact with each other and engage with each other, in order to take that very disparate group of individuals and create a cohesive culture that creates true cross-disciplinary thinking and ideation. And what we've found consistently, because of this culture that we've put forward, is that it's not only that we come up with better solutions -- we do -- but we also come up with better problems, ones that people would not think about if they were working in a siloed approach. And so this culture that we've put in place has elements both of expectations about how people interact and of organizational structure. So for example, all of our projects are done via cross-functional teams that involve people from different disciplines working together.
So you can think of it as a version of a matrix or a helix, however you'd like to think about it, and that incurs a certain cost in terms of the need to coordinate across functions in ways that a standard hierarchical structure doesn't impose. But we felt strongly that if we had a hierarchical structure, with the corresponding silos, then we wouldn't build the kind of company that we wanted to build.
Linda Lacina, Meet The Leader You mentioned that there are both better solutions as well as, you know, interesting problems to tackle that you were able to discover. What's an example of one of those that kind of helps people understand, hey, this is where you could go with this?
Daphne Koller, insitro So I think, coming back to one of the examples that I gave earlier, the ability to take data from histopathology images, read out molecular covariates from them and then relate them to clinical outcomes is something that requires both the ability to understand what machine learning can do with images, which oftentimes is just like magic to a typical, you know, cancer biologist, but it also requires someone who can come in and say, you know, of all of the many things that you can read from these histology images, these are the ones that matter. So let's focus on reading those as opposed to the other things.
If you just had the machine learning scientists on the one side, they would probably say, "oh, look, I can read this and I can read this and I can read this," and maybe those wouldn't be the things that matter. And on the other hand, if you just had the life scientists, they probably wouldn't even imagine -- they wouldn't even imagine -- that you can read these kinds of signals from images. I mean, really, you can read gene activity levels just by looking at a bunch of fixed cells in histopathology images on a slide. I mean, how could you possibly do that? And it's really bringing those two groups of people together that gave us this "aha moment" of: this is a really important problem that technology can help solve.
Linda Lacina, Meet The Leader You are a woman in tech. There are now senior women in tech, in senior roles. What is a way that you coach, sponsor or support women to kind of stay in technology? What do you do that you swear by or that you think is a great practice?
Daphne Koller, insitro It is really hard to be a woman in tech, unfortunately, even today. I mean, when I started, it was typically the case that I was the only woman in the room. And I'm sorry to say that even today, I'm often, maybe not the only woman in the room, but there might be one or two others. And it does create a certain feeling of loneliness and being put on the spot, and a real sort of need to represent the gender, so to speak, and that makes it a little bit overwhelming at times.
One of the things that I would suggest to anyone -- not only a senior woman, but anyone who's an ally -- is to watch out for the many situations where women are put on the spot; not just women, but people of disadvantaged backgrounds. So, for example, it is very common for women, again, even today: there is a group of people around the table, ideating, and a woman says something and the conversation just slips on. And then five minutes later, a man says the exact same thing and everyone's like, "John, what a great idea." And if you're the woman who made that comment, there's no good solution for you, because you can say, hey, I said that, and you look like a, you know, credit-grabbing, kind of petty type of person, or you can let it go and never get credit for your ideas. Neither of these is great.
If you're in the room and you catch that, you can then say, "oh yeah, John, that's great. Thank you for amplifying Sally's idea from a few moments ago." And it is a non-aggressive way to highlight that Sally made that same comment. And I think that is something where all of us could be helpful to women: raising their confidence, raising their credibility and raising the recognition and credit they receive.
Linda Lacina, Meet The Leader Is there a piece of advice that you've received that you've just been grateful for?
Daphne Koller, insitro There are a number of them. There are many. But I would highlight one that I found useful, both in understanding myself and in understanding others -- for example, in coaching situations or even in interactions with anyone -- which is that oftentimes our areas for development are our strengths taken to extremes.
So, oftentimes when you have something that you're really good at, you end up overusing it, and that ends up being a minus. And that's certainly something that I found for myself in certain cases. For example, I've been considered very good at ideating and, you know, coming up with new ideas. When you overuse that and you're not careful in how you use it, especially if you're in a position of authority, every new idea somehow miraculously spawns a project that an entire team is working on, even if that was really not your intent. Your intent was just to say, "oh, this is a really interesting thing." And so, being very conscientious of what your strengths are and not overusing them.
And then also, the same is true for others. You sometimes see people and think, why are they doing that? That's just such a horrible thing. And then you realize that this is just their strength, but taken to extremes. It makes you more empathetic and more able to give them feedback that will actually be received in the right way.
Linda Lacina, Meet The Leader There are so many AI tools that people are using in their regular life. I think it'd be very interesting to know your perspective. Are there AI tools that you use that have surprised you at how practical they are at making your life better, faster, stronger?
Daphne Koller, insitro So I will give two. One, which dates back even before the latest generative AI sort of cycle that we're experiencing, is the power of machine translation. One of the barriers to communication with people from backgrounds different from your own is often that they speak a different language. And I found that even the previous generations of machine translation were already functional. You could get an email from someone in German, when you don't speak German, and be able to pretty much understand what it said, and then respond in English and have that translated into German. It was amazing. So that, to me, is a huge one.
I think there are a bunch of fun ones, like what you can do with images, both, you know, finding images and creating images. But maybe the last one that I would give is the ability of tools like the large language models -- ChatGPT and others -- to create and dialogue around complicated content in ways that are intelligible and meet you where you are. So if you want to have a conversation at the level of a high school student, you can do that. If you want to do it at the level of a, you know, elementary school student, you can do that. If you want to do it at the level of a PhD, you can do that, too. And it's the same content, but they're able to generate a dialogue that anyone can relate to. And I think that is going to be one of the most transformative things for everyone, from research to education, because it allows everyone to have an interactive experience around complicated stuff that is tailored to who they are. And I think that is one of the most important things that we can do for people.
Linda Lacina, Meet The Leader What do you think leaders should prioritize now?
Daphne Koller, insitro I think we're at a time of incredible technological opportunity, and that technological opportunity can be deployed in ways that are beneficial to humanity or not so beneficial. That is certainly the case if you're in industries that are the social-good industries, but even in any other industry, you can think about the downstream, secondary consequences of the actions that you take in deploying that technology, and really be mindful of how you can do it in a way that overall increases societal good. And when you're at times like this, where you are on this exponential curve, small actions can have large downstream consequences, because they get amplified over time. So I would encourage people to be really thoughtful in how they're deploying technology, so as to optimize for maximum benefit to humanity.
Linda Lacina, Meet The Leader That was Daphne Koller. Thanks so much to her. And thanks so much to you for listening.
To read our recent Top Ten Technologies report, go to the show notes of this episode. We'll make sure to have a link.
You can also check out my colleague's podcast, Radio Davos, which delves into all the tech that experts want us to pay attention to now, all at wef.ch/podcasts.
This episode of Meet the Leader was produced and presented by me, with Jere Johansson as editor, Juan Toran as studio engineer in Davos and Gareth Nolan driving studio production. That's it for now. I'm Linda Lacina with the World Economic Forum. Have a great day.