Our predictive genomics content hub offers insightful perspectives from industry experts on topics such as polygenic risk score applicability in the clinic, the pharmacogenomics market landscape, and host genetics for SARS-CoV-2 infection research, among others.

Predictive Genomics with FinnGen and the Taiwan Precision Medicine Initiative

Discoveries and translational opportunities arising from the FinnGen Project
Aarno Palotie, MD, PhD and Samuli Ripatti, PhD
Institute for Molecular Medicine Finland (FIMM), University of Helsinki, Finland; Broad Institute of MIT and Harvard, MA, US

The Taiwan Precision Medicine Initiative: Incorporating Genetics in Clinical Practice
Pui Yan Kwok, MD, PhD
Henry Bachrach Distinguished Professor at UCSF and Director of the Institute of Biomedical Sciences at Academia Sinica

View webinar ›

Predictive Genomics Discussion with Dr. Mark Daly and Dr. Ulrich Broeckel

Polygenic Risk Scores: Challenges for Individualized Prediction of Disease Risk

Dr. Andrew Peterson, CEO Broadwing Bio and Principal, SARGAM Consulting

View webinar ›

Common risk variants and complex disorders

Dr. David Whitcombe, Ariel Precision Medicine

View webinar ›

Investigating genetic diversity in the VA’s Million Veteran Program (MVP)

Dr. Saiju Pyarajan, Director, VA

View webinar ›

MADCaP: The First Large Pan-African GWAS Tackling Cancer Health Disparities in African Populations


Timothy Rebbeck, PhD 
Vincent L. Gregory, Jr. Professor of Cancer Prevention, Harvard TH Chan School of Public Health and Division of Population Sciences, Dana Farber Cancer Institute

View webinar ›

Polygenic Risk Scores - Value & Path into the Clinic

Learn how polygenic risk scores for predicting disease risk have broad implications for the future of health care in this Thermo Fisher Scientific sponsored discussion with Sir Peter Donnelly, founder of an unrivaled data platform for predicting the risk of complex diseases, and Dr. Jill Hagenkord, an expert in population health, patient engagement, and medical diligence for health technology.

Transcript of presentation

It's a great pleasure to be here. I just wanted to set the scene a little bit for the discussion that we'll have in a moment about polygenic risk scores, what I see as their huge potential, and some of the challenges in getting them into healthcare systems. Many of you will be aware of the background, but we've learned from about 15 years of human genetic studies that if you take any complex human disease, indeed any complex human trait, there are tens or hundreds of thousands of genetic variants which affect susceptibility to that trait, which change someone's risk of the trait. So many, many variants, but all with small effects. If you have an A rather than a C in this place in your genome, it might increase your risk of heart disease by one percent, and a G rather than a T here might increase it by two percent, and so on. So the idea's been around for a long time of combining those small effects within an individual, aggregating them. That's what's called a polygenic risk score. If we did that for lots of people, for everyone in the room, we'd end up with a distribution of scores. Most people would end up somewhere in the middle because they have some of these variants which bump their risk up a bit and some which bump their risk down a bit. Some people would end up at one end because they've got rather more of the variants which increase risk, and some at the other end because they've got rather more of the variants which decrease risk. So that idea has been around for a long time. But I think there's been a step change in about the last 12 months, for two different reasons. The first is that we now have large enough studies of the diseases themselves to do a good job of identifying which of the many variants across the genome contribute to risk. And the second is that we have a resource in UK Biobank which is large enough to allow us to measure the impact of these polygenic risk scores.
In principle, they should be helpful for segregating individuals within a population according to risk. So we can measure that.
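The aggregation described above can be sketched as a weighted sum of risk-allele counts. This is a minimal illustration, not the speaker's actual algorithm: the variant IDs, effect weights, and genotypes below are entirely made up for the example.

```python
# Sketch of a polygenic risk score as a weighted sum of risk-allele dosages.
# The variants and effect weights below are illustrative, not real GWAS results.

# Hypothetical per-variant effect weights from a GWAS meta-analysis.
weights = {"rs0001": 0.010, "rs0002": 0.020, "rs0003": -0.015}

# One individual's genotype: count of risk alleles (0, 1, or 2) at each variant.
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

def polygenic_risk_score(weights, genotype):
    """Aggregate many small per-variant effects into a single score."""
    return sum(beta * genotype.get(snp, 0) for snp, beta in weights.items())

score = polygenic_risk_score(weights, genotype)
print(score)  # 0.04
```

Computed across a whole cohort, these scores form the distribution the speaker describes, with most people near the middle and a few in either tail.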

So this is based on a polygenic risk score for coronary disease, for coronary artery disease. The background is that we in Genomics have data from what you could think of as 10,000 GWAS studies. We analyze all of that data together, focusing in this case on coronary disease. We decide which of the SNPs across the genome to include in our polygenic risk score and how much to weight them. Then we take those scores, we lock down the algorithm for the scores, we take that into a totally independent set of individuals in UK Biobank, and we use the health information in Biobank to learn about those individuals' disease incidence at different ages. The graph shows three curves. This is men: the red curve is the men in the test set who have polygenic risk scores in the top three percent, the blue curve is the middle 20 percent, and the green curve is the bottom three percent. What it shows, as one moves across the picture, is the increasing incidence of disease with age. You can see that for one of those men in the red group, their lifetime risk of coronary disease is about 40 percent. If you compare vertically, it speaks to relative risk: the men in the red group are threefold more likely than average to have coronary disease. And if you compare horizontally, a man in his early 40s in the red group has about the same risk of coronary disease as a typical man in his mid 50s. So people in the UK have talked about the idea of heart age, and it's a good, simple, informal way of getting across the idea that men in this group are at substantially increased risk. In this case, if you were such a man, you might think about adapting your lifestyle: exercising more, losing weight, stopping smoking if you're a smoker.
But there's also a potential clinical intervention through statins, which is a way of reducing heart disease risk.

The second one is breast cancer. So you'll be very aware that there are two genes, BRCA1 and BRCA2. A woman who has the wrong kind of mutation in either of those genes has a very increased risk of breast cancer, something like ten-fold the average, and a 70 or 80 percent lifetime risk. This curve is ignoring BRCA1 and BRCA2. It's doing the same thing: aggregating risk from many, many variants across the genome which each have small effects. Again, if you do that across many women, there'll be a distribution of these polygenic risk scores. The women in the top three percent, in this case for breast cancer, have almost a 30 percent lifetime risk of breast cancer. And again, if you look horizontally, a woman in her early 40s in the red group has about the same risk of breast cancer as a typical woman in her early to mid 50s. In the UK, and this is not atypical, breast cancer screening is offered via mammograms to all women at age 50. And I think it's hard to look at this picture and think that that's a sensible strategy. In any sane version of the world, we'd be targeting screening on these women earlier and potentially more frequently, and for these women, whose lifetime risk is only two or three percent in total, we might be targeting that screening less frequently. This kind of information changes the way you target screening. But also, think about two women at age 60, one in the green group and one in the red group. If they both had a mammogram and it came back with a positive result, then because the woman in the red group has a much higher underlying chance of having the disease, her positive result is much more likely to be a true positive, whereas for the woman in the green group, it's rather more likely to be a false positive. So this kind of information, because it changes baseline rates of disease, changes not just the way we should target screening, but also the way we interpret it.
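The true-positive versus false-positive point above is just Bayes' rule applied with different baseline risks. The sensitivity and specificity figures below are illustrative assumptions, not real mammography performance numbers; the sketch only shows how the same test result carries different weight in different risk groups.

```python
# How baseline risk changes the interpretation of a positive screening test.
# Sensitivity and specificity values here are illustrative assumptions.

def positive_predictive_value(prior, sensitivity=0.85, specificity=0.90):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Same test, different underlying risk (think green vs red PRS groups).
low_risk = positive_predictive_value(prior=0.01)
high_risk = positive_predictive_value(prior=0.10)
print(f"{low_risk:.2f} vs {high_risk:.2f}")  # 0.08 vs 0.49
```

With these assumed numbers, a positive result in the low-risk group is most likely a false positive, while in the high-risk group it is roughly an even bet, which is the interpretive shift the speaker describes.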

The final picture I want to show is prostate cancer, which in our hands is the one where the genetics is currently most powerful for stratifying risk. The men in the top three percent of risk here have something like a 40 percent chance of having been diagnosed with prostate cancer by age 70, compared to something of the order of 10 percent for the people in the middle group. Prostate cancer doesn't have a good screening tool. Prostate-specific antigen is sometimes used. It's a noisy measure, but the possibility apparent from this is of trying to combine polygenic risk scores with PSA measurements to do a much better job of interpreting and understanding risk. Or to put it another way, if you're a man in the green group and I'm a man in the red group and we both had borderline PSA readings, I should be much more worried in the red group than you should be in the green group, again because your underlying rate of disease is much lower. So that's three examples. In fact, you can calculate these polygenic risk scores across 15 or 20 diseases, as we have. And this is a picture of an individual. The guy in the picture is Matt Hancock. He's the secretary of state for health in the UK, so he's the cabinet minister responsible for running the National Health Service. And to his credit, he was keen on understanding how it felt as an individual to learn about your risk through polygenic risk scores. So he asked us to do this in his case, and then he spoke publicly about a couple of examples. This is him speaking about prostate cancer, where he learned that he was at something like a 50 percent increased risk compared to average. He wasn't aware of any family history, so it's given him something that he can be aware of as he gets older.

The answer is yes. It turns out that the polygenic risk score for coronary disease is effectively independent of the clinical risk scores, either the QRISK score used in the UK or the AHA score used in the US. In fact, it's almost independent of family history, and that's initially a bit surprising, because polygenic risk scores are about genetics and family history is about genetics, we think. So what's going on here? I think it's one of those things where, when you learn a surprising result, you retrain your intuition, which is a process I've been through. I think there are a couple of things. Firstly, family history, the way we use it clinically, is picking up some element, and this is a sort of obvious statement, some element of shared environment. And maybe that's rather larger than we thought. Also, family history is probably picking up the effects of rare variants with larger effects, which aren't being measured or captured by polygenic risk scores. But interestingly, polygenic risk scores, because they're largely independent of family history, add to that information as well. So you can ask the question: suppose we already estimate risk by one of these clinical tools, and I'll use the QRISK example from the UK, and then we think about using both the existing clinical tools plus genetics through polygenic risk scores. How does it change the results? So I'll give you a sense. This is a schematic, an extrapolation from real data in UK Biobank. In the UK, there are about 10 million people aged between 40 and 55, and I want you to think of that group here. These little stick figures each represent about one hundred thousand people. So the first question is, if we just use the existing clinical risk score in that group, it turns out that about 10 percent of the individuals, so 1 million of the 10 million, are above the clinical threshold for statin prescription in the UK.
Then you can say, what if I change and incorporate genetics as well as QRISK? The polygenic risk scores are independent of QRISK, so you should be able to do a better job by using both of them than by using just one of them. That's indeed the case. The first thing is that when you combine genetics, there are more individuals above the threshold, because you're doing a better job of estimating risk. And the second is that they're actually different people. So, to break it up: there were these 10 little stick figures, about a million people, who were above the threshold just using their clinical scores. Eight of those, or eight hundred thousand of them in my numerical example, are still above the threshold when you combine genetics. But there are two hundred thousand of them who are now below the threshold: they had unhelpful traditional risk factors but helpful genetics, which brought them down. And interestingly, there are almost half a million people who are above the threshold but who are currently invisible to us using just the clinical risk scores. So they are people for whom the clinical guidelines say we should be intervening and offering statins, but they're invisible to us, and we'd learn about them if we combined genetics and the traditional risk scores. And because the data is there in UK Biobank, you can ask: if I follow those individuals through time, is it the case that more of them in the group we've now moved into the high-risk group get disease than in the group taken out of the high-risk group? And indeed, that's the case. A higher proportion of the people who are now above the threshold have both coronary disease and cardiovascular disease than the proportion of the people who were above the threshold but who, when we do the more sophisticated risk estimation, have a lower lifetime risk and a lower 10-year risk.
So that's one example of combining genetics and non-genetic risk factors. And that's, I think, a natural way to think in general rather than just thinking about the genetics in its own right.
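The reclassification arithmetic in the schematic can be checked directly. The counts below are the approximate figures quoted in the talk (the talk says "almost half a million" newly above threshold, rounded here to 500,000 for the arithmetic).

```python
# Approximate reclassification figures from the talk's UK schematic
# (population aged 40-55, roughly 10 million people).
population = 10_000_000

above_clinical_only = 1_000_000   # ~10% above the QRISK threshold alone
stay_above = 800_000              # still above after adding genetics
move_below = 200_000              # helpful genetics brings them below
newly_above = 500_000             # "invisible" to clinical scores alone

# Everyone above the clinical-only threshold either stays above or moves below.
assert stay_above + move_below == above_clinical_only

# Total above threshold once genetics is combined with the clinical score.
combined_above = stay_above + newly_above
print(combined_above)  # 1300000
```

So under these rounded figures, combining genetics with QRISK moves the flagged group from about 1.0 million to about 1.3 million people, and a quarter of that combined group would have been missed by the clinical score alone.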

Polygenic risk scores tend to be more powerful when they are used in populations of the same ancestry as the individuals in whom the original genetic studies were undertaken. And most human genetic studies have been undertaken in people with European ancestries, so they tend to be more powerful in those groups. Some people say, well, you just can't use them for other groups. Actually, I think about it as an empirical question. The right question to ask is: what is the performance like in other groups, how much less powerful is it, and what are the ways we can think about addressing that? So I'll go back to the three examples I've shown you before. This is coronary disease. The picture I showed earlier was based on men with European ancestries. There aren't large numbers of individuals of non-European ancestries in UK Biobank, but there are some. They are preferentially from South Asia, and there are some East Asian and some Afro-Caribbean individuals as well. And if you estimate risk in those groups, these shadings, the uncertainty intervals, are wider because the numbers of individuals are smaller. But broadly, the performance is reasonably similar, or at least it's similar in the high-risk group for coronary disease. We do the same exercise for breast cancer. Again, there's more uncertainty, but the level of risk in the high-risk group is lower in the non-European ancestry individuals, and in prostate cancer, it's somewhere between the two. So what to do? Two things. The first is to flag the fact that there's absolutely an issue with using these tools currently in individuals from other ancestries. But the second is that it's an empirical question. The right question to ask is, for a given polygenic risk score algorithm and a particular disease, what's the performance in individuals of this ancestry, that ancestry, and so on.

I think the potential of polygenic risk scores is that they allow us to identify, early in life, individuals who are at high risk of particular diseases. And if we're able to do that, we can target interventions, so for coronary disease, either lifestyle changes or statin lipid-lowering medications, or in other cases, we can target screening. Critically, because they're based on common variants, to get the information needed to calculate polygenic risk scores you just need to run a genotyping array. You don't need to sequence individuals. So it's cheaper by a factor of something like twenty-fold, and it's something that starts to be accessible at a population scale. You can do one genotyping array and calculate polygenic risk scores for, say, 20 different diseases. And interestingly, although any one of you is probably not in the top three percent for heart disease, and probably not in the top three percent for type 2 diabetes, and if you're a woman, probably not in the top three percent for breast cancer, and so on, across 20 or so diseases you're likely to be in the top three percent for something. So one way of thinking about this is that it allows us as individuals and health care systems to work out which two or three diseases we happen to be at increased risk of because of the common variants we inherit. It's different from traditional clinical genetics in another way. These scores are risk factors. If you have a high polygenic risk score for coronary disease, it increases your risk of disease in the same way that having a high LDL cholesterol level increases your risk of disease. Neither is determinative, and in each case, you can do other things to mitigate the risk. So we should think about polygenic risk scores in that way. They're also much less correlated among relatives.
If you're in the top one percent of the polygenic risk score for a particular disease, then the chance that one of your kids or one of your siblings is in the top one percent is about 10 percent. They're at increased risk, but it's nothing like the situation where, if a woman has a BRCA mutation, all of her female first-degree relatives have a 50 percent chance of carrying the mutation, which then has a very significant effect. So I'd argue we should think about it much more like medical tests that doctors of different specialties and in primary care can think about. We don't need clinical geneticists delivering polygenic risk scores, and I don't think we need genetic counseling to feed them back to individuals.

[00:16:50] They're useful for risk prediction in all individuals, but currently they perform more powerfully in individuals of some ancestries than others. So that's absolutely something the field should be focused on, and something that we're focused on at Genomics. It'll get addressed in two different ways. One is improving the algorithms, coming up with clever methods that can make use of the data we do have available on other ancestries. The other is larger datasets. So I see that as a very significant but short-term effect. The final thing to say is that we've put a lot of effort into this within Genomics. My background in the academic world, as many of you will know, was statistical genetics. Vincent Plagnol, who leads for us in this area, and his team have got polygenic risk scores for 15 or 20 or so diseases, and in each case they're comparable to, and in most cases better than, those that have been published.

[00:17:42] I'll just finish by giving you a sense of where this is going in health care. That's what we're aiming at: using this in a way that makes a difference to clinical care. In the UK, there's very strong support from the government, including from the guy who went through this experience himself, Matt Hancock, the secretary of state for health. There has been a recent announcement of a large cohort in the UK of five million people. A key part of the thinking behind that cohort is the idea of getting genetic information through genotyping arrays, calculating polygenic risk scores, feeding that back to the individuals, and feeding it into the National Health Service, into the health care system. And there are a number of other parts of the National Health Service who are talking to us about doing polygenic risk scores in primary care in some instances, and in some other contexts as well. We'll be doing the polygenic risk scores in the five million cohort, the ADD cohort. We are in discussions with a number of healthcare systems in the US, again about early adoption and thinking about piloting this in a number of different ways. We can discuss it a bit more in the question and discussion which follows. But I now think strongly that this is the area. We've been working hard for 20 years, as you've heard, trying to use genetics to better understand human disease, common human diseases in particular, with a view to improving the way we develop medications and improving clinical care. I'm absolutely convinced that this is the area where genetics will have its biggest impact on health care. It covers all the common diseases, and it allows us to identify people earlier in life and then work out how to get them the right interventions or lifestyle changes, or how to target screening appropriately.

Panel discussion

Transcript of panel discussion

Peter Donnelly: I think there's an important issue that has to be solved disease by disease, and that's the question one gets asked: okay, so I'm a patient, or I'm a doctor and I know either my own or my patient's polygenic risk score for this disease. Then the question is, what next? What do I do differently as a result of that? To get this stuff into healthcare, we need to solve those questions disease by disease. We need to know what the next steps are. The case of coronary disease, I think, is one of the easier ones. In the UK, general practitioners have software sitting on their computers which combines all of those classical risk factors, cholesterol, age, sex and so on, and comes up with a 10-year risk of coronary disease. The GP looks at it, and if it's above the guidelines, he talks to the patient about statins. So in that case, I think all that's needed is to change the software so that there's one more box into which you can enter a polygenic risk score if it's available, and then you just press the button in exactly the same way. It does a slightly different calculation, using the genetic information if it's there and not using it if it's not, comes up with a risk estimate over 10 years, and the GP interprets it in exactly the same way. So I think that's one case where the path to clinical usage is clear. In some of the cancers, breast cancer for example, one can think about targeting screening earlier. And then you have to do the calculations that say, well, if you changed the program so that you targeted the top few percent of women based on polygenic risk scores, at this age and this often, you could use the data that's available to work out how many cases of breast cancer you'd catch early, and so on. So I think we need to solve it disease by disease. We're thinking hard about that, and the National Health Service is starting to think hard about it.
In the UK, as I mentioned, there's this large five million cohort that is going ahead. That will involve polygenic risk scores, and it will force the system to work out how to incorporate them. But I think it's disease by disease, and a number of other pilots we have are doing the same sorts of things.

Jill Hagenkord: I mean, I think the things that matter are the same in both countries. A big, government-funded research project is a really excellent way to catalyze this process and get the data that we need. But in both countries, if you want to be legally on the market, you need to have analytical validity and then clinical validity about the meaning of the finding that you've gotten, and there are several polygenic risk scores that have gotten to the clinical validity stage. To get actual widespread adoption, whether it's in the UK or in the US, you need to come into professional society guidelines. You need NICE to say this is what you're supposed to do, or you need the appropriate professional society in the United States to issue practice guidelines around it. Until then, you know, you might get some practices that will pick it up, but if you're going to go widespread, you need professional society guidelines, and you need someone to pay for it, and that usually is tied to the professional society guideline part. And so to get the rest of the way there: you need analytical validity and clinical validity, then you need clinical utility and some kind of health economic model. At that point, you can do a large prospective study, either a randomized clinical trial or some kind of real-world-evidence-style observational study with the test being done in the wild. And it's at that point that the professional societies will consider adoption, and then the payers will consider paying. And that's when it will take off. But it's these large research initiatives that are going to catalyze and facilitate that.

Jill Hagenkord: Peter and I talked about that a lot, actually. The initiative that he's a part of in the UK is really important, because once you've got one indication for use with enough evidence that you can legally put it on the market, you can get either a large research project like the one in the UK or, you know, partnerships with some more forward-looking health systems who really believe this is going to be part of the future of health care and who will do a large study on their own population. But once you get in with one intended use, even if it's a very simple intended use, it gives you the opportunity to study all the data, because it's now a large, diverse group of people. You've got access to all of the ancillary health records, and you can actually be collecting, in real time, the evidence that you need for the additional intended uses.

I think the average time for a health care product in the United States to go from conception to widespread adoption is somewhere between 10 and 17 years. But at the same time, each one of these evidentiary steps serves a really critical function. And so, again, depending on your intended use, this could be easier or harder. Something that's done on the entire population in a screening context, for a disease that you don't have and may never get, you know, this is just even a risk for a disease, tends to get a lot of scrutiny. If you're screening an entire population for cancer, the cost of getting that wrong is significantly higher than if you've got an end-stage cancer patient who's failed three lines of therapy and you have an intended use that's focused on that very specific set; getting that wrong isn't going to hurt the general public as much. So those are the considerations. There really are reasons why you need that data for widespread adoption. But I do think the only way to accelerate this is to just accept that you actually do have to prove that it works, that your claim is true, that the intervention is safe and effective, and that the cost of doing it actually makes sense for that system to pay for it. Without that, doctors can't make the decisions and the payers can't make the decisions about whether or not to use it. But once you concede that you have to get to that data, then I think, again, it goes back to having these strategic relationships with forward-looking health systems, health systems who want to become what they're calling learning health systems.

Geisinger is probably the most well-known model of the learning health system. Once you get enough data about a new technology or test or app or whatever it is, where you actually have that minimum that I described and you're ready to go into a prospective trial of some kind, you put it in and you measure it in a real-world-evidence way. So it's almost like a living petri dish, is how they describe themselves at Geisinger. And it feeds back. So you implement a new screening program and then you measure it, and you make sure that it is actually providing the benefit you thought it was going to provide and isn't actually causing harm. And you can measure it in real time. In that way, once you get a system like that set up, you can actually push new technologies through a lot faster, without having to do those big, long randomized clinical trials. This is a kind of alternative to that. And then there are even payment methods that can be considered when you're at that level as well.

Peter Donnelly: It is arbitrary. If you go further into the tail, if you look at the top one percent, they are at even higher risk. If you look at the top five percent, then the average of the top five percent is less extreme than the average of the top three percent. In practice, you'd calculate the polygenic risk score for an individual, if they're your patient or if it's you, and you'd want to know whereabouts in the distribution that individual falls, and then think about that in terms of health care. So it's arbitrary in the sense that we needed to pick some level at which to show risk.

Jill Hagenkord: I mean, I think it's always good to understand that for any kind of test and health information that you're providing, especially if you're doing it on a population scale. But I do kind of agree with Peter's point that the polygenic risk score is a little bit more akin to a cholesterol level or an LDL cholesterol level. And I don't know that you need to explain the molecular structure of cholesterol and the mechanism of detection of the assay when you're talking to a patient about cholesterol level and risk for cardiovascular disease. I don't know that you have to unpack everything we know and don't know about genetics in order to have a conversation with a patient about the contribution of cholesterol or a polygenic risk score to the disease. But I think that's a hypothesis that should be tested.

Peter Donnelly: Yeah, I actually think it's a great question, and I think there are really interesting issues here. We don't know the answer to how individuals are going to react to this. Obviously there'll be a range of reactions. There's not great data, but compare two situations: in one, you say to someone, you know, it's a good thing to lose weight and exercise more. In the other, you say to someone, here's some specific information about you in the form of your genetic risk, and it's a good thing to lose weight and exercise more. There's some limited evidence in small studies that the behavior changes are different in the second case from the first, and we absolutely need to understand that. I think also the fact that these are risk factors means people need to think of them in a slightly more nuanced way than "you are going to get sick or not get sick." And I think it's interesting that if, as is likely, it's used in one disease first and then another disease and then another, by the time we get to the stage of 20 diseases, we'll all be at high risk for something. And then we might think about it differently. We might think, okay, in my case, I know I need to be careful about this and this and pay particular attention to that, and so does my health care system. So I think you're right that it would be great for us to understand those things better, and we'll do that either through research studies or through these kinds of early pilot experiments, by getting that kind of information.

Jill Hagenkord: I feel like I'm the speaker that has this one answer to every question - like I just memorized one thing and will say it over and over again. No, it really does just come down to the evidence: prove that it works, prove that the intervention that it drives is safe and effective, and prove that it does so in a way that is cost effective. Right. It doesn't have to be cost saving. Most new devices or tests or whatever that improve health actually are not cost saving. It costs more to get better health, and the payers all accept that. But you do need to have at least that much data for them to make a decision about whether or not they're going to cover it.

Peter Donnelly: So I think many of those things apply in principle in the NHS as well, but there's a slight difference. There's kind of a parallel issue, as many of you will be aware. The UK has, as part of the NHS, a large program to do whole genome sequencing on a set of individuals who have rare conditions, usually kids, thought to be genetic. That whole genome sequencing was kind of driven by a government initiative. No one actually in those cases did any of the work to figure out whether it was cost effective or what the consequences of the interventions were. It was, I think, to quote the then prime minister, a sense that that's where the world is going to be going. And the British government at the time took the view that if that's where the world is going to be going, we've got a choice between being out in front, leading, or waiting till other people have figured it out and then following. And the government at the time took the view to lead. So I think there are some differences: in a way, you have a healthcare system that's single payer and run, as it were, by the government. They have the opportunity of saying this stuff is strategically important, so we'll do it and we'll learn as we're doing it, because we think that's where the world's going. So in that sense, I think there could be differences.

Jill Hagenkord: This is a research protocol, right? I mean, that's where they're actually going to generate the data about all of you. So kind of like what do we have in the United States? All of Us.

Peter Donnelly: Yes, it is research, but it's about real people and their real clinical care. And the Genomics England program was also real people. To some extent, it was replacing work that was being done in the clinical genetic services in the UK.

Peter Donnelly: Good question. Let me respond at a couple of levels. Where the definition of the condition has changed, if one has the right kind of data - as Jill said, if you're already operating in a large healthcare system with electronic medical records - it's not trivial, but it should in principle be relatively straightforward to go back in, readjust the phenotype definition (you'll have different individuals), and then redo the work that led to the polygenic risk score, and you can get a polygenic risk score for the newly defined phenotype. So provided the data is there, that's in principle straightforward, and in the world that we're heading towards 5 to 10 years away, where this kind of information will be part of electronic medical records, that's much easier. Where the underlying rate of disease incidence has changed, then polygenic risk scores might still be helpful because they're speaking to relative risks. But then you can ask a different question, which is: if the underlying rate is changing, is there something different going on, and is the relative risk still useful? Again, that's checkable with the data in the new world of higher incidence. So there's at least a chance in that case that, because polygenic risk scores speak to relative risks, if it's just higher baselines they'll still be good at distinguishing the people who are at relatively higher risk than average.
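Peter's point that a PRS speaks to relative rather than absolute risk can be made concrete with a small sketch. The baseline incidences and the threefold relative risk below are invented numbers for illustration, not real epidemiology.

```python
# If a PRS assigns someone a relative risk versus the population average,
# the same relative risk can be reapplied when the baseline incidence shifts,
# which is why the score may stay useful even as underlying rates change.

def absolute_risk(baseline_incidence, relative_risk):
    """Approximate absolute risk as baseline incidence times relative risk."""
    return baseline_incidence * relative_risk

rr_top_group = 3.0  # hypothetical threefold relative risk for a high-PRS group

print(round(absolute_risk(0.10, rr_top_group), 2))  # old baseline 10% -> 30% absolute
print(round(absolute_risk(0.15, rr_top_group), 2))  # new baseline 15% -> 45% absolute
```

The checkable question Peter raises is exactly whether this simple rescaling still holds once incidence changes, which is something the data can answer.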

Peter Donnelly: Yeah, I don't think I know very much about autism. Stroke is a disease for which polygenic risk scores are not especially predictive currently. It's a strange thing, but strokes, I think, are the third or fourth highest source of mortality and morbidity globally. And yet stroke has been remarkably understudied genetically in GWAS terms. So first of all, there aren't baseline studies. Secondly, we happen to be involved in one of the early, large studies of stroke. People who study stroke know that there are different subtypes of stroke - I think this is what the question is leading to. And we were able to show that the genetic architecture, at least at that stage in terms of the more significant GWAS variants, was quite different for the different subtypes of stroke. So that's a condition where you really do know in advance there's heterogeneity of the phenotype and different sorts of drivers. So probably you'd want a different PRS for large vessel stroke, another one for small vessel stroke, another one for hemorrhagic stroke and so on. And we just don't have the right data to do that. Autism I'm less familiar with. There's been recent GWAS work in autism, but also a lot of effort put into finding the effects of rare variants in autism.

Jill Hagenkord: It's going to make it harder to generate the data and have it be compelling that this test can reliably and reproducibly predict the top three percent or the bottom three percent. And so it's going to make it harder to make the clinical utility and health economic arguments. But I agree with Peter. We just don't understand enough about some of these diseases yet to bucket them correctly, and time and science will help that.

Peter Donnelly: At the moment, we're in a world where you have to construct PRSs from GWAS studies, in effect, and we have one large resource to assess their performance: UK Biobank. Moving forward - I'd like to hope five years, but maybe 10 years - there's a path, I think, where most people in healthcare systems in the developed world, and hopefully many people in the developing world, will have been genotyped. That information will be part of the system. So the potential in the data in that context is massive compared to where we are now, and the ability, as Jill was saying, to use that to get much, much better PRSs for the things we're currently thinking about is striking. But also, we'll be able to get PRSs for things that we haven't been thinking about. Let me give you one set of examples. I talked a little bit about this earlier. We know the drug development industry isn't doing great at the moment: 90 percent of drug targets fail when they go into clinical trials. Even successful drugs have the property that they typically work on a subset of patients, not all of them, and in most cases we have no way of knowing in advance which patients they'll work on. Genetics is often part of that story. But the current paradigm is to try and find the SNP which determines whether someone responds to this drug or doesn't respond to that drug. That's kind of touchingly reminiscent of what common disease genetics - heart disease genetics - was doing 20 years ago, when people were trying to find the SNP which determined whether people would get heart disease or not, or the SNP which determined whether they'd get type two diabetes. We know - and I started my talk by saying this - that for any complex trait in humans, there will be thousands or tens of thousands of variants which affect the trait. In most cases, whether we respond, and how much we respond, to a drug is likely to be a complex trait.
So on top of all the diseases we're talking about, I think there'll be a world in which there'll be polygenic risk scores which are one of the factors used to decide. I mean, they won't be determinative. They won't say this person will respond and that one won't. But this person is much more likely to respond to drug one, and this person is much more likely to respond to drug three. So we'll be able to offer the drugs in the right kind of order. It's an example of the sorts of things where these ideas around polygenic risk scores will be helpful, which we'll be able to do inside health care systems where we get the right data. And I think that'll be the real lift-off point.
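The drug-ordering idea can be sketched in a few lines, under an entirely hypothetical setup: suppose each drug had its own polygenic score for predicted response, and a patient's scores were used to rank which drug to try first. The drug names and score values below are invented.

```python
# Rank candidate drugs by a per-drug polygenic response score, highest first.
# The scores would come from per-drug GWAS of drug response, which mostly
# don't exist yet - this only illustrates how they might be used.

def rank_drugs_by_response(response_scores):
    """response_scores: {drug_name: predicted-response score}; returns ranked names."""
    return sorted(response_scores, key=response_scores.get, reverse=True)

patient = {"drug_one": 0.8, "drug_two": 0.2, "drug_three": 0.5}
print(rank_drugs_by_response(patient))  # ['drug_one', 'drug_three', 'drug_two']
```

As Peter stresses, such scores would be one input among several, not a determinative answer about who responds.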


Polygenic Risk Score Analysis for Alzheimer's Disease Risk


Speaker: Richard Pither, PhD, CEO, Cytox Ltd

The polygenic risk score (PRS) is gaining researchers' attention for identifying individual genetic risk and predicting disease risk in Alzheimer's disease (AD) at both the early and pre-symptomatic stages. genoSCORE™ combines two of Cytox's proprietary technologies: variaTECT™, a single nucleotide polymorphism (SNP) profiling array built on the Axiom platform from Thermo Fisher Scientific, and SNPfitR™, analytical and interpretive software whose algorithms produce a PRS identifying an individual's genetic risk of developing AD from blood- or saliva-extracted DNA. This can help in patient stratification for clinical trial research to develop targeted treatments for AD.

View webinar ›

Preemptive PGx at Population-Scale: Implementation in Pittsburgh

Speaker: Philip Empey, PharmD, PhD
Associate Director, Institute for Precision Medicine
Associate Professor, Pharmacy & Therapeutics

View webinar ›

How comprehensive pharmacogenomics accelerates personalized therapies and reduces health disparities for minorities

Speaker: Ulrich Broeckel, MD, Founder and CEO

RPRD Diagnostics

View webinar ›

Implementing a world-class clinical pharmacogenomics service

Speaker: Dr. Hyun Kim, Clinical Pharmacist, Clinical Pharmacogenomics Service, Boston Children's Hospital

View webinar ›

Accurate Pharmacogenomics (PGx) testing – A Market Overview


This white paper summarizes the pharmacogenomics (PGx) landscape addressing regulations, global population coverage, complexity of PGx markers, and the importance of using the right technology. Some of the important statistics covered include:

  • Total number and percent of trials using PGx for patient selection year over year from 2003 through 2018
  • Effect of PGx markers on completed trial outcomes
  • Breakdown of PGx trials by phase
  • Top 20 sponsors of active trials using PGx biomarkers

Download white paper ›

Host Genetics with the New Axiom Human Genotyping SARS-CoV-2 Research Array


Speaker: Shantanu Kaushikkar, Director Product Marketing, Microarray Genotyping, Thermo Fisher Scientific

A major challenge for healthcare providers and geneticists is to learn how SARS-CoV-2, the coronavirus that causes COVID-19, interacts with the human genome and to stratify populations in order to understand disease susceptibility, severity, and outcomes.

For such studies in predictive genomics, Thermo Fisher Scientific is offering the Applied Biosystems Axiom Human Genotyping SARS-CoV-2 Research Array with a COVID-19 research module covering various genes and pathways, such as ACE2, TMPRSS2, and the NOTCH pathway, that are implicated in SARS-CoV-2 host genetics. In addition, the array has a large GWAS module (>820,000 markers) that can serve to provide information on druggable targets.

View webinar ›

Polygenic Risk Scores - Value & Path into the Clinic

Learn how our understanding of polygenic risk scores for predicting disease risk has broad implications for the future of health care, through a Thermo Fisher Scientific-sponsored discussion with Sir Peter Donnelly, founder of an unrivaled data platform for predicting the risk of complex diseases, and Dr. Jill Hagenkord, an expert in population health, patient engagement, and medical diligence for health technology.

Transcript of presentation

It's a great pleasure to be here. I just wanted to set the scene a little bit for the discussion that we'll have in a moment about polygenic risk scores, what I see as their huge potential, and some of the challenges in getting them into healthcare systems. Many of you will be aware of the background, but we've learned from about 15 years of human genetic studies that if you take any complex human disease - indeed, if you take any complex human trait - there are tens or hundreds of thousands of genetic variants which affect susceptibility to that trait, which change someone's risk of the trait. So many, many, many variants, but all with small effects. If you have an A rather than a C in this place in your genome, it might increase your risk of heart disease by one percent, and a G rather than a T here might increase it by two percent, and so on. So the idea's been around for a long time of combining those small effects within an individual, aggregating them. And that's what's called a polygenic risk score. If we did that for lots of people - for everyone in the room - we'd end up with a distribution of scores. Most people would end up somewhere in the middle, because they have some of these variants which bump their risk up a bit and some which bump their risk down a bit. Some people would end up at one end because they've got rather more of the variants which increase risk, and some at the other end because they've got rather more of the variants which decrease risk. So that idea has been around for a long time. But I think there's been a step change in about the last 12 months, for two different reasons. The first is that we now have large enough studies of the diseases themselves to do a good job of identifying which of the many variants across the genome contribute to risk. That's the first point. And the second is that we have a resource in UK Biobank which is large enough to allow us to measure the impact of these polygenic risk scores.
In principle, they should be helpful in stratifying individuals within a population according to risk. So we can measure that.
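The aggregation described above can be sketched in a few lines of code. This is a simplified illustration, not the speaker's actual method: the number of variants, the risk-allele counts, and the effect sizes are all invented for the example.

```python
# A polygenic risk score (PRS), in its simplest form, is a weighted sum:
# for each variant, count the risk alleles a person carries (0, 1, or 2)
# and weight that count by the variant's estimated effect size from GWAS.

def polygenic_risk_score(dosages, weights):
    """dosages: risk-allele counts per variant; weights: per-variant effect sizes."""
    if len(dosages) != len(weights):
        raise ValueError("need one dosage per variant")
    return sum(d * w for d, w in zip(dosages, weights))

# Hypothetical per-variant effect sizes (e.g., log odds ratios).
weights = [0.01, 0.02, -0.015]

# Two hypothetical individuals' genotypes at those three variants.
person_a = [2, 1, 0]  # more risk-increasing alleles -> higher score
person_b = [0, 1, 2]  # more risk-decreasing alleles -> lower score

print(round(polygenic_risk_score(person_a, weights), 3))  # 0.04
print(round(polygenic_risk_score(person_b, weights), 3))  # -0.01
```

Summing many small positive and negative contributions like this is what produces the bell-shaped distribution of scores across a population that the talk describes.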

So this is based on a polygenic risk score for coronary disease - for coronary artery disease. The background is that we have data from what you could think of as 10,000 GWAS studies. We analyze all of that data together, focusing in this case on coronary disease. We decide which of the SNPs across the genome to include in our polygenic risk score and how much to weight them. Then we take those scores - we lock down the algorithm for the scores - and we take that into a totally independent set of individuals in UK Biobank, and we use the health information in Biobank to learn, for those individuals, about their disease incidence at different ages. The graph shows three curves. The red curve - this is men - shows the men in the test set who have polygenic risk scores in the top three percent. And what it shows, as one moves across the picture, is the increasing incidence of disease with age. You can see that for one of those men in the red group, their lifetime risk of coronary disease is about 40 percent. The blue curve is the kind of middle 20 percent, and the green curve is the bottom three percent. If you compare vertically, it speaks to relative risk: the men in the red group are threefold more likely than average to have coronary disease. And if you compare horizontally, if you look at a man in his early 40s in the red group, his risk of coronary disease is about the same as a typical man's in his mid 50s. People in the UK have talked about the idea of heart age, and it's a good, simple, and informal way of getting across the idea that men in this group are at substantially increased risk. In this case, if you were such a man, you might think about adapting your lifestyle: exercising more, losing weight, stopping smoking if you're a smoker.
But there's also a potential clinical intervention through statins, which is a way of reducing heart disease risk.

The second one is breast cancer. You'll be very aware that there are two genes, BRCA1 and BRCA2. A woman who has the wrong kind of mutation in either of those genes has a very increased risk of breast cancer - something like ten-fold the average, and a 70 or 80 percent lifetime risk. This curve is ignoring BRCA1 and BRCA2. It's doing the same thing: aggregating risk from many, many, many variants across the genome, which each have small effects. Again, if you do that across many women, there'll be a distribution of these polygenic risk scores. The women in the top three percent, in this case for breast cancer, have almost a 30 percent lifetime risk of breast cancer. And again, if you look horizontally, a woman in her early 40s in the red group has about the same risk of breast cancer as a typical woman in her early to mid 50s. In the UK - and this is not atypical - breast cancer screening is offered via mammograms to all women at age 50. And I think it's hard to look at this picture and think that that's a sensible strategy. In any sane version of the world, we'd be targeting screening on these high-risk women earlier and potentially more frequently. And for these women whose lifetime risk is only two or three percent in total, we might be targeting that screening less frequently. This kind of information changes the way you target screening. But also, think about two women at age 60, one in the green group and one in the red group. If they both had a mammogram and it came back with a positive result, then because the woman in the red group has a much higher underlying chance of having the disease, her positive result is much more likely to be a true positive, whereas for the woman in the green group it's rather more likely to be a false positive. So this kind of information, because it changes baseline rates of disease, changes not just the way we should target screening, but also the way we interpret it.
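The true-positive point at the end of this example is Bayes' rule: with the same test, the positive predictive value rises with the underlying (prior) risk. The sensitivity and specificity below are invented values for illustration, not real mammography figures.

```python
# P(disease | positive test) for two women with different baseline risks.

def positive_predictive_value(prior, sensitivity, specificity):
    """Bayes' rule: probability of disease given a positive test result."""
    true_positives = prior * sensitivity
    false_positives = (1 - prior) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

sens, spec = 0.85, 0.90  # hypothetical test characteristics

# Red group (high PRS, ~30% lifetime risk) vs green group (~3%).
print(round(positive_predictive_value(0.30, sens, spec), 2))  # 0.78
print(round(positive_predictive_value(0.03, sens, spec), 2))  # 0.21
```

With these invented numbers, the same positive result is a true positive about 78 percent of the time for the high-risk woman but only about 21 percent of the time for the low-risk woman, which is the interpretation shift the talk describes.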

The final picture I want to show is prostate cancer, which in our hands is the one where the genetics is currently most powerful for stratifying risk. The men in the top three percent of risk here have something like a 40 percent chance of having been diagnosed with prostate cancer by age 70, compared to something of the order of 10 percent for the people in the middle group. Prostate cancer doesn't have a good screening tool. Prostate specific antigen is sometimes used; it's kind of a noisy measure, but the possibility apparent from this is of combining polygenic risk scores with PSA measurements to do a much better job of interpreting and understanding risk. Or to put it another way: if you're a man in the green group and I'm a man in the red group, and we had borderline PSA readings, I should be much more worried in the red group than you should in the green group, because the underlying rate of disease in the green group is much lower. So that's three examples. In fact, you can calculate these polygenic risk scores across 15 or 20 diseases, as we have. And this is a picture of an individual. The guy in the picture is called Matt Hancock. He's the secretary of state for health in the UK - the cabinet minister responsible for running the National Health Service. And to his credit, he was keen on understanding how it felt as an individual to learn about your risk through polygenic risk scores. So he asked us to do this in his case, and then he spoke publicly about a couple of examples. This is him speaking about prostate cancer, where he learned that he was at something like a 50 percent increased risk compared to average. He wasn't aware of any family history, so it's given him something that he can be aware of as he gets older.

The answer is yes. It turns out that the polygenic risk score for coronary disease is effectively independent of the clinical risk scores - either the QRISK score used in the UK or the AHA score used in the US. In fact, it's almost independent of family history, and that's initially a bit surprising, because polygenic risk scores are about genetics and family history is about genetics, we think. So what's going on here? I think it's one of those things where, when you learn a surprising result, you kind of retrain your intuition, which is a process I've been through. I think there are a couple of things. Firstly, family history, the way we use it clinically - this is a sort of obvious statement - is picking up some element of shared environment. And maybe that's rather larger than we thought. And also, family history is probably picking up the effects of rare variants with larger effects, which aren't being measured or captured by polygenic risk scores. But interestingly, polygenic risk scores, because they're largely independent of family history, add to that information as well. So you can ask the question: suppose we already estimate risk by one of these clinical tools - I'll use the QRISK example from the UK - and then we think about using both the existing clinical tools plus genetics through polygenic risk scores. How does it change the results? I'll give you a sense. This is a schematic - an extrapolation from real data in UK Biobank. In the UK, there are about 10 million people aged between 40 and 55, and I want you to think of that group here. These little stick figures each represent about one hundred thousand people. So the first question is: if we just use the existing clinical risk score in that group, it turns out that about 10 percent of the individuals - so a million of the 10 million - are above the clinical threshold for statin prescription in the UK.
Then you can say: what if I change and incorporate genetics as well as QRISK? The polygenic risk scores are independent of QRISK, so you should be able to do a better job by using both of them than by using just one of them. That's indeed the case. So the first thing is that when you combine genetics, there are more individuals above the threshold, because you're doing a better job of estimating risk. And the second is that they're actually different people. To break it up: there were these 10 little stick figures, so about a million people, who were above the threshold just using their clinical scores. Eight of those - eight hundred thousand of them, in my numerical example - are still above the threshold when you combine genetics. But there are two hundred thousand of them who are now below the threshold: they had unhelpful traditional risk factors but helpful genetics, which brought them down. And interestingly, there are almost half a million people who are above the threshold who are currently invisible to us using just the clinical risk scores. So they are people for whom the clinical guidelines say we should be intervening and offering statins, but they're invisible to us, and we'd learn about them if we combined genetics and the traditional risk scores. And because the data is there in UK Biobank, you can ask: if I follow those individuals through time, is it the case that more of the people we've now moved into the high risk group get disease than of those who were taken out of the high risk group? And indeed, that's the case. A higher proportion of the people who are now above the threshold get both coronary disease and cardiovascular disease than of the people who were above the threshold before but who, when we do the more sophisticated risk estimation, have a lower lifetime risk and a lower 10-year risk.
So that's one example of combining genetics and non-genetic risk factors. And that's, I think, a natural way to think in general rather than just thinking about the genetics in its own right.
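The reclassification walked through above can be sketched numerically. Everything here is invented for illustration - the additive combination, the 10 percent threshold, and the four toy individuals - but it shows the two directions people can move when genetics is added to a clinical score.

```python
# Combine a clinical 10-year risk with an (independent) PRS adjustment and
# count who crosses the treatment threshold in each direction.

def above_threshold(clinical_risk, prs_adjustment, threshold, use_prs=True):
    combined = clinical_risk + (prs_adjustment if use_prs else 0.0)
    return combined >= threshold

THRESHOLD = 0.10  # hypothetical statin-prescription threshold

people = [
    (0.12, -0.04),  # above on clinical score alone; genetics moves them below
    (0.11,  0.02),  # above either way
    (0.07,  0.05),  # invisible to the clinical score; revealed by genetics
    (0.05, -0.01),  # below either way
]

moved_below = sum(
    above_threshold(c, p, THRESHOLD, use_prs=False)
    and not above_threshold(c, p, THRESHOLD)
    for c, p in people
)
newly_visible = sum(
    not above_threshold(c, p, THRESHOLD, use_prs=False)
    and above_threshold(c, p, THRESHOLD)
    for c, p in people
)
print(moved_below, newly_visible)  # 1 1
```

The two counters mirror the talk's two groups: the two hundred thousand people whom genetics moves below the threshold, and the almost half a million who were previously invisible to the clinical score alone.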

Polygenic risk scores tend to be more powerful when they are used in populations of the ancestry of the individuals in whom the original genetic studies were undertaken. And most human genetic studies have been undertaken in people with European ancestries, so they tend to be more powerful in those groups. Some people say, well, you just can't use them for other groups. Actually, I think about it as an empirical question. The right questions to ask are: what is the performance like in other groups, how much less powerful is it, and what are the ways we can think about addressing that? So I'll go back to the three examples I've shown you before. This is coronary disease. The picture I showed earlier was based on men with European ancestries. There aren't large numbers of individuals of non-European ancestries in UK Biobank, but there are some; they are preferentially from South Asia, and there are some East Asian and some Afro-Caribbean individuals as well. And actually, if you estimate risk in those groups - these shadings are the uncertainty intervals; they're wider because the numbers of individuals are smaller - broadly, the performance is reasonably similar, or at least it's similar in the high risk group for coronary disease. If we do the same exercise for breast cancer, again there's more uncertainty, but the level of risk in the high risk group is lower in the non-European ancestry individuals; and in prostate cancer, it's somewhere between the two. So what to do? Two things. The first is to flag the fact that there's absolutely an issue about using these tools currently in individuals of other ancestries. But the second is that it's an empirical question. The right question to ask is, for a given polygenic risk score algorithm and a particular disease, what's the performance in individuals of this ancestry, that ancestry, and so on.

I think the potential of polygenic risk scores is that they allow us to identify, early in life, individuals who are at high risk of particular diseases. And if we're able to do that, we can target interventions - in some cases, as for coronary disease, either lifestyle changes or statin lipid-lowering medications - or, in other cases, we can target screening. Critically, because they're based on common variants, to get the information needed to calculate polygenic risk scores you just need to run a genotyping array. You don't need to sequence individuals. So it's cheaper by a factor of something like twenty-fold, and it's something that starts to be accessible at a population scale. You can do one genotyping array and calculate polygenic risk scores for, say, 20 different diseases. And interestingly, although any one of you is probably not in the top three percent for heart disease, and probably not in the top three percent for type two diabetes, and if you're a woman, probably not in the top three percent for breast cancer, and so on - across 20 or so diseases, you're likely to be in the top three percent for something. So one way of thinking about this is that it allows us as individuals, and health care systems, to work out what are the two or three diseases for which we happen to be at increased risk because of the common variants we inherit. It's different from traditional clinical genetics in another way. These scores are risk factors. If you have a high polygenic risk score for coronary disease, it increases your risk of disease in the same way that having a high LDL cholesterol level increases your risk of disease. Neither is determinative, and in each case you can do other things to mitigate the risk. So we should think about polygenic risk scores in that way. They're also much less correlated among relatives.
If you're in the top one percent of polygenic risk scores for a particular disease, then the chance that one of your kids or one of your siblings is in the top one percent is 10 percent. They're at increased risk, but it's nothing like the situation where, if a woman has a BRCA mutation, all of her female first-degree relatives have a 50 percent chance of carrying the mutation, which then has a very significant effect. So I'd argue we should think about polygenic risk scores much more like medical tests that doctors of different specialties, and in primary care, can think about. We don't need clinical geneticists treating polygenic risk scores, and I don't think we need genetic counseling to feed them back to individuals.

[00:16:50] They're useful for risk prediction in all individuals, but currently they perform more powerfully in individuals of some ancestries than others. So that's absolutely something that the field should be focused on. It's something that we're focused on at Genomics, and it'll get addressed in two different ways. One of them is improving the algorithms - coming up with clever methods that can make use of the data we do have available on other ancestries. And the other one is larger datasets. So I see that as very significant, but a short-term effect. The final thing to say is that we've put a lot of effort into this within Genomics. My background, as many of you will know, in the academic world was statistical genetics. Vincent Plagnol, who leads for us on this area, and his team have got polygenic risk scores for 15 or 20 or so diseases. And in each case, they're comparable to, and in most cases better than, those that have been published.

[00:17:42] I'll just finish by giving you a sense of where this is going in health care. That's what we're aiming at: using this in a way that makes a difference to clinical care. In the UK, there's very strong support from the government, including from the guy who went through this experience himself, Matt Hancock, the secretary of state for health. There has been a recent announcement of a large cohort in the UK of five million people. A key part of the thinking behind that cohort is the idea of getting genetic information through genotyping arrays, calculating polygenic risk scores, feeding that back to the individuals, and feeding it into the National Health Service, into the health care system. And there are a number of other parts of the National Health Service who are talking to us about doing polygenic risk scores in primary care in some instances, and in some other contexts as well. We'll be doing the polygenic risk scores in the five million cohort, the ADD cohort. We are in discussions with a number of healthcare systems in the US, again about early adoption, and thinking about piloting this in a number of different ways. We can discuss it a bit more in the questions and discussion which follow. But I now think strongly that this is the area - we've been working hard for 20 years, as you've heard, trying to use genetics to better understand human disease, common human diseases in particular, with a view to improving the way we develop medications and improving clinical care. I'm absolutely convinced that this is the area where genetics will have its biggest impact on health care. It covers all the common diseases. It allows us to identify people earlier in life and then work out how to get them the right interventions or lifestyle changes, or how to target screening appropriately.

Panel discussion

Transcript of panel discussion

Peter Donnelly: I think there's an important issue that has to be solved disease by disease, and that's the question one gets asked: OK, so I'm a patient, or I'm a doctor and I know either my own or my patient's polygenic risk score for this disease. Then the question is, what next? What do I do differently as a result of that? So I think to get this into healthcare, we need to solve those questions disease by disease. We need to know what the next steps are. In the case of coronary disease, I think it's one of the easier ones. In the UK, general practitioners have software sitting on their computers which combines all of those classical risk factors, cholesterol, age, sex and so on, and comes up with a 10-year risk of coronary disease. The GP looks at it, and if it's above the guidelines, they talk to the patient about statins. So in that case, I think all that's needed is to change the software so that there's one more box into which you can enter a polygenic risk score if it's available, and then you just press the button in exactly the same way. It does a slightly different calculation, using the genetic information if it's there and not using it if it's not, and comes up with a risk estimate over 10 years. And the GP interprets it in exactly the same way. So I think that's one case where the path to clinical usage is clear. In some of the cancers, breast cancer for example, one can think about targeting screening earlier. And then you have to do the calculations that say: well, if you changed a program so that you targeted the top few percent of women, based on polygenic risk scores, at this age and this often, you could use the data that's available to work out how many cases of breast cancer you'd catch early, and so on. So I think we need to solve it disease by disease. We're thinking hard about that, and the National Health Service is starting to think hard about it.
In the UK, as I mentioned, there's this large five million cohort that is going ahead. It will involve polygenic risk scores, and it will force the system to work out how to incorporate them. But I think it's disease by disease, and a number of other pilots we have are doing the same sorts of things.
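The integration Donnelly describes, a risk calculator that uses a polygenic risk score only when one is available, can be sketched in a few lines. Everything here is an illustrative assumption: the logistic form and every coefficient are made up for the example and are not any real clinical algorithm.

```python
import math

def ten_year_risk(age, sex_male, total_chol, hdl_chol, prs=None):
    """Illustrative 10-year risk estimate (0-1) from a logistic model.

    If a standardized PRS (mean 0, SD 1) is supplied, it contributes
    one extra term; otherwise the calculation runs without it.
    """
    # Illustrative log-odds weights -- NOT real clinical coefficients.
    x = -7.0
    x += 0.06 * age                     # risk rises with age
    x += 0.40 * (1 if sex_male else 0)  # higher baseline risk for men
    x += 0.30 * total_chol              # total cholesterol, mmol/L
    x -= 0.50 * hdl_chol                # HDL cholesterol is protective
    if prs is not None:
        x += 0.35 * prs  # the "one more box": used only when present
    return 1.0 / (1.0 + math.exp(-x))

# Same patient, with and without genetic information available.
baseline = ten_year_risk(age=60, sex_male=True, total_chol=5.5, hdl_chol=1.2)
with_prs = ten_year_risk(age=60, sex_male=True, total_chol=5.5, hdl_chol=1.2,
                         prs=1.8)  # well above-average genetic risk
```

A GP-facing tool would then compare the returned risk against the guideline threshold exactly as before, whichever version of the calculation was run.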

Jill Hagenkord: I mean, I think the things that matter are the same in both countries. A big, government-funded research project is a really excellent way to catalyze this process and get the data that we need. But in both countries, if you want to be legally on the market, you need to have analytical validity, and then clinical validity about the meaning of the finding you've gotten. And several polygenic risk scores have gotten to the clinical validity stage. To get actual widespread adoption, whether it's in the UK or in the US, you need to come into professional society guidelines. You need NICE to say this is what you're supposed to do, or you need the appropriate professional society in the United States to issue practice guidelines around it. Until then, you might get some practices that will pick it up, but if you're going to go widespread you need professional society guidelines, and you need someone to pay for it, and that is usually tied to the professional society guideline part. So to get the rest of the way there: you need analytical validity and clinical validity, then you need clinical utility and some kind of health economic model. At that point, you can do a large prospective study, either a randomized clinical trial or some kind of real-world-evidence-style observational study with the test being done in the wild. And it's at that point that the professional societies will consider adoption, and then the payers will consider paying. That's when it will take off. But it's these large research initiatives that are going to catalyze and facilitate that.

Jill Hagenkord: Peter and I talked about that a lot, actually. The initiative that he's a part of in the UK is really important, because once you've got enough evidence to legally put a test on the market with one indication for use, you get either a large research project like the one in the UK, or partnerships with more forward-looking health systems who really believe this is going to be part of the future of health care and who will do a large study on their own population. Once you get in with one intended use, even if it's a very simple intended use, it gives you the opportunity to study all the data, because you now have a large, diverse group of people. You've got access to all of the ancillary health records, and you can actually, in real time, be collecting the evidence that you need for the additional intended uses.

I think the average time for a health care product in the United States to go from conception to widespread adoption is somewhere between 10 and 17 years. But at the same time, each one of these evidentiary steps serves a really critical function. And so, again, depending on your intended use, this could be easier or harder. Something that's done on the entire population in a screening context, for a disease that you don't have and may never get (this is just even a risk for a disease), tends to get a lot of scrutiny. If you're screening an entire population for cancer, the cost of getting that wrong is significantly higher than if you've got an end-stage cancer patient who has failed three lines of therapy and you have an intended use focused on that very specific set; getting that wrong isn't going to hurt the general public as much. So those are the considerations. There really are reasons why you need that data for widespread adoption. But I do think the only way to accelerate this is to just accept that you actually do have to prove that it works: that your claim is true, that the intervention is safe and effective, and that the cost of doing it actually makes sense for that system to pay for it. Without that, the doctors and the payers can't make decisions about whether or not to use it. But once you concede that you have to get that data, then I think, again, it goes back to having these strategic relationships with forward-looking health systems, health systems who want to become what they're calling learning health systems.

Geisinger is probably the most well-known model of the learning health system. Once you get enough data about a new technology or test or app, whatever it is, where you actually have that minimum that I described and you're ready to go into a prospective trial of some kind, you put it in and you measure it in a real-world-evidence way. So it's almost like a living petri dish, which is how they describe themselves at Geisinger. And it feeds back. So you implement a new screening program, and then you measure it to make sure that it is actually providing the benefit you thought it would and isn't actually causing harm. And you can measure it in real time. Once you get a system like that set up, you can push new technologies through a lot faster, without having to do those big, long, randomized clinical trials. This is a kind of alternative to that. And there are even payment methods that can be considered when you're at that level as well.

Peter Donnelly: It is arbitrary. If you go further into the tail, if you look at the top one percent, they are at even higher increased risk. If you look at the top five percent, the average of the top five percent is less extreme than the average of the top three percent. In practice, you calculate the polygenic risk score for an individual, if they're your patient or if it's you, and you'd want to know whereabouts in the distribution that individual falls, and then think about that in terms of health care. So it's arbitrary in the sense that we needed to pick some level at which to show risk.
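The point about the distribution can be made concrete: what matters is the individual's percentile in a reference population, and any "top X percent" cut-off is then a choice layered on top. A minimal sketch, using a simulated reference distribution (all numbers illustrative):

```python
import bisect
import random

# Simulated reference distribution of standardized PRS values.
random.seed(0)
reference = sorted(random.gauss(0.0, 1.0) for _ in range(100_000))

def percentile(score, ref):
    """Fraction of the reference population with a lower score."""
    return bisect.bisect_left(ref, score) / len(ref)

p = percentile(2.0, reference)   # whereabouts in the distribution?
in_top_3_percent = p >= 0.97     # the threshold itself is arbitrary
```

Changing the threshold from three percent to one or five percent changes only the final comparison, not the score or the percentile itself.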

Jill Hagenkord: I mean, I think it's always good to understand that for any kind of test and health information that you're providing, especially if you're doing it at population scale. But I do kind of agree with Peter's point that the polygenic risk score is a bit more akin to a cholesterol level, or an LDL cholesterol level. And I don't know that you need to explain the molecular structure of cholesterol and the mechanism of detection of the assay when you're talking to a patient about cholesterol level and risk for cardiovascular disease. I don't know that you have to unpack everything we know and don't know about genetics in order to have a conversation with a patient about the contribution of cholesterol, or a polygenic risk score, to the disease. But I think that's a hypothesis that should be tested.

Peter Donnelly: Yeah, actually I think it's a great question, and there are really interesting issues here, because we don't know the answer to how individuals are going to react to this. Obviously there'll be a range of reactions. There's not great data, but there's some evidence that if you compare two situations, one where you say to someone, you know, it's a good thing to lose weight and exercise more, and another where you say to someone, here's some specific information about you in the form of your genetic risk, and it's a good thing to lose weight and exercise more, there's some limited evidence in small studies that the behavior changes are different in the second case from the first. We absolutely need to understand that. I think also, because these are risk factors, people need to think of them in a slightly more nuanced way than simply "you are going to get sick or not get sick." And I think it's interesting that, as is likely, it will be used in one disease first, and then another disease, and then another. When we get to the stage of 20 diseases, we'll all be at high risk for something. And then we might think about it differently. We might think, OK, in my case, I know I need to be careful about this and this and pay particular attention to that, and my health care system does too. So I think you're right that it would be great for us to understand those things better. And we'll do that either by research studies or by getting that kind of information in these early pilot research experiments.

Jill Hagenkord: I feel like I'm the speaker who has this one answer to every question, as if I just memorized one thing and will say it over and over again. But it really does come down to the evidence: prove that it works, prove that the intervention it drives is safe and effective, and prove that it does so in a way that is cost effective. It doesn't have to be cost saving. Most new devices or tests that improve health are actually not cost saving; it costs more to get better health, and the payers all accept that. But you do need to have at least that much data for them to make a decision about whether or not they're going to cover it.

Peter Donnelly: So I think many of those things apply in principle in the NHS as well, but there's a slight difference, and a kind of parallel issue, as many of you will be aware. The UK has, as part of the NHS, a large program to do whole genome sequencing on a set of individuals, usually kids, who have rare conditions thought to be genetic. That whole genome sequencing was driven by a government initiative. No one in those cases actually did any of the work to find out whether it was cost effective or what the consequences of the interventions were. It was, I think, to quote the then prime minister, a sense that that's where the world is going to be going. And the British government at the time took the view that if that's where the world is going, we have a choice between being out in front, leading, or waiting till other people have figured it out and then following. And the government at the time took the view to lead. So I think there are some differences: when you have a healthcare system that's single payer and run, as it were, by the government, they have the opportunity of saying this is strategically important, so we'll do it and we'll learn as we're doing it, because we think that's where the world is going. In that sense, I think there could be differences.

Jill Hagenkord: This is a research protocol, right? I mean, that's where they're actually going to generate the data about all of you. So it's kind of like what we have in the United States: All of Us.

Peter Donnelly: Yes, it is research, but it's about real people and their real clinical care. The Genomics England program was also real people, and to some extent it was replacing work that was being done in the clinical genetics services in the UK.

Peter Donnelly: Good question. Let me respond at a couple of levels. Where the definition of the condition has changed, if one has the right kind of data, as Jill said, if you're already operating in a large healthcare system with electronic medical records, it's not trivial, but it should in principle be relatively straightforward to go back in, readjust the phenotype definition (you'll have different individuals), and then redo the work that led to the polygenic risk score, and you can get a polygenic risk score for the newly defined phenotype. So provided the data is there, that's in principle straightforward, and in the world we're heading towards, 5 or 10 years away, where this kind of information will be part of electronic medical records, it becomes much easier. Where the underlying rate of disease incidence has changed, polygenic risk scores might still be helpful, because they're speaking to relative risks. But then you can ask a different question, which is: if the underlying rate is changing, is there something different going on, and is the relative risk still useful? Again, that's checkable with the data in the new world of higher incidence. So there's at least a chance in that case that, because polygenic risk scores speak to relative risks, if it's just a higher baseline they'll still be good at distinguishing the people who are at relatively higher risk than average.

Peter Donnelly: Yeah, I don't think I know very much about autism, so let me take stroke first. Stroke is a disease for which polygenic risk scores are not especially predictive currently. It's a strange thing, but stroke is, I think, the third or fourth highest source of mortality and morbidity globally, and yet it's been remarkably understudied genetically by GWAS. So there aren't baseline studies, first of all. Secondly, we happen to have been involved in one of the early large studies of stroke. People who study stroke know that there are different subtypes of stroke; I think this is what the question is leading to. And we were able to show that the genetic architecture, at least at that stage, in terms of the more significant GWAS variants, was quite different for the different subtypes of stroke. So that's a condition where you really do know in advance that there is heterogeneity of the phenotype and different sorts of drivers. So probably you'd want a different PRS for large vessel stroke, another one for small vessel stroke, another one for hemorrhagic stroke, and so on. And we just don't have the right data to do that. Autism I'm less familiar with. There's been recent GWAS work in autism, but also a lot of effort put into finding the effects of rare variants in autism.

Jill Hagenkord: It's going to make it harder to generate the data and have it be compelling that this test can reliably and reproducibly predict the top three percent or the bottom three percent. So it's going to make it harder to make the clinical utility and health economic arguments. But I agree with Peter: we just don't understand enough about some of these diseases yet to bucket them correctly, and time and science will help with that.

Peter Donnelly: At the moment, we're in a world where you have to construct PRSs from GWAS studies, in effect, and we have one large resource to assess their performance: UK Biobank. Moving forward, I'd like to hope five years, but maybe ten, there's a path, I think, where most people in healthcare systems in the developed world, and hopefully many people in the developing world, will have been genotyped. That information will be part of the system. So the potential in the data in that context is massive compared to where we are now, and the ability, as Jill was saying, to use that to get much, much better PRSs for the things we're currently thinking about is striking. But we'll also be able to get PRSs for things that we haven't been thinking about. Let me give you one example. I talked a little bit about this earlier: we know the drug development industry isn't in great shape at the moment. Ninety percent of drug targets fail when they go into clinical trials. Even successful drugs typically work on a subset of patients, not all of them, and in most cases we have no way of knowing in advance which patients they'll work on. Genetics is often part of that story. But the current paradigm is to try to find the SNP which determines whether someone responds to this drug or doesn't. That's touchingly reminiscent of what common disease genetics, heart disease genetics, was doing 20 years ago, when people were trying to find the SNP which determined whether people would get heart disease, or the SNP which determined whether they'd get type 2 diabetes. We know, and I started my talk by saying this, that for any complex trait in humans there will be thousands or tens of thousands of variants which affect the trait. In most cases, whether we respond to a drug, and how much we respond, is likely to be a complex trait.
So on top of all the diseases we're talking about, I think there'll be a world in which polygenic risk scores are one of the factors used to decide between drugs. They won't be determinative; they won't say this person will respond and that one won't. But this person is much more likely to respond to drug one, and that person is much more likely to respond to drug three, so we'll be able to offer the drugs in the right kind of order. It's an example of the sorts of things where polygenic risk scores will be helpful, which we'll be able to do inside health care systems once we get the right data. And I think that'll be the real lift-off point.


Polygenic Risk Score Analysis for Alzheimer's Disease Risk


Speaker: Richard Pither, PhD, CEO, Cytox Ltd

The utility of the polygenic risk score (PRS) is gaining researchers' attention for identifying individual genetic risk and predicting disease risk in Alzheimer's disease (AD) at both the early and pre-symptomatic stages. genoSCORE™ combines two of Cytox's proprietary technologies: variaTECT™, a single nucleotide polymorphism (SNP) profiling array built on the Axiom platform from Thermo Fisher Scientific, and SNPfitR™, analytical and interpretive software whose algorithms produce a PRS identifying an individual's genetic risk of developing AD from blood- or saliva-extracted DNA. This can help in patient stratification for clinical trial research to develop targeted treatments for AD.
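In principle, a PRS of this kind is a weighted sum of risk-allele dosages, with weights taken from GWAS effect sizes. A minimal sketch of that calculation, with made-up SNP IDs and weights (this is not the genoSCORE or SNPfitR model):

```python
# Effect sizes (e.g., log odds ratios) would come from a GWAS; these
# SNP IDs and weights are made-up placeholders for illustration only.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

def polygenic_risk_score(dosages):
    """Weighted sum of risk-allele dosages (0, 1, or 2) over shared SNPs."""
    return sum(w * dosages[snp]
               for snp, w in effect_sizes.items() if snp in dosages)

# One individual's genotyped allele counts from an array:
score = polygenic_risk_score({"rs0001": 2, "rs0002": 1, "rs0003": 0})
```

Real scores sum over thousands to millions of variants and are then interpreted relative to a reference distribution, but the arithmetic is the same.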

View webinar ›

Preemptive PGx at Population-Scale: Implementation in Pittsburgh

Speaker: Philip Empey, PharmD, PhD

Associate Director, Institute for Precision Medicine
Associate Professor, Pharmacy & Therapeutics

View webinar ›

How comprehensive pharmacogenomics accelerates personalized therapies and reduces health disparities for minorities

Speaker: Ulrich Broeckel, MD, Founder and CEO

RPRD Diagnostics

View webinar ›

Implementing a world-class clinical pharmacogenomics service

Speaker: Dr. Hyun Kim, Clinical Pharmacist, Clinical Pharmacogenomics Service, Boston Children's Hospital

View webinar ›

Accurate Pharmacogenomics (PGx) testing – A Market Overview


This white paper summarizes the pharmacogenomics (PGx) landscape, addressing regulations, global population coverage, the complexity of PGx markers, and the importance of using the right technology. Some of the important statistics covered include:

  • Total number and percent of trials using PGx for patient selection year over year from 2003 through 2018
  • Effect of PGx markers on completed trial outcomes
  • Breakdown of PGx trials by phase
  • Top 20 sponsors of active trials using PGx biomarkers

Download white paper ›

Host Genetics with the New Axiom Human Genotyping SARS-CoV-2 Research Array


Speaker: Shantanu Kaushikkar, Director Product Marketing, Microarray Genotyping, Thermo Fisher Scientific

A major challenge for healthcare providers and geneticists is to learn how SARS-CoV-2, the coronavirus that causes COVID-19, interacts with the human genome and to stratify populations in order to understand disease susceptibility, severity, and outcomes.

For such studies in predictive genomics, Thermo Fisher Scientific offers the Applied Biosystems Axiom Human Genotyping SARS-CoV-2 Research Array, with a COVID-19 research module covering genes and pathways implicated in SARS-CoV-2 host genetics, such as ACE2, TMPRSS2, and the NOTCH pathway. In addition, the array has a large GWAS module (>820,000 markers) that can provide information on druggable targets.

View webinar ›


For other perspectives on the use and impact of predictive genomics in clinical research for health systems, national initiatives, and researchers, visit www.thermofisher.com/predictive-genomics.