Jeanette McCarthy, MPH, PhD
There are tens of thousands of genetic tests on the market and they are increasingly being used to make important health care decisions. The evaluation of the performance of these tests is key to ensuring their safety and efficacy. Whether it’s a physician trying to decide which laboratory’s genetic test to order, or a regulator assessing the safety and efficacy of a test, or a payer needing to make a coverage determination, all would benefit from having a coherent framework for evaluating genetic tests.
The ACCE framework
In the early 2000s the U.S. Centers for Disease Control and Prevention’s Office of Public Health Genomics developed a process for evaluating genetic tests. This model, referred to as ACCE, considers four main aspects of a test: its analytical validity, clinical validity and clinical utility as well as the ethical, legal and social issues around using that test.
Analytical validity refers to the laboratory assay’s ability to accurately detect the genetic variant of interest. Clinical validity refers to how well a positive genetic test result correlates with risk of disease, drug response or other outcomes. Clinical utility refers to whether genetic testing results in measurable improvements in health or improves the management of patients.
ACCE is the most widely cited framework for evaluating genetic tests and the concepts of analytical validity, clinical validity and clinical utility are utilized throughout the industry. However, the interpretation of these concepts varies, as does the evidence used to support the assertions.
In this post, I’ll focus on the concept of clinical validity of genetic tests and explain why I think we need a more nuanced vocabulary around this concept and review some of the efforts to define standards for measuring it.
There are two components of clinical validity
According to the ACCE framework, clinical validity refers to the ability of a genetic test to accurately predict a trait or clinical outcome. It’s probably what the consumer thinks of as the accuracy of a test, although many tests claim to be accurate based on analytical validity alone. A laboratory assay that is very good at detecting what it says it can detect (i.e. has good analytical validity) is said to have good accuracy. This is not the same as clinical validity.
Clinical validity is predicated on the assumption that there is a scientifically valid association between the gene and trait. Thus, scientific validity is a prerequisite for clinical validity, but not the only component. Clinical validity also encompasses the predictive value of the test, which I’ll refer to as predictive ability.
When someone uses the term clinical validity, it’s important to understand whether they are just referring to scientific validity, predictive ability or both.
Standards for scientific validity vary by the type of genetic test
It doesn’t make sense to talk about the predictive ability of a genetic test unless there is evidence of a scientifically valid relationship between the genetic variant and trait. Scientific validity is something that is best defined by the scientific community that understands how these discoveries are made in the first place. Several frameworks for determining the scientific validity of different types of genetic tests have been proposed by researchers and professional organizations.
For rare monogenic (Mendelian, single gene) diseases, scientific validity is determined both at the level of the gene and at the level of the specific variant in a gene. The NIH-funded Clinical Genome Resource (ClinGen) organization put forth an evidence-based framework for evaluating disease-gene pairs. This framework has been applied to some areas of cancer, but otherwise has not been widely adopted. Most commercial gene panel tests include a mix of genes, including some that lack evidence of scientific validity. Unfortunately, the distinction between scientifically valid and unvalidated genes is not always clear.
At the variant level, most laboratories have their own variant classification methods, based at least in part on a set of guidelines developed by the American College of Medical Genetics and Genomics (ACMG). Variants are classified as pathogenic, likely pathogenic, variant of uncertain significance (VUS), likely benign or benign based on the cumulative amount of evidence currently available. Many labs submit their variants and assertions of pathogenicity to public variant databases like ClinVar in order to promote more accurate classification. Last year, the FDA acknowledged that public variant databases may be considered as scientifically valid sources of information to support FDA approval of some genetic tests. This is a positive development that both acknowledges the work of this group and promotes sharing of genetic data.
For common polygenic diseases and traits, scientifically valid common variants with small effects (e.g. ‘risk alleles’) are generally identified in large Genome-Wide Association Studies (GWAS). Published GWAS generally adhere to very rigorous standards of proof of scientific validity. Variants from well-designed, ancestry-matched GWAS that achieve genome-wide statistical significance (p value < 5 × 10⁻⁸) and are replicated in at least one independent population are typically considered scientifically valid. However, not all common variant-disease associations have been evaluated in GWAS, making their scientific validity less clear. In these cases, a robust meta-analysis showing a net positive association can suffice. A recent paper proposes a framework for classifying the scientific validity of risk allele associations with polygenic diseases and traits that will be useful if adopted across the field.
Polygenic risk scores
Genetic tests for common diseases that combine the effects of many variants into a polygenic risk score (PRS) are slowly making their way into the market. Guidelines for establishing the scientific validity of these risk prediction tests have not been articulated, although a statement of recommendations for reporting risk prediction study results, the GRIPS Statement, has been published. Generally speaking, a PRS developed in one cohort should be validated in at least one large independent cohort.
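Conceptually, a PRS is just a weighted sum of an individual's risk allele counts, with weights taken from GWAS effect estimates. A minimal sketch in Python (the variant IDs and log-odds weights below are hypothetical, for illustration only):

```python
# A minimal polygenic risk score: sum of (risk allele count x effect size).
# Variant IDs and log-odds weights here are made up for illustration.
gwas_weights = {"rs0001": 0.12, "rs0002": 0.08, "rs0003": -0.05}

def polygenic_risk_score(genotype):
    """genotype maps variant ID -> number of risk alleles carried (0, 1 or 2)."""
    return sum(gwas_weights[rsid] * count
               for rsid, count in genotype.items()
               if rsid in gwas_weights)

score = polygenic_risk_score({"rs0001": 2, "rs0002": 1, "rs0003": 0})
print(round(score, 2))  # 0.32
```

Real PRS pipelines must also handle strand and allele matching, missing genotypes, and correlation (linkage disequilibrium) between variants, which is part of why independent cohort validation matters.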
In the field of pharmacogenomics, the PharmGKB database is a respected source of scientific evidence supporting associations between specific gene variants and therapeutic outcomes. Its evidence ranking system is widely utilized, along with guidelines from the Clinical Pharmacogenetics Implementation Consortium (CPIC) that address the actionability of test results. Large commercially available pharmacogenomic panels include genes supported by strong scientific evidence as well as those lacking strong evidence.
Recent movements by the FDA to regulate pharmacogenomic tests have left testing labs questioning what metrics the FDA will apply to judge clinical validity of these tests. Early signs suggest that PharmGKB and CPIC guidelines might not be enough.
Scientific validity is just one component of clinical validity
Scientific validity is only one aspect of clinical validity. It’s an essential component but doesn’t tell the whole story. Even when there is a scientifically valid association, the predictive ability of a test may be poor. This is the case for many of the common disease variants discovered through GWAS. For example, a Factor II (F2) gene variant is significantly and consistently associated with venous thromboembolism but confers only a very modest increase in risk of disease. Thus, it has excellent scientific validity, but poor predictive ability.
Clinical validity is often narrowly interpreted to include only scientific validity and not predictive ability. This may give a false sense of confidence about the test. Nonetheless, it’s understandable why this narrow interpretation is sometimes used. The other feature of clinical validity, the predictive ability, depends on the clinical context/target population, meaning that it will vary depending on how the test is used.
Predictive ability is a distinct aspect of clinical validity
The predictive ability of a test can be described in different ways. The most common measures include the positive and negative predictive values. These two measures go hand in hand and depend on the sensitivity/specificity of the test as well as the clinical context/target population. Most genetic tests are used to identify cases, and not rule them out, making positive predictive value the more important measure to maximize.
Positive predictive value (PPV) is the probability that a person who tests positive for the genetic variant will actually develop or have the trait/disease/outcome. It is sometimes referred to as ‘risk’. It is the same concept as the genetic term, penetrance. Generally speaking, PPV/penetrance is greatest in populations with the highest risk of disease. For example, a test will be more predictive in patients with a family history of disease than in the general population.
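The dependence of PPV on the target population can be made concrete with Bayes' theorem. The sketch below uses a hypothetical test with made-up sensitivity, specificity and prevalence figures; the point is how sharply PPV falls as prevalence drops:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem:
    P(disease | positive test) = TP / (TP + FP)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical test that is 99% sensitive and 99% specific.
# High-risk clinic (10% prevalence) vs. general population (0.1%):
print(round(ppv(0.99, 0.99, 0.10), 3))   # 0.917
print(round(ppv(0.99, 0.99, 0.001), 3))  # 0.09
```

The same assay, with identical analytical performance, goes from roughly 9 in 10 positives being true in a high-risk clinic to roughly 1 in 11 in the general population.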
Historically, genetic diseases have been described as either Mendelian (monogenic) or complex (polygenic), with Mendelian disease variants having high penetrance and complex disease variants having low penetrance. It is now appreciated that the penetrance of Mendelian disease variants is not always high, but sometimes moderate. Similarly, there are some exceptions for complex diseases, where variants have been found to have moderate penetrance. It makes more sense to think of disease variants along a spectrum of penetrance from high to moderate to low.
High penetrance variants
This includes most Mendelian (single gene) disorders. For example, variants of the CFTR gene associated with cystic fibrosis, or variants of BRCA1 associated with breast cancer. The high penetrance makes them suitable as predictive as well as diagnostic genetic tests. Risk of disease in individuals with pathogenic variants is generally >>50%.
Moderate penetrance variants
Variants with moderate penetrance generally increase risk of disease 3-4-fold above the average population risk. This includes some disease genes on cancer panels as well as some variants associated with complex diseases (e.g. the APOE-e4 variant associated with Alzheimer disease). It also includes many pharmacogenomic variants. The ability of these tests to accurately predict disease or treatment outcomes is modest, making them less informative on their own and probably best used in conjunction with other patient information to make decisions.
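A rough back-of-envelope calculation shows why a 3-4-fold relative risk still translates into a modest absolute risk. The baseline figure below is hypothetical, and multiplying baseline risk by relative risk is itself only an approximation:

```python
def absolute_risk(baseline_risk, relative_risk):
    """Rough approximation: absolute risk ~ baseline lifetime risk x relative risk."""
    return baseline_risk * relative_risk

# Illustrative numbers: a disease with a 2% baseline lifetime risk and a
# moderate-penetrance variant conferring a 3.5-fold increase in risk:
print(f"{absolute_risk(0.02, 3.5):.0%}")  # 7%
```

A positive result here raises lifetime risk from 2% to about 7%, meaning most carriers still never develop the disease; that is the sense in which such tests are best combined with other patient information.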
Low penetrance variants
Genetic variants underlying common, complex diseases and traits tend to have very low penetrance, increasing risk <2-fold, regardless of the target population. Consequently, they have low predictive ability and are generally not even offered by clinical labs. However, there are many genetic tests on the market outside of the clinical space that fall into this category, for example, tests for caffeine metabolism, vitamin levels and other traits.
Why the definition of clinical validity matters
Clinical validity, as outlined in the ACCE framework, includes both elements of scientific validity and predictive ability of a test. It can be misleading when clinical validity is narrowly interpreted as only scientific validity.
For example, 23andMe obtained FDA approval for some of its health-related tests by providing evidence that the tests were, among other things, clinically valid. One of these tests includes a GBA gene variant identified in GWAS that puts carriers at ~5-10% risk of Parkinson disease, which is higher than the average population risk of ~1% but clearly not a good predictor of disease. It’s evident that the clinical validity of this test is based on scientific validity alone and not predictive ability.
I don’t think it’s wrong that the FDA focused only on scientific validity in this case. Predictive ability is important but difficult to regulate. After all, isn’t the predictive value of a test something that should be judged by the users or ordering physician? For some healthcare providers, a test that shows even a small increase in risk may be of value and the decision to use or not use this information falls within the realm of practicing medicine. Moreover, the predictive ability can change in different clinical contexts.
Nonetheless, the term clinical validity implies that a test is both scientifically valid and predictive, which is why I think we need a more nuanced vocabulary around this important topic.
Even more pressing is the need to define what is acceptable evidence of scientific validity. Efforts by professional organizations like ClinGen, CPIC and the ACMG to define standards of scientific validity are widely accepted within the scientific community. It would make sense if these efforts would inform decisions by regulatory agencies and payers when they evaluate the scientific validity of genetic tests as well.
Consumers, physicians, regulators and insurance companies all struggle to discern good genetic tests from those that are not so good, to ensure their efficacy and safety. In that sense we would all benefit from a standard vocabulary and metrics for defining what it means for a test to be clinically valid.
To improve your overall genomic literacy or for a deeper dive into predictive models used in Omics, check out our online courses at precisionmedicineacademy.org.
Dr. McCarthy is the founder of Precision Medicine Advisors, which specializes in communicating precision medicine to lay and professional audiences, providing scientifically sound, unbiased information to promote the responsible use of genomics in medicine.
Contact info: firstname.lastname@example.org