Phenotype beats genotype; measurements beat guessing
April 2016
I’ve been reading a lot on the promise of human genetics over the last several decades and especially for the past
year since the federal Precision Medicine Initiative was announced.
My knowledge of genetics is rudimentary. Too
many things have transpired too quickly for me, such as the amazing advances in sequencing, the ongoing evolution of PCR, epigenetics and now CRISPR.
Here I venture the opinion that we are neglecting some advancing capabilities of a more traditional nature that
may be more impactful. In 2015, this column included the following: “In the State of the Union address a month ago, the President suggested a Precision
Medicine Initiative that would ‘give all of us access to the personalized information we need to keep ourselves and our families healthier.’
That’s a vote against the tyranny of averages and for the value of individual genotypic and phenotypic data which are increasingly affordable,
recordable and accessible. Bring on the initiative and let the chemistry talk. Make sure the geneticists don’t stay in charge, because phenotype
provides more meat for the N = 1 decision to select pharmacology that works.”
From all I’ve read, the
geneticists are still firmly in charge. At the very least, they get most of the ink. Is it that to a geneticist every problem looks like a gene, just as to a hammer
every problem looks like a nail?
Or is it just the media? Or perhaps there is a genetics-industrial complex at
work? Genetics seems too imprecise for precision medicine. Inferences and probabilities are not often actionable. We’ve already learned how unreliable
genetics can be as a predictor of future health. As with computer code, there is some fragility in our genes, and the code granted at birth is less than
reliable.
I was encouraged last spring when Harvard geneticist (and engineer and entrepreneur) George Church
commented in ACS Central Science that “Choosing the right drug and dose goes far beyond peering at genomic DNA—necessitating the
inclusion of many environmental and internal variables.” I’ve not met Prof. Church. Profiles of him are readily available and thought-provoking,
often with a quirky twist. This possibly is a phenotype selected for at Harvard’s Wyss (‘vees’) Institute for Biologically Inspired
Engineering. Spend time here: http://wyss.harvard.edu/. You will not be disappointed.
Given the audience for
DDNews, I’m most interested in drug selection, drug dose, drug-drug interactions and monitoring drug response. The old term pharmacogenomics
has proven most successful with the first of these, primarily in cancer. The promise 20 years back was to get down to subpopulations, a positive step beyond
one size fits all.
The precision will be much better when the subpopulation is one. We need not understand
genetics to deal more effectively with the reality that is phenotype, but we will need to make more measurements that take into account the many factors of
ontogeny, organ function, comorbidities, metabolizing enzymes, transporters, other prescribed drugs, microbiome and diet.
These components do not interact linearly. It’s therefore clear that simulations and models based on trials or data mining from populations
will not be adequate. Mining big data can uncover hints that suggest further studies, but rarely will these inferences support precision decision-making for
the patient at hand. We need more near-real-time measurements and to build algorithms to support their utility.
Much has been said about the falling cost of sequencing as a rare example where healthcare performance-for-the-money has improved at a rate
exceeding Moore’s Law. The same cannot yet be said for interpreting the data. There have been amazing revelations recently about the diagnostic
potential for circulating tumor cells, tumor DNA, exosomes, circulating DNA reflecting tissue damage and circulating fetal DNA. Blood is clearly a conduit
for a lot of information that we are just beginning to sort out. Then again, tumors are said to be genetically very heterogeneous. Thus we are all so much
more than what Mom and Dad provided us with after the wine and candles.
Measurements and their interpretation
remain very costly. Inferences for a single patient are often statistically too weak for a definitive decision. When it is said that we need larger and
larger trials to validate a given biomarker, that suggests to me that it won’t be very useful. I’m certainly encouraged by what we’ve been
learning from genomics. My complaint is the short-changing of the advancement of more traditional clinical chemistry, where the incentives for R&D are
far behind therapeutics. For one example, dynamic drug monitoring in smaller sample volumes is becoming more feasible. We should not have to guess or rely on static
“models.” Measuring drug exposure directly takes into account, all at once, the impact of the many metabolizing enzymes and transporters, co-administered
drugs, blood volume and organ performance (particularly renal function).
Listen to the chemistry talk. Going out on a
limb a bit, I speculate that there is also a lot more to be mined with metabolomics, especially dynamic metabolomics looking at rates as well as
concentrations. Genetics can guide too, but saving patients is too important to wait until everything is understood.
Carpenters say, “Measure twice, cut once.” I say, “Select, dose, sample, measure, adjust.” Recently, one famous company used
the wrong samples, at the wrong times, in the wrong locations, with the wrong instrument. Take that as a learning experience, not a condemnation of
diagnostic chemistry more generally.
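The “select, dose, sample, measure, adjust” mantra is, at bottom, a feedback loop driven by measurement rather than by a static model. A minimal sketch of that loop, assuming linear pharmacokinetics (steady-state concentration proportional to dose); every name, dose and concentration here is hypothetical and illustrative only, not a clinical protocol:

```python
# Hypothetical illustration of the select/dose/sample/measure/adjust loop.
# Assumes linear pharmacokinetics: steady-state concentration is
# proportional to dose, so new_dose / old_dose = target / measured.

def adjust_dose(current_dose_mg, measured_conc, target_conc):
    """Proportional dose adjustment under the linearity assumption."""
    if measured_conc <= 0:
        raise ValueError("measured concentration must be positive")
    return current_dose_mg * target_conc / measured_conc

# Simulated patient: steady-state level of 0.08 mg/L per mg dosed.
# In practice this factor is unknown; the loop discovers it by measuring.
clearance_factor = 0.08

dose = 100.0            # initial dose selection, mg
target = 10.0           # desired trough concentration, mg/L
for cycle in range(3):  # dose -> sample -> measure -> adjust
    measured = clearance_factor * dose   # stand-in for the lab measurement
    dose = adjust_dose(dose, measured, target)

# Under linearity the loop converges in one cycle (here, to 125.0 mg);
# real patients are nonlinear, which is why repeated measurement matters.
```

The point of the sketch is that the measurement closes the loop: the many interacting factors (enzymes, transporters, co-administered drugs, organ function) are folded into the single observed concentration, so no population model of each factor is required.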
Peter T. Kissinger (who can be reached at kissinger@ddn-news.com) is professor of chemistry at Purdue University, chairman emeritus of BASi and a director of Chembio Diagnostics, Phlebotics and Prosolia.