Statistics Matter
October 19, 2023
Comparative Effectiveness Research: The Politics of Statistics
The Conundrum of Charge
The triad of value for any medical therapy or device is:
1. Effectiveness (different from efficacy)
2. Safety
3. Affordability
This triad, translated into statistical concepts (a brief sketch follows the list), could be:
1. Number needed to benefit (NNT)
2. Number needed to harm (NNH)
3. Charge to the patient (not cost to the provider)
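For concreteness, here is a minimal Python sketch of how the first two legs of the triad are computed, with charge folded in as the third. Every number below (the event rates and the $400 charge) is hypothetical, invented purely for illustration:

```python
# NNT and NNH from absolute risk differences; all rates here are hypothetical.

def number_needed_to_treat(control_event_rate: float, treated_event_rate: float) -> float:
    """NNT = 1 / absolute risk reduction (ARR)."""
    return 1.0 / (control_event_rate - treated_event_rate)

def number_needed_to_harm(treated_adverse_rate: float, control_adverse_rate: float) -> float:
    """NNH = 1 / absolute risk increase (ARI) of an adverse event."""
    return 1.0 / (treated_adverse_rate - control_adverse_rate)

# Hypothetical drug: cuts bad outcomes from 10% to 6%,
# and raises side effects from 2% to 4%.
nnt = number_needed_to_treat(0.10, 0.06)  # 25 patients treated per outcome prevented
nnh = number_needed_to_harm(0.04, 0.02)   # 50 patients treated per added adverse event
charge_per_course = 400.0                 # illustrative charge to the patient

print(f"NNT: {nnt:.0f}, NNH: {nnh:.0f}")
print(f"Charge per outcome prevented: ${nnt * charge_per_course:,.0f}")
```

Multiplying NNT by the charge per course gives the charge per outcome prevented, which is where the third leg of the triad becomes unavoidable.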
The conundrum is charge.
• Charge is an essential ingredient in determining value to the patient.
• Charge is what makes physicians, hospitals, insurance companies, pharmaceutical companies, and medical device makers some of the wealthiest entities in the United States. And this charge is almost completely hidden from the patient, and from physicians. The lack of financial transparency means that health care cannot be a free market, regardless of one's rhetoric or ideals.
• Charges vary enormously from geographic region to region, and even from hospital to hospital in the same city. This wide variation in charge, with no appreciable difference in quality or patient outcome, was the primary point of the Dartmouth data described by Dr. Atul Gawande in his 2009 New Yorker essay "The Cost Conundrum." It suggests that the additional expense to patients, insurance carriers, and the federal government (through Medicare and Medicaid) is unnecessary. The charges that make powerful entities rich strain the federal budget.
• The percentage of GDP spent on medical care in the United States (17%) far exceeds that of all other industrialized countries (average 8.6%), while the US actually ranks lower on many measures of population health quality.
• The power of these highly influential and lucrative industries, together with the enormous budget the federal government requires to maintain Medicare, makes "charges" a highly politically divisive topic.
Comparative Effectiveness Research
Comparative Effectiveness Research (CER) is simply the comparison of one therapy (medicine or device) with another. From a statistical and evidence-based medicine perspective, comparative effectiveness is essential, perhaps the sine qua non of clinical application, particularly if it includes charges (affordability). For instance, consider using metronidazole versus oral vancomycin for outpatient C. difficile colitis: vancomycin might have a slight clinical edge in cure rate, but a course of metronidazole costs around $20 while oral vancomycin is more than $400. Anyone who leaves charge out of the comparative effectiveness equation is not really measuring "value." Anyone who answers "I don't have to pay for it because my insurance covers it" is blind, deaf, and dumb to what those covered charges mean for his premiums and for the expenditures of the federal government.
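To make the arithmetic of "value" explicit, here is a brief Python sketch of that comparison. The charges ($20 and $400) come from the example above; the cure rates are assumptions chosen only to illustrate the calculation, not trial results:

```python
# Charge-per-cure comparison; the cure rates below are illustrative assumptions.

metronidazole = {"charge": 20.0,  "cure_rate": 0.80}  # assumed cure rate
vancomycin    = {"charge": 400.0, "cure_rate": 0.85}  # assumed slight clinical edge

# Charge per cure for each drug on its own.
for name, drug in (("metronidazole", metronidazole), ("vancomycin", vancomycin)):
    print(f"Charge per cure, {name}: ${drug['charge'] / drug['cure_rate']:.0f}")

# Incremental charge per additional cure: what the patient (or payer) spends
# for each extra cure vancomycin produces over metronidazole.
extra_charge = vancomycin["charge"] - metronidazole["charge"]       # $380
extra_cures = vancomycin["cure_rate"] - metronidazole["cure_rate"]  # 0.05
print(f"Incremental charge per additional cure: ${extra_charge / extra_cures:,.0f}")
```

Under these assumed cure rates, each additional cure bought by choosing vancomycin carries an incremental charge of $7,600, exactly the kind of number a true value comparison cannot ignore.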
A Brief History of US Government Attempts at CER
• 1978: The National Center for Health Care Technology (NCHCT) was created as a bipartisan plan to compare medical technologies for efficacy and safety; it was not allowed to do "cost effectiveness" comparisons. It had some fair success, but physician groups lobbied against it.
• 1981: President Ronald Reagan zeroed out the budget of the NCHCT, which effectively killed it.
• 1989: Under President George H. W. Bush, the Agency for Health Care Policy and Research (AHCPR) was established, part of the genesis of "clinical practice guidelines." Cost comparison was not included.
• 1995: The Office of Technology Assessment (OTA), which among other things evaluated medical devices, was defunded by Congress and shut down.
• The AHCPR distributed small books to physicians for free, covering the scientific references, quality of methodology, and recommendations in the form of clinical practice guidelines for about 15 common conditions, including the most common ailment in the US, low back pain. The "back-lash" from orthopedic surgeons (and the congressmen who needed them politically) was immediate and severe. The next congressional budget cycle slashed the AHCPR's budget and renamed it the Agency for Healthcare Research and Quality (AHRQ), which no longer put out clinical practice guidelines.
• 1999: The United Kingdom established the National Institute for Health and Care Excellence (NICE). With single-payer nationalized health care, it was politically easier to enact a national "medical scientific voice" that could compare effectiveness and also compare cost.
• 2001: Under President George W. Bush, the former Health Care Financing Administration was renamed and reshaped as the Centers for Medicare and Medicaid Services (CMS).
• 2003: The Drug Comparative Effectiveness Act, passed under a Republican administration, was the first act to compare not only effectiveness but also cost. It had little impact on physicians, hospitals, or medical device companies; CMS could only advise, and could not act on cost.
• 2010: The Affordable Care Act (aka Obamacare) established the Patient-Centered Outcomes Research Institute (PCORI), which was to function much like NICE in the UK, except that PCORI was an independent, non-profit, non-governmental organization funded by Congress out of an earmarked trust fund. And PCORI vowed not to do any cost comparison evaluations. Central to PCORI was the use of Comparative Effectiveness Research (CER). There was nothing new in this that had not been put forth under the previous Republican administration with the Drug Comparative Effectiveness Act, but because the ACA passed without a single Republican vote, CER became a political grenade.
The Politicization of CER
Though comparative effectiveness research is simply a statistical evaluation, it was portrayed as a method by which physicians would have their decision-making authority threatened, and as the method by which the government would "ration" certain medications and devices. Comparative effectiveness research became a symbol of "government interference."
The same kind of political tsunami that struck the AHCPR low back pain guideline and the first clinical practice guidelines in the early 1990s occurred again with PCORI and comparative effectiveness research under Obamacare.
PCORI still exists, but few physicians even know what it is. It has sponsored few comparative effectiveness studies because the topic is such a political hot button that it threatens the institute's very existence. There has not been a single study comparing medical devices, and PCORI has vowed not to include cost comparisons. In essence, it has walked so lightly as to take no real, meaningful clinical steps.
There is some movement in the CER world, with a Center for Comparative Effectiveness Research for Breast Cancer at Duke and a Center for Comparative Effectiveness Research for Cardiology at the University of Alabama.
The Political Sustainability of CER & EBM
If the United States has any hope of truly addressing meaningful interventions that curb overtreatment, reduce unwarranted variation, and protect patients against therapies with marginal benefit, then comparative effectiveness research for common diseases that includes affordability must become a reality. This will require more politics, not less.
For this to happen, CER must avoid the political minefields of threatening physician-patient decision-making autonomy and of the accusation of "rationing." This can occur in two ways: physicians and medical associations advocating for CER as "good statistics leading to good medicine," and PCORI doing relevant clinical CER on common conditions that begins with the message of doing more of a certain medicine or device for certain populations, so it cannot be accused of limiting choices and denying medical services.
Regardless, in our current political environment, winning has become more important than truth, and politics has enslaved statistics. Our politics is our greatest challenge in confronting bad medicine. It is not clear that politics will allow scientific or statistical freedom, or that PCORI can pull it off even if it is unchained to do what CER is intended to do. Maybe we should look to NICE guidelines in the UK (and HAS in France) to inform US physicians?
Reference
Patashnik, E. M., Gerber, A. S., & Dowling, C. M. (2020). Unhealthy Politics: The Battle over Evidence-Based Medicine. Princeton University Press.