Unbiased Analysis of Today's Healthcare Issues

High quality comparative effectiveness research

Written By: Jason Shafrin - Oct 16, 2016

What are the best practices for conducting comparative effectiveness research in the real world?  One proposed set of best practices is the Good Research for Comparative Effectiveness (GRACE) guidelines.  However, most studies do not follow these guidelines.  A paper by Dreyer, Bryant, and Velentgas (2016) assembled 28 observational comparative effectiveness articles published from 2001 to 2010 that compared the treatment effectiveness and/or safety of drugs, medical devices, and medical procedures.  They found:

The best predictors of quality included the following: use of concurrent comparators, limiting the study to new initiators of the study drug, equivalent measurement of outcomes in study groups, collecting data on most if not all known confounders or effect modifiers, accounting for immortal time bias in the analysis, and use of sensitivity analyses to test how much effect estimates depended on various assumptions.

What are the GRACE guidelines?  I list these below.


  • D1. Were treatment and/or important details of treatment exposure adequately recorded for the study purpose in the data source(s)?
  • D2. Were the primary outcomes adequately recorded for the study purpose (e.g., available in sufficient detail through data sources)?
  • D3. Was the primary clinical outcome(s) measured objectively rather than subject to clinical judgment (e.g., opinion about whether the patient’s condition has improved)?
  • D4. Were primary outcomes validated, adjudicated, or otherwise known to be valid in a similar population?
  • D5. Was the primary outcome(s) measured or identified in an equivalent manner between the treatment/intervention group and the comparison group?
  • D6. Were important covariates that may be known confounders or effect modifiers available and recorded? Important covariates depend on the treatment and/or outcome of interest (e.g., body mass index should be available and recorded for studies of diabetes; race should be available and recorded for studies of hypertension and glaucoma).


  • M1. Was the study (or analysis) population restricted to new initiators of treatment or those starting a new course of treatment? Efforts to include only new initiators may include restricting the cohort to those who had a washout period (specified period of medication nonuse) before the beginning of study follow-up.
  • M2. If 1 or more comparison groups were used, were they concurrent comparators? If not, did the authors justify the use of historical comparison groups?
  • M3. Were important confounding and effect-modifying variables taken into account in the design and/or analysis? Appropriate methods to take these variables into account may include restriction, stratification, interaction terms, multivariate analysis, propensity score matching, instrumental variables, or other approaches.
  • M4. Is the classification of exposed and unexposed person-time free of “immortal time bias”? In epidemiology, “immortal time” refers to a period of cohort follow-up during which death (or an outcome that determines end of follow-up) cannot occur.
  • M5. Were any meaningful analyses conducted to test key assumptions on which primary results are based (e.g., were some analyses reported to evaluate the potential for a biased assessment of exposure or outcome, such as analyses where the impact of varying exposure and/or outcome definitions was tested to examine the impact on results)?
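To make the M4 criterion concrete, here is a minimal simulation sketch (a hypothetical cohort, not data from the Dreyer, Bryant, and Velentgas paper) showing how counting pre-initiation follow-up as exposed person-time makes an ineffective drug look protective:

```python
import random

random.seed(0)

FOLLOW_UP = 365.0  # days of follow-up

# Hypothetical cohort (illustrative only). The drug has NO true effect:
# events arrive at a constant rate for everyone, so an unbiased analysis
# should estimate the same event rate before and after treatment starts.
cohort = []
for _ in range(100_000):
    event_day = random.expovariate(1 / 365)   # time to event, mean 365 days
    start_day = random.uniform(0, 180)        # scheduled drug initiation
    treated = event_day > start_day           # must survive to initiation
    time_at_risk = min(event_day, FOLLOW_UP)
    had_event = event_day < FOLLOW_UP
    cohort.append((time_at_risk, start_day, treated, had_event))

# Naive ("ever treated") analysis: all follow-up of treated patients counts
# as exposed -- including the pre-initiation days during which, by
# construction, the event could not occur. That span is the "immortal time".
naive_time = sum(t for t, s, tr, ev in cohort if tr)
events = sum(1 for t, s, tr, ev in cohort if tr and ev)

# Correct analysis: person-time before start_day is unexposed.
correct_time = sum(t - s for t, s, tr, ev in cohort if tr)

naive_rate = events / naive_time
correct_rate = events / correct_time
# The naive denominator is inflated by immortal time, so the naive rate is
# biased downward and the (ineffective) drug looks protective.
print(f"naive exposed event rate:   {naive_rate:.5f} per day")
print(f"correct exposed event rate: {correct_rate:.5f} per day")
```

Running this, the naive rate comes out well below the correct rate even though the drug does nothing; the fix M4 asks for is to classify the pre-initiation days as unexposed (or to start follow-up at initiation).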

Friday Links

Written By: Jason Shafrin - Oct 13, 2016

Does defensive medicine work?

Written By: Jason Shafrin - Oct 12, 2016

Dr. Anupam B. Jena, a Harvard professor and colleague of mine at PHE, has an interesting interview at the Cunningham Group.  Dr. Jena discusses his study published in BMJ, which finds that higher resource use by physicians is associated with fewer malpractice claims.  In other words, defensive medicine “works” for physicians even if it may not be optimal for the health system as a whole.



2016 Nobel Prize in Medicine goes to…

Written By: Jason Shafrin - Oct 11, 2016

Yoshinori Ohsumi. He won the award for his discoveries of mechanisms for autophagy. What is autophagy? The Nobel Prize website explains:

The word autophagy originates from the Greek words auto-, meaning “self”, and phagein, meaning “to eat”. Thus, autophagy denotes “self eating”. This concept emerged during the 1960’s, when researchers first observed that the cell could destroy its own contents by enclosing it in membranes, forming sack-like vesicles that were transported to a recycling compartment, called the lysosome, for degradation. Difficulties in studying the phenomenon meant that little was known until, in a series of brilliant experiments in the early 1990’s, Yoshinori Ohsumi used baker’s yeast to identify genes essential for autophagy. He then went on to elucidate the underlying mechanisms for autophagy in yeast and showed that similar sophisticated machinery is used in our cells.


Ohsumi’s discoveries led to a new paradigm in our understanding of how the cell recycles its content. His discoveries opened the path to understanding the fundamental importance of autophagy in many physiological processes, such as in the adaptation to starvation or response to infection. Mutations in autophagy genes can cause disease, and the autophagic process is involved in several conditions including cancer and neurological disease.

The Economist provides a concise summary of Dr. Ohsumi’s research as well.

2016 Nobel Prize in Economics goes to…

Written By: Jason Shafrin - Oct 10, 2016

Oliver Hart and Bengt Holmström for their research on contract theory.

One of my favorite papers in all of economics is Holmstrom and Milgrom’s 1991 paper titled “Multitask Principal-Agent Analyses: Incentive Contracts, Asset Ownership, and Job Design.”  In a world where health care is increasingly moving to value-based payment, payers (i.e., insurance companies, employers, and the government) are increasingly contracting based on observed quality.  However, much of the quality of care that providers (i.e., physicians, hospitals, nurses) deliver is unobservable to payers (e.g., bedside manner, unobserved clinical outcomes).

In a previous post, I summarize their argument as follows:

…it has been known that compensating individuals on one measured dimension can compel them to substitute effort away from unmeasured dimensions.  For instance, if a mortgage broker is compensated only for the number of new mortgages he secures and not the credit worthiness of the borrower, it is likely that they will bring in borrowers with bad credit.  In the healthcare setting, compensating doctors to do certain tests (e.g., test A1C levels) may increase the probability the doctor conducts the A1C test for diabetics, but may decrease the amount of time the physician dedicates towards counseling the patient to lose weight or stop smoking.

Alex Tabarrok of Marginal Revolution provides a concise, if somewhat technical, explanation of how compensation should be tied to measured output (or quality) under the Holmstrom approach.
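A rough sketch of the single-task benchmark behind this logic (notation is mine, not taken from the original paper): suppose the agent receives a linear contract w = a + b·x on a noisy performance measure x = e + ε, where e is effort, ε is normal noise with variance σ², the agent has constant absolute risk aversion r, and effort costs c(e) with c'' > 0. The optimal incentive intensity is then

```latex
\beta^{*} = \frac{1}{1 + r\,\sigma^{2}\,c''(e)}
```

As the measure gets noisier (σ² grows) or the agent more risk averse, β* falls toward zero, i.e., toward a flat salary.  The multitask extension adds that a high β on one measurable task (the A1C test) pulls effort away from unmeasured tasks (weight-loss counseling).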

The Economist describes Dr. Holmstrom’s research as follows:

His work suggested that performance-based pay should be linked as much as possible to measures of managerial performance (such as the price of a company’s share relative to those of its peers rather than the share price in isolation). But the more difficult it is to find good measures of performance, the closer a pay package should get to a simple fixed salary.

Tyler Cowen writes in BloombergView that:

They’ve built a technical framework for other researchers to build on, which is much harder to do than to throw off useful insights. So don’t be underwhelmed if some of their work, when conveyed in sound bites, seems like something you heard last week at the water cooler.

Marketplace has a layman’s explanation of the research that won Holmstrom and Hart the award (see below).


Here’s an example: factory managers often pay workers per item produced. But if schools paid teachers in a similar way, based on student test scores, it would bring trade-offs. Teachers might focus on the wrong thing.

“We want them to raise our kids to be creative, to be mensches,” said economist Steve Tadelis of the Haas School of Business at UC Berkeley. “You will force them to just spend all their energy on that one thing, because that’s what they are getting paid for.”

Awardee number two: Oliver Hart of Harvard. Much of his work focuses on contracts and uncertainties and has been applied to governments and privatization. A local government might privatize trash pickup and find it saves money, with few trade-offs. But if it privatized a prison it might find corner-cutting, and that prisoners weren’t fed enough.

“The right answer is not privatize everything, the right answer is not have collective ownership for everything,” said economist Roger Myerson of the University of Chicago, himself a past Nobel economics winner for his work on game theory. “A good theory needs to be able to go both ways, precisely the kind of theory we need to understand the real world.”

Do narrow networks reduce cost?

Written By: Jason Shafrin - Oct 9, 2016

Many health plans in the Obamacare health insurance exchanges aim to keep premiums down by limiting patients to a select group of providers (e.g., hospitals, physicians). The thought is that by limiting patients to a “narrow network” of providers, patients are in essence restricted to the most efficient providers.  Some may claim that “efficient” means high quality and low cost; others may claim that health plans focus on cost and largely ignore quality. One research question is: how much do narrow networks save on premiums?

A study by Polsky, Cidav, and Swanson (2016) finds the following:

We found that within a market, for plans of otherwise equivalent design and controlling for issuer-specific pricing strategy, a plan with an extra-small network had a monthly premium that was 6.7 percent less expensive than that of a plan with a large network. Because narrow networks remain an important strategy available to insurance companies to offer lower-cost plans on health insurance Marketplaces, the success of health insurance coverage expansions may be tied to the successful implementation of narrow networks.

While the final statement may be an overstatement, it appears that Obamacare participants can expect lower health insurance premiums, but also fewer provider choices.
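As back-of-the-envelope arithmetic (the $400 baseline premium is hypothetical; only the 6.7 percent figure comes from the study):

```python
large_network_premium = 400.00  # hypothetical monthly premium (USD)
savings_pct = 0.067             # 6.7% difference reported by Polsky et al.

# Extra-small network premium and the implied annual savings.
xs_network_premium = large_network_premium * (1 - savings_pct)
annual_savings = (large_network_premium - xs_network_premium) * 12

print(f"extra-small network premium: ${xs_network_premium:.2f}/month")
print(f"annual savings: ${annual_savings:.2f}")
```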

HWR is up

Written By: Jason Shafrin - Oct 7, 2016

Joe Paduda has posted the “Pre-election pundit ponderings!” edition of the Health Wonk Review at Managed Care Matters.



Friday Links

Written By: Jason Shafrin - Oct 6, 2016

Health reform and health insurance churn

Written By: Jason Shafrin - Oct 6, 2016

The Affordable Care Act provides a lifeline for individuals previously “too rich” for Medicaid but without access to employer-provided insurance.  First, more generous Medicaid eligibility rules led to more people just above the poverty line gaining access to health insurance.  Second, the “Obamacare” health insurance exchanges offered community-rated, income-subsidized health insurance coverage for people who did not qualify for Medicaid but also did not have access to an employer plan.

While expanding coverage is useful, patients’ eligibility for insurance can change from year to year with their income and job prospects.  This leads to “churning,” where patients frequently move between insurers each year.  How big a problem is this?  A study by Sommers et al. (2016) finds the following:

Nearly 25 percent of respondents in 2015 changed coverage during the previous twelve months—a rate lower than some previous predictions. We did not find significantly different churning rates in the three states over time. Common causes of churning were job-related changes and loss of eligibility for Medicaid or Marketplace subsidies. Churning was associated with disruptions in physician care and medication adherence, increased emergency department use, and worsening self-reported quality of care and health status. Even churning without gaps in coverage had negative effects.


China fabricates clinical trial data?

Written By: Jason Shafrin - Oct 4, 2016

Academic integrity is one of the bedrocks upon which research is founded.  With that said, it was with great concern that I came across an article stating that 80% of Chinese clinical trial data are fabricated.  Science Alert reports:

The review looked at data from 1,622 clinical trial programs of new pharmaceutical drugs awaiting regulator approval for mass production, according to an exposé in the Economic Information Daily newspaper.

More than 80 percent of applications for mass production of new drugs have been canceled in light of the findings, with officials warning that further evidence of malpractice could still emerge in the scandal.

According to the SFDA report, much of the data gathered during clinical trials were incomplete, failed to meet analysis requirements or were untraceable, the paper cited a source in the agency as saying.

It said some companies were suspected of deliberately hiding or deleting records of adverse effects, and tampering with data that did not meet expectations.

In addition, Chinese scientists produce a large number of systematic literature reviews and network meta-analyses.  One recent study by John P.A. Ioannidis (2016) finds:

China has rapidly become the most prolific producer of English-language, PubMed-indexed meta-analyses. The most massive presence of Chinese meta-analyses is on genetic associations (63% of global production in 2014), where almost all results are misleading since they combine fragmented information from a mostly abandoned era of candidate genes.

These findings are very concerning.  At the same time, one should not discount all studies from China due to a few bad apples; each author should be evaluated individually on his or her track record and integrity.  However, large-scale fabrication of data threatens to undermine the people who are doing high-quality research in China.