Unbiased Analysis of Today's Healthcare Issues

Obamacare Death Spiral?

Written By: Jason Shafrin - Oct 26, 2016

Health insurance premiums are projected to increase an astronomical 25% on average for plans in the health insurance exchanges. Some pundits claim that these increases are a sign that Obamacare is crumbling, caught in a death spiral: as premiums rise, healthy people flee the market, leaving insurers with a sicker risk pool, which forces further premium increases. Those increases push out the moderately sick, and so on. Eventually, insurers leave the marketplace due to excessive adverse selection. In fact, some health insurers (e.g., Aetna and UnitedHealth) have already left the marketplace.
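To see the mechanics, consider a toy Python simulation of an adverse-selection spiral. This is a minimal sketch, assuming risk-neutral buyers who enroll only when the premium is below their own expected claims; the cost distribution and all other numbers are illustrative assumptions, not calibrated to actual exchange data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical right-skewed distribution of expected annual claims ($).
expected_claims = rng.lognormal(mean=7.5, sigma=1.0, size=100_000)

# The insurer starts by pricing at the population-average cost.
premium = expected_claims.mean()
for year in range(1, 8):
    # Risk-neutral buyers enroll only if coverage is worth more than it costs.
    enrolled = expected_claims > premium
    if not enrolled.any():
        print(f"year {year}: market unravels completely")
        break
    # Repricing at the average cost of those who stay pushes premiums up,
    # which drives out the healthiest remaining enrollees: the death spiral.
    premium = expected_claims[enrolled].mean()
    print(f"year {year}: {enrolled.mean():.0%} enrolled, premium ${premium:,.0f}")
```

In practice, risk aversion, premium subsidies, and the individual mandate all slow this unraveling, which is why rising premiums alone do not prove a death spiral is underway.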

Is this what is going on or is there another explanation?

Other possible explanations include:

  • Low introductory offer. 2017 health insurance premiums may be high, but premium hikes in 2015 and 2016 may have been unnaturally low. Insurers could have kept pricing low to attract new business. If health plans are “sticky,” in that patients are less likely to change plans, pricing low initially may be a good way to build market share. There is some evidence that premiums were too low in the early Obamacare years.
  • Insurers didn’t know the market. The individual insurance market differs greatly from the group market. Health insurers such as Aetna and UnitedHealth that are more accustomed to group plans may have underpriced. The insurers remaining after these exits may offer a more realistic picture of the true cost of care.
  • Reinsurance goes away. When Obamacare was set up, the government subsidized insurers’ highest-cost patients. Reinsurance funds were set at $10 billion in 2014, $6 billion in 2015, and $4 billion in 2016 (KFF). In 2017, however, these funds dry up and insurers become responsible for covering the full cost of their highest-cost patients.
  • Blame the risk corridors. Risk corridors are a form of market stabilization whereby very profitable plans subsidize less profitable plans. Specifically, KFF reports: “HHS collects funds from plans with lower than expected claims and makes payments to plans with higher than expected claims. Plans with actual claims less than 97% of target amounts pay into the program and plans with claims greater than 103% of target amounts receive funds.” Similar to the reinsurance system, the risk corridors are going away in 2017 (see the sketch below).
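To make the corridor thresholds in the KFF quote concrete, here is a minimal Python sketch. It only flags which side of the corridor a plan falls on; the actual ACA formula uses graduated payment tiers that this simplification omits.

```python
def risk_corridor_position(actual_claims: float, target: float) -> str:
    """Classify a plan against the 97%/103% corridor described by KFF."""
    ratio = actual_claims / target
    if ratio < 0.97:
        return "pays into the program"   # claims below 97% of target
    if ratio > 1.03:
        return "receives funds"          # claims above 103% of target
    return "no transfer"                 # inside the corridor

print(risk_corridor_position(90, 100))   # -> pays into the program
print(risk_corridor_position(110, 100))  # -> receives funds
```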

In short, the large increase may represent the beginning of the end of Obamacare, or it could simply represent the stabilization of the market as government subsidies dry up and insurers more experienced in the individual market settle in.

Chocolate Bar

Written By: Jason Shafrin - Oct 24, 2016

A truly amazing and uplifting story from WBUR’s Kind World about a boy with Glycogen Storage Disease Type 1b, his friend, and the Chocolate Bar book.

LANTZ: Dylan is 10 years old now. But this story starts when the boys were in first grade. Dylan’s mom Debra Siegel was driving her son home from Jonah’s house when she told him that Jonah had a rare condition he could die from.

DYLAN SIEGEL: I’m like, oh, my God, I want to help.

LANTZ: That’s Dylan. His mom explained there was no cure and doctors needed money to find one.

DEBRA SIEGEL: And I said, do you want to do a bake sale? Do you want to do a lemonade stand? He’s like, no. He looked at me like I was insane. What horrible ideas.

DYLAN: I’m like, how about I could write a book?

LANTZ: The next day, Dylan took out his markers and made a picture book he dedicated to Jonah. His plan, sell the book to raise money for research.

SIEGEL: He marched in my office and said, here’s my book. Will you go make copies?

DYLAN: Please, please, please, print it, print it, print it, print it.

LANTZ: When he took his books to a school event…

DYLAN: We sold out.

LANTZ: So the next week, Dylan spoke at a PTA meeting.

SIEGEL: Somebody asked him how much money do you want to raise?

(SOUNDBITE OF ARCHIVED RECORDING)

DYLAN: A million dollars.

The funny thing is…he did. The Today show reports that he did in fact raise $1 million.

More details are available at Dylan and Jonah’s website HERE. Please donate.

Extended Cost Effectiveness Analysis

Written By: Jason Shafrin - Oct 23, 2016

Cost-effectiveness analysis (CEA) examines whether treatment benefits outweigh treatment costs on average for a given patient population. A 2016 paper by Verguet, Kim, and Jamison examines the concept of extended cost-effectiveness analysis (ECEA), which applies cost-effectiveness methodologies to health care policies. Policies are evaluated over four domains: (1) health gains; (2) financial risk protection (FRP) benefits; (3) the total costs to policy makers; and (4) the distributional benefits.

Policymakers can use these results to identify preferred policy interventions.

Usually, ECEA displays the three outcomes of health gains, private expenditures crowded out, and FRP, by population stratum…Furthermore, the two major outcomes of ECEA, health gains and financial protection per population stratum, can be scaled with the net cost of the policy to a particular budget constraint or per dollar expenditure…The motivation is to enable the expression of ECEA findings in terms of the ‘efficient purchase’ of financial protection and equity, in addition to the efficient purchase of health gains, as in a traditional CEA.

For instance, one could identify the cost of treatment per life year saved or per poverty case avoided.
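To make the “per dollar expenditure” scaling concrete, here is a minimal Python sketch of an ECEA-style summary by income quintile. Every figure is hypothetical; a real ECEA would derive the outcomes from epidemiological and household-expenditure models.

```python
policy_net_cost = 10_000_000  # assumed net cost of the policy to government ($)

# Hypothetical outcomes by stratum: (life years gained, poverty cases averted)
outcomes = {
    "poorest quintile": (1200, 900),
    "middle quintile": (700, 300),
    "richest quintile": (300, 10),
}

# Scale each stratum's health gains and financial risk protection by the
# policy's net cost, expressing the "efficient purchase" per $1M spent.
per_million = 1_000_000 / policy_net_cost
for stratum, (life_years, poverty_averted) in outcomes.items():
    print(f"{stratum}: {life_years * per_million:.0f} life years gained and "
          f"{poverty_averted * per_million:.0f} poverty cases averted per $1M")
```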

The authors sum up the benefit of their framework as follows:

The ECEA approach permits the inclusion of non-health benefits and distributional consequences and equity in the economic evaluation of health policies. It enables the consideration of key criteria into the resource allocation problem and into the design of health benefits packages.

Source: Verguet, Kim, and Jamison (2016).

Friday Links

Written By: Jason Shafrin - Oct 20, 2016

The Health Wonk Review is up

Written By: Jason Shafrin - Oct 20, 2016

Peggy Salvatore has posted Health Wonk Review Election Edition – Mama Says Eat Your Peas and Don’t Forget to Vote at Health Systems Ed Blog.

Bad news for Obamacare?

Written By: Jason Shafrin - Oct 20, 2016

Premiums are rising.  HHS Secretary Sylvia Burwell stated:

“Building a new market is never easy,” she told the group at HHS headquarters. “We expect this to be a transition period for the marketplace. Issuers are adjusting their prices, bringing them in line with actual data on costs.”

Burwell’s comments foreshadow the higher premiums expected when federal officials release details on the health plans to be offered on HealthCare.gov for 2017. Those details are likely to come just days before a presidential election that could determine whether the ACA is repealed or revamped.

How will this affect enrollment? On the one hand, failing to buy insurance is now costly. Adults who do not buy insurance must pay a fine: for adults in 2016, the fine is $695 or 2.5 percent of income, whichever is higher.

On the other hand, insurance premiums are much higher than $695, so the penalty may not have teeth. Further, as insurers leave the market, 1.4m patients will not be able to renew their current insurance. Not having access to multiple insurance options surely will reduce enrollment.
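As a back-of-the-envelope comparison, here is a minimal Python sketch of the 2016 penalty versus a premium. The penalty calculation is simplified (the actual rule assesses 2.5 percent of income above the tax-filing threshold and caps the penalty at the national average bronze premium), and the premium figure is an assumption for illustration.

```python
def aca_penalty_2016(income: float) -> float:
    # Simplified 2016 individual-mandate penalty for one adult: the greater
    # of $695 or 2.5% of income (ignores the filing threshold and caps).
    return max(695.0, 0.025 * income)

annual_premium = 4_800  # assumed exchange premium, illustrative only
for income in (30_000, 60_000, 120_000):
    print(f"income ${income:,}: penalty ${aca_penalty_2016(income):,.0f} "
          f"vs. premium ${annual_premium:,}")
```

Even at $120,000 of income, the simplified penalty ($3,000) falls well short of the assumed premium, which is the sense in which the penalty may not have teeth.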

The Healthcare Economist will continue to keep an eye on Obamacare markets as enrollment numbers come in over the fall and Q1 2017.

The Quest to Improve Health Care Quality

Written By: Jason Shafrin - Oct 19, 2016

By 2018, CMS aims to tie 90% of reimbursements to value-based care. Value-based reimbursement is the latest rage. According to a paper by Levine and co-authors in JAMA Internal Medicine, however, progress on quality of care has been modest at best.

Despite more than a decade of efforts to improve the quality of health care in the United States, the quality of outpatient care delivered to adults has not consistently improved

Why is this the case? A commentary by McGlynn, Adams, and Kerr gives one interesting perspective:

Much of the work in quality improvement has focused on approaches that are driven by payers and policy makers. These have included measurement and public reporting, payment incentives, investments in electronic medical records, and developing virtual systems of care in select areas. None of these approaches by itself is likely to fundamentally alter the level of quality delivered throughout the nation. To do so requires significant work by health professionals on the front lines in collaboration with their patients. And those approaches require time, resources, and energy that are beyond what is available to many practices that are struggling to keep up with a rapidly changing world.

Payers are in a difficult spot. They want to tie reimbursement to value (i.e., quality and cost). Although payers can measure cost with considerable accuracy, physicians are in a much better position to evaluate the quality of care patients receive. Thus, payers can try to tie reimbursement to value but do so imperfectly, or they can abandon the effort and fall back on an unsatisfactory volume-based or capitation-based system…or perhaps there is a happy medium where payers collaborate with physicians to improve quality of care and value to the healthcare system.

HT: Incidental Economist.

The problem with managed care is…

Written By: Jason Shafrin - Oct 18, 2016

Managed care, as the name suggests, aims to manage health care. The goal is to identify high-quality, low-cost treatments in order to ensure that patients get the best care while keeping premiums low. While good in theory, critics often contend that some of the stricter managed care policies reduce patient access to high-quality medicines. Health care providers complain that managed care may save insurers money but imposes a significant paperwork burden on them.

Consider an excerpt from the excellent book Far from the Tree: Parents, Children, and the Search for Identity, in which author Andrew Solomon discusses how managed care affects the treatment of patients with schizophrenia.

When I asked Jeanne Frazier whether it had been emotionally draining to work with schizophrenic patients, she said, “The thing that makes me emotionally drained is managed care.  When I have to fill out yet one more form just to increase the dose of an antipsychotic that’s already approved, it really impacts the quality of service I can provide.”


The problem with p-values

Written By: Jason Shafrin - Oct 17, 2016

An interesting article in Aeon explains why p-values may not be the best way to determine the probability that we are observing a real effect in a study.

Tests of statistical significance proceed by calculating the probability of making our observations (or the more extreme ones) if there were no real effect. This isn’t an assertion that there is no real effect, but rather a calculation of what would be expected if there were no real effect. The postulate that there is no real effect is called the null hypothesis, and the probability is called the p-value. Clearly the smaller the p-value, the less plausible the null hypothesis, so the more likely it is that there is, in fact, a real effect. All you have to do is to decide how small the p-value must be before you declare that you’ve made a discovery. But that turns out to be very difficult.

The problem is that the p-value gives the right answer to the wrong question. What we really want to know is not the probability of the observations given a hypothesis about the existence of a real effect, but rather the probability that there is a real effect – that the hypothesis is true – given the observations. And that is a problem of induction.

The article rightly notes that the p<0.05 threshold is very arbitrary. Is a study with a p-value of 0.047 much better than one with a p-value of 0.053? Of course not. This point, however, is well known.

Notice, though, that it’s possible to calculate the disastrous false-positive rate for screening tests only because we have estimates for the prevalence of the condition in the whole population being tested. This is the prior probability that we need to use Bayes’s theorem. If we return to the problem of tests of significance, it’s not so easy. The analogue of the prevalence of disease in the population becomes, in the case of significance tests, the probability that there is a real difference between the pills before the experiment is done – the prior probability that there’s a real effect. And it’s usually impossible to make a good guess at the value of this figure.

An example should make the idea more concrete. Imagine testing 1,000 different drugs, one at a time, to sort out which works and which doesn’t. You’d be lucky if 10 per cent of them were effective, so let’s proceed by assuming a prevalence or prior probability of 10 per cent. Say we observe a ‘just significant’ result, for example, a P = 0.047 in a single test, and declare that this is evidence that we have made a discovery. That claim will be wrong, not in 5 per cent of cases, as is commonly believed, but in 76 per cent of cases. That is disastrously high. Just as in screening tests, the reason for this large number of mistakes is that the number of false positives in the tests where there is no real effect outweighs the number of true positives that arise from the cases in which there is a real effect.
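The thought experiment is easy to check by simulation. Below is a minimal Monte Carlo sketch in Python; the per-arm sample size and effect size are my assumptions (chosen to give roughly 80 per cent power), so the exact share will differ somewhat from the article’s 76 per cent, but it shows how a 10 per cent prior pushes the error rate among ‘just significant’ results far above 5 per cent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_tests = 100_000  # simulated drug trials
prior = 0.10       # assumed share of drugs with a real effect
n = 16             # patients per arm (assumption)
effect = 1.0       # true effect, in SD units, for effective drugs (assumption)

# Draw treated and control outcomes; effective drugs shift the treated mean.
real = rng.random(n_tests) < prior
treated = rng.normal(np.where(real, effect, 0.0)[:, None], 1.0, (n_tests, n))
control = rng.normal(0.0, 1.0, (n_tests, n))
p = stats.ttest_ind(treated, control, axis=1).pvalue

# Among "just significant" results (p near 0.047), count how many came
# from drugs with no real effect.
just_sig = (p > 0.045) & (p < 0.05)
print(f"false share of just-significant results: {np.mean(~real[just_sig]):.0%}")
```

Setting `prior = 0.5` in the same sketch illustrates the 50:50 case discussed below.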

If we don’t know the prior probability for most interesting research questions, however, what is the solution?

So, although we can calculate the p-value, we can’t calculate the number of false positives. But what we can do is give a minimum value for the false positive rate. To do this, we need only assume that it’s not legitimate to say, before the observations are made, that the odds that an effect is real are any higher than 50:50. To do so would be to assume you’re more likely than not to be right before the experiment even begins.

If we repeat the drug calculations using a prevalence of 50 per cent rather than 10 per cent, we get a false positive rate of 26 per cent, still much bigger than 5 per cent. Any lower prevalence will result in an even higher false positive rate.

Getting this type of statistical analysis into mainstream research, however, will certainly be a challenge.

High quality comparative effectiveness research

Written By: Jason Shafrin - Oct 16, 2016

What are the best practices for conducting comparative effectiveness research in the real world? One set of proposed best-practice guidelines is the Good Research for Comparative Effectiveness (GRACE) guidelines. However, most studies do not follow these guidelines. A paper by Dreyer, Bryant, and Velentgas (2016) assembled 28 observational comparative effectiveness articles published from 2001 to 2010 that compared the effectiveness and/or safety of drugs, medical devices, and medical procedures. They found:

The best predictors of quality included the following: use of concurrent comparators, limiting the study to new initiators of the study drug, equivalent measurement of outcomes in study groups, collecting data on most if not all known confounders or effect modifiers, accounting for immortal time bias in the analysis, and use of sensitivity analyses to test how much effect estimates depended on various assumptions.

What are the GRACE guidelines?  I list these below.

DATA

  • D1. Were treatment and/or important details of treatment exposure adequately recorded for the study purpose in the data source(s)?
  • D2. Were the primary outcomes adequately recorded for the study purpose (e.g., available in sufficient detail through data sources)?
  • D3. Was the primary clinical outcome(s) measured objectively rather than subject to clinical judgment (e.g., opinion about whether the patient’s condition has improved)?
  • D4. Were primary outcomes validated, adjudicated, or otherwise known to be valid in a similar population?
  • D5. Was the primary outcome(s) measured or identified in an equivalent manner between the treatment/intervention group and the comparison group?
  • D6. Were important covariates that may be known confounders or effect modifiers available and recorded? Important covariates depend on the treatment and/or outcome of interest (e.g., body mass index should be available and recorded for studies of diabetes; race should be available and recorded for studies of hypertension and glaucoma).

METHODS

  • M1. Was the study (or analysis) population restricted to new initiators of treatment or those starting a new course of treatment? Efforts to include only new initiators may include restricting the cohort to those who had a washout period (specified period of medication nonuse) before the beginning of study follow-up (a minimal sketch of this restriction appears after this list).
  • M2. If 1 or more comparison groups were used, were they concurrent comparators? If not, did the authors justify the use of historical comparison groups?
  • M3. Were important confounding and effect-modifying variables taken into account in the design and/or analysis? Appropriate methods to take these variables into account may include restriction, stratification, interaction terms, multivariate analysis, propensity score matching, instrumental variables, or other approaches.
  • M4. Is the classification of exposed and unexposed person-time free of “immortal time bias”? (“Immortal time” in epidemiology refers to a period of cohort follow-up time during which death, or an outcome that determines end of follow-up, cannot occur.)
  • M5. Were any meaningful analyses conducted to test key assumptions on which primary results are based (e.g., were some analyses reported to evaluate the potential for a biased assessment of exposure or outcome, such as analyses where the impact of varying exposure and/or outcome definitions was tested to examine the impact on results)?
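As promised under M1, here is a minimal pandas sketch of the new-initiator restriction, assuming a claims table with one row per dispensing. The column names, dates, and the 180-day washout are all hypothetical.

```python
import pandas as pd

# Hypothetical dispensing claims: one row per fill.
claims = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "fill_date": pd.to_datetime(
        ["2015-01-10", "2015-06-01", "2015-03-15", "2014-12-20", "2015-04-02"]),
})

data_start = pd.Timestamp("2014-07-01")  # assumed start of observable data
washout = pd.Timedelta(days=180)         # assumed period of medication nonuse

# A patient counts as a new initiator only if their first observed fill
# occurs after a full washout window of observable, fill-free time,
# screening out prevalent users whose earlier fills we cannot see.
first_fill = claims.groupby("patient_id")["fill_date"].min()
new_initiators = first_fill[first_fill >= data_start + washout].index
print(new_initiators.tolist())  # -> [1, 2]; patient 3 is excluded
```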