Deriving Logit models
These models are typically of the form:

y^{*}_{i} = β X_{i} + ε_{i}
where β is the coefficient vector and X is the matrix of explanatory variables. The dependent variable y^{*}_{i} is the unobserved continuous latent variable. The observed binary variable is typically assumed to equal 1 when y^{*}_{i} > 0 and 0 otherwise. To measure the probability that an event occurs (i.e., the binary variable equals 1), we can calculate that as:

Pr(y_{i} = 1| X_{i}) = Pr(y^{*}_{i} > 0) = Pr(ε_{i} > −β X_{i})
However, as the probability distribution of ε_{i} is typically not known, we scale this figure by the standard deviation of the residual ε. Thus, we get

Pr(y_{i} = 1| X_{i}) = Pr(ε_{i}/σ > −(β/σ) X_{i})
In the probit model, we assume that (ε_{i}/σ) ~ N(0,1), so that Pr(y_{i} = 1| normal, X_{i}) = Φ[(β/σ) X_{i}].
In the logit model, we assume a logistic distribution where (ε_{i}/σ)~logistic(0, π/√3), so that Pr(y_{i} = 1| logistic, X_{i}) = 1/[1+exp{-(β /σ) X_{i}}].
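The two link functions can be compared numerically. A minimal sketch (not from the original post) using only the standard library, where z stands in for the scaled index (β/σ)X_{i}:

```python
import math

def probit_prob(z):
    """Pr(y=1) under the probit model: the standard normal CDF, via erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def logit_prob(z):
    """Pr(y=1) under the logit model: the logistic CDF."""
    return 1.0 / (1.0 + math.exp(-z))

# Both CDFs give probability 0.5 at z = 0 and approach 0/1 in the tails,
# but the standard logistic has variance pi^2/3 rather than 1, so the
# curves differ away from zero.
for z in (-2.0, 0.0, 2.0):
    print(z, round(probit_prob(z), 3), round(logit_prob(z), 3))
```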
How do you calculate odds ratios?
The odds of an event are simply the probability that it occurs divided by the probability that it does not. We already derived the probability that the event occurs in the logit case: Pr(y_{i} = 1| logistic, X_{i}) = 1/[1+exp{−(β/σ) X_{i}}]. The ratio of the probability the event occurs to the probability it does not occur is {1/[1+exp{−(β/σ) X_{i}}]} / {exp{−(β/σ) X_{i}}/[1+exp{−(β/σ) X_{i}}]}, which simplifies to exp{(β/σ) X_{i}}.
For binary independent variables, the log odds ratio is simply β/σ for the specific coefficient of interest, since the odds ratio comparing x = 1 to x = 0 is exp{β/σ}.
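This simplification is easy to verify numerically. A hypothetical sketch with an invented scaled coefficient (all other regressors set to zero):

```python
import math

def odds(p):
    """Odds of an event: probability it occurs over probability it does not."""
    return p / (1.0 - p)

beta_over_sigma = 0.7  # hypothetical scaled coefficient on a binary regressor

p1 = 1.0 / (1.0 + math.exp(-beta_over_sigma))  # Pr(y=1 | x=1)
p0 = 1.0 / (1.0 + math.exp(0.0))               # Pr(y=1 | x=0) = 0.5

# The odds ratio comparing x=1 to x=0 equals exp(beta/sigma),
# so the log odds ratio is just beta/sigma.
odds_ratio = odds(p1) / odds(p0)
print(odds_ratio, math.exp(beta_over_sigma))
```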
What is the problem with odds ratios?
One issue is that the specification used (e.g., logit vs. probit) can affect your odds ratio estimates. Specifically,
…the logit and probit models postulate error distributions with different values of σ (the standard normal distribution has a variance of 1, the standard logistic distribution has a variance of π^{2}/3). This explains why the estimated logit and probit coefficients are different. The normalizations are different. A rule of thumb is that logit coefficients are larger by a factor of about 1.6.
More importantly, the number of explanatory variables included in the model will affect the odds ratio. If you include a large number of explanatory variables in the model, it will account for variation in the dependent variable, and thus it will reduce the variance of the residual, ε. Thus, adding more explanatory variables will increase the magnitude of the odds ratio, whereas removing explanatory variables will decrease the size of the odds ratios.
The authors claim that there are a few implications of this finding.
The authors recommend reporting the marginal or incremental effect (a.k.a. partial effects), as is more commonly done in economics, as this estimate does not directly depend on σ. These marginal effects are estimated for a specific patient population, often patients with average characteristics. However, one can show how marginal effects vary in a population by plotting the marginal effect conditional on a given patient characteristic on the y-axis and the domain of that characteristic on the x-axis.
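For a continuous covariate in a logit model, the marginal effect works out to β_k·p·(1−p), so it depends on where in the covariate space it is evaluated. A minimal sketch with invented coefficients (the intercept and "age" slope below are illustrative, not from the paper):

```python
import math

def logit_p(xb):
    """Predicted probability from a logit index xb."""
    return 1.0 / (1.0 + math.exp(-xb))

def marginal_effect(beta_k, xb):
    """Marginal effect of covariate k in a logit model: dPr/dx_k = beta_k * p * (1 - p)."""
    p = logit_p(xb)
    return beta_k * p * (1.0 - p)

# Hypothetical coefficients, for illustration only.
beta0, beta_age = -3.0, 0.05

# Evaluating the marginal effect across the domain of 'age' is the tabular
# analogue of the plot described above (effect on y-axis, age on x-axis).
for age in (30, 50, 70):
    xb = beta0 + beta_age * age
    print(age, round(marginal_effect(beta_age, xb), 4))
```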
Source:
This is the question that Costa-Font et al. (2018) attempt to answer. They find:
We use quasi-experimental evidence on the expansion of the public subsidization of long-term care to examine the causal effect of a change in caregiving affordability on the delivery of hospital care. More specifically, we examine a reform that both introduced a new caregiving allowance and expanded the availability of publicly funded home care services, on both hospital admissions (both on the internal and external margin) and length of stay. We find robust evidence of a reduction in both hospital admissions and utilization among both those receiving a caregiving allowance and, albeit less intensely, among beneficiaries of publicly funded home care, which amounts to 11% of total healthcare costs. These effects were stronger when regions had an operative regional health and social care coordination plan in place. Consistently, a subsequent reduction in the subsidy, five years after its implementation, is found to significantly attenuate such effects.
An interesting study. Future work should consider whether the additional funds are worth the cost. Although avoiding hospitalizations is clearly a positive, these interventions have non-trivial costs associated with them and may or may not be cost-saving in the long run even if they do improve patient outcomes.
Source:
If you’re interested in what are the most-read posts of all-time on Healthcare Economist, without further ado, here is my very own Top 10 list.
Since the start of 2017, my top 5 articles are:
As atrial fibrillation (AF) is often asymptomatic, it may remain undiagnosed until or even after development of complications, such as stroke. Consequently the observed prevalence of AF may underestimate total disease burden.
To estimate the prevalence of undiagnosed AF in the United States, we performed a retrospective cohort modeling study in working age (18–64) and elderly (≥65) people using commercial and Medicare administrative claims databases. We identified patients in years 2004–2010 with incident AF following an ischemic stroke. Using a back-calculation methodology, we estimated the prevalence of undiagnosed AF as the ratio of the number of post-stroke AF patients and the CHADS_{2}-specific stroke probability for each patient, adjusting for age and gender composition based on United States census data.
The estimated prevalence of AF (diagnosed and undiagnosed) was 3,873,900 (95%CI: 3,675,200–4,702,600) elderly and 1,457,100 (95%CI: 1,218,500–1,695,800) working age adults, representing 10.0% and 0.92% of the respective populations. Of these, 698,900 were undiagnosed: 535,400 (95%CI: 331,900–804,400) elderly and 163,500 (95%CI: 17,700–400,000) working age adults, representing 1.3% and 0.09% of the respective populations. Among all undiagnosed cases, 77% had a CHADS_{2} score ≥1, and 56% had CHADS_{2} score ≥2.
Using a back-calculation approach, we estimate that the total AF prevalence in 2009 was 5.3 million of which 0.7 million (13.1% of AF cases) were undiagnosed. Over half of the modeled population with undiagnosed AF was at moderate to high risk of stroke.
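The back-calculation step can be sketched in a few lines: if AF patients with a given CHADS_{2} score suffer a stroke with probability p_s, each observed post-stroke incident AF case implies roughly 1/p_s underlying AF cases, diagnosed or not. The counts and stroke probabilities below are invented for illustration, not taken from the study:

```python
# Hypothetical annual stroke probabilities by CHADS2 score and hypothetical
# counts of incident AF diagnosed after an ischemic stroke.
stroke_prob_by_chads2 = {0: 0.019, 1: 0.028, 2: 0.040}
post_stroke_af_cases = {0: 190, 1: 280, 2: 400}

# Back-calculation: scale each observed post-stroke count up by the inverse
# of its score-specific stroke probability, then sum across scores.
estimated_prevalence = sum(
    cases / stroke_prob_by_chads2[score]
    for score, cases in post_stroke_af_cases.items()
)
print(round(estimated_prevalence))  # total AF cases implied by the observed strokes
```

(The published estimate additionally adjusts for age and gender composition using census data, which this sketch omits.)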
Source:
Overall Inflation
Medical Inflation
The authors also briefly touch on disease-based price indices which–rather than measure the cost of specific goods or services–measure changes in the overall cost of treating specific diseases.
So which index should you use? The authors recommend the following:
- To adjust health expenditures in terms of purchasing power, use the GDP implicit price deflator or overall PCE measure. The PCE measure is suitable for personal consumption. The GDP deflator is more appropriate for the societal perspective.
- To adjust overall consumer out-of-pocket spending in terms of consumer purchasing power or out-of-pocket burden relative to income, the CPI-U can be used.
- To adjust average expenditures for care of a specific disease for price changes from 1 year to a different year, either the PHC deflator or the PCE health index can be used. Because of exclusions of some payers in its weights, the MCPI may not be appropriate to adjust all-payer expenditures or payments by employers, Medicaid, and Medicare Part A for medical inflation.
- To convert average consumer out-of-pocket health care expenditures from 1 year to a different year, the MCPI can be used.
- To adjust estimates of costs of inpatient services from different years, the PPI for inpatient services appears currently to be the best option.
And now you know how best to adjust for inflation depending on your specific research question.
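Mechanically, every one of these adjustments uses the same ratio of index values; only the choice of series differs. A sketch with made-up index levels (in practice you would pull the GDP deflator, PCE, CPI-U, MCPI, or PPI series appropriate to your question):

```python
# Hypothetical price index levels by year (illustrative values only).
index = {2015: 100.0, 2020: 112.5}

def adjust(amount, from_year, to_year, index):
    """Re-express a dollar amount from one year in another year's dollars."""
    return amount * index[to_year] / index[from_year]

print(adjust(1000.0, 2015, 2020, index))  # $1,000 in 2015 dollars, in 2020 dollars
```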
CVS recently announced its new Transform Rheumatoid Arthritis Care initiative, which aims to reshape coverage of rheumatoid arthritis care through “value-based management strategies including outcomes based contracts and a new indication-based formulary for autoimmune conditions.” The CVS announcement comes amid an ongoing movement to tie formulary design to measures of treatment value, including initiatives such as value-based insurance design (VBID). The Center for Medicare and Medicaid Innovation even partnered with 13 Medicare Advantage plans to implement its VBID Model.
As payers take important steps toward value-based care, the broader debate around how value should be assessed points to the importance of different stakeholders’ perspectives on the tools, methods, and sources of data. A number of organizations have created formal value assessment frameworks, each with a different perspective on what value means. These frameworks highlight the need for value-based coverage decisions to account for underlying differences in the patients affected. To truly link benefit decisions such as formulary design to value, formularies must incorporate the factors that determine value from patients’ perspectives, while also accounting for diversity in covered populations.
The rest of the post examines the feasibility of making formularies patient-centered. Please do read the whole thing. The piece would be interesting for those focused on patient-centered care. For economists, the article even uses the results from the Arrow Impossibility Theorem to help inform the challenge of formulary design.
There were 28 cardiac and 9 orthopedic inpatient surgical services and procedures included in the bundled payment demonstration. These elective procedures were selected because volume has historically been high; there was sufficient marketplace competition to ensure interested demonstration applicants; the services were easy to specify, and quality metrics were available for them.
One can think of ACE as a precursor to the Bundled Payments for Care Improvement (BPCI) initiative. The Chen study used a difference-in-difference approach among ACE and non-ACE providers and found that:
…the ACE Demonstration was not significantly associated with 30‐day Medicare payments (for orthopedic surgery: −$358 with 95 percent CI: −$894, +$178; for cardiac surgery: +$514 with 95 percent CI: −$1,517, +$2,545), or 30‐day mortality (for orthopedic surgery: −0.10 with 95 percent CI: −0.50, 0.31; for cardiac surgery: −0.27 with 95 percent CI: −1.25, 0.72). Program participation was associated with a decrease in total 30‐day post‐acute care payments (for cardiac surgery: −$718; 95 percent CI: −$1,431, −$6; and for orthopedic surgery: −$591; 95 percent CI: −$1,161, −$22).
In short, there was no change in overall episode cost or quality of care; however, post-acute care payments did decrease. Although this was a small demonstration covering a limited number of procedures, one would hypothesize that episode-based payment would incentivize less use of post-acute care (including hospitalizations), but with some potential that quality of care could worsen and that quantity of care provided within the episode could decrease.
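The difference-in-difference estimate behind these findings boils down to comparing pre/post changes in ACE providers against the same changes in non-ACE providers. A stylized sketch with invented group means (not figures from the Chen study):

```python
# Mean 30-day payments for ACE vs. non-ACE hospitals, before and after the
# demonstration. All four numbers are hypothetical.
ace_pre, ace_post = 21000.0, 20500.0
control_pre, control_post = 20800.0, 20650.0

# The difference-in-difference estimate nets out both the fixed gap between
# the two groups and the common time trend, leaving the change attributable
# to ACE participation.
did = (ace_post - ace_pre) - (control_post - control_pre)
print(did)  # -350.0
```

(The study's regression version adds covariates and provider fixed effects, but the identifying comparison is this double difference.)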
Source: