When selection bias is an issue, many researchers use propensity score matching to ensure that observable differences in patient characteristics are balanced between individuals who receive a given treatment and those who do not. If unobservable characteristics are correlated with observable characteristics, propensity score matching generally works well. Cases where propensity score matching does not work well include […]
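As an illustration of the idea (not from the original post), here is a minimal propensity score matching sketch on simulated data; all names and the data-generating process are hypothetical:

```python
# Minimal propensity score matching sketch on simulated (hypothetical) data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
age = rng.normal(50, 10, n)                      # a single observable characteristic
p_treat = 1 / (1 + np.exp(-(age - 50) / 10))     # older patients more likely treated
treated = (rng.random(n) < p_treat).astype(int)
outcome = 2.0 * treated + 0.1 * age + rng.normal(0, 1, n)   # true effect = 2

# Step 1: estimate each patient's propensity score from observables.
ps = LogisticRegression().fit(age.reshape(-1, 1), treated).predict_proba(age.reshape(-1, 1))[:, 1]

# Step 2: match each treated patient to the nearest-score control (1:1, with replacement).
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# Step 3: average outcome difference across matched pairs (the ATT); near 2 here.
att = (outcome[t_idx] - outcome[matches]).mean()
```

Because treatment depends only on the observable (age), matching on the score recovers the true effect; if an unobservable drove both treatment and outcome, it would not.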

## What is a Pseudo R-squared?

When running an ordinary least squares (OLS) regression, one common metric for assessing model fit is the R-squared (R²). The R² metric is calculated as follows:

R² = 1 − [Σᵢ(yᵢ − ŷᵢ)²] / [Σᵢ(yᵢ − ȳ)²]

The dependent variable is y, the predicted value from the OLS regression is ŷ, and the average value of y across all observations […]
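The formula can be checked numerically; a quick sketch on simulated data:

```python
# Numerical check of the R-squared formula on a simulated simple OLS fit.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(size=100)

b1, b0 = np.polyfit(x, y, 1)          # OLS slope and intercept
y_hat = b0 + b1 * x                   # predicted values (y-hat)

ss_res = np.sum((y - y_hat) ** 2)     # sum of (y_i - yhat_i)^2
ss_tot = np.sum((y - y.mean()) ** 2)  # sum of (y_i - ybar)^2
r2 = 1 - ss_res / ss_tot
```

For a simple OLS regression with one regressor, this quantity equals the squared correlation between x and y, which is an easy sanity check.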

## Optimal Matching Techniques

In randomized controlled trials, participants are randomized to different groups, where each group receives a unique intervention (or control). This process ensures that any differences in the outcomes of interest are due entirely to the interventions under investigation. While RCTs are useful, they are expensive to run, are highly controlled, and suffer from their own […]

## Berkson’s paradox

Berkson’s paradox occurs when two independent events become negatively dependent once you consider only outcomes where at least one of the events occurs. More technically, this paradox arises when there is ascertainment bias in a study design. Let me provide an example. Consider the case where patients can have diabetes or HIV. Assume that patients have a positive probability of […]
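The paradox is easy to see in simulation. A sketch of the diabetes/HIV setup (with made-up 10% prevalences for illustration):

```python
# Two independent conditions become negatively correlated once we only
# observe patients who have at least one of them (ascertainment bias).
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
diabetes = rng.random(n) < 0.1      # independent of HIV by construction
hiv = rng.random(n) < 0.1

# In the full population the two conditions are (nearly) uncorrelated.
corr_all = np.corrcoef(diabetes.astype(float), hiv.astype(float))[0, 1]

# Ascertainment: only patients with at least one condition enter the sample.
sel = diabetes | hiv
corr_sel = np.corrcoef(diabetes[sel].astype(float), hiv[sel].astype(float))[0, 1]
# corr_sel comes out strongly negative despite true independence.
```

Intuitively, within the selected sample, not having one condition almost guarantees having the other, which manufactures the negative association.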

## AA and selection bias

This video discusses whether Alcoholics Anonymous actually improves the outcomes of alcoholics who attend its meetings. More broadly, the AA treatment effect discussion serves as an example for expounding on some fundamental statistical issues such as selection bias, randomization, intention to treat, marginal effects, instrumental variables, and others.

## LOWESS Curves

Oftentimes when doing data analysis, you want to find the relationship between two variables. The first step is typically to plot a scatterplot. To better understand this relationship, however, it is useful to fit a line to the scatterplot. Most commonly, this is done with a simple linear regression (i.e., ordinary least squares (OLS) […]
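A bare-bones LOWESS sketch shows the alternative to a single global line: fit a small weighted regression around each point. This is a minimal illustration on simulated data (tricube weights, no robustness iterations), not a production implementation:

```python
# Minimal LOWESS: a local linear fit with tricube weights at each x[i].
import numpy as np

def lowess(x, y, frac=0.2):
    n = len(x)
    k = max(3, int(frac * n))                 # neighbors used in each local fit
    y_hat = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]               # the k nearest neighbors of x[i]
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube kernel weights
        A = np.column_stack([np.ones(k), x[idx]])     # local line: a + b*x
        W = np.diag(w)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
        y_hat[i] = beta[0] + beta[1] * x[i]
    return y_hat

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(0, 0.2, 200)       # nonlinear truth plus noise
y_hat = lowess(x, y)                          # smooth curve tracking sin(x)
```

Plotting `y_hat` against `x` over the scatterplot would reveal the sinusoidal shape that a single OLS line would miss entirely.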

## What are regression trees?

Regression trees are a way to partition your explanatory variables to (potentially) better predict an outcome of interest. Regression trees start with an outcome (let’s call it y) and a vector of explanatory variables (X). Simple Example For instance, let y be health care spending and X=(X1,X2), where X1 is the patient’s age and X2 is the patient’s […]
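A single-split "stump" captures the core partitioning idea. In the spirit of the spending/age example above, a sketch on simulated data (the jump at age 65 is hypothetical):

```python
# A one-split regression "stump": find the threshold on one explanatory
# variable that minimizes total within-group squared error.
import numpy as np

def best_split(x, y):
    best_s, best_sse = None, np.inf
    for s in np.unique(x)[:-1]:
        left, right = y[x <= s], y[x > s]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_s, best_sse = s, sse
    return best_s

rng = np.random.default_rng(4)
age = rng.uniform(20, 90, 300)
spending = np.where(age > 65, 10_000, 3_000) + rng.normal(0, 500, 300)
split = best_split(age, spending)    # recovers a threshold near age 65
```

A full regression tree simply applies this search recursively within each partition, choosing among all explanatory variables at each node.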

## Synthetic Control Method

A common method for measuring the effect of policy interventions is the difference-in-differences (DiD) approach. In essence, one examines the change in outcomes among observations subject to the policy intervention and compares it against the change among observations that were not eligible for the policy intervention. A key assumption for this approach to be valid is […]
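The DiD estimator itself is just a double difference of group means. A toy version on simulated data (all numbers hypothetical, constructed so that parallel trends holds):

```python
# Two-group, two-period difference-in-differences on simulated data.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
treat_pre  = rng.normal(10.0, 1.0, n)              # treated group, before policy
treat_post = rng.normal(10.0 + 1.0 + 3.0, 1.0, n)  # common trend +1, policy effect +3
ctrl_pre   = rng.normal(8.0, 1.0, n)               # control group, before
ctrl_post  = rng.normal(8.0 + 1.0, 1.0, n)         # common trend only

# Change among the treated minus change among controls isolates the policy effect.
did = (treat_post.mean() - treat_pre.mean()) - (ctrl_post.mean() - ctrl_pre.mean())
```

Note the double differencing nets out both the fixed level gap between groups (10 vs. 8) and the shared time trend (+1), leaving only the +3 policy effect.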

## Confirmation Bias

HT: Incidental Economist.

## What are cure fraction models?

Many people are familiar with survival models. Survival models measure the probability of survival to a given time period. The “problem” addressed by these models is that some people are “censored”; in other words, they do not die within the sample time period. Although longer survival is good in practice, for statisticians it is problematic […]
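For context on how standard survival models handle censoring, here is a minimal Kaplan-Meier sketch on simulated data (the classic nonparametric survival estimator, not itself a cure fraction model; no ties handling):

```python
# Minimal Kaplan-Meier estimator: censored subjects leave the risk set
# without forcing the survival curve down.
import numpy as np

def kaplan_meier(time, event):
    """Survival curve; event=1 is an observed death, event=0 a censored exit."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    s = 1.0
    surv = np.empty(n)
    for i in range(n):
        if event[i]:                 # survival drops only at observed deaths
            s *= 1 - 1 / (n - i)     # n - i subjects still at risk
        surv[i] = s
    return time, surv

rng = np.random.default_rng(6)
t_true = rng.exponential(1.0, 2000)          # true survival times
censor = rng.exponential(2.0, 2000)          # independent censoring times
time = np.minimum(t_true, censor)            # observed follow-up
event = (t_true <= censor).astype(int)       # 0 = censored before death
times, surv = kaplan_meier(time, event)
```

With exponential(1) survival, the estimated curve should cross 0.5 near t = ln 2, which is how one can sanity-check the estimator; cure fraction models extend this setup by letting the curve plateau above zero for the "cured" subpopulation.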
