Significance testing
Are these two estimates different?
In general, data users should be careful when drawing conclusions about small differences between two ACS estimates, because those differences may not be statistically significant. This is where you may need some statistical help. Some context from the Census Bureau:
Comparing American Community Survey (ACS) estimates involves more than determining which statistic is higher or lower. Users should also conduct statistical testing to make sure differences are statistically significant and are unlikely to have occurred by chance. This testing takes into account the margin of error (MOE) associated with survey estimates, which are based on responses from only a sample of the full population.
We recommend checking the significance of any important finding in your reporting. If you are highlighting a particular trend or finding, be sure to compare the confidence intervals on the estimates, as described below.
Comparing confidence intervals
A general rule of thumb is to build a confidence interval around each estimate using its margin of error and check whether the intervals overlap. Here’s an example:
The S0801 table contains estimates of commuting characteristics by sex; the data is shown in the bar chart below. In San Francisco, an estimated 11.5% of workers ages 16 and older who identified as male walked to work, while about 12% of workers who identified as female walked to work. So, can we say that one of these groups is more likely to walk to work? Is the 12% estimate really greater than 11.5%? This is when we compare the margins of error (ACS margins of error correspond to a 90% confidence interval around each estimate). When we compare these two estimates, the overlap in their confidence intervals shows we can’t definitively say one estimate is larger or smaller than the other.
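Here is a minimal sketch of that overlap check in Python. The percentages come from the example above, but the MOE values are hypothetical placeholders; substitute the published margins of error from table S0801 for your geography.

```python
# Sketch: checking whether two ACS estimates' 90% confidence intervals overlap.
# The percentage estimates come from the example above; the MOE values are
# hypothetical placeholders -- use the published MOEs from table S0801.

def confidence_interval(estimate, moe):
    """Return the (lower, upper) bounds of the 90% confidence interval.

    ACS margins of error are published at the 90% confidence level,
    so the interval is simply estimate +/- MOE.
    """
    return estimate - moe, estimate + moe

def intervals_overlap(est_a, moe_a, est_b, moe_b):
    """True if the two 90% confidence intervals overlap."""
    low_a, high_a = confidence_interval(est_a, moe_a)
    low_b, high_b = confidence_interval(est_b, moe_b)
    return low_a <= high_b and low_b <= high_a

# Example values from the text (the MOEs are illustrative, not from the table):
male_walk, male_moe = 11.5, 1.0      # percent walked to work, hypothetical MOE
female_walk, female_moe = 12.0, 1.1  # percent walked to work, hypothetical MOE

if intervals_overlap(male_walk, male_moe, female_walk, female_moe):
    print("Intervals overlap: we can't say one estimate is larger than the other.")
else:
    print("Intervals do not overlap: the difference may be meaningful.")
```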
There are more sophisticated statistical tests for comparing survey estimates. To that end, the Census Bureau created a statistical testing tool to help users run these tests.
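For reference, Census Bureau guidance describes a two-estimate significance test: convert each MOE to a standard error (SE = MOE / 1.645, since ACS MOEs are published at the 90% confidence level) and compute a Z statistic for the difference. The sketch below illustrates that calculation, again with hypothetical MOE values.

```python
# Sketch of the two-estimate significance test described in Census Bureau
# ACS guidance: SE = MOE / 1.645, then Z = (est_a - est_b) / sqrt(SE_a^2 + SE_b^2).
# If |Z| exceeds 1.645, the difference is significant at the 90% confidence level.

def is_significant(est_a, moe_a, est_b, moe_b, critical_value=1.645):
    """Return True if the two estimates differ at the 90% confidence level."""
    se_a = moe_a / 1.645
    se_b = moe_b / 1.645
    z = (est_a - est_b) / (se_a**2 + se_b**2) ** 0.5
    return abs(z) > critical_value

# Same illustrative values as above (MOEs are hypothetical):
print(is_significant(11.5, 1.0, 12.0, 1.1))  # False -> not statistically different
```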
For your analyses, we recommend that any key finding undergo some significance testing (for example, comparing the overlap in confidence intervals) to confirm the estimates are statistically different. We also recommend sharing the margins of error and coefficients of variation (CVs) in your raw data results to make this process more transparent for anyone looking to replicate your work.
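If you also report CVs, they can be derived from the published MOEs. A small sketch, using a hypothetical MOE:

```python
# Sketch: computing the coefficient of variation (CV) for an ACS estimate,
# a common way to express its relative reliability. The MOE here is
# hypothetical; use the published MOE for your estimate.

def coefficient_of_variation(estimate, moe):
    """CV as a percentage: standard error divided by the estimate."""
    standard_error = moe / 1.645  # ACS MOEs are at the 90% confidence level
    return 100 * standard_error / estimate

print(round(coefficient_of_variation(11.5, 1.0), 1))  # ~5.3 with this hypothetical MOE
```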