There’s a strong tendency in human nature to draw distinctions along dichotomous lines. Good and evil, black and white, ugly and pretty. We all know that these distinctions only really work in children’s fiction, and even then tend to fall flat, but we try anyway. In teaching, particularly when teaching a new subject, those dichotomies can be both useful and the downfall of a lesson.
In that vein, the instructor in my spatial econometrics workshop last week presented two significant data issues that a researcher might encounter in using spatial data: spatial heterogeneity and spatial dependence.
By way of definition: spatial heterogeneity is simply the idea that something about an area or piece of space is different from the spaces around it. My dichotomizing, learning mind went immediately to the idea of observables. Clearly, if we are trying to include spatial information–location–in a regression, we know that the area has certain characteristics. As long as we explicitly control for these in our regression (and believe they are accurately measured), it doesn’t present much of a problem.
However, this is not always the case, due to the level-of-analysis problem. In a general econometric specification, we control for the unit of spatial analysis that is relevant–county, Metropolitan Statistical Area (MSA), state, whatever it may be. By choosing the level and assigning a dummy variable, perhaps, we assume that all those characteristics are captured uniquely, but also that they are assigned independently to each spatial unit. Take, for instance, the distribution of the African-American population in the United States. Regression analysis that uses that variable as a covariate assumes that the number of African-Americans in Georgia is independent of the number of African-Americans in South Carolina, which makes little intuitive sense. Both were states with large plantation economies that employed Black slaves from Africa in the production of goods. It makes sense that these two states, spatially proximate, would also have similar factors leading to their demographic makeup. Thus, spatial heterogeneity: areas in the South have higher Black populations than areas in the North.
The counterpart to spatial heterogeneity is spatial dependence. As with spatial heterogeneity, we see patterns occur in certain variables, but rather than an outside, perhaps observable and easily measurable factor accounting for the clustering, there’s something inherent about the place itself that causes proximate areas to influence one another’s realization of some variable. Think of housing prices. Housing prices are higher in places with certain amenities (close to transportation, mountains, whatever), but housing prices are also higher in areas with higher housing prices. Perhaps homeowners see their neighbors selling their houses for more and thus put theirs on the market for more. Or buyers see houses in the area with higher values and thus are willing to spend more. These effects spill across county lines and other boundaries, too.
Both of these problems, however strict the line between the two, manifest as spatial autocorrelation. The variation we see between two spatially proximate observations of a variable is less than the variation between two independent observations, because the information comes from the same place. Some of this we can control for, some of it we can’t, and some of it we can try to control for with the tools I’ll discuss in coming days.
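To make this concrete, here’s a toy calculation of Moran’s I, the standard global measure of spatial autocorrelation. The six “county” values and the chain adjacency below are invented for illustration:

```python
def moran_i(x, w):
    """Global Moran's I for values x and a binary spatial weight matrix w."""
    n = len(x)
    mean = sum(x) / n
    dev = [xi - mean for xi in x]
    s0 = sum(sum(row) for row in w)          # total weight
    num = sum(w[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * num / den

# Six counties on a line; each county neighbors the ones beside it.
n = 6
w = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

clustered = [10, 9, 8, 2, 1, 0]   # high values cluster at one end
scattered = [10, 2, 8, 1, 9, 0]   # same values, spatially shuffled

print(moran_i(clustered, w))   # 0.66: positive spatial autocorrelation
print(moran_i(scattered, w))   # negative: neighbors tend to differ
```

Positive values mean neighbors resemble each other more than chance would predict; note that shuffling the same six numbers flips the sign, so the statistic is about arrangement, not the values themselves.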
Regardless, it’s important to remember that spatial heterogeneity and spatial dependence produce the same realization mathematically. Statistically, we cannot distinguish whether some unobservable variable caused everything to be higher, or whether each observation is exerting an effect on its neighbors (a butterfly flaps its wings…). So even after acknowledging these problems, we have not established causation.
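A quick simulation makes that identification problem visible. Both data-generating processes below are invented for illustration: one plants a shared regional unobservable (heterogeneity), the other lets each “county” respond to its neighbors (dependence), and both leave the same statistical footprint.

```python
import math
import random

random.seed(42)
N = 100  # "counties" arranged on a line

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def neighbor_corr(x):
    """Correlation between each interior value and its neighbors' average."""
    mid = x[1:-1]
    nbr = [(x[i - 1] + x[i + 1]) / 2 for i in range(1, len(x) - 1)]
    return pearson(mid, nbr)

# Story 1 (heterogeneity): counties in the same block of ten share an
# unobserved regional effect.
regional = [random.gauss(0, 2) for _ in range(10)]
hetero = [regional[i // 10] + random.gauss(0, 0.5) for i in range(N)]

# Story 2 (dependence): each county is pulled toward its neighbors,
# x = rho * (neighbor average) + noise, solved by fixed-point iteration.
rho = 0.8
eps = [random.gauss(0, 1) for _ in range(N)]
depend = eps[:]
for _ in range(200):
    avg = [(depend[max(i - 1, 0)] + depend[min(i + 1, N - 1)]) / 2
           for i in range(N)]
    depend = [rho * avg[i] + eps[i] for i in range(N)]

# Both stories yield clearly positive neighbor correlations; the
# footprint alone cannot tell them apart.
print(neighbor_corr(hetero))
print(neighbor_corr(depend))
```

Looking only at the realized data, both processes show neighbors moving together; nothing in that pattern reveals whether a shared unobservable or a genuine spillover produced it.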
A familiar refrain is thus minimally modified: spatial autocorrelation is not causation.
A note on correlation and causation: (see Marc Bellemare’s primer for a more detailed explanation)
Anyone who has ever taken a statistics course is familiar with the refrain that correlation is not causation. It’s repeated so often because it’s so often ignored when statistics are cited in news articles and personal anecdotes. My favorite example is that ice cream sales and murder rates are highly correlated. Only the biggest of Scrooges would believe that ice cream sales cause murder rates to increase. In the abridged words of Elle Woods, happy people don’t kill people. And in my words, ice cream makes people happy.
They do move together, though, which is essentially the definition of correlation. When ice cream sales go up, murder rates go up; when murder rates go down, ice cream sales go down. Not because one causes the other, but rather because of the seasonality of both variables. More homicides occur in the summertime, and more ice cream is sold in the summertime.
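A few lines of simulation show how a shared seasonal driver manufactures correlation. All of the numbers here are made up for illustration:

```python
import math
import random

random.seed(1)

# Five years of monthly data, both series driven by the same seasonal cycle.
season = [math.sin(2 * math.pi * (m % 12) / 12) for m in range(60)]

ice_cream = [50 + 30 * s + random.gauss(0, 5) for s in season]  # sales
murders = [20 + 10 * s + random.gauss(0, 2) for s in season]    # homicides

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Strongly positive, even though neither series appears in the other's
# data-generating process: the summer cycle drives both.
print(pearson(ice_cream, murders))
```

Here the correlation is an artifact of the common cause–summer–which is exactly the confounding at work in the ice cream and murder example.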