I was sitting down with a colleague this morning (he's a biochemist and I'm a neuroscientist), and the question came up of how hypotheses can be properly falsified in the social sciences, and especially in computational social science. This, to me, is an incredibly important question for the future of the social sciences, because if you accept Karl Popper's requirement that scientific questions, by definition, must contain falsifiable hypotheses, then social science can't just be observational and inductive.
Now, I've been familiar with Popperian social science since my training days in Ann Arbor (much of survey research, for example, is of this type). My problem is a deeper one. Given the complexity of human social relations, and our difficulty making measurements (of, say, the kind we make in chemistry), how sure are we that the result of an experiment on social human beings can say anything definitive about an underlying hypothesis, much less falsify it?
Here I explicitly ignore trivial examples, such as the hypothesis that all heads of state throughout history have been male (false).
Rather, I'm after the really non-trivial social science questions. Here is an example: the populations of human settlements (such as cities) follow a power-law distribution.
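To make this concrete, here is a minimal sketch (not from any particular study) of one way that hypothesis could be made falsifiable: fit the power-law exponent by maximum likelihood and measure the Kolmogorov-Smirnov distance between the data and the fitted distribution, rejecting the hypothesis if the distance exceeds a pre-registered threshold. The function names, the cutoff xmin, and the synthetic "city populations" below are placeholders; real census counts would go in their place.

```python
import numpy as np

def fit_power_law(populations, xmin):
    """MLE for the exponent of a continuous power law above xmin (Clauset et al. style)."""
    x = np.asarray(populations, dtype=float)
    x = x[x >= xmin]
    alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))
    return alpha, len(x)

def ks_distance(populations, xmin, alpha):
    """Kolmogorov-Smirnov distance between the empirical data and the fitted power law."""
    x = np.sort(np.asarray(populations, dtype=float))
    x = x[x >= xmin]
    empirical = np.arange(1, len(x) + 1) / len(x)   # empirical CDF
    model = 1.0 - (x / xmin) ** (1.0 - alpha)       # fitted power-law CDF
    return np.max(np.abs(empirical - model))

# Placeholder data: synthetic heavy-tailed "settlement populations".
xmin = 50_000
populations = np.random.pareto(1.2, size=1000) * xmin + xmin
alpha, n = fit_power_law(populations, xmin)
d = ks_distance(populations, xmin, alpha)
print(f"alpha = {alpha:.2f}, KS distance = {d:.3f} over {n} settlements")
# Rejecting the power-law claim when the KS distance exceeds a pre-registered
# threshold (e.g., via a bootstrap p-value) is one way to make it falsifiable
# in Popper's sense.
```

Even with a clean rejection rule like this, of course, the deeper question remains: does rejecting the fitted model tell us anything definitive about how cities actually grow?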
Even more importantly, as we build in silico models of social systems, we can conduct experiments on those models with Popperian rigor. But what can we really say about the relationship between a falsifiable hypothesis about a computational social science model and the corresponding hypothesis about the world of real human social affairs?
Recently, the new field of social neuroscience has opened up, led by John Cacioppo at the University of Chicago, among others. Here, at least, is the germ of an answer to my problem. Functional MRI measurements of human brains as they interact fulfill Popper's requirements, with the caveat that the fMRI BOLD signal is mismatched to both the spatial and temporal scales of the neural code (think neurons and action potentials, to a first approximation).
The field of experimental economics, as pioneered by Vernon Smith, similarly fulfills Popper's requirements.
Interestingly, the field of computational neuroscience requires a "validation" cycle with benchtop neuroscience to be taken seriously: model results must be compared against benchtop measurements. When the model and the benchtop results don't agree, we go with the benchtop data and change the model.
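Schematically (this is not anyone's actual workflow; the quantities and tolerance below are placeholders), the logic of that cycle is simply: simulate, compare against the bench, and when the two diverge, revise the model rather than the data.

```python
from dataclasses import dataclass

@dataclass
class ValidationResult:
    discrepancy: float
    passed: bool

def validate(model_prediction: float, bench_measurement: float, tolerance: float) -> ValidationResult:
    """Compare a simulated quantity against a benchtop measurement."""
    discrepancy = abs(model_prediction - bench_measurement)
    return ValidationResult(discrepancy, discrepancy <= tolerance)

# Placeholder numbers standing in for, e.g., a predicted vs. measured firing rate.
result = validate(model_prediction=42.0, bench_measurement=55.0, tolerance=5.0)
if not result.passed:
    print(f"Discrepancy {result.discrepancy:.1f}: the bench data stands; revise the model.")
```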
Should there be a similar requirement for computational social science models?