Headache. Image source: Mupso.
“[T]here must be a place where science stops and politics begins, and this border is an extremely complex and uncomfortable one. Science can’t tell us what to do… The choice of policy response itself is not a purely scientific question, however, because it necessarily has moral, geopolitical and economic components.”

I used to say the same, that politics unfortunately mixes up scientific questions with unscientific ones, and that informed decision making requires us to first distinguish these. But then I went down a winding road trying to understand where science ends and where decision making begins. This eventually led to my paper on the measurement of happiness. It also led me to the conviction that the “extremely complex and uncomfortable border” doesn’t exist. Cox and Ince come to the right conclusion, but for the wrong reasons.
What is and what isn’t in the realm of science, and what the role of science is in our political system, are questions I care about deeply. And so I could not avoid noticing that Sean Carroll and Lubos Motl recently discussed whether morals can be reduced to science. They come down, in rare agreement, on the side of “no”. It’s a variant of the “boundary” Cox and Ince touched on, so let us see what they had to say.
Sean and Lubos start by elaborating on what is and what isn’t a scientific statement. A scientific statement, they say, is one that could be false and whose truth value could at least in principle be empirically evaluated. The problem is then that the statement that morals can’t be reduced to science isn’t itself scientific. It isn’t, because a definition of “moral” is lacking. All answers to the question are then just opinion, so why bother with it? Lubos alludes to this by saying that whenever one could answer the question one way or the other, somebody might just change the definition of moral:
“Imagine that you find some quantity M encoded in the equations of M(orality)-theory in the future and you will claim that it measures morality… The problem is that even with this nice and well-defined formula, one may always legitimately refuse such a measure of morality and choose a completely different one.”

This lack of a proper definition is an example of what I complained about in my recent post: many philosophical questions are a waste of time if one doesn’t know what one is talking about to begin with. So let’s not debate the meaning of words and instead identify the real issue behind it.
What people really want to know is where science leaves them the freedom to make decisions. That’s why they are looking for a border between scientific and unscientific questions: the former can be answered by science, while the latter presumably can only be answered by humans. In other words, they’re asking for their space to exercise free will.
Free will is an illusion that people hold on to quite stubbornly and protect vehemently, so the debate about the unscientific nature of morals shouldn’t come as a surprise. The thought that science might tell people what they should or shouldn’t do is a great threat to free will, one that is met with a forward defense. But that’s a misunderstanding. Science never has told, and never will tell, anybody what should or shouldn’t be done, because “should” is another one of these ill-defined words. “Should” implicitly necessitates a goal or a purpose.
“Science can’t tell us what to do,” as Cox and Ince correctly write. But science can in principle tell us what we do. To understand how, let’s have a look at what people mean when they refer to “morals” or “values”.
Humans are self-aware complex systems that have to process a lot of information to make informed decisions. Human self-awareness, however, is limited. We are not normally aware of how our thought processes proceed in detail. In fact, recent research in neuroscience seems to show that what we think of as “I” is primarily an aggregating mechanism of various deeper-level systems whose detailed procedures the “I” does not normally take note of.
Thus, “we” don’t consciously know the details of how we make decisions. Moreover, a central element of human decision making is ignorance and oversimplification. The one thing the human brain is really good at is energy efficiency, which is why the default is to avoid thinking when it is unnecessary.
Instead of monitoring all the input we receive, we learn to construct models of behavior that make use of simplified patterns and categories. We then explain our decisions and those of others in terms of these simplified patterns. You chose this job because independence is important to you. You think polygamy is immoral and should be punished. These are rough summaries of long-winded thought processes that drew on experience, evolutionary traits, and random noise. They classify decisions in terms of values like “independence” or morals like “faithfulness.”
Morals and values are thus just categories that people use to classify and explain the way they make decisions. Over time, using these simplified models, the higher-level “I” system becomes good at predicting what will happen, and interprets this as an exercise of “free will.”
That having been said, if you believe in reductionism, morals and values are just emergent patterns in highly complex systems. Defining morals this way is clearly impractical, and presently impossible anyway, but in principle it could be done. Imagine you did it. Now you have a definition of “moral”: an individual one, one that depends on cultural history as well as genetic ancestry. Here you have it. These are your morals.
You might then go and say that’s not what you mean by moral. And that would be fine with me, because I don’t want to argue about words, so just call these emergent patterns something else. The point is that they’re what people make use of when they make decisions, and recall that this is the question we really want to address: What decisions are humans free to make because they’re allegedly unscientific?
If you had such a definition of morals, would science then tell you what you should do? No. It would in the best case simply tell you what you do, the best case being one in which scientists were able to construct a complete model of human behavior. Depending on your attitude, you might call that the worst case.
But while such a model is possible in principle, it is questionable whether it is feasible to construct at all. It seems plausible to me that the process of thought is irreducible in the sense that, if you tried to predict it, you’d have to create an almost perfect copy of the original system and watch it in real time, in which case you’d merely duplicate decisions rather than predict them.
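To make this notion of irreducibility concrete, here is a minimal toy sketch of my own, not from the original argument. It uses the Rule 30 cellular automaton as a stand-in for a complex system: no shortcut formula is known for its state at step t, so the only known way to “predict” the state is to copy the initial conditions and run every intermediate step, which is exactly the duplication problem described above.

```python
# Toy illustration (an analogy, not the post's own example): Rule 30
# stands in for a complex system whose evolution is believed to be
# computationally irreducible.

def rule30_step(cells):
    """Advance one generation of Rule 30: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def predict_state(initial, steps):
    """'Predicting' step t means duplicating the system and running all
    t steps; no known closed form skips the simulation."""
    state = list(initial)
    for _ in range(steps):
        state = rule30_step(state)
    return state

# Start from a single live cell and ask for the state 100 steps later.
width = 31
start = [0] * width
start[width // 2] = 1
print(predict_state(start, 100))
```

Whether human thought is irreducible in this sense is of course an open question; the sketch only illustrates how prediction and duplication can coincide.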
In other words, while the “border” between scientific and unscientific questions does not exist in principle, it does exist in practice. And it’s located where our ability to model complex systems ends, an end that might shift somewhat in the future but quite possibly will never entirely recede. The best way we presently know to find out what decisions humans make is to ask them. The best way we know to find out what the global climate does is not to ask humans but a computer model.
What does this have to do with happiness? Well, striving to achieve happiness is a human universal, so much so that you might want to elevate the maximization of the well-being of conscious beings to a universal goal. Such a goal would fill in the blank of the “purpose” and the “should” that was previously missing, or at least so it seems.
The problem, however, is that happiness is a side effect of natural selection; it’s a simplified response to behavior that has in the past been beneficial for reproduction. Elevating happiness to an end unto itself is a circular definition of purpose; it’s fundamentally meaningless. That is why, in my paper, I argued we should forget about trying to define happiness and its maximization as a proxy for understanding human behavior. Instead, we should look for a properly defined quantity that has predictive power to describe the evolution of our economic, political, and social systems. The suggestion I made was to maximize the number of possible decisions that we (think we can) make. Which might or might not be correct: a scientific question that’s waiting to be answered.
Summary: Ill-defined questions are unscientific, but uninterestingly so. Once a question is well-defined, science is in principle able to answer it, but not necessarily in practice. A scientific definition of morals might exist, but quite plausibly we will never be able to construct it. And even if we could, it wouldn’t tell us what should be done, but simply what is done. Opinion begins where our ability to model complex systems ends. This border will inevitably shift over time, and it’s this shift that makes it uncomfortable. And no, I don’t believe in free will.