Rust on the guillotine blade

This post is a summary of my views regarding metaethics – what I mean when I talk about morality, and whether I think there are moral facts or a sense in which our judgments of right and wrong can be said to be correct. To be clear, I use “morality” and “ethics” pretty much synonymously throughout the text.

What I’m currently arguing for is a type of moral realism, but my views are still very much a work in progress (to say the least), and what I have now is certainly only a starting point. I promise I’ll get back to you in five decades or so with well-researched, super-confident answers, carefully constructed formulations and all, but here’s a rough first draft just to pass the time!

***

I’ve always had a serious problem with Hume’s guillotine. Having these weird separate metaphysical categories for ought things and is things probably sounds A-OK in a world where people also won’t save someone acutely drowning in a bog unless they recite a satisfactory number of Christian prayers, but I have a feeling that in a physicalist worldview it won’t do: either we have a natural source for morality, or we don’t have morality at all.
Humans frequently speak of and act according to various forms of morality, whatever these forms may be, and since morality only needs to be defined within the framework of human existence to be meaningful and functional, people acting as though it exists is sufficient to make it exist (just like human emotions are real even though their only source is human brains doing human brain stuff). All of this happens completely within the realm of what is, with no need for anything external to humanity determining moral truths for us. Like humans – which I here assume to be wholly physical beings – and all of their culture, morality exists in physical nature and can be inferred from other aspects of physical nature: there is nothing ontologically mystical about it, it’s just very confusing in a million other ways (as is, of course, human consciousness as a whole).

As an aside, I was actually prompted to write this post yesterday by the introductory chapter in Patricia Churchland’s book Braintrust, in which she argues that the meaning behind Hume’s guillotine has been generally misunderstood and that David Hume himself also believed morality can be inferred from nature (it’s just that it’s vastly more complicated than people reasoning from simplified, fallacious naturalist principles assume). I hope she is correct about ol’ Hume, because this stuff is good.

Hume made his comment in the context of ridiculing the conviction that reason — a simplistic notion of reason as detached from emotions, passions, and cares — is the watershed for morality. Hume, recognizing that basic values are part of our nature, was unwavering: “reason is and ought only to be the slave of the passions.”
By passion, he meant something more general than emotion; he had in mind any practical orientation toward performing an action in the social or physical world. Hume believed that moral behavior, though informed by understanding and reflection, is rooted in a deep, widespread, and enduring social motivation, which he referred to as “the moral sentiment.” This is part of our biological nature.

***

Okay, so people use the word morality, and it probably refers to something real in the physical world, though very likely present only in the minds of humans and in their evolved cultures. This doesn’t say much about the nature of morality, of course: it could still be fundamentally empty and non-universal, like any form of fashion that comes and goes along with arbitrarily changing cultural tastes – with nothing ultimately more correct or incorrect than anything else, except maybe in relation to the cultural background each item is presented in. This affects how much we should care, and expect other people to care, about any kind of moral arguments and convictions. Moral relativism of this kind inevitably leads us back to moral nihilism, where all ethical judgments lose their essence – the power and the need to convince – that originally made them matter more than your taste in ice cream.

The human moral compass really does seem fickle and arbitrary. From crusades and genocide to caste systems and slavery, history shows us that pretty much anything goes – and from the inside of an ideology, anything looks just as valuable and right as striving for poverty reduction or world peace does to many of us.
But this isn’t yet absolute evidence for moral relativism, any more than the fact that science has failed in the past is conclusive evidence for the all-encompassing truth relativism endorsed by annoying postmodernists. Many widespread moral views we deem atrocious are, I think, only likely to arise when complemented with factually incorrect beliefs (or aliefs), such as “the enemy is nothing like us”, “it’s not so bad to be a slave if you’re black”, and “deity X will reward those who Y”. The reason history has seen so many horrible moral frameworks isn’t necessarily that moral judgments are always brutally arbitrary and these ones just happen to look wrong to us: it can also be partly that we’re looking at people who lack relevant factual information, something their mostly functional human conscience requires as input in order to produce non-horrible, somewhat stable results. But I’ll return to this point in a while.

Another reason for the weird results seen throughout history, as well as in our everyday interactions, is that moral egoism is actually correct – in the same way that mutual defection in a one-shot prisoner’s dilemma is correct. That is, it’s, uh, incorrect: in fact self-refuting, in the sense that even if it might intuitively seem otherwise, egoistic goods can’t be achieved by seeking to maximise them as mere individuals acting on causal decision theory. Even if you only care about yourself, you have a much greater chance of having a good life if you live in – and, by acausal reasoning, also seek to build – an environment in which people adhere to more or less altruistic principles such as the Golden Rule or some variant of utilitarianism, just as everyone involved in a prisoner’s dilemma is better off as long as everyone cooperates by acausal reasoning despite each individual’s incentive to defect. However, there’s always going to be an incentive to defect when you know your co-player will cooperate, and likewise, moral failures stemming from opportunistic egoism are always tempting to humans operating with motivational structures like ours – which is why it’s not surprising that the world in general has been, and largely still is, a bloody mess. Of blood. And mess.
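To make the prisoner’s dilemma analogy concrete, here’s a minimal sketch in Python (the payoff numbers are my own illustrative assumptions, not taken from anywhere in particular): whatever the other player does, defecting pays more for you individually, yet mutual defection leaves both players worse off than mutual cooperation would have.

```python
# Minimal one-shot prisoner's dilemma sketch; the payoff numbers are
# illustrative assumptions, not canonical.
PAYOFFS = {
    # (my_move, their_move): (my_payoff, their_payoff)
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # I cooperate, they defect
    ("D", "C"): (5, 0),   # I defect, they cooperate
    ("D", "D"): (1, 1),   # mutual defection
}

def best_response(their_move: str) -> str:
    """Pick the move that maximises my own payoff against a fixed opponent move."""
    return max("CD", key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection dominates for the causal-decision-theory egoist...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet if both players reason that way, each ends up worse off than under mutual cooperation.
assert PAYOFFS[("D", "D")][0] < PAYOFFS[("C", "C")][0]
```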

***

So, egoism creates some kind of a protomorality – just a seed, the adherence to which is not yet morality, but the reason there should be a morality in the first place. This is important to note, because according to this sort of simple egoism you might as well, for example, accept the torture of a million people as long as you don’t lose any utility from the decision and receive a cone of your favourite ice cream. (Maybe even your least favourite ice cream. Ice cream is always pretty good.)
I currently think this seed can be referred to as aesthetics, though philosophers of actual aesthetics probably disagree here, since they have developed vastly more complex and exact definitions for the concept, and I should come up with a better word before someone catches me using it like this. Anyway, what I’m referring to is the content of egoism: what one personally values and/or likes to have as part of their conscious experience, ranging from not being in pain and feeling one is loved and respected to witty poetry and the knowledge that burrowing rain frogs exist or that other people have good lives, because ethics too is a strange loop.

The next paragraph is sort of the key point, so I’m highlighting it with pretty colours:
Morality is what emerges when multiple agents with aesthetic values of this kind interact with each other; it’s how their cooperation is to be coordinated so that the result is better than if the agents acted on pure egoism. An anti-defection system, in short. (This is intended not as an exact formulation so much as a general idea of what the function of morality is, so please be patient. Five decades, I promise.)
The values here are obviously contingent, defined by whatever brain/mind states each agent personally finds preferable and how they may be achieved. The most effective means to achieve these goods are also contingent – the solution to morality itself, beyond the scope of this text – as they depend on the nature and group dynamics of the agents in question.

To elaborate on this definition a bit, I have three conditions that most people would hopefully agree are necessary, and double-hopefully maybe even sufficient, to define what morality is:

• To have morality, we obviously need agents with preferences regarding what they consciously experience: there’s no use building ethical systems for the wind, rocks, and trees. Not even grass cells with cutesy smiling faces, though they do come close to fooling us.
• In addition to that, ethics must also on some level seek to fulfil these personal values (though not necessarily the explicit, conscious preferences of the agents – in some religious systems, for example, it’s going to be whatever is “good for one’s soul in the end” or something; but even these are expected to positively influence the subjective experience of the conscious being). No one would use the word for a normative system that’s indifferent or hostile to the value it produces for conscious individuals, be they humans or some pantheon of gods we’re obliged to please. The consequences of an action or a rule or any other ethical construct must have a bearing on whether it is morally right, even if these consequences are mediated through concepts such as adhering to virtues or absolutely obeying certain rule sets.
• Ethics also has to arise from the interactions between these beings capable of affecting each other: a solitary agent on a desert island, guaranteed never to meet anyone else (or to be under any kind of supernatural supervision), doesn’t need the concept of morality for anything – they can just do whatever.

If, looking at the behavioural system of some alien species XYZ in outer space, I found these three conditions fulfilled, I could pretty confidently say that the species has a morality; if one or more of them were missing, the behaviour would just be weird alien stuff.

***

Due to the wording of my definition above, it may sound like the approach sort of sneakily presupposes utilitarianism. This is maybe partially true, because utilitarianism seeks to fulfil something like these conditions and so defines itself very similarly. However, there’s a subtle distinction: utilitarianism is actually just one possible answer to this request, and it may well be that the coordination can’t be based on the fundamental axioms present in most forms of utilitarianism, such as the ability to measure and compare utilities between different agents. These difficulties in quantifying the elusive concept of utility may mean that the best way to coordinate our personal quests for aesthetic goods is actually to give up on maximising and on trying to evaluate the consequences of each act, and to just always act according to rule set X in every situation Y – leading us to deontology. Moral systems converge in this way because they are all attempts at being the best, in this case most human-compatible, anti-defection coordination system.
Take simple act utilitarianism. A group of agents so hopelessly ineffective that their individual actions don’t reliably produce the right consequences at all should not coordinate their actions by sticking to an act-utilitarian moral framework: it would realise people’s personal needs and wants about as well as everyone randomly defecting in a prisoner’s dilemma produces optimal outcomes for its players. For such agents, act utilitarianism is a bad alternative compared to a rule-based system such as virtue ethics, deontology, or genetically hard-wired behavioural responses.
On the other hand, another group of sufficiently rational, far-sighted agents, with vast amounts of computational power and the ability to model each other with near perfection, could understand the consequences of their individual actions so well that a less rigid, situational act utilitarianism might become the way to optimise everyone’s individual welfare. (If you guessed humans are closer to this latter group, by the way, guess again!)
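As a rough illustration of why prediction quality matters here, the toy simulation below is entirely my own construction with made-up numbers: an “act-utilitarian” agent picks whichever option it estimates to have the better consequences, while a “rule-follower” always picks a fixed option that is reliably decent. With nearly noise-free estimates the case-by-case evaluator wins; with hopelessly noisy estimates the fixed rule comes out ahead.

```python
import random

def simulate(noise: float, trials: int = 100_000) -> tuple[float, float]:
    """Return (average act-utilitarian outcome, average rule-follower outcome)."""
    rng = random.Random(0)
    act_total = rule_total = 0.0
    for _ in range(trials):
        # Option 0 is the fixed "rule" option, reliably decent;
        # option 1 varies wildly and is only sometimes better.
        true_values = [1.0, rng.uniform(-2.0, 2.0)]
        # The act-utilitarian only sees noisy estimates of the consequences.
        estimates = [v + rng.gauss(0.0, noise) for v in true_values]
        act_total += true_values[max((0, 1), key=lambda i: estimates[i])]
        rule_total += true_values[0]
    return act_total / trials, rule_total / trials

print(simulate(noise=0.05))  # near-perfect prediction: roughly (1.12, 1.0)
print(simulate(noise=10.0))  # hopeless prediction: roughly (0.5, 1.0)
```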

With ethics defined as above, we can see that some moral frameworks can indeed be more or less correct than others. Were I to somehow, impossibly, learn everything about how the alien creatures XYZ above function and feel, I would, by applying game theory, be able to judge how well their morality works for them – that is, whether their ethical systems are close to being correct or ideal. (Were I a superintelligence able to model each person in a given group of humans so that I knew them impossibly well, understanding the consequences of all reasonably possible world states vastly better than they could predict them for themselves, I could make the same judgment for them.)

Just as an example, let’s assume the XYZ have a very simple inner life: all they value, all that produces positive qualia for them or brain states they would seek if on their own, is gnawing at apples at a pace of one apple per day. There are enough apples for everyone, guaranteed to last far into the future too – it’s just that one half of the XYZs are unable to climb the apple trees on odd days, and the other half on even days. It’s no big deal for an XYZ to drop down two apples at a time, as the only resources needed are the minuscule cognitive effort of remembering to do so and the three extra seconds spent in the tree poking at a second apple until it falls down.
I would define morality so that there’s an obvious, correct ethical solution to this extremely simple coordination problem: everyone able to climb will just drop down an extra apple each morning. A moral relativist or nihilist, on the other hand, uses a definition of morality on which this solution, just like any other, is essentially nothing but a matter of taste, and a society of ZYXs who burn the apple garden to ashes and proceed to wail in hunger and misery until the end of time, while poking each other with sharp sticks and lying to their grandmothers, is not employing a worse moral strategy here at all – because there’s no value to moral strategies. I’ll emphasise that this is a matter of defining a concept, so I’m obviously not saying their view is incorrect. I just think that the concept of morality humans generally speak of, and the purpose for which we need the concept in the first place, calls for a definition closer to the one I would use: one in which it is indeed possible in principle to reach moral correctness (in non-contradictory situations and in situations where dissolving contradictory values is possible).
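For what it’s worth, the XYZ apple situation can also be written down as a tiny model (the numbers below are my own illustrative assumptions): if eating an apple is what an XYZ values and dropping an extra one costs almost nothing, every individual XYZ fares better in the everyone-drops-an-extra-apple world than in the everyone-defects world.

```python
# Toy model of the XYZ apple problem; the exact numbers are illustrative assumptions.
APPLE_VALUE = 1.0   # value of gnawing one apple on a given day
DROP_COST = 0.01    # three seconds of poking a second apple loose

def average_daily_welfare(everyone_cooperates: bool) -> float:
    """Average welfare of one XYZ over a climbing day and a non-climbing day."""
    welfare = 0.0
    # On its climbing day it always gets its own apple...
    welfare += APPLE_VALUE
    # ...and, if cooperating, pays the tiny cost of dropping a second one.
    if everyone_cooperates:
        welfare -= DROP_COST
    # On its non-climbing day it eats only if the climbers dropped extras.
    if everyone_cooperates:
        welfare += APPLE_VALUE
    return welfare / 2

assert average_daily_welfare(True) > average_daily_welfare(False)
print(average_daily_welfare(True), average_daily_welfare(False))  # 0.995 vs 0.5
```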

***

A functional ethical system for a given group of agents, then, needs to be based at least on their biology and on game theory: biology, because our physiological composition defines what all the vague aesthetic goods actually are for us – what we personally value, based on how our evolutionary roots and personal history have shaped our subjective experience, such as our emotional responses to various stimuli – and game theory, because it gives us an idea of what we should do to be part of the solution to these coordination problems. Since most of us aren’t XYZs, this certainly gets complicated and even inconsistent at times, because values often contradict each other even within the mind of a single individual, let alone in groups of agents with differing preferences. But I’m guessing this inconsistency can be reduced to a surprisingly great extent by adding in a third ingredient – factual knowledge about the world, its inhabitants, and the interactions between them, all of which should help us understand and prioritise our own values better. Moral progress happens alongside scientific progress: our knowledge of the is slowly builds our oughts.

This is a bit speculative, but it seems this applies also to knowledge about subjective, first-person facts. For example, if A knows well enough how B feels when harmed, A’s subjective state approaches one that’s indistinguishable from B’s experience, and A is obviously not inclined to harm B. Lack of empathy, arguably even lack of its affective component, is a lack of factual knowledge. People who harm others wouldn’t do so if they simulated their harmees well enough.
This alone would not mean that harming others is wrong, of course, because it’s in no way axiomatic that having this kind of knowledge is morally better than lacking it (and indeed, no one except perhaps some incredible cases of conjoined twins can even have notable levels of this kind of knowledge – yet; mindmeld mindset). However, if we assume, per the metaethics I described above, that we’re solving actual coordination issues here, it becomes clear that an accurate picture of reality should help, as it always does when figuring things out about the real world.

Still, we currently have only vague hunches about what kind of moral system works best for us: we have much to learn about our own nature, its limits, how reconcilable our values really are with each other, and what can be done about the ones that aren’t very. But we have approximations – systems which in most realistic cases seem to produce beautiful outcomes when adhered to – and I think we have a basis for what can be called moral progress.
The fact that ethics as a field isn’t complete, because it still runs into confusions and counterintuitive results in various extreme situations, doesn’t mean you shouldn’t care about (and see as a morally better alternative) the things that almost certainly are good ideas, coordination-wise, any more than the lack of a complete theory of everything in physics means you couldn’t apply basic thermodynamics to heat your apartment. We’re getting there.
