Monday, January 05, 2009

Inside the Moral Black Box

That we make moral judgements is uncontroversial enough, but how we do so - how we go from a particular stimulus to a moral judgement and subsequent behaviour - remains a matter of great conjecture. What exactly goes on inside the Moral Black Box?

We can start by graphically representing the process of moral judgement in its simplest terms (all diagrams go from the bottom up, with yellow indicating a change from the previous diagram):

Figure 1

At the bottom of Figure 1 is the raw stimulus: the sights, sounds and smells that confront our senses. At the top is the moral judgement, such as whether it's permissible or not to cause harm to another person. In between is the Moral Black Box, the inner workings of which have been the subject of much debate and speculation of late.

However, we don't just want to know how we form moral judgements; we also want to know how those judgements lead to behaviour. So we expand our diagram slightly in Figure 2:

Figure 2

One solution to the workings of the Moral Black Box comes from Immanuel Kant, who suggested it is reason that plays the pivotal role in making moral judgements:

Figure 3

Thus, when you observe an act, you consider it in light of moral principles and subsequently judge it as permissible or impermissible. Only after that does emotion come into play to motivate behaviour.

Another solution comes from David Hume, who saw the moral faculty in almost the exact opposite light to Kant:

Figure 4

Instead of reason being the primary arbiter of moral judgement, Hume suggested it is the emotions or sentiments - feelings of approval or disapproval. This is because reason alone cannot motivate moral judgement; all it can do is evaluate facts - the is but not the ought of a situation. However, once the moral judgement springs from the sentiments, reason provides options and alternatives for subsequent behaviour.
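
To make the contrast concrete, here's a minimal Python sketch of the two orderings (my own illustration, not drawn from either philosopher). Every function and field name is a hypothetical placeholder for a vastly richer faculty; the only point is the order in which the stages fire:

```python
def reason_applies_principles(stimulus):
    # Kant: test the act against a principle (here, a toy 'no harm' rule).
    return "impermissible" if stimulus.get("causes_harm") else "permissible"

def sentiment_reacts(stimulus):
    # Hume: an immediate feeling of approval or disapproval.
    return "disapproval" if stimulus.get("causes_harm") else "approval"

def kantian_pipeline(stimulus):
    """Figure 3: stimulus -> reason -> judgement -> emotion -> behaviour."""
    judgement = reason_applies_principles(stimulus)
    emotion = "indignation" if judgement == "impermissible" else "calm"
    behaviour = "intervene" if emotion == "indignation" else "carry on"
    return judgement, behaviour

def humean_pipeline(stimulus):
    """Figure 4: stimulus -> sentiment -> judgement -> reason -> behaviour."""
    sentiment = sentiment_reacts(stimulus)
    judgement = "impermissible" if sentiment == "disapproval" else "permissible"
    # Reason enters only afterwards, to weigh options for acting.
    options = ["intervene", "call for help"] if judgement == "impermissible" else ["carry on"]
    return judgement, options[0]

observed_act = {"actor": "A", "act": "push", "causes_harm": True}
print(kantian_pipeline(observed_act))  # ('impermissible', 'intervene')
print(humean_pipeline(observed_act))   # ('impermissible', 'intervene')
```

Note that both pipelines can reach the same verdict about the same act; what separates Kant from Hume here is which stage does the judging and which merely serves it.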

Hybrids

However, these aren't the only models of the workings of the Moral Black Box. Marc Hauser has suggested a possible 'hybrid' model incorporating elements of both Kant and Hume, “a blend of unconscious emotions and some form of principled and conscious reasoning,” (Hauser, 2006):

Figure 5

The two faculties of reason and emotion may be in accord or they may conflict, in which case another mechanism must interject to resolve the conflict and arrive at a moral judgement. (Note: Hauser's model stops at moral judgement; I've added the extra step to behaviour to keep it consistent with the other diagrams.)

Hauser then goes on to suggest a new, more sophisticated model incorporating some insights from John Rawls - what Hauser calls the 'Rawlsian creature':

Figure 6

According to Hauser, “perception of an action or event triggers an analysis of the causes and consequences that, in turn, triggers a moral judgement... Emotions, if they play a role, do so after the judgement,” (ibid.).

Hauser then draws on Rawls' linguistic analogy to talk of the 'action analysis' as a 'moral grammar'. This moral grammar automatically gives structure and meaning to a situation, such as attributing intentionality to observed agents, and inspires moral judgement. Reason and emotion engage only after the moral grammar has done its work.
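
The intentionality distinction lends itself to a small sketch. This is my own toy rendering of an 'action analysis', not Hauser's actual formalism; the event fields and verdicts are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ActionAnalysis:
    agent: str
    act: str
    harm_caused: bool
    intended: bool  # did the agent mean to bring about the outcome?

def moral_grammar(event):
    """Automatically structure a raw event before any emotion or reasoning runs."""
    return ActionAnalysis(
        agent=event["agent"],
        act=event["act"],
        harm_caused=event["outcome"] == "harm",
        intended=bool(event.get("foreseen")) and bool(event.get("desired")),
    )

def rawlsian_judgement(event):
    analysis = moral_grammar(event)
    # Intentional harm is judged more harshly than accidental harm.
    if analysis.harm_caused and analysis.intended:
        return "impermissible"
    if analysis.harm_caused:
        return "bad outcome, but excusable"
    return "permissible"
    # On this model, emotion would engage only after the judgement above.

print(rawlsian_judgement({"agent": "A", "act": "push", "outcome": "harm",
                          "foreseen": True, "desired": True}))   # impermissible
print(rawlsian_judgement({"agent": "A", "act": "trip", "outcome": "harm",
                          "foreseen": False, "desired": False})) # excusable
```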

Moral Psychology

Which, if any, of the above diagrams is the correct model? Thankfully, we needn't speculate entirely from the armchair because moral psychologists have performed experiments intended to probe the moral faculty and establish whether reason or sentiments play the pivotal role. Here is what one of them found (Haidt, 2001):

Figure 7

Evidently, Hume was pretty close to the mark. According to the research, emotions (which, for the time being, will be used synonymously with 'sentiments' when applied to moral judgement) come first and inspire moral judgements. This process is rapid, automatic and not mediated by conscious deliberation. Only after the judgement springs forth does reason come into play to justify the judgement (although it's not always successful or consistent) and to direct behaviour.

(It should be pointed out that Haidt's experiments dealt with a fairly restricted set of situations; more complex situations, such as moral dilemmas, might yield different results and a different moral model - although this will be factored into the diagrams further down.)

Questions

Assuming this model is accurate, it still raises a few key questions:
  1. Why are certain emotions elicited by certain stimuli rather than others?

  2. Given a particular stimulus experienced by two individuals, how can we account for variations in moral judgements made by those individuals?

  3. How do moral judgements lead to behaviour?

  4. What role, if any, do reason and moral beliefs play in the formation of moral judgements?

Moral Grammar

For a solution to the first question, we can turn to Hauser's 'Rawlsian creature' of Figure 6 and appeal to the notion of a moral grammar. This faculty is not unlike our language faculty, with a universal grammar that automatically and systematically structures sensory stimuli and orders them according to certain moral principles. One example of the moral grammar in action is the attribution of intention to agents, such as whether person A intended to do action X to person B, or whether it was accidental - a distinction that is crucial to forming a moral judgement.

However, instead of the moral grammar leading directly to a moral judgement, perhaps we can bring Hauser's (Figure 6) and Haidt's (Figure 7) models together and create a new one:

Figure 8

In this model, the moral grammar inspires particular emotions/sentiments that lead to a moral judgement - all automatically and without conscious intervention. Only after the initial moral judgement is made does conscious reflection kick in.
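
As a rough sketch (with stage names and toy logic entirely my own), the Figure 8 pipeline might look like this:

```python
def grammar_parse(stimulus):
    # Automatic structuring: did an agent intentionally cause harm?
    return {"intentional_harm": stimulus.get("intended", False) and stimulus.get("harm", False)}

def elicit_emotion(parse):
    return "outrage" if parse["intentional_harm"] else "indifference"

def intuitive_judgement(emotion):
    return "impermissible" if emotion == "outrage" else "permissible"

def conscious_reflection(judgement):
    # Post-hoc: reason justifies the judgement and considers behaviour.
    return f"justify '{judgement}', then weigh possible responses"

stimulus = {"intended": True, "harm": True}
parse = grammar_parse(stimulus)            # unconscious, automatic
emotion = elicit_emotion(parse)            # unconscious, automatic
judgement = intuitive_judgement(emotion)   # the initial moral judgement
print(conscious_reflection(judgement))     # reflection arrives only here
```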

One of the big questions in this area is whether there is but one universal moral grammar that operates in much the same way in all humans, or whether there are multiple specific moral grammars that might vary from person to person, culture to culture. I'll remain fairly agnostic on this question, although I wouldn't be surprised if there were only one moral grammar with fairly minimal variation between individuals and cultures, with the variation in moral judgement springing from another source, which I'll discuss next.

The Lens

It's clear that there is an enormous variability in moral judgements and moral perspectives around the world, and any model of the Moral Black Box needs to account for this. One possibility is that the emotional responses vary from one individual to the next. Another is that there are many moral grammars, each informed by biology and culture, and these account for variation in moral judgement.

However, I think there may be another approach that could account for a great deal of moral variability without tinkering too much with the moral grammar - which can remain relatively uniform amongst all humans - and the emotions.

I call it the Lens:

Figure 9

Psychology is already very familiar with the notion that the raw input from our senses is heavily edited before it reaches our conscious awareness. We pick out shapes, faces, objects and even apply stereotypes or gauge threats in the barest moments after we've perceived something.

Perhaps we can draw upon this notion to propose a Lens through which the raw stimulus is filtered, even before it reaches the moral grammar. In fact, the Lens could determine which things are processed by the moral grammar, and which are processed by regular non-moral cognitive faculties.

It's the Lens that gives meaning to the raw sense data, and it's this meaning that could end up doing a majority of the work in forming a moral judgement. For example, we know that we make near instantaneous judgements about who belongs to our in-group and who to our out-group well before we have a chance to reflect on our judgement. Certainly, we can redirect this impulse, as many of us do, but the impulse is there.
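
As a sketch, the Lens might behave like a filter-and-router over raw input. The in-group set and field names below are invented placeholders; the substance is that annotation and routing happen before the moral grammar sees anything:

```python
IN_GROUP_RELATIONS = {"family", "friend", "neighbour"}  # hypothetical, culture-supplied

def lens(raw_stimulus):
    """Pre-conscious annotation: attach meaning before any moral processing."""
    percept = dict(raw_stimulus)
    percept["in_group"] = raw_stimulus.get("relation") in IN_GROUP_RELATIONS
    percept["morally_relevant"] = raw_stimulus.get("involves_agents", False)
    return percept

def route(percept):
    # The Lens decides what reaches the moral grammar at all.
    if percept["morally_relevant"]:
        return "moral grammar"
    return "ordinary (non-moral) cognition"

percept = lens({"relation": "stranger", "involves_agents": True})
print(percept["in_group"], "->", route(percept))  # False -> moral grammar
```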

So if we do have a Lens that plays a role in moral judgement, what could be the influences on it?

Figure 10

The Lens may be shaped by a variety of influences. One may be biology and evolution, such as the fact that we sort people into in-groups and out-groups at all.

Another influence could be culture, which could provide much of the content of the Lens, such as to whom we attribute in-group and out-group status.

A third influence could be experience. A trivial example: if you get food poisoning from an oyster buffet, you'll never look at oysters the same way again. Similarly, if you're assaulted by a member of a particular ethnic group, you might find yourself with an automatic aversion response or mistrust towards other members of that group.

A final influence (although there could easily be more) on the Lens could be mood. Have your bag stolen and suddenly everyone looks like a thief. Or have a wonderful relaxing day, and everyone suddenly looks like a friend.

It's even possible that Hauser's moral grammar itself is a component of the Lens - although one that I believe must trigger later than some other elements of the Lens.

So, a certain stimulus is passed through the filter of the Lens, which sorts and orders the stimulus, discarding useless information and applying meaning to specific elements that are of significance. It could determine whether a certain action triggers the further moral faculties or passes through non-moral faculties.

If a certain stimulus triggers moral responses, it could then be processed by the moral grammar, which applies rules such as the principle of double effect. Depending on the outcome, this could trigger certain emotions, such as empathy or outrage, leading to a moral judgement. All in a matter of moments.
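
Here's a toy rendering of that fast pathway, with the moral grammar applying a crude version of the principle of double effect: harm brought about as a side effect of a good end is treated more leniently than harm used as the means to that end. The field names and binary verdicts are my own simplifications:

```python
def double_effect(action):
    """Crude version of the principle of double effect."""
    if not action["causes_harm"]:
        return "permissible"
    if action["harm_is_means"]:
        return "impermissible"   # harm used as the means to the good end
    return "permissible"         # harm merely foreseen as a side effect

def fast_pathway(action):
    judgement = double_effect(action)  # the moral grammar's rule
    emotion = "outrage" if judgement == "impermissible" else "empathy"
    return judgement, emotion          # all within moments, pre-reflection

# Classic trolley contrast: diverting (harm as side effect) vs pushing (harm as means).
print(fast_pathway({"causes_harm": True, "harm_is_means": False}))  # ('permissible', 'empathy')
print(fast_pathway({"causes_harm": True, "harm_is_means": True}))   # ('impermissible', 'outrage')
```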

Reason

What of reason? Surely it plays some role in moral judgement? According to Haidt's research, “moral reasoning is an effortful process, engaged in after a moral judgement is made,” (Haidt, 2001, my italics). This places it somewhat later in the chain:

Figure 11

Once the initial moral judgement has been made - such as when someone pushes in front of you in line and you respond with outrage and disapproval - reason may interject. An impulsive response to an injustice might be to enact retribution against the perpetrator, but reason might give us pause to reflect on whether that action is in our best interests. Reason gives us a unique capacity to imagine future consequences of our actions and evaluate which are desirable and which are not. It also allows us to employ abstract concepts and moral beliefs, such as that 'violence is wrong', thus blunting the potential retribution.

Reflection using reason might yield multiple possible behaviours and enable us to balance them against each other via their various possible consequences. Should some of these consequences go against our explicit moral beliefs, or should we consider a more suitable behaviour that yields an optimal consequence, we can redirect or inhibit our original impulsive behaviour (although, of course, this may not always be successful).
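
The deliberative step just described can be sketched as a filter-and-select routine; the candidate behaviours, belief tags and outcome scores below are invented for illustration:

```python
EXPLICIT_BELIEFS = {"violence is wrong"}  # consciously held moral beliefs

def violates_beliefs(option):
    return option["tag"] in EXPLICIT_BELIEFS

def deliberate(options):
    """Filter options by explicit beliefs, then pick the best imagined outcome."""
    permitted = [o for o in options if not violates_beliefs(o)]
    if not permitted:
        return None  # every option blocked: deliberation stalls
    return max(permitted, key=lambda o: o["expected_outcome"])

options = [
    {"name": "retaliate",       "tag": "violence is wrong", "expected_outcome": -5},
    {"name": "object verbally", "tag": "protest",           "expected_outcome": 2},
    {"name": "let it go",       "tag": "inaction",          "expected_outcome": 0},
]
print(deliberate(options)["name"])  # 'object verbally' - the impulse is inhibited
```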

A big question here is the role of explicit moral beliefs, their origin and their veracity. I'll leave these questions open for the time being, but it's a topic I will develop further later.

Moral Dilemmas

As we all know, moral judgements are not always straightforward affairs. Moral conundrums abound, as we can see in the plethora of moral dilemmas concocted by philosophers throughout the ages. To account for conflict within the moral faculty, we need to adjust our notion of the role of the emotions:

Figure 12

Here we have a slightly more complicated model, where the Lens and moral grammar contribute to multiple emotions - such as happens in trolley dilemmas. We could also have self-interested emotions arising at the same time. Perhaps the individual who pushed in front of you looks to be a dangerous sort. Perhaps our sense of danger battles with our sense of outrage, yielding conflicting judgements and conflicting possible behaviours.
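
One way to picture that battle is as competing weighted pushes towards different judgements; in this minimal sketch the emotions, signs and threshold are purely illustrative:

```python
def resolve(emotions):
    """Weigh simultaneous emotional pushes towards acting (+) or holding back (-)."""
    score = sum(emotions.values())
    if abs(score) < 1:
        return "conflicted"  # no clear winner: experienced as a dilemma
    return "intervene" if score > 0 else "hold back"

# The queue-jumper case: outrage pushes forward, sensed danger pushes back.
print(resolve({"outrage": 2.0, "fear_of_danger": -2.0}))  # conflicted
print(resolve({"outrage": 2.0, "fear_of_danger": -0.5}))  # intervene
```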

Figure 12 is only a very simplistic rendering of these processes, which I think could take place both in the emotions and in reason. But it does start to explain the sources of moral conflict, which could be as follows:
  1. Conflicting moral emotions (eg empathy for the various bystanders in the trolley dilemma)

  2. Moral emotions conflicting with self-interested emotions (eg outrage versus self-preservation)

  3. Moral judgement conflicting with moral beliefs (eg outrage versus belief in non-violence)

  4. Moral judgement conflicting with possible consequences for behaviour (eg desire to help but anticipation that behaviour will cause harm and/or guilt)

  5. Multiple possible behaviours (eg only possible behaviours have negative consequences)

I have no doubt this is far from a comprehensive list. It will only be through empirical endeavour that we will be able to tease the various sources of conflict out of our moral faculty.

Emotion

Bryce Huebner, Susan Dwyer and Marc Hauser recently posted an opinion piece in Trends in Cognitive Sciences questioning the role that emotion plays in moral judgement. In some ways, it directly questions Haidt's findings, as shown in Figure 7. Huebner suggests that emotion may play a role at multiple points in the moral faculty, not just at the beginning as the font of moral judgement.

While not directly suggesting a concept such as the Lens, Huebner comes quite close:
“We suggest instead that our moral judgments are mediated by a fast, unconscious process that operates over causal-intentional representations. The most important role that emotions might have is in motivating action.”
This places the emotions after the Lens, as in Figure 9. However...
“Emotion could modify the inputs into distinctively moral circuits rather than modulating the operation of these moral circuits themselves.”
This could parallel the role of mood in the Lens in Figure 10.

I think Huebner is referring to the moral grammar in the article when he mentions "a fast, unconscious process," but I think the Lens could be a better fit for what he's looking for. And that's not to say the moral grammar isn't one component of the Lens. But I do think his criticism of Haidt's research - or at least Haidt's interpretation of the results - is valid. We don't yet know the role of emotion in moral judgement for sure.

In fact, 'emotion' may actually refer to several things that interact with the moral faculty at several different points. For example, mood might affect the Lens; moral emotions like outrage or empathy might affect the initial moral judgement; and guilt might come into play when we imagine possible outcomes of actions or after we've acted. So 'emotion', in all its guises, might occur throughout the process, not just at one specific point.

Also, consider that we never get a single stimulus and the time to reflect on it in isolation. We have a continuous stream of stimuli and a corresponding continuous stream of emotions. So positing emotion at one particular point is a gross simplification, albeit hopefully a useful one for the purposes of understanding the workings of the moral faculty.

The Moral Black Box

Figure 13

So the Moral Black Box may not be impenetrable to scrutiny after all. We are accumulating an ever-increasing amount of evidence that reveals the ways in which we make moral judgements, and in recent years we've seen a resurgence of Humean and Rawlsian approaches to our moral faculty.

However, as far as I know, no-one has gone as far as to propose a Lens through which our sensory input is filtered; a Lens that may play a substantial part in forming moral judgements by giving meaning and significance to the objects of perception.

From there (or as a part of that process) a moral grammar may apply specific rules, such as the principle of double effect, which leads the way to initial moral judgements, fuelled by emotions. Up until now, everything has happened nearly instantaneously and without conscious reflection. Perhaps even higher primates and other social animals possess similar moral faculties to this point.

But only humans have reason, which, at this point, steps in and causes us to pause, reflect and consider the possible consequences of our actions. This is a slow and taxing process, however. It may also introduce new complexities and threaten to paralyse action through conflict with explicit moral beliefs.

But ultimately, we settle on a considered moral judgement, which may be the same as one of the initial moral judgements, or may be revised. And this leads to a course of action - although even that action might trigger further moral consideration, such as by triggering guilt.

This is only an early model of the moral faculty, and I have no doubt it will be contested and revised many times before it gains even occasional agreement. And it'll ultimately be up to empirical science to test this and other models to see whether they actually work, and if they do, how they differ in structure and/or function from one individual to the next.

These models also raise a great number of questions, such as:
  • How is the lens constituted?

  • To what extent is the lens fixed (e.g. that we attribute in-group/out-group status at all) and to what extent is it variable (e.g. who is assigned to in-group/out-group)?

  • What are the influences on the lens (biology, culture etc)?

  • How do conscious thought, reason and explicit beliefs affect the lens, if at all?

  • What roles do emotion and mood play in the lens?

  • Is there one universal moral grammar or are there multiple moral grammars?

  • To what extent do they account for variation in moral judgement and moral competence?

  • How are conflicting moral sentiments resolved?

  • How do moral beliefs form and how do they influence moral decision making?

  • How does the flow through the model differ for various moral dilemmas?

  • Which faculties are domain-specific and which are domain-general?

  • How do individuals with psychopathy or other neurological disorders (such as VMPC damage) differ from normal individuals in their moral faculty?

  • How does the human moral faculty differ from that of social animals or primates? Is it similar except for our capacity to reason and inhibit behaviour?

  • And many more, I'm sure...

I welcome any comments, criticism or feedback, or any suggestions of changes to the models. I also have the original file used to create these diagrams, in OpenOffice Draw format, and I'll be happy to forward the file to anyone interested in tinkering with them. Just email me: the [dot] tim [dot] dean [at] gmail [dot] com.
