I just posted: 1) the syllabus, updated with two additional
articles, and 2) the 6 articles for the week after the break. As last
time, each of you should prepare one that you especially like (even if
there is a bit of overlap), and be prepared to discuss the others.
Have a super happy Spring Break,
Monica
The review of moral decision-making processes that I chose to focus on (Monin, Beer, & Pizarro, 2007) was very comprehensive and provided an excellent overview of the different influences on moral reasoning. I think the distinctions made were very important to understand: the nature of the questions asked changes the answers one obtains (i.e., cognitive moral dilemmas vs. emotional moral reactions). In addition, the discussion of other moral paradigms, situations, and processes contributes to the overall understanding of moral reasoning, decision-making, and action. The discussion enables connections not only between rational and emotional processes, but also gives a broader view of moral temptation/self-control, moral self-image/maintenance, and lay moral understanding/interpretation. Each of these sections provided an excellent overview of its subtopic, covering the major contributions and theories in each area without getting bogged down in too much detail. In addition, I had not previously come across the four-component model of morality (Rest, 1986; p. 105) and thought that it rang true intuitively, by including precursors of the decision as well as the action to follow: i.e., “(1) interpreting the situation, (2) identifying the morally ideal course of action, (3) deciding whether to try to fulfill one’s moral ideal, and (4) implementing what one intends to do.” This theoretical paper laid the groundwork for further study in each of these areas. However, I noticed that the articles for this week highlight the role of emotional activations in moral judgments to the exclusion of rational decision-making. This made me wonder whether the field has perhaps swung to the extreme of looking only at the emotional aspect of moral processing, a reversal of the initial investigative extreme of looking only at the rational processes.
I think future work would benefit from a comprehensive view incorporating all of the aspects of moral processing laid out in this theoretical explanation. In particular, while I thought that the two assigned Moll et al. (2002, 2007) papers did a great job of examining the emotional contributions of the involved brain areas, there was no acknowledgement that rational decision-making processes had any role at all, which I think was a weakness. The Tabibnia et al. (2008) study did incorporate rational decision-making into the model by creating a tradeoff between monetary reward maximization (payoff) and fairness, but again the focus was on the emotional reaction (hedonic pleasure vs. displeasure). The Greene et al. (2001) and Ciaramelli et al. (2007) studies come closest to addressing the rational processing model, since both use the moral dilemma paradigm, but again the question asked by the researchers is about the extent of the influence of emotional processing, contrasting normal participants with focally brain-damaged patients. With the extensive development of technology, it seems prudent to apply these new research methods (i.e., neuroimaging) to the older, discarded theories as well as to the newer, trendy ones, if only to compare and contrast the results with those of previously used methods. I would be interested to see which brain areas would be recruited in normal subjects judging moral dilemmas in which emotion has been minimized, excluded, or controlled for in the task itself (if this is possible to do, which is debatable). While it is evident that emotion plays a significant role in affecting these decisions, as far as I can see rational moral reasoning still has a role as well, but it has been largely ignored.
Camille's Response: Intention and Moral Reasoning
This collection of readings focused on moral decision making.
Ciaramelli, Muccioli, Làdavas, and di Pellegrino used patients with
brain lesions in the ventromedial prefrontal cortex to explore the
brain regions involved in moral decision making. These participants
were presented with a moral dilemma, asked a question about the
appropriateness of an action one might perform in the scenario, asked
a question about the content of the scenario, and then indicated
whether the action was appropriate or inappropriate. They completed 3
sessions of 15 scenarios each, all presented on the computer. They
found that individuals with brain lesions in the ventromedial
prefrontal cortex were less inclined and slower to approve moral
violations compared to actions with no moral implication. Also,
patients were more likely to approve moral violations in personal
moral dilemmas, whereas their performance in impersonal moral dilemmas
was not different than that of control participants. I like how these
authors presented this article. They began with evidence already
found from brain activation studies and developmental evidence. Then
they proposed their study as confirming evidence using a different
method. The addition of a new means of testing the hypothesis
provides a stronger argument.
Greene, Sommerville, Nystrom, Darley, and Cohen recognize that
both reason and emotion have a role in moral decision making. To
further understand the brain areas active in moral decision making,
they explored brain activation during the processing of moral
dilemmas. Participants were presented with personal moral, non-
personal moral, and non-moral dilemmas. After viewing each dilemma,
participants rated the action in the dilemma as appropriate or
inappropriate. During this task, participants' brains were imaged via
fMRI. They found that during personal moral dilemmas, emotion areas
were more active: medial portions of Brodmann's areas (BA) 9 and 10
(medial frontal gyrus), BA 31 (posterior cingulate gyrus), and BA 39
(angular gyrus, bilaterally), and that there was less activation in
working memory areas.
Moll, de Oliveira-Souza, Eslinger, Bramati, Moura,
Andreiuolo, and Pessoa used pictures to explore the role of emotion in
moral processing. Participants were scanned while viewing pictures of
emotionally charged scenes with and without moral content as well as
emotionally neutral pictures (6 different categories of images in
all). After fMRI scanning, subjects rated each picture for moral
content, emotional valence, and level of arousal on visual analog
scales. The experimenters found that moral stimuli activated the
right medial OFC and the medial frontal gyrus (MedFG) and the cortex
surrounding the right posterior superior temporal sulcus (STS). They
also proposed a pathway for moral processing; there was a highly
specific and statistically significant increase in connectivity
between the medial OFC and the STS, the precuneus, and the same
regions of the MedFG, during viewing of morally relevant images. In
this study, I did not understand the purpose of many of the categories
of images; the authors told us what the categories were, but did not
provide a theoretical rationale for why they were needed. The
authors state that “We suggest that an impaired ability to
automatically and rapidly process moral emotions in response to signs
of moral violations may be a critical mechanism underlying this
dissociation.” Since timing is so critical, I thought using EEG may be
useful in understanding this process.
Moll, de Oliveira-Souza, et al. looked at agency and morality
and the brain areas associated with them. They presented written
statements to participants that described action scenarios
(‘‘scripts’’) that independently addressed (1) the contribution of
agency to brain activation independently of emotional processing, and
(2) the effects of distinct classes of moral emotions, namely
prosocial and other-critical emotions, while controlling for agency.
Scripts expressed either no agency, neutral agency, indignation-
self, indignation-other, disgust, compassion, embarrassment, or
guilt. They found that anterior medial PFC and temporal poles were
more consistently activated by prosocial emotions, whereas the dorsal
ACC, lateral OFC and the ventral temporo- occipital cortex responded
more consistently to other-critical emotions. Empathetic emotions
(guilt/compassion) activated the midbrain and ventral striatum.
Monin, Pizarro, and Beer present a review article about the
debate between the role of reason and emotion in moral decision
making. Basically, they state that the reason there is so much debate
in this area is because the experimenters arguing for a predominant
role of reason (moral dilemmas which elicit use of reasoning) are
using different methodology than the experimenters arguing for the
predominant role of emotion (moral reactions, which encourage the use
of emotions in decision making). The authors agree that these two moral
situations evoke different processes, and that there is value in using
different methodologies to understand moral decision making. They also
express the need to include moral temptation, moral self-image, and
lay theories of morality in study of morality.
Tabibnia, Satpute, Lieberman use the ultimatum game to explore
whether perceptions of fairness elicit positive emotions. They used
methodology that looked separately at fairness and monetary gain. One
participant makes an offer to give the other participant some sum of
money. If participant 2 accepts the offer, then both participants get
the amounts proposed, but if participant 2 rejects the offer, no one
receives money. They found that participants experienced greater self-
reported positive emotion and activation of reward brain areas
(ventral striatum, the amygdala, VMPFC, OFC, and a midbrain region
near the substantia nigra) when fair offers were presented. Also,
accepting unfair offers required emotional regulation, activating the
right VLPFC, which reduces negative affect associated with the
anterior insula. I liked how this study designed the offers in the
ultimatum game so that they could differentiate monetary gain from
perceptions of fairness.
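The ultimatum game's payoff rule described above can be sketched as a small function (a hypothetical illustration of the game's logic only, not the authors' actual task code; the function name is my own):

```python
def ultimatum_payoffs(total, offer, accepted):
    """Return (proposer, responder) payoffs for one ultimatum-game round.

    The proposer offers `offer` out of `total` to the responder; if the
    responder rejects, both players receive nothing.
    """
    if not 0 <= offer <= total:
        raise ValueError("offer must be between 0 and the total stake")
    if accepted:
        # Both players receive the proposed split.
        return total - offer, offer
    # Rejection is costly to both: no one receives money.
    return 0, 0

# A fair $5-of-$10 offer, accepted: each player gains $5.
# An unfair $2-of-$10 offer, rejected: neither player gains anything,
# even though accepting would still have yielded the responder $2.
```

Rejecting an unfair offer is costly to both players, which is what makes accepting one a tradeoff between payoff maximization and fairness.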
I did have one concern with these studies. With all of them
having so few participants, I wonder how much of the brain imaging is
capturing social desirability effects. Even if the data are anonymous,
participants want to appear moral, and therefore activations may
reflect in part a desire to appear moral rather than actual moral
decision making. It seems that social desirability would be higher in
the moral situations than in the non-moral ones, since they are a
greater reflection of the self. I am not sure if there is a method of
assessing social desirability concerns, but if so I would have liked
to have seen that addressed.
In both studies, participants played the role of the responder in the
ultimatum game, deciding whether or not to accept a fair or unfair
offer. I liked that they used the ultimatum game – I think it is one
of the more natural behavioral measures. The first study was just
looking at self-reported happiness and contempt associated with fair
and unfair offers. I wondered why they didn’t have more participants
in the first study (n = 29) since it wasn’t a brain scan study, but
this is a small flaw. I also wondered if there would have been a
difference in ratings if the participants had been asked to rate their
happiness and contempt for each offer as it occurred, rather than
subsequent to playing the game. Answering these questions after the
game has been played makes the scenarios more hypothetical and less
personally relevant; in addition, past research has shown that people
are not very good at predicting how an event they have not yet
experienced would make them feel. Regardless, the authors found that
participants reported greater happiness for fair offers than unfair
offers of equal value, and reported greater contempt for unfair offers
vs. fair offers.
The second study used fMRI to scan participants while they were
considering fair or unfair offers. Fair offers were associated with
higher activation in the ventral striatum, amygdala, VMPFC, OFC, and
the midbrain region near the substantia nigra, as predicted. However,
there was no evidence to suggest that these regions were also more
active when accepting vs. rejecting unfair offers. This suggests that
the acceptance of unfair offers is driven by logic, not emotions. I
liked that the authors considered whether or not activation of the
reward system during fair offers could be due to carry-over effects
from the first part of the trial (they determined this was not the
case).
Throughout this article, the authors kept mentioning that it is
difficult to separate emotional responses to fairness from emotional
responses to monetary payoff. They did compare fair and unfair offers
of equal monetary value, but I wondered why they didn't also
look at the proposer’s (rather than the responder’s) affect. Here, the
participant would still be receiving an unfair offer ($8/10), but the
offer would favor the proposer (an unfair situation that is associated
with high monetary payoff). I think that this would be an interesting
addition to their current study; we now know about brain activation
and the acceptance of an unfair offer that disadvantages someone, but
what about brain involvement during the acceptance of an unfair offer
that favors someone? I also wondered if fairness matters less as the
monetary value of the reward increases. Behavior may change such that
more unfair offers are accepted, but would happiness decrease in these
offers?
And I have just seen in my inbox that Jenny focused on this article
too. xP
In reference to the other articles: I didn't spend much time on the
review paper; I liked the paper that used patients with brain damage
instead of fMRI (I particularly found interesting their discussion of
how the amygdala may be important for affective info regarding
immediate outcomes and the vmPFC regarding long-term affective
outcomes), and the other 3 or 4 were very similar and blurred in my
memory without careful review.
One of the main dichotomies of moral philosophy in Western cultures is
indeed this: Is there an “a priori” right or wrong? Or are actions
right or wrong depending on their consequences? Kant versus Mill, or
deontologists versus utilitarians.
Kant was the main proponent of “deontological ethics” (from the Greek
deon, “duty”). He believed that some actions are wrong no matter what
consequences follow from them. Kant argued that the
only absolutely good thing is a “good will” or the motives/intentions
of the person carrying out the action. According to Kant, the
consequences of actions, even if good, are not morally right if the
person acts on bad intentions or bad will. The highest good is good
without qualification if one is acting from duty.
In contrast, Mill, who was the main proponent of Bentham’s
utilitarianism, believed that what is right is "the greatest good for
the greatest number of people", which refers to the idea that actions
are determined solely by their utility in providing happiness or
pleasure to the greatest number of sentient beings. In sharp contrast
to Kant, for utilitarianism the moral worth of an action is determined
by its outcome.
Through social neuroscience's focus on the neural basis of human
morality, as Greene argues, we may be finding out how much of the
dichotomy between reasoning and higher cognition on the one hand, and
intuitive, emotional processing on the other, actually reflects
structural properties of the human brain. Greene and colleagues have
shown that brain areas
associated with emotional social processing (medial prefrontal cortex,
posterior cingulate/precuneus, superior temporal sulcus, inferior
parietal lobe) were more active when the participants considered
personal moral dilemmas whereas the areas associated with cognitive
control (right DLPFC, bilateral inferior parietal lobe) were more
active when participants were dealing with impersonal moral dilemmas.
Therefore, both deontologists and utilitarians are right, in the sense
that socio-emotional processing is involved in “deontological”
intuitions, and cognitive control processing is involved when making
utilitarian judgments. Moll and colleagues also
argue for a moral sensitivity construct that depends on the brain
activation of areas related with social agency as well as areas
related to typical moral emotions (guilt, embarrassment,
compassion). Basic and moral emotions activate the amygdala, thalamus,
and upper midbrain. However, the medial prefrontal cortex, and the
orbital prefrontal cortex, and the STS are recruited when moral
appraisals are involved.
Beka argues, however, that we may be highlighting the role of
intuitive, emotional activation patterns without giving a fair chance
to rational, non-emotional moral dilemmas in which rational moral
reasoning may have a larger role.
I found interesting that Camille, as always, got to the heart of the
matter straightforwardly: How much of this work is capturing social
desirability effects?
Jenny discussed the Tabibnia et al. article, which examined whether
fairness activates the reward system. Her interesting proposals are to
examine brain activations of the participant when the proposer is the
one who is treated fairly or advantageously, as well as changes in the
relevance of fairness with changes in monetary values.
Is Stuart giving an answer to the latter? Stuart also mentions the
role of emotion regulation in these studies. I was also surprised to
see how much self-control and emotion regulation appears to be a
central component in moral judgments.
David, who seems to have an eye for the clinical neuroscience work,
examined Ciaramelli et al, in which the role of the ventromedial
prefrontal cortex in moral judgment is examined with patients with
lesions in this area. Here, the authors also pose relevant questions
regarding the role of the ventromedial prefrontal cortex.
Damage to the ventromedial prefrontal cortex was associated with
several self-control deficits, such as a lack of anticipation of
future consequences, reduced self-conscious emotions, and failures to
use emotional cues to inhibit morally unacceptable behaviors. Monin,
Pizarro, and Beer focus extensively on moral temptations as
prototypical moral conundrums.