Neural and Moral


Monicarodr

Mar 28, 2010, 11:27:24 AM
to socialneuro780
Hi everyone!

I just posted: 1) the syllabus updated with an addition of two
articles, and 2) the 6 articles for the week after the break. As last
time, each of you prepare a special one that you like (even if there
is a bit of overlap), and be prepared to discuss the others.

Have a super happy Spring Break,

Monica

beka strock

Apr 5, 2010, 8:25:19 AM
to socialn...@googlegroups.com

The review of moral decision-making processes which I chose to focus on (Monin, Beer, & Pizarro, 2007) was very comprehensive and provided an excellent overview of the different influences on moral reasoning.  I think the distinctions made were very important to understand: the nature of the questions asked changes the answers one obtains (i.e., cognitive moral dilemmas vs. emotional moral reactions).  In addition, the discussion of other moral paradigms, situations, and processes contributes to the overall understanding of moral reasoning, decision-making, and actions.  The discussion enables connections between the rational and emotional processes, and also affords a greater view of moral temptation/self-control, moral self-image/maintenance, and lay moral understanding/interpretation.  Each of these sections provided an excellent overview of the subtopic, covering the major contributions and theories in each area without getting bogged down in too much detail.  In addition, I had not previously come across the 4-component model of morality (Rest, 1986; p. 105) and thought that this rang true intuitively, by including precursors of the decision and also the action to follow: i.e., “(1) interpreting the situation, (2) identifying the morally ideal course of action, (3) decision whether to try to fulfill one’s moral idea, and (4) implementing what one intends to do.”  This theoretical paper laid the groundwork for further study in each of these areas.  However, I noticed that the articles for this week highlight the role of emotional activations on moral judgments to the exclusion of rational decision-making.  This made me wonder if the field has perhaps swung to the extreme of looking only at the emotional aspect of moral processing, rather than the initial investigative extreme of looking at only the rational processes.
I think future work would benefit from an exhaustive view incorporating all of the discussed aspects of moral processing laid out in this theoretical explanation.  In particular, I thought that the two assigned Moll et al. (2002, 2007) papers did a great job of examining the emotional contributions of the involved brain areas, but there was a lack of acknowledgement that rational decision-making processes had any role at all, which I think was a weakness.  The Tabibnia et al. (2008) study did incorporate rational decision-making into the model by creating a tradeoff situation between monetary reward maximization (payoff) and fairness, but again the focus was on the emotional reaction (hedonic pleasure vs. displeasure).  The Greene et al. (2001) and Ciaramelli et al. (2007) studies come closest to addressing the rational processing model since both use the moral dilemma paradigm, but again the question asked by the researchers is about the extent of influence of emotional processing, contrasting normal and focally brain-damaged patients.  With the extensive development of technology, it seems that it would be prudent to apply these new research methods (i.e., neuroimaging) to the older discarded theories as well as the newer trendy ones, if only to compare and contrast the results of previously utilized methods in comparing theories.  I would be interested to see which brain areas would be recruited in normal subjects judging moral dilemmas in which emotion has been minimized/excluded or controlled for in the task itself (if this is possible to do, which is debatable).  While it is evident that emotion plays a significant role in affecting these decisions, as far as I can see, rational moral reasoning itself still has a role as well but has been largely ignored.

Camille Barnes

Apr 7, 2010, 10:50:10 AM
to socialneuro780
I read Monica's post after writing my response paper, so it is more of
an overview of all the articles, not a focus on one in particular,
SORRY!

Camille Response: Intention and Moral Reasoning
This collection of readings focused on moral decision making.
Ciaramelli, Muccioli, Làdavas, and di Pellegrino used patients with
brain lesions in the ventromedial prefrontal cortex to explore the
brain regions involved in moral decision making. These participants
were presented with a moral dilemma, were asked a question about the
content of the scenario, and then indicated whether an action one
might perform in the scenario was appropriate or
inappropriate. They completed 3
sessions of 15 scenarios each all presented on the computer. They
found that individuals with brain lesions in the ventromedial
prefrontal cortex were less inclined and slower to approve moral
violations compared to actions with no moral implication. Also,
patients were more likely to approve moral violations in personal
moral dilemmas, whereas their performance in impersonal moral dilemmas
was not different than that of control participants. I like how these
authors presented this article. They began with evidence already
found from brain activation studies and developmental evidence. Then
they proposed their study as confirming evidence using a different
method. The addition of a new means of testing the hypothesis
provides a stronger argument.
Greene, Sommerville, Nystrom, Darley, and Cohen recognize that
both reason and emotion have a role in moral decision making. To
further understand the brain areas active in moral decision making,
they explored brain activation during the processing of moral
dilemmas. Participants were presented with personal moral, non-
personal moral, and non-moral dilemmas. After viewing each dilemma,
participants rated the action in the dilemma as appropriate or
inappropriate. During this task, participants' brains were imaged via
fMRI. They found that during personal moral dilemmas, emotion areas
were more active: medial portions of Brodmann's Areas (BA) 9 and 10
(medial frontal gyrus), BA 31 (posterior cingulate gyrus), and BA 39
(angular gyrus, bilateral), and that there was less activation in
working memory areas.
Moll, de Oliveira-Souza, Eslinger, Bramati, Moura,
Andreiuolo, and Pessoa used pictures to explore the role of emotion in
moral processing. Participants were scanned while viewing pictures of
emotionally charged scenes with and without moral content as well as
emotionally neutral pictures (6 different categories of images in
all). After fMRI scanning, subjects rated each picture for moral
content, emotional valence, and level of arousal on visual analog
scales. The experimenters found that moral stimuli activated the
right medial OFC and the medial frontal gyrus (MedFG) and the cortex
surrounding the right posterior superior temporal sulcus (STS). They
also proposed a pathway for moral processing; there was a highly
specific and statistically significant increase in connectivity
between the medial OFC and the STS, the precuneus, and the same
regions of the MedFG, during viewing of morally relevant images. In
this study, I did not understand the purpose of many of the
categories of images; the authors told us what the categories
were but did not provide a rationale for why they were needed. The
authors state that “We suggest that an impaired ability to
automatically and rapidly process moral emotions in response to signs
of moral violations may be a critical mechanism underlying this
dissociation.” Since timing is so critical, I thought using an EEG
maybe useful in understanding this process.
Moll, de Oliveira-Souza, et al. looked at agency and morality
and the brain areas associated with them. They presented written
statements to participants that described action scenarios
(‘‘scripts’’) that independently addressed (1) the contribution of
agency to brain activation independently of emotional processing, and
(2) the effects of distinct classes of moral emotions, namely
prosocial and other-critical emotions, while controlling for agency.
Scripts expressed either no agency, neutral agency, indignation-
self, indignation-other, disgust, compassion, embarrassment, or
guilt. They found that anterior medial PFC and temporal poles were
more consistently activated by prosocial emotions, whereas the dorsal
ACC, lateral OFC and the ventral temporo- occipital cortex responded
more consistently to other-critical emotions. Empathetic emotions
(guilt/compassion) activated the midbrain and ventral striatum.
Monin, Pizarro, and Beer present a review article about the
debate between the role of reason and emotion in moral decision
making. Basically, they state that the reason there is so much debate
in this area is because the experimenters arguing for a predominant
role of reason (moral dilemmas which elicit use of reasoning) are
using different methodology than the experimenters arguing for the
predominant role of emotion (moral reactions which encourage the use
of emotion in decision making). The authors agree that these 2 moral
situations evoke different processes, and that there is value in using
different methodologies to understand moral decision making. They also
express the need to include moral temptation, moral self-image, and
lay theories of morality in study of morality.
Tabibnia, Satpute, Lieberman use the ultimatum game to explore
whether perceptions of fairness elicit positive emotions. They used
methodology that looked separately at fairness and monetary gain. One
participant makes an offer to give the other participant some sum of
money. If participant 2 accepts the offer, then both participants get
the amounts proposed, but if participant 2 rejects the offer, no one
receives money. They found that participants experienced greater self-
reported positive emotion and activation of reward brain areas
(ventral striatum, the amygdala, VMPFC, OFC, and a midbrain region
near the substantia nigra) when fair offers were presented. Also,
accepting unfair offers required emotional regulation, activating the
right VLPFC, which reduces negative affect associated with the
anterior insula. I liked how this study designed the offers in the
ultimatum game so that they could differentiate monetary gain from
perceptions of fairness.
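The payoff rule of the ultimatum game described above can be sketched in a few lines (a minimal illustration; the function name and the dollar amounts are my own, not taken from the study):

```python
# A minimal sketch of the ultimatum game's payoff rule as summarized above.
# Stake and offer values below are illustrative, not from Tabibnia et al.

def ultimatum_payoffs(stake, offer, accepted):
    """Return (proposer_payoff, responder_payoff) for one round.

    stake:    total sum to be divided
    offer:    amount the proposer offers to the responder
    accepted: the responder's decision
    """
    if not accepted:
        # Rejection: neither player receives anything.
        return (0, 0)
    return (stake - offer, offer)

# A fair split vs. an unfair offer of the same nominal value to the responder:
print(ultimatum_payoffs(10, 5, True))   # (5, 5)  - fair split of $10
print(ultimatum_payoffs(23, 5, True))   # (18, 5) - same $5 gain, unfair split
print(ultimatum_payoffs(10, 2, False))  # (0, 0)  - rejection forfeits both payoffs
```

Comparing the first two calls shows how the design can hold the responder's monetary gain constant ($5 in both) while varying only the fairness of the split.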
I did have one concern with these studies. With all of these
studies having so few participants, I wonder how much of brain imaging
is capturing social desirability effects. Even if the data are
anonymous, participants want to appear moral, and therefore
activations may reflect in part a desire to appear moral and not actual
moral decision making. It seems that social desirability would be
higher in the moral situations than non-moral ones, since they are a
greater reflection of the self. I am not sure if there is a method of
assessing social desirability concerns, but if so I would have liked
to have seen that addressed.

Jenny Perella

Apr 7, 2010, 2:41:17 PM
to socialneuro780

I decided to focus on the Tabibnia et al article this week. In their 2
studies, the authors explored whether receiving a fair offer was
associated with positive affect above and beyond the positive affect
that accompanies material gains. In particular, they wanted to know if
fair treatment activated the reward system, thus making it desirable,
and if unfair treatment had a biological reason to be aversive. If
fair treatment is rewarding, then people should be happier with a fair
offer than with an unfair offer, given that the amount of money
offered is the same. Similarly, it was then expected that the reward
regions of the brain would be more active during fair vs. unfair
offers: ventral striatum, amygdala, vmpfc, ofc, and midbrain dopamine
areas. The authors were also interested in whether emotion regulation
brain regions (decreased anterior insula activation) would be
important when accepting unfair offers, implicating a decreased desire
to reject the offer.

In both studies, participants played the role of the responder in the
ultimatum game, deciding whether or not to accept a fair or unfair
offer. I liked that they used the ultimatum game – I think it is one
of the more natural behavioral measures. The first study was just
looking at self-reported happiness and contempt associated with fair
and unfair offers. I wondered why they didn’t have more participants
in the first study (n = 29) since it wasn’t a brain scan study, but
this is a small flaw. I also wondered if there would have been a
difference in ratings if the participants had been asked to rate their
happiness and contempt for each offer as it occurred, rather than
subsequent to playing the game. Answering these questions after the
game has been played makes the scenarios more hypothetical and less
personally relevant; in addition, past research has shown that people
are not very good (accurate) at predicting how some unexperienced
event would make them feel. Regardless, the authors found that
participants reported greater happiness for fair offers than unfair
offers of equal value, and reported greater contempt for unfair offers
vs. fair offers.

The second study used fMRI to scan participants while they were
considering fair or unfair offers. Fair offers were associated with
higher activation in the ventral striatum, amygdala, vmpfc, ofc, and
the midbrain region near the substantia nigra, as predicted. However,
there was no evidence to suggest that these regions were also more
active when accepting vs. rejecting unfair offers. This suggests that
the acceptance of unfair offers is driven by logic, not emotions. I
liked that the authors considered whether or not activation of the
reward system during fair offers could be due to carry-over effects
from the first part of the trial (they determined this was not the
case).

Throughout this article, the authors kept mentioning that it is
difficult to separate emotional responses to fairness from emotional
responses to monetary payoff. They compared fair and unfair
offers for equal monetary values, but I wondered why they didn’t also
look at the proposer’s (rather than the responder’s) affect. Here, the
participant would still be receiving an unfair offer ($8/10), but the
offer would favor the proposer (an unfair situation that is associated
with high monetary payoff). I think that this would be an interesting
addition to their current study; we now know about brain activation
and the acceptance of an unfair offer that disadvantages someone, but
what about brain involvement during the acceptance of an unfair offer
that favors someone? I also wondered if fairness matters less as the
monetary value of the reward increases. Behavior may change such that
more unfair offers are accepted, but would happiness decrease in these
offers?

Stuart Daman

Apr 7, 2010, 4:29:57 PM
to socialneuro780
I read the Tabibnia, Satpute & Lieberman (2008) paper the most
closely. Some of the things they showed followed directly from some of
the other papers. Namely, when making an essentially immoral decision
(or being victim of immorality, in their own case) some special areas
are activated in order to override the initial expectation (a morally-
consistent behavior).
In their studies, which I think were very well written (perhaps
because of the journal in which it was published, and perhaps even
more surprisingly considering it was co-authored by Matt Lieberman),
they looked at how fairness relates to brain activation. They seemed
to be interested in brain areas similar to the other papers (e.g.
vmPFC).
I liked that the researchers did two studies, the first of which was
basically a self-report version of the study, which they then
replicated and mapped to brain areas in the second study. I thought
that their use of the ultimatum game was neat too, I'm surprised that
I hadn't heard of it before. Overall, they basically showed that fair
bets resulted in activation of rewards areas of the brain. The
important aspect of this is that it was not the amount of money that
determined the hedonic activation, it was how fair the bet was. In
other words, if you are given one million dollars and your co-player
is also, you'll be happy; but if he gets ten million dollars to your one
million, you will not be so happy. It's not the amount of reward that
counts in this situation, it's how evenly the rewards are distributed.
One thing they emphasize is that this is interesting and important
because it shows that fairness is a sort of reward for people, not
that fairness is simply the expected norm that results in no special
processing. In addition, they say that fairness is processed
automatically, which kind of suggests that it is the expected norm,
otherwise it might not be processed automatically. Responses to
unfairness were more complicated. Offers with higher yields were
sometimes accepted, but with some extra activation in self-control
related areas, suggesting that some extra processing is going on when a
person accepts an unfair offer.
The authors comment on a TMS study, which is a method I've always
found interesting. They pulse a strong magnet at specific parts of the
brain, in effect creating a temporary lesion there. Anyway, in the
study they mention, disruption of the right dlPFC messed with the
rejection of unfair offers, suggesting that the right dlPFC may be
important for maintaining the overall goal of earning money, and also
that the right vlPFC may be more important for overriding the goal of
making money in order to turn down unfair offers.

And I have just recently received in my inbox that Jenny focused on
this article too. xP

In reference to the other articles, I didn't spend much time on the
review paper, I liked the paper that used patients with brain damage
instead of fMRI (I particularly found interesting their discussion of
how the amygdala may be important for affective info regarding
immediate outcomes and the vmPFC regarding long-term affective
outcomes), and the other 3 or 4 were very similar and blurred in my
memory without careful review.

David Dinwiddie

Apr 8, 2010, 6:13:19 PM
to socialn...@googlegroups.com
I focused on the article by Moll et al. (2002). The primary purpose of the article was to compare the neural correlates of moral emotions to those of basic emotions. According to the article, moral emotions "differ from basic emotions in that they are linked to the interests or welfare either of society as a whole or of persons other than the agent." Moral emotions allow for immediate appraisals of interpersonal events. Based on prior research, the authors hypothesized that moral emotions would activate the orbitofrontal cortex to a greater extent than basic emotions. For both types of emotions, they expected the amygdala, insula, and subcortical nuclei to be activated.
 
Only 7 participants were used in their study (5 men and 2 women). Participants were scanned with fMRI while looking at pictures that were either emotionally charged with moral content, emotionally charged without moral content, or emotionally neutral. The morally charged pictures could be war scenes, physical assaults, or poor children on the streets. The emotionally charged pictures could be negative (body lesions) or positive (landscapes). Interesting pictures, neutral pictures, and scrambled pictures were also included. After the scanning was over, participants rated the pictures for moral and emotional content. A block design was used, with three pictures presented in each block. Each picture was presented for 5 seconds, and within a block the pictures were all from one category. There were 8 blocks of pictures per category, with 15 seconds between blocks so that emotional levels could return to baseline.
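The block timing described above reduces to some simple arithmetic (a minimal sketch using only the numbers in this summary; the assumption that a rest period follows every block, rather than only falling between blocks, is mine):

```python
# Timing arithmetic for the block design summarized above.
# Values from the summary: 3 pictures per block, 5 s per picture,
# 8 blocks per category, 15 s of rest between blocks. This sketch
# assumes a rest period follows every block.

PICTURES_PER_BLOCK = 3
PICTURE_DURATION_S = 5
BLOCKS_PER_CATEGORY = 8
REST_S = 15

block_s = PICTURES_PER_BLOCK * PICTURE_DURATION_S      # stimulation time per block
category_s = BLOCKS_PER_CATEGORY * (block_s + REST_S)  # total time per category

print(block_s)     # 15 seconds of pictures per block
print(category_s)  # 240 seconds for one category's 8 blocks
```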
 
The moral pictures were rated as being the most morally charged, while no differences were found between the other sets of pictures. The unpleasant stimuli were rated as more unpleasant than the moral pictures. In terms of arousal, the neutral pictures led to lower levels of arousal than the other conditions, but the other conditions did not differ from one another in arousal. (No mention was made of the scrambled pictures in the behavioral results.)
 
In the primary neural analysis, brain activation in the negative emotion and negative moral conditions was compared relative to the neutral pictures. The moral and non-moral conditions had several areas of activation in common. These regions include the "extended amygdala and upper midbrain bilaterally, periaqueductal gray matter, right thalamus, and superior colliculus, right insula/inferior frontal gyrus, right anterior frontal cortex, bilateral posterior temporal-occipital cortex and right intraparietal sulcus." These results are consistent with previous results measuring areas of activation to unpleasant stimuli. With moral stimuli there was increased activation in the right medial OFC, medial frontal gyrus, and the cortex surrounding the right posterior superior temporal sulcus. For the non-moral images there was greater activation in the right middle frontal gyrus and the right anterior insula. These results are not due to emotional valence or visual arousal.
 
Because there were no behavioral measures during the scans, the activation in the OFC and MedFG supports the idea that moral emotions are implicit. Activation of these regions is likely critical in linking the emotional experiences of an individual to a moral appraisal. Patients with damage to the OFC, MedFG, or STS could have trouble with moral emotions.
 
I thought this study was well done, and the authors took great care in controlling for other variables which could explain activation in various regions of the brain. These results could be useful in potentially helping patients with impaired moral judgments.




Monicarodr

Apr 9, 2010, 7:45:19 AM
to socialneuro780
I found that much of what we have been reading for this week is
characterized by dichotomies (e.g., reason versus affect, cognition
versus emotion, personal versus impersonal moral violations, fairness
versus utilitarianism, moral sensitivity versus social agency,
hardwired versus cultural), much like the topic of morality itself, in
which we differentiate right from wrong.

One of the main dichotomies of moral philosophy in Western cultures is
indeed this: Is there an “a priori” right or wrong? Or are actions
right or wrong depending on their consequences? Kant versus Mill, or
deontologists versus utilitarians.

Kant was the main proponent of “deontological ethics” (the root of the
word, deon, is Greek for duty). He believed that some actions are
wrong no matter what consequences follow from them. Kant argued that the
only absolutely good thing is a “good will” or the motives/intentions
of the person carrying out the action. According to Kant, the
consequences of actions, even if good, are not morally right if the
person acts on bad intentions or bad will. The highest good is good
without qualification if one is acting from duty.

In contrast, Mill, who was the main proponent of Bentham’s
utilitarianism, believed that what is right is "the greatest good for
the greatest number of people", which refers to the idea that actions
are determined solely by their utility in providing happiness or
pleasure to the greatest number of sentient beings. In sharp contrast
to Kant, for utilitarianism the moral worth of an action is determined
by its outcome.

By focusing on the neural basis of human morality, social
neuroscience, as Greene argues, may be finding out how much of the
dichotomy between reasoning and higher cognition versus intuitive,
emotional processing actually reflects structural properties of the
human brain. Greene and colleagues have shown that brain areas
associated with emotional social processing (medial prefrontal cortex,
posterior cingulate/precuneus, superior temporal sulcus, inferior
parietal lobe) were more active when the participants considered
personal moral dilemmas whereas the areas associated with cognitive
control (right DLPFC, bilateral inferior parietal lobe) were more
active when participants were dealing with impersonal moral dilemmas.

Therefore, both rationalists and utilitarians are right, except for
the fact that socio-emotional processing is involved in
“deontological” intuitions, and cognitive control processing is
involved when making utilitarian judgments. Moll and colleagues also
argue for a moral sensitivity construct that depends on the brain
activation of areas related with social agency as well as areas
related to typical moral emotions (guilt, embarrassment,
compassion). Basic and moral emotions activate the amygdala, thalamus,
and upper midbrain. However, the medial prefrontal cortex, the
orbital prefrontal cortex, and the STS are recruited when moral
appraisals are involved.

Beka argues, however, that we may be highlighting the role of
intuitive, emotional activation patterns without giving a fair chance
to rational, non-emotional moral dilemmas in which rational moral
reasoning may have a larger role.

I found it interesting that Camille, as always, got to the heart of the
matter straightforwardly: How much of this work is capturing social
desirability effects?

Jenny discussed the Tabibnia et al. article, which examined whether
fairness activates the reward system. Her interesting proposals are to
examine brain activations of the participant when the proposer is the
one who is treated fairly or advantageously, as well as changes in the
relevance of fairness with changes in monetary values.

Is Stuart giving an answer to the latter? Stuart also mentions the
role of emotion regulation in these studies. I was also surprised to
see how much self-control and emotion regulation appears to be a
central component in moral judgments.

David, who seems to have an eye for the clinical neuroscience work,
examined Ciaramelli et al., in which the role of the ventromedial
prefrontal cortex in moral judgment is examined with patients with
lesions in this area. Here, the authors also pose relevant questions
regarding the role of the ventromedial prefrontal cortex.

The ventromedial prefrontal cortex was associated with several aspects
of self-control functioning, such as a lack of anticipation of
future consequences, reduced self-conscious emotions, and failures to
use emotional cues to inhibit morally unacceptable behaviors. Monin,
Pizarro and Beer focus extensively on moral temptations as
prototypical moral conundrums.
