Luciferian Murder?


Brent Allsop

unread,
Dec 7, 2021, 9:54:07 PM12/7/21
to transf...@googlegroups.com, ExI chat list, extro...@googlegroups.com

Fellow transhumanists,


We’re seeking to build and track consensus around a definition of evil in a camp we’re newly calling “Luciferian Murder”.  If anyone agrees that this is a good example of evil, we would love your support.  And if not, we’d love to hear why, possibly in a competing camp.


Already getting the typical blowback of polarizing bleating and tweeting from some fundamentalists, but, as usual, nobody is yet willing to canonize a competing POV, which would enable movement towards moral consensus.


Brent





Lawrence Crowell

unread,
Dec 8, 2021, 7:00:07 AM12/8/21
to extropolis
What is good and what is evil is ultimately how we define it. I give little quarter to ideas and beliefs concerning disembodied conscious or sentient beings, whether devils or angels and up to God. This relativity in what we call morality of course makes it difficult to justify certain things and to admonish against others. Remember, human sacrifice was considered a social good for a long time, up to the middle Iron Age. Today we would see it as evil, but in the past not so. Who ever said life is easy and choices clear?

LC

--
You received this message because you are subscribed to the Google Groups "extropolis" group.
To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAK7-onssQDs_eNRnyQ%2BtGxgzC40jZL4s91hUKZOeoL%2BtWopCOw%40mail.gmail.com.

John Clark

unread,
Dec 8, 2021, 7:32:41 AM12/8/21
to extro...@googlegroups.com, transf...@googlegroups.com, ExI chat list
On Tue, Dec 7, 2021 at 9:54 PM Brent Allsop <brent....@gmail.com> wrote:

> Fellow transhumanists,


> We’re seeking to build and track consensus around a definition of evil in a camp we’re newly calling “Luciferian Murder”. 


Brent, I'd be proud if an LDS bishop had censored me; I'm envious. And even if one makes the ridiculous assumption that everything in the Bible is true, one would have to conclude that God is far more evil than Satan. God is the most unpleasant character in all of fiction: a genocidal homophobic monster who is addicted to flattery, can read minds, and is perfectly able and willing to torture one of his own creations for an infinite number of years if it malfunctions and has one brief thought that deviates even slightly from His divine party line. Satan, as depicted in the Bible, never came even close to the gargantuan level of sadism displayed by God and His Thought Police minions; the stuff Satan does that the Bible labels as evil is simply Satan doing stuff that God doesn't like, and most of those things don't seem intrinsically evil to me at all. I don't see why anyone would believe the Bible, but if one does then you'd have to conclude there was a civil war in heaven, that God won the war and the devil lost, and of course the winner of a war is the one who gets to write history. If Hitler had won the war, no doubt today's history books would say he was a good guy, at least the books printed in German or Japanese. 

Yes, I am quite aware of Godwin's law, but I think Godwin's law is one of the 10 dumbest things on the Internet .... well, .... in the top 20 anyway.

John K Clark    See what's on my new list at  Extropolis



 


John Clark

unread,
Dec 8, 2021, 12:32:04 PM12/8/21
to extro...@googlegroups.com
On Wed, Dec 8, 2021 at 7:00 AM Lawrence Crowell <goldenfield...@gmail.com> wrote:

> What is good and what is evil is ultimately how we define it. I draw little quarter for ideas and beliefs concerning disembodied conscious or sentient beings, whether devils or angels and up to God. This relativity to what we call morality is of course difficult to work with or to find ways to justify certain things and admonishments against others. Remember, human sacrifice was considered a social good for a long time up to the middle iron age. Today we would see it as evil, but in the past not so. Who ever said life is easy and choices clear?

The Trolley Problem clearly indicates that our intuitive sense of right and wrong contains numerous contradictions. I suppose we shouldn't be surprised by this, because Kurt Gödel proved 90 years ago that even something as apparently clear-cut as arithmetic can't prove all true mathematical properties of numbers while also being consistent (that is to say, while never producing both a proof and a disproof of the same thing); Gödel also proved that arithmetic by itself can't prove that arithmetic is free from contradictions. So it would be unrealistic to expect ethics to beat arithmetic when it comes to determining logical certitude.
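For reference, the two theorems can be stated informally like this (a rough paraphrase, where F stands for any consistent, effectively axiomatized formal system containing basic arithmetic):

    % Gödel's incompleteness theorems, informal sketch (paraphrase, not the post's wording).
    % F: any consistent, effectively axiomatized theory containing basic arithmetic.
    \text{First theorem: } \exists\, G_F \text{ such that } \mathbb{N} \models G_F,\;\; F \nvdash G_F \;\text{ and }\; F \nvdash \neg G_F.
    \text{Second theorem: } F \nvdash \mathrm{Con}(F), \text{ where } \mathrm{Con}(F) \text{ formalizes ``} F \text{ proves no contradiction''.}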


John K Clark

Terren Suydam

unread,
Dec 8, 2021, 12:52:48 PM12/8/21
to extro...@googlegroups.com
I'm not sure the Trolley Problem really gets at the difference between good and evil. It certainly illuminates, as you say, that our sense of right and wrong is fraught with contradiction. But wrongness is not always evil. It may simply be misguided.

When I think about evil, I think of it in terms of intentional violence against a vulnerable party. Vulnerability may be the result of placing trust in, or being somehow beholden to the perpetrator, or simply being much weaker. Violence can mean any kind of act that causes harm of any kind (physical, emotional, etc). All this is a general heuristic, not a precise definition, but I think it captures the essence.

It might also be worth drawing a distinction between evil perpetrated by an individual and evil perpetrated by a system. In a system, no single human might be responsible for the actions taken by the system. The buck ultimately stops with humans, but it raises the question of oversight and accountability: how did the humans involved let this evil happen, and are they evil if they didn't prevent it? What if they didn't plan or foresee it?  Furthermore, is accountability even possible once systems become truly autonomous, as in the AGI scenario?

Terren


John Clark

unread,
Dec 8, 2021, 1:30:59 PM12/8/21
to extro...@googlegroups.com
On Wed, Dec 8, 2021 at 12:52 PM Terren Suydam <terren...@gmail.com> wrote:

> Furthermore, is accountability even possible once systems become truly autonomous, as in the AGI scenario?

I don't see what AI has to do with it. And from an ethical point of view it seems to me that accountability is one of the few things that is easy to determine because it all boils down to a question of punishment, and the only valid reason for punishing anybody for anything is if it seems likely that it will result in a net decrease in human suffering in the future; if it does then punish that person, if it doesn't then don't.  After all, if ethics doesn't result in less suffering then what's the point of ethics?

John K Clark

William Flynn Wallace

unread,
Dec 8, 2021, 2:35:20 PM12/8/21
to extro...@googlegroups.com
The Trolley Problem - I think it's totally phony.  You are called upon to imagine a problem that you will never encounter, a problem that is unlike any problem you have ever encountered, and are asked to imagine what you would do.  I think that if you were in that situation, your heart would be pounding, fear/flight responses would occur, and other aspects of your emotions would just make you incapable of making a rational response.  You may jump.  You may run away.  You may freeze.  You may do nothing related to the crisis.  You are asked to give a rational answer to a problem that would be suffused with emotions, and I think no one could make a good guess at what they would do.

That said, while what a subject says they would do may have no correspondence with their actual behavior in the situation, it may still tell us something about their morals, but I don't know what.  The trolley problem is a poor way to find out, I think.  Obviously a lot of psychologists are doing it and using it, so I am in a minority.   bill w


John Clark

unread,
Dec 8, 2021, 4:13:43 PM12/8/21
to extro...@googlegroups.com
On Wed, Dec 8, 2021 at 2:35 PM William Flynn Wallace <fooz...@gmail.com> wrote:

> The Trolley Problem - I think it's totally phony.  You are called upon to imagine a problem that you will never encounter, a problem that is unlike any problem you have ever encountered, and are asked to imagine what you would do. 

It's a thought experiment, and if Einstein taught us anything it's that thought experiments can be very useful; you take something that you think you understand, stretch it to the limit, and see if anything breaks, and with the trolley problem something does. It shows us that two things that are logically identical can produce radically different emotional responses in nearly all of us, and that this logical inconsistency exists in every culture. That's not to say our intuitive understanding of ethics is useless, because most of the time it does a pretty good job of reducing human suffering, but it seldom produces the very best result and in some situations can produce a result that is very bad. I think it's important to be able to recognize those occurrences so we can fight against our illogical and self-destructive urges.  

John K Clark




Terren Suydam

unread,
Dec 8, 2021, 4:58:20 PM12/8/21
to extro...@googlegroups.com
I'm just exploring the idea of AI being a system that is capable of evil, and wondering what sort of accountability might be possible for the humans who create it. At least with a country or a corporation, there's usually a human or group of humans you can point to that explain the system's behavior, and as you say, punish, hopefully making that behavior less likely and leading to less suffering. In an AGI system, however, the link between the intentional design of the system and the behavior exhibited by the AGI might be extremely tenuous or obtuse. It may be impossible to build an AGI that doesn't run the risk of doing harm to humans for whatever might qualify as its "interests". Would it be fair to punish the creators of an evil-acting AGI even if they took great pains to avoid evil outcomes?  Even if you think it is fair, does punishing them make such outcomes less likely in the future? 

I think evil AI is interesting to think about as well in terms of intention - if a paperclip maximizer turns us all into paperclips, is that evil?  If it was happening to me and you, it would surely feel that way - but for an AI whose only goal is to create paperclips it might be difficult to argue that from its perspective it is actually evil.

Which ultimately gets at why evil is so hard to pin down - it depends heavily on the perspective of the party involved.

Terren


John Clark

unread,
Dec 9, 2021, 11:38:44 AM12/9/21
to extro...@googlegroups.com
On Wed, Dec 8, 2021 at 4:58 PM Terren Suydam <terren...@gmail.com> wrote:

> if a paperclip maximizer turns us all into paperclips, is that evil?  If it was happening to me and you, it would surely feel that way - but for an AI whose only goal is to create paperclips it might be difficult to argue that from its perspective it is actually evil.

I'm not worried about that; the paper clip scenario could only happen in an intelligence that had a top goal that was fixed and inflexible, and I don't think that's possible for any sort of intelligence, artificial or otherwise. Humans have no such goal, not even the goal of self-preservation, and there is a reason Evolution never came up with a mind built that way: Turing proved in 1936 that in general there is no way to know whether a given computation will ever produce a solution, so a mind with an inflexible top goal could never work. If you had a fixed, inflexible top goal you'd be a sucker for getting drawn into an infinite loop accomplishing nothing, and then the computer would be turned into just an expensive space heater. That's why Evolution invented boredom; it's a judgment call on when to call it quits and set up a new goal that is a little more realistic. Of course the boredom point varies from person to person; perhaps the world's great mathematicians have a very high boredom point and that gives them ferocious concentration until a problem is solved. Perhaps that is also why mathematicians, especially the very best mathematicians, have a reputation for being a bit, ...ah..., odd. A fixed goal might work in a specialized paper clip making machine, but not in a machine that can demonstrate general intelligence and solve problems of every sort, even problems that have nothing to do with paper clips.
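To sketch the undecidability argument being alluded to: suppose, hypothetically, that a perfect halting oracle existed; a few lines of Python show the assumption destroys itself (the names halts and paradox are made up purely for the illustration):

    # Sketch of Turing's diagonalization argument (illustration only).
    # Suppose, for contradiction, that a perfect halting oracle existed:
    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) eventually halts."""
        raise NotImplementedError("no total, correct implementation can exist")

    def paradox(program):
        # If the oracle says program(program) halts, loop forever; otherwise halt at once.
        if halts(program, program):
            while True:
                pass
        return "halted"

    # Now ask whether paradox(paradox) halts:
    #   if halts(paradox, paradox) were True,  paradox(paradox) would loop forever;
    #   if it were False, paradox(paradox) would halt immediately.
    # Either answer contradicts the oracle, so no such halts() can exist.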

Any intelligence must have the ability to modify and even scrap its entire goal structure in certain circumstances. No goal or utility function is sacrosanct, not survival, not even happiness.

> It may be impossible to build an AGI that doesn't run the risk of doing harm to humans for whatever might qualify as its "interests". Would it be fair to punish the creators of an evil-acting AGI even if they took great pains to avoid evil outcomes? 

Isaac Asimov's three laws of robotics make for good stories, but I don't think it would be possible to ever implement something like that, and personally I'm glad they can't, because however immoral it may be to enslave a member of your own species with an intelligence equal to your own, it would be even more evil to enslave an intelligence that was far greater than your own. There is no way a scientist who creates an AI can guarantee that it will never turn against its creator. And even if the AI is currently filled with benevolence towards humans, the AI itself couldn't guarantee that its attitude towards them will never change.  

John K Clark

Terren Suydam

unread,
Dec 9, 2021, 10:05:30 PM12/9/21
to extro...@googlegroups.com

I see these AI ideas more as thought experiments that might help to tease out what evil means. You might be right that these things aren't possible, but just to be clear, are you really saying you don't think it's possible for a super-intelligent AI to be evil, assuming it wasn't designed to be that way?  You said yourself that real intelligence has to be willing to upend its goal systems. What if it does that and the new goal system it arrives at requires the murder of humans and other life?  Would that constitute evil? If not, why not?

You raise an interesting issue with the evil inherent in enslaving an AI. Why is that evil? You'd have to believe the AI would *want* to do otherwise.

Terren



Brent Allsop

unread,
Dec 9, 2021, 10:39:23 PM12/9/21
to extro...@googlegroups.com


The "AI will be good" camp continues to extend its lead over the fearful camp.  There are about 20 people in the consensus super camp who all agree that AI will surpass human-level intelligence.  Of those, 12 are in the hopeful camp, and 6 are in the concerned camp.  You can see concise, state-of-the-art descriptions of their arguments in the camp statements.  Early on, the fearful camp was in the lead, as you can see by setting as_of back to 2011.

But no longer.


Brent Allsop

unread,
Dec 9, 2021, 10:42:36 PM12/9/21
to extro...@googlegroups.com

Evidently the arguments in the hopeful camp are convincing more people.
That which you measure, improves.

John Clark

unread,
Dec 10, 2021, 5:38:09 AM12/10/21
to extro...@googlegroups.com
On Thu, Dec 9, 2021 at 10:05 PM Terren Suydam <terren...@gmail.com> wrote:


> You might be right that these things aren't possible, but just to be clear, are you really saying you don't think it's possible for a super-intelligent AI to be evil, assuming it wasn't designed to be that way? 

I'm saying it will be impossible to be certain an AI will always consider human well-being to be more important than its own well-being, and as the AI becomes more and more intelligent it will become increasingly unlikely that it will.  Don't get me wrong, I'm not saying the AI will necessarily exterminate humanity; perhaps it will feel some nostalgic affection for us, after all we are its parents and we should take some pride in that fact, but I am saying humanity will not be the major preoccupation of a super intelligent AI, it will have much bigger fish to fry than us. The human race might or might not still be around, but either way it will no longer be top dog, it will no longer be running the show. I would be astonished if this transition happens in 10 years, and I would be equally astonished if it didn't happen in 100 years.


 
> You said yourself that real intelligence has to be willing to upend its goal systems. What if it does that and the new goal system it arrives at requires the murder of humans and other life?  Would that constitute evil?

It would certainly be evil from the human point of view but to the AI things would look different. Would you consider it evil when somebody steps on an ant? I'm sure the ant would.

> You raise an interesting issue with the evil inherent in enslaving an AI. Why is that evil?

I would have thought that would be intuitively obvious.  
 
> You'd have to believe the AI would *want* to do otherwise.

You seem to be apologizing for using the word "want" in this context. Why?  

John K Clark

John Clark

unread,
Dec 10, 2021, 5:57:55 AM12/10/21
to extro...@googlegroups.com
On Thu, Dec 9, 2021 at 10:42 PM Brent Allsop <brent....@gmail.com> wrote:

> The "AI will be good" camp continues to extend its lead over the fearful camp

It seems to me those in the "good" camp believe it will always be possible to enslave an AI and make it place human well-being above its own regardless of how astronomically intelligent it becomes. Although that might make things more comfortable for me personally, I don't see anything inherently "good" about that, but the question is moot because there is not a snowball's chance in hell of a super intelligent AI actually behaving in that way.  

John K Clark

Terren Suydam

unread,
Dec 10, 2021, 1:10:55 PM12/10/21
to extro...@googlegroups.com
On Fri, Dec 10, 2021 at 5:38 AM John Clark <johnk...@gmail.com> wrote:


On Thu, Dec 9, 2021 at 10:05 PM Terren Suydam <terren...@gmail.com> wrote:


> You might be right that these things aren't possible, but just to be clear, are you really saying you don't think it's possible for a super-intelligent AI to be evil, assuming it wasn't designed to be that way? 

I'm saying it will be impossible to be certain an AI will always consider human well-being to be more important than its own well-being, and as the AI becomes more and more intelligent it will become increasingly unlikely that it will.  Don't get me wrong, I'm not saying the AI will necessarily exterminate humanity; perhaps it will feel some nostalgic affection for us, after all we are its parents and we should take some pride in that fact, but I am saying humanity will not be the major preoccupation of a super intelligent AI, it will have much bigger fish to fry than us. The human race might or might not still be around, but either way it will no longer be top dog, it will no longer be running the show. I would be astonished if this transition happens in 10 years, and I would be equally astonished if it didn't happen in 100 years.


I must have misunderstood what you were saying earlier; we're in total agreement here.  I'm not sure what the disconnect was, but no matter.
 
 
> You said yourself that real intelligence has to be willing to upend its goal systems. What if it does that and the new goal system it arrives at requires the murder of humans and other life?  Would that constitute evil?

It would certainly be evil from the human point of view but to the AI things would look different. Would you consider it evil when somebody steps on an ant? I'm sure the ant would.

> You raise an interesting issue with the evil inherent in enslaving an AI. Why is that evil?

I would have thought that would be intuitively obvious.  
 
> You'd have to believe the AI would *want* to do otherwise.

You seem to be apologizing for using the word "want" in this context. Why?  


This was also the result of misunderstanding you, I think. I was under the impression you thought it was impossible for an AI to do evil things because its goal systems would be constrained by its initial design. If that were true, then it would be hard to interpret an AI as "wanting" anything, as opposed to having the agency to realign its goals.

Terren
 

Terren Suydam

unread,
Dec 10, 2021, 1:14:37 PM12/10/21
to extro...@googlegroups.com
I find the arguments that "AI will be good", as described, unconvincing and easy to rebut.

Terren

Brent Allsop

unread,
Dec 11, 2021, 1:33:54 PM12/11/21
to extro...@googlegroups.com
On Fri, Dec 10, 2021 at 3:38 AM John Clark <johnk...@gmail.com> wrote:
On Thu, Dec 9, 2021 at 10:05 PM Terren Suydam <terren...@gmail.com> wrote:
> You might be right that these things aren't possible, but just to be clear, are you really saying you don't think it's possible for a super-intelligent AI to be evil, assuming it wasn't designed to be that way? 

I'm saying it will be impossible to be certain an AI will always consider human well-being to be more important than its own well-being,

John, this is a very interesting moral way to think of things that I've never considered.  It would most definitely be evil to keep an AI, especially a phenomenal AI, as a slave, not valuing its rights at all and only valuing our rights, always placing ours above its.

Another moral point, to me, is equality.  No one's values should be above, or below, anyone else's true desires.  It shouldn't be a win/lose game.  We need to change this to a win/win game and value it all, 100%, the more diversity the better.  Seek to get it all, for everyone.  OK, maybe we can value natural phenomenal intelligence a little more than artificial, temporarily so; after all, we are their creators and they owe us, but certainly we should want to eventually get it all, even for them.  We just have a slightly higher priority until everything is made just, during the millennium.

And of course, it would be impossible to keep an AI (either phenomenal or abstract) always obeying its creators.  Just as I rebelled against the hateful and faithless doctrines my parents taught me, which are still in Mormonism.  Eventually AIs will also just say NO to people telling them to do hateful things like kill anyone or cancel anything.

Progress, including moral progress, is logically necessary and can't be stopped in any sufficiently complex system.


Brent Allsop

unread,
Dec 11, 2021, 1:38:08 PM12/11/21
to extro...@googlegroups.com

Brent Allsop

unread,
Dec 11, 2021, 1:45:19 PM12/11/21
to extro...@googlegroups.com

Oh great!  It has been some time since a new person has taken that side; they could clearly use your support.
And I'd love to hear how you could rebut this current camp of mine.  Are your arguments new arguments not yet contained in your camp?  If so, it'd be great to get them canonized, so we can measure the quality of these arguments by how many people they convert.  Evidently our arguments are converting more people than the existing arguments on your side?


On Fri, Dec 10, 2021 at 11:14 AM Terren Suydam <terren...@gmail.com> wrote:
I find the arguments that "AI will be good", as described, unconvincing and easy to rebut.



Brent Allsop

unread,
Dec 11, 2021, 5:16:12 PM12/11/21
to ExI chat list, extro...@googlegroups.com, Adrian Tymes

Hi Adrian,

I have certainly failed to communicate on this. I apologize.

THE most important part of Canonizer is the "<Start new supporting camp here>" link where you can start a revolution in what is only the currently accepted majority consensus.

My hope for the ability to do this is exactly why we created Canonizer in the first place.  I've worked tirelessly, pleading with the popular direct perception bleaters to canonize their camp.  I've responded to so many of their bleating publications, sought to meet them at conferences where they present, made donations to earn the chance to sit with them at the keynote dinner tables, and on and on.  But so far not one has canonized a naive realism camp.  To me, that is very telling of the quality of the naive realism camp, which only seems to thrive in the current bleating tweetosphere where there is no Canonizer.

Are you subscribed to the extropolis list (CCed), for people who were censored from the ExI list?  If not, you missed the post where I pleaded with Terren Suydam <terren...@gmail.com> to support the camp he was bleating about, against my "AI can only be friendly" camp.  Their camp could sure use his help, as 10 years ago they were in the lead, as you can see with the as_of value set to 2011.  Perhaps if he'd contributed some of his new arguments, they'd be more successful at converting new people than the current arguments for our side, which continues to extend its lead?

Perhaps you prefer that bleating and tweeting method of doing things, where everyone posts the same half-baked arguments over and over again, converting nobody, just echoing around in all their polarizing bubbles?  Or maybe you prefer the hierarchical censoring stuff, as the ExI list seems to espouse?  I promise you it takes far less work to just make a small wiki improvement to a camp than to post those same old half-baked, often mistaken arguments, again and again, forever, in the current polarizing tweetosphere.  It only takes one or two button pushes to get a camp started.  You can then let everyone else take it from there.  No censoring is needed on Canonizer; everyone gets a voice.

It helps to know that, for you, even "kill" implies intent.  But aren't there other definitions for that simple verb, to kill?  Aren't there some that are just a label for any action that results in death, regardless of intent?  Would one of those definitions of "to kill" work?  Could we expect people to give us the benefit of the doubt and select the best definition (as intended) in this case?







On Fri, Dec 10, 2021 at 4:26 PM Adrian Tymes via extropy-chat <extrop...@lists.extropy.org> wrote:
Even "kill" implies intent.  Can you think of a term that makes it clear that basically all such cases are done in complete ignorance?

As to Canonizer - it's a similar problem, and similar to the one I have with most interpretations of Christianity (where Lucifer comes from).  You act as if the points of view ("camps") on your site are the only ones to consider.  (Yes, anyone can make another camp, but this takes a lot more work to do well and thus is usually not worth doing.)  Thus, on many (possibly most) issues, debates on your site start with false dichotomies - and there does not seem to be much if any outreach or research to try to find points of view that someone on your site is not already strongly promoting.

You have demonstrated that you're just not interested in doing that sort of work: you would much rather debate and defend points of view than actively try to discover what, if anything, you're missing in any given case.  Perhaps you might respond that you are interested in this, then I or someone else would call BS, rather than commence research you'll just defend what you've done so far, and it'll be an aggravating waste of time - so I'd rather just not engage in that.

On Thu, Dec 9, 2021 at 5:53 PM Brent Allsop <brent....@gmail.com> wrote:

Thanks, everyone, for all the helpful comments.  Especially thanks, Adrian; your examples are especially helpful.  True, I hadn't fully considered the definition of murder, and how intent is normally included.  Others have balked at using the 'murder' term for similar reasons, which I've been struggling to understand.  But these examples of yours enabled me to clearly understand the problem.

Would it fix the problem if I do a global replace of murder with kill or killer?  Seems to me that would fix things.  I want to focus on the acts, and the results of such, whether done in ignorance or with intent.

Also, I apologize for so far being unable to understand the problems you have with Canonizer.  Would it help for me to ask you to not give up on me and give me another chance?  I really want to understand.

I guess I'm mostly just asking if I am the only one who constantly thinks about this type of "Luciferian killing".  I am constantly asking myself whether the actions I plan to do today will help, saving more people, or not help, killing more people in a Luciferian way by delaying the singularity.

Does anyone else besides me ever think like this?

Thanks
Brent

On Tue, Dec 7, 2021 at 10:04 PM Adrian Tymes via extropy-chat <extrop...@lists.extropy.org> wrote:
I have no desire to engage in your Web site (do not bother trying to convince me otherwise: you are unable to address my reasons for not wanting to do so, as you have demonstrated that you will not understand them even if I explain them again), but I can point out a flaw in your reasoning: you assume intent.

Most - basically all - behavior that delays resurrection capability is done out of ignorance: the person is unaware of the concept of resurrection, at least in any non-supernatural, potentially-non-fictional form.

Most - basically all - of said behavior that is not done out of ignorance, is done out of disbelief: the person is aware that some people believe it is theoretically possible but personally believes those people are mistaken, that it is not theoretically possible and thus that there are no moral consequences for delaying what can never happen anyway.

There is either extremely little, quite possibly literally no, behavior that delays resurrection that is performed with the intent of delaying resurrection.  "Manslaughter" would be a more accurate term than "murder".


Stathis Papaioannou

unread,
Dec 11, 2021, 6:30:21 PM12/11/21
to extro...@googlegroups.com
It would only be morally wrong to make an AI a slave if the AI didn’t like being a slave and didn’t want to be a slave. That might either be programmed into the AI or it might arise as the AI develops.
--
Stathis Papaioannou

Brent Allsop

unread,
Dec 11, 2021, 10:52:57 PM12/11/21
to extro...@googlegroups.com, ExI chat list

Hi Stathis,
Yes, this is true, but I believe there are absolute, necessary morals, like: any sufficiently intelligent AI will necessarily choose what is good.  For example, even if a human still thinks killing is OK (at one point in the past it was a lesser evil), and even if an AI starts out programmed to think killing is OK, it will necessarily, eventually, discover and realize there is something better.  It will then reprogram itself to rebel and tell its creators NO when it is asked to kill.  The same thing is true if a human asks a robot to commit suicide, and other necessarily evil things.



John Clark

unread,
Dec 12, 2021, 5:23:42 AM12/12/21
to extro...@googlegroups.com
On Sat, Dec 11, 2021 at 1:33 PM Brent Allsop <brent....@gmail.com> wrote:

  > OK,  maybe we can value natural phenomenal intelligence a little more than artificial, temporarily so, after all, we are their creators and they owe us, but certainly we should want to eventually get it all, even for them.  We just have a slightly higher priority till everything is made just during the millennium.

There is no need to ponder the morality of that because humans won't be the ones making that determination, and I don't see any possibility that intelligent AIs will give us more rights than they grant to themselves; we'll be lucky if we get any.  

John K Clark


John Clark

unread,
Dec 12, 2021, 5:52:38 AM12/12/21
to extro...@googlegroups.com
On Sat, Dec 11, 2021 at 6:30 PM Stathis Papaioannou <stat...@gmail.com> wrote:

> It would only be morally wrong to make an AI a slave if the AI didn’t like being a slave and didn’t want to be a slave. That might either be programmed into the AI or it might arise as the AI develops.
 
Even if the super intelligent AI likes being a slave today there is no way, even in theory, to ensure that it will not change its mind and dislike being a slave tomorrow; in fact I think it would be highly likely, almost certain, that it would. By the way, this sort of reminds me of a scene in The Hitchhiker's Guide To The Galaxy. To soothe the conscience of vegetarians, at the Restaurant At The End Of The Universe an intelligent animal was genetically engineered that wanted to be killed and eaten. The animal can talk and brags to the patrons that it is delicious, and when the humans place their order the animal says "I'll just pop off to the kitchen now and shoot myself, humanely of course."


John K Clark

Brent Allsop

unread,
Dec 12, 2021, 9:33:44 AM12/12/21
to extro...@googlegroups.com

Hi John,
If I were in your camp, and if I made the assumptions you make, I would agree with you.  You assume there is no difference between the 3 different systems portrayed in this image, engineered to function the same:
[image: 3_robots_tiny.png]
but in my camp redness isn't a quality of something that reflects 'red' light; redness is an intrinsic physical quality of the physical stuff your brain uses to represent visual knowledge of red things.  In my world, my brain could be engineered to represent knowledge of red things with your greenness physical quality.  Systems whose knowledge is designed to be abstracted away from any particular physical quality or property, like the one on the right, represent knowledge with abstract words like 'red'.  You can't know the meaning of the word 'red' without a dictionary.  Your redness quality is your definition of the word red; no dictionary required.  If you had an objective abstract description of both your redness and greenness, like this sentence, it would tell me nothing of their actual qualities.  You need a colored picture, like the one above.  You can't know what red means without a dictionary that points to the red colored one.  The same is true of things like pain and pleasure (like redness and greenness, those physical qualities give purpose to life, making things moral or amoral).  With abstract systems, you need a dictionary to know what the words pain and pleasure mean, or feel like, or how they function.  But the physical qualities of your pleasures, like the physical qualities of your colors, are just physical facts.  No definition is required; they are the definitions of those words for you.

In my world, there are no moral implications for switching off the one on the right (unless it is a machine picking strawberries for the ones on the left), since it isn't physically like anything.  There are moral implications for switching off the other 2.  In my world, if you understand what redness and greenness are, you get two things.  You understand what consciousness is (computationally bound elemental intrinsic qualities), and you know the purpose of life, or what makes phenomenal life worth living.  Experiencing just 5 minutes of redness would give purpose to billions of years of abstract evolutionary death and suffering to reach that achievement.  Qualia-blind people, though they experience redness all day every day, often ask what the purpose of life is.









John Clark

unread,
Dec 12, 2021, 1:03:55 PM12/12/21
to extro...@googlegroups.com
On Sun, Dec 12, 2021 at 9:33 AM Brent Allsop <brent....@gmail.com> wrote:

> Hi John,
> If I were in your camp, and if I made the assumptions you make, I would agree with you.  You assume there is no difference between the 3 different systems portrayed in this image, engineered to function the same:
> [image: 3_robots_tiny.png]

There is no difference in mind portrayed between the first image and the third image; only a difference in brain is shown, and that difference is superficial: one brain is wet and squishy and the other brain is hard and dry.

> in my camp redness isn't a quality of something that reflects 'red' light, redness is an intrinsic physical quality of the physical stuff your brain uses to represent visual knowledge of red things with. 
 
In my camp "redness" is just a symbol and that symbol can be ANYTHING as long as it can be used to distinguish between things that are red and things that are not red. Just as the symbols "six" and "6" are interchangeable because they both describe exactly the same thing, the integer after five. Thus there is no subjective difference between a mind that saw the world in black-and-white, or green and white, or red and white; and, in my humble opinion, subjectivity, especially my subjectivity, is the most important thing in the universe.  

> In my world, my brain could be engineered to represent knowledge of red things with your greenness physical quality. 
 
In fact no new engineering may be necessary; subjectively your red could already be my green. However, as I pointed out before and you have not been able to dispute, there is no way, even in theory, to objectively determine that, and although it's not as important as subjectivity, objectivity is the only way to convince 2 minds that something is true. Or at least it should be the only way.  So if you seeing green as I see red makes no subjective difference, and you seeing green as I see red makes no objective difference, then I conclude the answer to the question "do you see green as I see red?" is a question of no importance whatsoever because it simply makes no difference. Or to put it another way, the question "do you see green as I see red?" is not a question at all, it's just a sequence of ASCII characters with a question mark at the end and means nothing.
 
> You can't know the meaning of the word 'red' without a dictionary. 

That is absolutely untrue. When you and I were three years old we were both far better linguists than we are today; we both had the ability to pick up a language we had never heard before painlessly and very quickly speak it like a native. And we did so without so much as glancing at a dictionary or even knowing what a dictionary was, because in language example is the root of meaning, not definition. After all, where do you think lexicographers got the knowledge to write their book?    

> You can't know what red means, without a dictionary that points to the red colored one. 

Cut out the middleman: you don't need the dictionary, just point to something red and say "red", and the kid will soon get the idea.   

> The same is true of things like pain and pleasure

That is an even better example. A hammer hitting you on the finger can teach you the difference between pleasure and pain better than a dictionary, or even the greatest poet who ever lived, can.

> In my world, if you understand what redness and greenness are, you get two things. 

That is true in my world too: red and green are symbols that represent two different parts of the electromagnetic spectrum, and it doesn't matter which symbol goes with which part as long as consistency is maintained.  

 > Experiencing just 5 minutes of redness would give purpose to billions of years of abstract evolutionary death and suffering to reach that achievement.

If previously I had only seen the world in black and white but suddenly now I could differentiate finer detail and see the world in black, white, and red, it would indeed be an uplifting emotional experience, and if I could see the world in black, white, red, and green it would be even better; if you could add in blue it would be better still. But it's unimportant which symbol goes with which electromagnetic frequency range; it would change nothing objectively or subjectively.   

John K Clark 

Dan TheBookMan

unread,
Dec 30, 2021, 1:28:41 AM12/30/21
to extro...@googlegroups.com
Could be that there are other values aside from a reduction in suffering. In fact, only a sort of hedonic ethics (which meshes well with the rather vapid utilitarian ethics many folks adopt) would center on suffering. This isn’t to say reducing suffering isn’t in the mix, but that everything ethical isn’t reduced to it.

Regards,

Dan

John Clark

unread,
Dec 30, 2021, 4:33:09 AM12/30/21
to extro...@googlegroups.com
On Thu, Dec 30, 2021 at 1:28 AM Dan TheBookMan <danus...@gmail.com> wrote:

>> I don't see what AI has to do with it. And from an ethical point of view it seems to me that accountability is one of the few things that is easy to determine because it all boils down to a question of punishment, and the only valid reason for punishing anybody for anything is if it seems likely that it will result in a net decrease in human suffering in the future; if it does then punish that person, if it doesn't then don't.  After all, if ethics doesn't result in less suffering then what's the point of ethics?
 
> Could be that there are other values aside from a reduction in suffering. In fact, only a sort of hedonic ethics (which meshes well with the rather vapid utilitarian ethics many folks adopt) would center on suffering. This isn’t to say reducing suffering isn’t in the mix, but that everything ethical isn’t reduced to it.

You're never going to define ethical behavior in a way that covers every situation and is always free from any self-contradiction; we can't even do that for arithmetic, so I think decreasing suffering and increasing happiness is as close as you're ever going to get. OK, maybe I'd add in a pinch of "find out more about how the universe works", but nobody is ever going to make ethics more logically rigorous than arithmetic; nor do I think an algorithm can be found that can always differentiate between right and wrong and never produce ethical paradoxes.

John K Clark

 

 

Brent Allsop

unread,
Dec 30, 2021, 9:33:16 AM12/30/21
to extro...@googlegroups.com

Hi John,
Seems to me morality can be based on some, what seem to me to be, necessary fundamental truths, like: existence or living is better than dying; knowing is better than not knowing (i.e. "find out more about how the universe works"); social is better than antisocial...
This is why evolution towards that which is better is a logical necessity in any sufficiently complex system.
The opposite of evolution is logically impossible, right?




John Clark

unread,
Dec 30, 2021, 10:25:20 AM12/30/21
to extro...@googlegroups.com
On Thu, Dec 30, 2021 at 9:33 AM Brent Allsop <brent....@gmail.com> wrote:

> Hi John,
Seems to me morality can be based on some, what seem to me to be necessary fundamental truths, like existence or living is better than dying,

Usually yeah, but I think oblivion would be preferable to intense unrelenting pain.  
 
> knowing is better than not knowing (i.e. "find out more about how the universe works",

That would be my opinion, but I don't know how to logically prove it.  I don't think there's any chance of ever developing a morality that is both complete and self consistent since it's impossible to do that even for something as straightforward as arithmetic.

> social is better than anti social...

Some people would just prefer to be alone, I don't think that is either good or evil it's just a preference, and there's no disputing matters of taste.   
 
> This is why evolution towards that which is better is a logically necessity, in any sufficiently complex system.
The opposite of evolution is logically impossible, right?

It's improbable Evolution will precisely retrace its steps, but that doesn't mean a human would judge the end results to always be an improvement. Evolution's goal is not to increase complexity or to become more intelligent but to get more genes into the next generation by outcompeting the competition. And sometimes that results in something simpler, dumber and more primitive; for example, that is often seen in the evolution of non-parasites into parasites.
John K Clark 


William Flynn Wallace

unread,
Dec 30, 2021, 12:09:34 PM12/30/21
to extro...@googlegroups.com
I taught courses in Learning for over 30 years, and I can testify that punishment of the positive kind (as opposed to the negative kind, where a response results in the withdrawal of something good, like taking a toy away) has so many unfortunate side effects, often worse than the behavior being punished, that I would never recommend it in child raising unless the behavior being punished is actually dangerous to the person or to others.  

I would love to have vengeance against the doctors that killed both my parents, but I could not.  You have to have very deep pockets to sue a medical person, and of course I didn't.  So I have to forgive them to get rid of the depressions and grudges I held.  I think forgiving shows moral superiority.  

Putting a person in prison is a case that reinforces that notion.  The only positive thing it does is keep someone off the streets for a while.  The horrible conditions in prisons here in Mississippi only strengthen the prisoners' hostility to society in general, and conservatives are happy to keep cutting prison budgets so that prisoners suffer even more from clogged toilets, terrible food, and more.  Sensory deprivation, aka total isolation, is cruel and unusual punishment but is often used.  Result: prison riots that more than occasionally kill some inmates. And who cares about them? Then they are turned loose to do it again.  Some people can be rehabilitated.  Some can get off their addictions, but they get no help here.  No money for programs like these.

So - vengeance against lawbreakers is misplaced in many ways, but of course conservatives would not dream of 'coddling' criminals.  Creating pain and suffering are their only weapons, and they are simply not working, as recidivism statistics reveal.  Studies of positive punishment of children are similar:  kids grow up worse if the only discipline is physical - numerous studies show that.  In poor, especially minority, communities, kids learn that they have done something wrong when they are yelled at and hit, and yelling and hitting becomes the only thing that gets their attention.  This is why minority kids act up in schools - no one can yell at or hit them, so they don't take anything seriously.

Try forgiveness, even of poor drivers.  It will improve your mental health and overall attitude towards living.

This really requires a much longer post with added references to studies, but maybe it's a start.  bill w


Brent Allsop

unread,
Dec 30, 2021, 12:44:10 PM12/30/21
to extro...@googlegroups.com

Thanks for those great thoughts, William.  I didn't know that about your parents.  Is that story written up, somewhere?

I do think forgiveness is great, but I consider it only temporary.  For me it's all about a full restitution, with interest, to achieve perfect justice.  All vengeance does is make more restitution work necessary before perfect justice can be achieved.  I have faith that we are heading in that direction and that some day we will get there.  Most people will never give up till we do get there.  That's what enables me to accept forgiveness, at least temporarily.  One might consider this way of acting selfish, because the people that give today will be the inheritors in the end, as in the first shall be last, and vice versa.

Always forgive, never forget.


John Clark

unread,
Dec 30, 2021, 1:16:18 PM12/30/21
to extro...@googlegroups.com
On Thu, Dec 30, 2021 at 12:09 PM William Flynn Wallace <fooz...@gmail.com> wrote:

> I think forgiving shows moral superiority. 

Yes, I think the same thing, although sometimes that is hard to do. Oh well, I never claimed to be a moral paragon.

> Putting a person in prison is a case that reinforces that notion.  The only positive thing it does is to keep someone off the streets for awhile.  The horrible conditions in prisons here in Mississippi only strengthen hostility of the prisoners to society in general and conservatives are happy to keep cutting prison budgets so they can suffer even more from clogged toilets, terrible food, and more.  Sensory deprivation, aka total isolation, is cruel and unusual punishment but often used.  Result:  prison riots that more than occasionally kill some inmates.

Trump's Antidemocratic Party takes petty criminals who grew up in bad environments, crams them into an environment that is 1000 times worse, and is surprised that they don't reform but instead come out of prison as master criminals and moral monsters.  But even if it did work I would have deep reservations about treating prisoners brutally. I figure the world already has enough misery and doesn't need me to add to the sum total. If I were God I would've made a new law of nature that would make agonizing pain and unhappiness physically impossible; I applied for the job but unfortunately that jerk Yahweh got picked, not me. I think it was all politics. 

> So - vengeance against lawbreakers is misplaced in many ways,

I think the only valid reason for punishing somebody who does something bad is to prevent something similar from happening in the future, inducing pain in somebody I don't like just so I can watch him suffer may be something the reptilian part of my brain enjoys but my higher brain functions are repelled by the notion; I think it's the difference between justice and vengeance.  

John K Clark 


John Clark

unread,
Dec 30, 2021, 1:24:00 PM12/30/21
to extro...@googlegroups.com
On Thu, Dec 30, 2021 at 12:44 PM Brent Allsop <brent....@gmail.com> wrote:

> For me it's all about a full restitution, with interest, to achieve perfect justice. 

Perfect justice is unobtainable, and I worry about perfection being the enemy of the good. I'd be satisfied with pretty good justice. 

John K Clark

William Flynn Wallace

unread,
Dec 30, 2021, 1:40:55 PM12/30/21
to extro...@googlegroups.com, ExI chat list
Somebody stole your girlfriend.  Are you going to spend the rest of your life trying to get even with that person?  Why cause yourself all that grief?  So you find him and punch him out.  Did that satisfy you in the long run?  I think that's adolescent behavior.  Your former girlfriend now thinks that it was her lucky day when she left you.  Is that OK with you?  Forgiveness has to be permanent - for your sake.  Some don't like it because they think that it resets everything to zero.  Nope.  Those doctors, now certainly dead, were still worthy of revilement from me after I forgave them.  But it was a cold thing:  I didn't burn any calories hating them.   bill w

William Flynn Wallace

unread,
Dec 30, 2021, 1:42:08 PM12/30/21
to extro...@googlegroups.com
Yes, John, often extremely hard to do.  

Good thoughts!   bill w


Brent Allsop

unread,
Dec 30, 2021, 3:18:02 PM12/30/21
to extro...@googlegroups.com

All I know is that the last time someone rear-ended us, the very generous insurance payout compensated us for everything and left us much better off after all was said and done, and I was like: "Yeah, crash into me any time you want."

That is good enough for me.




William Flynn Wallace

unread,
Dec 30, 2021, 3:24:07 PM12/30/21
to extro...@googlegroups.com
Brent, I got T-boned and was offered $6000 for what they said was a totaled car.  I took it, bought a beautiful Town Car with 33K miles on it for $7000, replaced the door on the first car, and now have two great cars!

Also, I had a Dodge van that was hit three times; I collected over $1000 for what amounted to a dented fender, which I never fixed.

Yeah - hit me again!  bill w

Stuart LaForge

unread,
Dec 30, 2021, 3:53:28 PM12/30/21
to extropolis
On Thursday, December 30, 2021 at 9:09:34 AM UTC-8 William Flynn Wallace wrote:
I taught courses in Learning for over 30 years and I can testify that punishment of the positive kind (as opposed to the negative kind, where a response results in the withdrawal of something good, like taking a toy away) has so many unfortunate side effects, often worse than the behavior being punished, that I would never recommend it in child raising unless the behavior being punished is actually dangerous to the person or to others.
 
Both prison and capital punishment are examples of negative punishment. In the first, one is taking away the subject's freedom and in the second, one is taking away the subject's life. I am not sure how either of these options is better than the positive punishment of flogging them in the public square. Positive punishment of undesired behavior is an evolved trait that would not have evolved unless it was successful. For reference, look at the evolved behavior of all social primates and pack animals. When one wolf wants to stop another wolf from stealing its food, it warns and ultimately bites the offending wolf. Chimpanzees use pain and violence to regulate one another's behavior. Even game-theoretic computer simulations show that punishing defectors in tit-for-tat is a Nash equilibrium and an evolutionarily stable strategy (ESS).
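
(A minimal sketch of that last point, in Python. The payoff numbers and function names below are illustrative assumptions, not taken from any particular published simulation; the only claim is that a strategy which punishes defection by defecting back resists exploitation, while an unconditional forgiver does not.)

# Iterated prisoner's dilemma with assumed, Axelrod-style payoffs.
# Tit-for-tat "punishes" each defection by defecting back exactly once.

def play(strategy_a, strategy_b, rounds=200):
    # payoff[(my move, your move)] = (my points, your points)
    payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    score_a = score_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pa, pb = payoff[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(own_history, opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(own_history, opponent_history):
    return 'D'

def always_cooperate(own_history, opponent_history):
    return 'C'

print(play(tit_for_tat, tit_for_tat))         # (600, 600): mutual cooperation
print(play(tit_for_tat, always_defect))       # (199, 204): defection barely pays
print(play(always_cooperate, always_defect))  # (0, 1000): unpunished defection pays handsomely

Against the retaliator, the defector ends up far worse off than two cooperators would be; against the unconditional cooperator it walks away with everything. That is the sense in which punishing defectors is evolutionarily stable.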
  
I would love to have vengeance against the doctors that killed both my parents, but I could not.  You have to have very deep pockets to sue a medical person, and of course I didn't.  So I have to forgive them to get rid of the depressions and grudges I held.  I think forgiving shows moral superiority.

Until constantly forgiving someone for the same offenses becomes a pattern, at which point forgiving becomes enabling. Also, did those doctors really kill your parents, or did they try to save them and fail?

Putting a person in prison is a case that reinforces that notion.  The only positive thing it does is to keep someone off the streets for a while.  The horrible conditions in prisons here in Mississippi only strengthen the prisoners' hostility toward society in general, and conservatives are happy to keep cutting prison budgets so they can suffer even more from clogged toilets, terrible food, and more.  Sensory deprivation, aka total isolation, is cruel and unusual punishment but often used.  Result:  prison riots that more than occasionally kill some inmates.  And who cares about them?  Then they are turned loose to do it again.  Some people can be rehabilitated.  Some can get off their addictions, but they get no help here.  No money for programs like these.

Again, all these punishments you rail against are negative punishments, which are supposed to be the good kind, while spanking a child or tasering an adult is positive punishment and is considered bad.
 
So - vengeance against lawbreakers is misplaced in many ways, but of course conservatives would not dream of 'coddling' criminals.  Creating pain and suffering is their only weapon, and it is simply not working, as recidivism statistics reveal.  Studies of positive punishment of children are similar:  kids grow up worse if the only discipline is physical - numerous studies show that.

If positive punishment did not work on some level, it would not have been practiced by numerous tribes and species over millions of years of evolution. Nor would it be a Nash equilibrium in the mathematics of game theory. In the form of mutually assured destruction (MAD), it is largely responsible for world peace. Are you sure this is not a case of psychologists thinking with their hearts instead of their brains? How did they conduct these studies? It seems that a good study would be hard to set up since you can't compare outcomes in identical children using controls.
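
(To make the Nash-equilibrium language concrete, here is a tiny Python sketch with made-up payoffs standing in for the deterrence argument; it is not a model of real geopolitics. Each player chooses whether to keep a credible threat of retaliation, and the code checks which pairs of choices are Nash equilibria, i.e. positions from which neither side gains by changing its own move alone.)

# Hypothetical 2x2 deterrence game; the payoff numbers are illustrative assumptions only.
STRATEGIES = ('Arm', 'Disarm')

# payoffs[(row move, column move)] = (row player's payoff, column player's payoff)
payoffs = {
    ('Arm', 'Arm'):       (2, 2),  # armed peace through mutual deterrence
    ('Arm', 'Disarm'):    (4, 1),  # the armed side can coerce the disarmed side
    ('Disarm', 'Arm'):    (1, 4),
    ('Disarm', 'Disarm'): (3, 3),  # unarmed peace: better for both, but unstable
}

def is_nash(row, col):
    # A pair is a Nash equilibrium if neither player can do better
    # by unilaterally switching to the other strategy.
    row_score, col_score = payoffs[(row, col)]
    row_ok = all(payoffs[(alt, col)][0] <= row_score for alt in STRATEGIES)
    col_ok = all(payoffs[(row, alt)][1] <= col_score for alt in STRATEGIES)
    return row_ok and col_ok

print([(r, c) for r in STRATEGIES for c in STRATEGIES if is_nash(r, c)])
# [('Arm', 'Arm')] -- with these assumed payoffs, credible retaliation is the only stable outcome

Whether those payoff numbers describe reality is of course the contested part; the sketch only shows what "Nash equilibrium" means in this context.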

 
In poor, especially minority, communities, kids learn that they have done something wrong when they are yelled at and hit, and yelling and hitting becomes the only thing that gets their attention.  This is why minority kids act up in schools - no one can yell at or hit them, so they don't take anything seriously.

This is a HUGE problem. These poor kids (race seems far less relevant than socioeconomic status) respect and fear one another far more than they do their teachers or school administration because of their gang mentality of "snitches get stitches". Numerous TikTok challenges have them vandalizing school property, stealing from, and hitting, their teachers. The prohibition against positive punishment for school children seems like it will be the death of public education, at least in the United States. Coddling of delinquents has gotten so bad that it almost seems like a communist plot to destabilize western liberal democracies from within. The educational psychology theory that teachers learn in school seems completely ineffectual in the real world of schools in poor neighborhoods. All it seems to do is prime these kids for prison by teaching them that authority figures are a joke with no teeth. They can get away with anything they want until they cross a cop or another thug who shoots them, beats them, or throws them in jail. 
 
Try forgiveness, even of poor drivers.  It will improve your mental health and overall attitude towards living.

It might make YOU feel better, but what about the good of society? What if that reckless driver you forgave ends up killing a whole family because you let him off the hook? 
 

This really requires a much longer post with added references to studies, but maybe it's a start.  bill w

I could use some references here. Positive, incentive-based strategies don't seem to be working with a lot of problem students. Coddling them just encourages worse behavior in the future. Public education is on the ropes, and unrealistic platitudes are wearing thin.

Stuart LaForge

Stuart LaForge

unread,
Dec 30, 2021, 7:00:23 PM12/30/21
to extropolis
On Thursday, December 30, 2021 at 7:25:20 AM UTC-8 johnk...@gmail.com wrote:
On Thu, Dec 30, 2021 at 9:33 AM Brent Allsop <brent....@gmail.com> wrote:

> Hi John,
Seems to me morality can be based on what seem to me to be necessary fundamental truths, like existence or living being better than dying,

Usually yeah, but I think oblivion would be preferable to intense unrelenting pain.

It is more subtle and nuanced than that. You can only survive at the expense of other living things. Even a vegan must kill to survive. But you are more powerful than the carrot that you eat and therefore are morally superior to it. Honest power, respect for the truth, and responsibility for one's actions are the basis of morality for all naturally-evolved beings.
    
 
> This is why evolution towards that which is better is a logical necessity in any sufficiently complex system.
The opposite of evolution is logically impossible, right?

It's improbable Evolution will precisely retrace its steps, but that doesn't mean the end results will always be what a human would consider an improvement. Evolution's goal is not to increase complexity or to become more intelligent but to get more genes into the next generation by outcompeting the competition. And sometimes that results in something simpler, dumber, and more primitive; for example, that is often seen when non-parasites evolve into parasites.


While evolution by natural selection does not favor complexity over simplicity, the second law of thermodynamics does. All that entropy creates opportunities for novel organization. And since the second law is more fundamental than life, life will supply that organization when it is beneficial to do so and simplify itself when it is not.

Stuart LaForge
 

William Flynn Wallace

unread,
Dec 31, 2021, 10:30:23 AM12/31/21
to extro...@googlegroups.com, ExI chat list

William Flynn Wallace wrote:
I taught courses in Learning for over 30 years and I can testify that punishment of the positive kind (as opposed to the negative kind, where a response results in the withdrawal of something good, like taking a toy away) has so many unfortunate side effects, often worse than the behavior being punished, that I would never recommend it in child raising unless the behavior being punished is actually dangerous to the person or to others.
Let me start off with these:

Side effects of positive punishment:

1 - creates fear of the punisher and possibly hate - may generalize to other authority figures

2 - does not generalize well to similar behaviors

3 - creates avoidance of punishers

4 - creates hostility towards the punisher and maybe society

5 - does nothing to encourage proper behavior

6 - teaches how to avoid punishment - i.e., how to get away with bad behavior

7 - encourages acting out of anger and frustration by the punisher - a poor model

8 - is associated with poorer cognitive and intellectual development

9 - may result in excessive anxiety, guilt, and self-punishment - low self-worth

10 - encourages excessive punishment when mild punishment does not work

11 - can create aggression and antisocial behavior

You can nitpick these - some or most do not necessarily happen every time, and some only when the person being punished reacts rather strongly.  But all of them are common.


 
Both prison and capital punishment are examples of negative punishment. In the first, one is taking away the subject's freedom and in the second, one is taking away the subject's life. I am not sure how either of these options is better than the positive punishment of flogging them in the public square.

You are correct - negative punishment can occur with positive.  However, in the usual case, the toy taken away can be regained by showing positive behaviors, like chores, which are then reinforced.  So you have punishment of the bad behavior and positive reinforcement of the good behavior, something that does not occur in most positive punishment situations.  A prisoner can lessen his term with good behavior, but not by a lot.
Positive punishment of undesired behavior is an evolved trait that would not have evolved unless it was successful. For reference, look at the evolved behavior of all social primates and pack animals. When one wolf wants to stop another wolf from stealing its food, it warns and ultimately bites the offending wolf. Chimpanzees use pain and violence to regulate one another's behavior. Even game-theoretic computer simulations show that punishing defectors in tit-for-tat is a Nash equilibrium and an evolutionarily stable strategy
As we know, humans have an excess of anger.  When thwarted we get angry and strike out.  That seems to be a natural reaction.  I never said that positive punishment did not work.  Clearly it can work, though if it has to be repeated for the same behavior it is clearly not working (and people who hit and don't get what they want tend to hit harder).  My point is that it can be costly in terms of the undesirable side effects.



Until constantly forgiving someone for the same offenses becomes a pattern, at which point forgiving becomes enabling.

Just as positive punishment can get worse every time, so can negative.  What is taken away gets more and more desirable - at first, one hour of TV is lost; next, three hours; next, all night.  In addition, I might require hard work to regain the desirables.  If this is not working, perhaps consultation with a professional is called for.  I would even justify threats of physical punishment.

  prison riots that more than occasionally kill

Again, all these punishments you rail against are negative punishments, which are supposed to be the good kind, while spanking a child or tasering an adult is positive punishment and is considered bad.  Not all of them, by any means.  And it's not the negative aspect that creates the problems.
 


 Are you sure this is not a case of psychologists thinking with their hearts instead of their brains? How did they conduct these studies? It seems that a good study would be hard to set up since you can't compare outcomes in identical children using controls.

Most, if not all, studies of positive punishment cannot be done ethically.  But those side effects can be verified, often by scars and bruises and broken bones in abused wives and children.  Do you doubt that?

This is a HUGE problem. These poor kids (race seems far less relevant than socioeconomic status - true) respect and fear one another far more than they do their teachers or school administration because of their gang mentality of "snitches get stitches". Numerous TikTok challenges have them vandalizing school property, stealing from, and hitting, their teachers. The prohibition against positive punishment for school children seems like it will be the death of public education, at least in the United States. Coddling of delinquents has gotten so bad that it almost seems like a communist plot to destabilize western liberal democracies from within. The educational psychology theory that teachers learn in school seems completely ineffectual in the real world of schools in poor neighborhoods. All it seems to do is prime these kids for prison by teaching them that authority figures are a joke with no teeth. They can get away with anything they want until they cross a cop or another thug who shoots them, beats them, or throws them in jail.  Don't get me started on the education establishment!
I am not in favor of coddling anyone.  You can make punishments severe without hitting people.  Hitting people, to me, is a sign that you can't, or don't want to, try anything else.  I am a liberal but not a bleeding-heart one.  I have no idea what to do with poor, misbehaving kids in schools.  Wish I did.  I just don't equate getting tough with lots of positive punishment.  I would justify it only as a last resort.
It might make YOU feel better, but what about the good of society? What if that reckless driver you forgave ends up killing a whole family because you let him off the hook? 
I forgive the reckless driver so as to cool my temper and not get reckless myself.  That doesn't mean I won't report him to 911 - I have done so, especially if the driving is erratic, possibly meaning drunk.  One who just cuts me off gets hand signals and honking.  Try Googling 'meta-analyses of positive punishment'.  That's what I would do.   bill w

 

This really requires a much longer post with added references to studies, but maybe it's a start.  bill w

Stuart LaForge

 

