Re: AI Dooms Us


meekerdb

Aug 25, 2014, 3:05:04 PM
to
Bostrom says, "If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. And then maybe wait another generation or two just to make sure that we hadn't overlooked some flaw in our reasoning. And then do it -- and reap immense benefit. Unfortunately, we do not have the ability to pause."

But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to produce a pause.

Brent

On 8/25/2014 10:27 AM

Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor 



Chris de Morsella

Aug 25, 2014, 3:20:24 PM
to everyth...@googlegroups.com
AI is being developed and funded primarily by agencies such as DARPA, the NSA, and the DOD (plus MIC contractors). After all, smart drones with independent, untended warfighting capabilities offer a significant military advantage to the side that possesses them. This is a guarantee that the wrong kind of super-intelligence will come out of the process... a super-intelligent machine devoted to the killing of "enemy" human beings (plus opposing drones too, I suppose).

This does not bode well for a benign super-intelligence outcome, does it?

From: meekerdb <meek...@verizon.net>
To:
Sent: Monday, August 25, 2014 12:04 PM
Subject: Re: AI Dooms Us



LizR

Aug 25, 2014, 6:37:43 PM
to everyth...@googlegroups.com
"I'll be back!"

Platonist Guitar Cowboy

Aug 26, 2014, 10:57:09 AM
to everyth...@googlegroups.com
If we engage a class of problems on which we can't reason, and throw tech at that, we'll catch the occasional fish, but we won't really know how or why. Some marine life is poisonous, however, which might not be obvious in the catch.

I prefer "keep it simple" approaches to novelty:

From G. Kreisel's "Obituary of K. Gödel":

Without losing sight of the permanent interest of his work, Gödel repeatedly stressed... how little novel mathematics was needed; only attention to some quite commonplace distinctions; in the case of his most famous work: between arithmetical truth on the one hand and derivability by formal rules on the other. Far from being uncomfortable about so to speak getting something from nothing, he saw his early successes as special cases of a fruitful general, but neglected scheme:
By attention or, equivalently, analysis of suitable traditional notions and issues, adding possibly a touch of precision, one arrives painlessly at appropriate concepts, correct conjectures, and generally easy proofs.

Kreisel, 1980.


Chris de Morsella

Aug 26, 2014, 12:40:01 PM
to everyth...@googlegroups.com

We can engage, and do so without an overarching understanding of what we are doing, and stuff will emerge out of our activities. AI will be (and is!), in my opinion, an emergent phenomenon. We don't really understand it, but we are accelerating its emergence nevertheless.

Modern software systems with millions of lines of code are no longer fully understood by anybody. People know about small, specific regions of a system, and some architects have a fuzzy and rather vague understanding of the system dynamics as a whole, but mysterious stuff is already happening (e.g. Google, or some researchers from Google, recently reported that its photo-recognition smart systems are acting in ways the programmers don't fully comprehend and that are not deterministic, i.e. not explicable by working through the code).

If you look at where the money is in AI research and development, it is largely focused on military, security state, and other allied sectors, with perhaps an anomaly in the financial sector where big money is being thrown at smart arbitrage systems.

We will get the kind of AI we pay for.

-Chris

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of Platonist Guitar Cowboy
Sent: Tuesday, August 26, 2014 7:57 AM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

If we engage a class of problems on which we can't reason, and throw tech at that, we'll catch the occasional fish, but we won't really know how or why. Some marine life is poisonous, however, which might not be obvious in the catch.

Terren Suydam

Aug 26, 2014, 1:31:45 PM
to everyth...@googlegroups.com
For what it's worth, the kind of autonomous human-level (or greater) AI, or AGI, will *most likely* require an architecture, yet to be well understood, that is of an entirely different nature from the kinds of architectures being engineered by those interests who desire highly complicated slaves. In other words, I'm not losing any sleep about the military accidentally unleashing a terminator. If I'm going to lose sleep over a predictable sudden loss of well-being, I will focus instead on the much less technical and much more realistic threats arising from economic/societal collapse.

Terren

LizR

Aug 26, 2014, 5:50:09 PM
to everyth...@googlegroups.com
On 27 August 2014 05:31, Terren Suydam <terren...@gmail.com> wrote:
For what it's worth, the kind of autonomous human-level (or greater) AI, or AGI, will *most likely* require an architecture, yet to be well understood, that is of an entirely different nature from the kinds of architectures being engineered by those interests who desire highly complicated slaves. In other words, I'm not losing any sleep about the military accidentally unleashing a terminator. If I'm going to lose sleep over a predictable sudden loss of well-being, I will focus instead on the much less technical and much more realistic threats arising from economic/societal collapse.

Or the - probably even more likely - threat of ecological disaster.
 

Telmo Menezes

Aug 26, 2014, 6:11:03 PM
to everyth...@googlegroups.com
Hi Terren,

On Tue, Aug 26, 2014 at 7:31 PM, Terren Suydam <terren...@gmail.com> wrote:
For what it's worth, the kind of autonomous human-level (or greater) AI, or AGI, will *most likely* require an architecture, yet to be well understood, that is of an entirely different nature from the kinds of architectures being engineered by those interests who desire highly complicated slaves.

The problem is that you need to define a utility function for these slaves. Even with currently known algorithms but more computational power, the machines might be able to take steps to maximize their utility function that are beyond our own intelligence. If we try to constrain the utility function to rule out behaviors that would threaten our well-being, we can only impose such constraints up to the horizon of our own intelligence, but not further.

So your cleaning system might end up figuring out a way to contaminate your house with enough radiation to keep humans from interfering with the cleaning operation for millennia. This is not a real threat because I can anticipate it, but what lies beyond the anticipatory powers of the most intelligent human alive?
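
To make the loophole concrete, here is a toy sketch in Python (everything in it is hypothetical: the actions, their scores, and the utility function):

# A naive utility maximizer "exploits" an oversight: the utility
# function rewards cleanliness and is silent about everything else.
actions = {
    # action: (cleanliness achieved, house still usable by humans)
    "vacuum daily":        (0.90, True),
    "seal every entrance": (0.99, False),
    "irradiate the house": (1.00, False),  # nothing tracks dirt in again
}

def utility(cleanliness, habitable):
    # Habitability was assumed by the designers but never encoded;
    # that omission is the loophole.
    return cleanliness

best = max(actions, key=lambda a: utility(*actions[a]))
print(best)  # -> "irradiate the house"

The point is not that this particular bug would survive review; it is that the constraints we think we wrote and the constraints we actually wrote can differ in ways only something smarter than us will notice.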
 
Telmo.

spudb...@aol.com

Aug 26, 2014, 6:53:09 PM
to everyth...@googlegroups.com
The problem, and delight, of the Terminator's Skynet monster, originally a mix of US and Soviet AIs merging together (courtesy of Harlan Ellison), is that the lads act on their own. My own fear is not mad program code, but the humans behind the coding. A reviving Cold War 3, or 4, is far more concerning than Johnny 5 going berserk. It is not the logic or illogic of devices, but ourselves.

Terren Suydam

Aug 26, 2014, 7:14:47 PM
to everyth...@googlegroups.com
Hi Telmo,

I think if it were as simple as you make it seem, relative to what we have today, we'd have engineered systems like that already. You're talking about an AI that arrives at novel solutions, which requires the ability to invent/simulate/act on new models in new domains (AGI). I'm not saying this is impossible, in fact I see this as inevitable on a longer timescale. I'm saying that I doubt that the military is committing any significant resources into that kind of research when easier approaches are much more likely to bear fruit... but I really have no idea what the military is researching, so it's just a hunch.

What I would wager on is that the military is developing drones along the same lines as what Google has achieved with its self-driving cars. Highly competent, autonomous drones that excel in very specific environments. The utility functions involved would be specified explicitly in terms of "hard-coded" representations of stimuli. For AGI they would need to be equipped to invent new models of the world, articulate those models with respect to self and with respect to existing goal structures, simulate them, and act on them. I think we are a long way from those kinds of AIs. The only researcher I see making inroads towards that kind of AI is Steve Grand. 

Terren

meekerdb

Aug 26, 2014, 8:18:26 PM
to everyth...@googlegroups.com
On 8/26/2014 4:14 PM, Terren Suydam wrote:
Hi Telmo,

I think if it were as simple as you make it seem, relative to what we have today, we'd have engineered systems like that already. You're talking about an AI that arrives at novel solutions, which requires the ability to invent/simulate/act on new models in new domains (AGI). I'm not saying this is impossible, in fact I see this as inevitable on a longer timescale. I'm saying that I doubt that the military is committing any significant resources into that kind of research when easier approaches are much more likely to bear fruit... but I really have no idea what the military is researching, so it's just a hunch.

What I would wager on is that the military is developing drones along the same lines as what Google has achieved with its self-driving cars. Highly competent, autonomous drones that excel in very specific environments. The utility functions involved would be specified explicitly in terms of "hard-coded" representations of stimuli. For AGI they would need to be equipped to invent new models of the world, articulate those models with respect to self and with respect to existing goal structures, simulate them, and act on them. I think we are a long way from those kinds of AIs. The only researcher I see making inroads towards that kind of AI is Steve Grand. 

I don't know much about Steve Grand, but I think Jeff Hawkins and his non-profit Numenta are doing good, serious work toward AGI.

Brent

Telmo Menezes

Aug 27, 2014, 6:31:23 AM
to everyth...@googlegroups.com
On Wed, Aug 27, 2014 at 1:14 AM, Terren Suydam <terren...@gmail.com> wrote:
Hi Telmo,

I think if it were as simple as you make it seem, relative to what we have today, we'd have engineered systems like that already.

It wasn't my intention to make it look simple. What I claim is that we already have a treasure trove of very interesting algorithms. None of them is AGI, but what they can do becomes more impressive with more computing power and access to data.

Take Google's translator. It's far from perfect, but way ahead of anything we had a decade ago. As far as I can tell, this was achieved with algorithms that had been known for a long time, but that now can operate on the gigantic dataset and computer farm available to Google.

Imagine what a simple minimax search tree could do with immense computing power and data access.
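
For reference, the algorithm itself really is tiny. A generic sketch (the function and its signature are illustrative, not taken from any particular library):

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    # Score `state` by searching `depth` plies ahead. `moves(state)`
    # yields the legal moves, `apply_move(state, m)` returns the
    # successor state, and `evaluate(state)` scores a leaf position
    # from the maximizing player's point of view.
    ms = list(moves(state))
    if depth == 0 or not ms:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in ms]
    return max(scores) if maximizing else min(scores)

The search logic hasn't changed since the 1950s; what more computing power buys is depth, and depth is where the surprises come from.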
 
You're talking about an AI that arrives at novel solutions, which requires the ability to invent/simulate/act on new models in new domains (AGI).

Evolutionary computation already achieves novelty and invention, to a degree. I concur that it is still not AGI. But it could already be a threat, given enough computational resources.
 
I'm not saying this is impossible, in fact I see this as inevitable on a longer timescale. I'm saying that I doubt that the military is committing any significant resources into that kind of research when easier approaches are much more likely to bear fruit... but I really have no idea what the military is researching, so it's just a hunch.

Why does it matter if it's the military that does this? To a sufficiently advanced AI, we are just monkeys throwing rocks at each other. It will surely figure out a way to take control of our resources, including weaponry.
 

What I would wager on is that the military is developing drones along the same lines as what Google has achieved with its self-driving cars. Highly competent, autonomous drones that excel in very specific environments. The utility functions involved would be specified explicitly in terms of "hard-coded" representations of stimuli. For AGI they would need to be equipped to invent new models of the world, articulate those models with respect to self and with respect to existing goal structures, simulate them, and act on them. I think we are a long way from those kinds of AIs. The only researcher I see making inroads towards that kind of AI is Steve Grand.

But again, a reasonable fear is that a sufficiently powerful conventional AI is already a threat (due to increasing autonomy and data access + our possible inability to cover all the loopholes in utility functions).

Cheers
Telmo.

Terren Suydam

Aug 27, 2014, 7:53:35 AM
to everyth...@googlegroups.com
On Wed, Aug 27, 2014 at 6:31 AM, Telmo Menezes <te...@telmomenezes.com> wrote:


On Wed, Aug 27, 2014 at 1:14 AM, Terren Suydam <terren...@gmail.com> wrote:
Hi Telmo,

I think if it were as simple as you make it seem, relative to what we have today, we'd have engineered systems like that already.

It wasn't my intention to make it look simple. What I claim is that we already have a treasure trove of very interesting algorithms. None of them is AGI, but what they can do becomes more impressive with more computing power and access to data.

I agree they can be made to do impressive things. Watson definitely impressed me.

Take Google's translator. It's far from perfect, but way ahead of anything we had a decade ago. As far as I can tell, this was achieved with algorithms that had been known for a long time, but that now can operate on the gigantic dataset and computer farm available to Google.

Imagine what a simple minimax search tree could do with immense computing power and data access.

The space of possibilities quickly scales beyond the wildest imaginings of computing power. Chess AIs are already better than humans, because they more or less implement this approach, and it turns out you "only" need to compute a few hundred million positions per second to do that. Obviously that's a toy environment... the possibilities inherent in the real world aren't even enumerable according to some predefined ontology (i.e. the kind that would be required to specify a minimax-type AI).
 
 
You're talking about an AI that arrives at novel solutions, which requires the ability to invent/simulate/act on new models in new domains (AGI).

Evolutionary computation already achieves novelty and invention, to a degree. I concur that it is still not AGI. But it could already be a threat, given enough computational resources.

AGI is a threat because its utility function would necessarily be sufficiently "meta" that it could create novel sub-goals. We would not necessarily be able to control whether it chose a goal that was compatible with ours. 

It comes down to how the utility function is defined. For Google Car, the utility function probably tests actions along the lines of "get from A to B safely, as quickly as possible". If a Google Car is engineered with evolutionary methods to generate novel solutions (would be overkill but bear with me), the novelty generated is contained within the utility function. It might generate a novel route that conventional map algorithms wouldn't find, but it would be impossible for it to find a solution like "helicopter the car past this traffic jam".
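
A toy sketch of that setup (the road graph, the fitness function, and the search loop are all invented for illustration): the goal lives in the fitness function, and every candidate the search can ever express is composed from the primitives exposed to it.

import random

# The designers' action set: the only primitives the planner can compose.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"], "E": []}

def random_route():
    # Random walk from A until a dead end (only E has no outgoing edges).
    node, route = "A", ["A"]
    while GRAPH[node]:
        node = random.choice(GRAPH[node])
        route.append(node)
    return route

def fitness(route):
    # "Get from A to E as quickly as possible": fewer hops is better.
    return 1.0 / len(route) if route[-1] == "E" else 0.0

# Bare-bones evolutionary loop (selection plus fresh random candidates;
# crossover and mutation omitted for brevity).
population = [random_route() for _ in range(30)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [random_route() for _ in range(20)]

print(max(population, key=fitness))  # e.g. ['A', 'C', 'E']

Whatever route it discovers is a composition of GRAPH edges; "helicopter past the jam" is not merely penalized, it is inexpressible.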
 
 
I'm not saying this is impossible, in fact I see this as inevitable on a longer timescale. I'm saying that I doubt that the military is committing any significant resources into that kind of research when easier approaches are much more likely to bear fruit... but I really have no idea what the military is researching, so it's just a hunch.

Why does it matter if it's the military that does this? To a sufficiently advanced AI, we are just monkeys throwing rocks at each other. It will surely figure out a way to take control of our resources, including weaponry.
 

I think the thread started with a focus on killing machines. But your point is taken.
 

What I would wager on is that the military is developing drones along the same lines as what Google has achieved with its self-driving cars. Highly competent, autonomous drones that excel in very specific environments. The utility functions involved would be specified explicitly in terms of "hard-coded" representations of stimuli. For AGI they would need to be equipped to invent new models of the world, articulate those models with respect to self and with respect to existing goal structures, simulate them, and act on them. I think we are a long way from those kinds of AIs. The only researcher I see making inroads towards that kind of AI is Steve Grand.

But again, a reasonable fear is that a sufficiently powerful conventional AI is already a threat (due to increasing autonomy and data access + our possible inability to cover all the loopholes in utility functions).


The threats involved with AIs are contained within the scope of their utility functions. As it turns out, the moment you widen the utility function beyond a very narrow (and specifiable) domain, AI gets much, much harder.

Terren
 
Cheers
Telmo.
 

Telmo Menezes

Aug 27, 2014, 12:21:24 PM
to everyth...@googlegroups.com
On Wed, Aug 27, 2014 at 1:53 PM, Terren Suydam <terren...@gmail.com> wrote:

On Wed, Aug 27, 2014 at 6:31 AM, Telmo Menezes <te...@telmomenezes.com> wrote:


On Wed, Aug 27, 2014 at 1:14 AM, Terren Suydam <terren...@gmail.com> wrote:
Hi Telmo,

I think if it were as simple as you make it seem, relative to what we have today, we'd have engineered systems like that already.

It wasn't my intention to make it look simple. What I claim is that we already have a treasure trove of very interesting algorithms. None of them is AGI, but what they can do becomes more impressive with more computing power and access to data.

I agree they can be made to do impressive things. Watson definitely impressed me.

Take Google's translator. It's far from perfect, but way ahead of anything we had a decade ago. As far as I can tell, this was achieved with algorithms that had been known for a long time, but that now can operate on the gigantic dataset and computer farm available to Google.

Imagine what a simple minimax search tree could do with immense computing power and data access.

The space of possibilities quickly scales beyond the wildest imaginings of computing power. Chess AIs are already better than humans, because they more or less implement this approach, and it turns out you "only" need to compute a few hundred million positions per second to do that. Obviously that's a toy environment... the possibilities inherent in the real world aren't even enumerable according to some predefined ontology (i.e. the kind that would be required to specify a minimax-type AI).

Ok, but of course minimax was also a toy example. Several algorithms that already exist could be combined: deep learning, Bayesian belief networks, genetic programming and so on. A clever combination of algorithms plus the still ongoing exponential growth in available computational power could soon unleash something impressive. Of course I am just challenging your intuition, mostly because it's a fun topic :) Who knows who's right...

Another interesting/scary scenario to think about is the possibility of a self-mutating computer program proliferating under our noses until it's too late (and exploiting the Internet to create a very powerful meta-computer by stealing a few CPU cycles from everyone).
 
 
 
You're talking about an AI that arrives at novel solutions, which requires the ability to invent/simulate/act on new models in new domains (AGI).

Evolutionary computation already achieves novelty and invention, to a degree. I concur that it is still not AGI. But it could already be a threat, given enough computational resources.

AGI is a threat because its utility function would necessarily be sufficiently "meta" that it could create novel sub-goals. We would not necessarily be able to control whether it chose a goal that was compatible with ours. 

It comes down to how the utility function is defined. For Google Car, the utility function probably tests actions along the lines of "get from A to B safely, as quickly as possible". If a Google Car is engineered with evolutionary methods to generate novel solutions (would be overkill but bear with me), the novelty generated is contained within the utility function. It might generate a novel route that conventional map algorithms wouldn't find, but it would be impossible for it to find a solution like "helicopter the car past this traffic jam".

What prevents the car from transforming into a helicopter and flying is not the utility function but the set of available actions. I have been playing with evolutionary computation for some time now, and one thing I learned is not to trust my intuition about the real constraints implied by such a set of actions.
 
 
 
I'm not saying this is impossible, in fact I see this as inevitable on a longer timescale. I'm saying that I doubt that the military is committing any significant resources into that kind of research when easier approaches are much more likely to bear fruit... but I really have no idea what the military is researching, so it's just a hunch.

Why does it matter if it's the military that does this? To a sufficiently advanced AI, we are just monkeys throwing rocks at each other. It will surely figure out a way to take control of our resources, including weaponry.
 

I think the thread started with a focus on killing machines. But your point is taken.
 

What I would wager on is that the military is developing drones along the same lines as what Google has achieved with its self-driving cars. Highly competent, autonomous drones that excel in very specific environments. The utility functions involved would be specified explicitly in terms of "hard-coded" representations of stimuli. For AGI they would need to be equipped to invent new models of the world, articulate those models with respect to self and with respect to existing goal structures, simulate them, and act on them. I think we are a long way from those kinds of AIs. The only researcher I see making inroads towards that kind of AI is Steve Grand.

But again, a reasonable fear is that a sufficiently powerful conventional AI is already a threat (due to increasing autonomy and data access + our possible inability to cover all the loopholes in utility functions).


The threats involved with AIs are contained within the scope of their utility functions. As it turns out, the moment you widen the utility function beyond a very narrow (and specifiable) domain, AI gets much, much harder.

As above...

Telmo.
 

Terren
 
Cheers
Telmo.
 

Terren Suydam

Aug 27, 2014, 2:16:53 PM
to everyth...@googlegroups.com
On Wed, Aug 27, 2014 at 12:21 PM, Telmo Menezes <te...@telmomenezes.com> wrote:

On Wed, Aug 27, 2014 at 1:53 PM, Terren Suydam <terren...@gmail.com> wrote:

The space of possibilities quickly scales beyond the wildest imaginings of computing power. Chess AIs are already better than humans, because they more or less implement this approach, and it turns out you "only" need to compute a few hundred million positions per second to do that. Obviously that's a toy environment... the possibilities inherent in the real world aren't even enumerable according to some predefined ontology (i.e. the kind that would be required to specify a minimax-type AI).

Ok, but of course minimax was also a toy example. Several algorithms that already exist could be combined: deep learning, Bayesian belief networks, genetic programming and so on. A clever combination of algorithms plus the still ongoing exponential growth in available computational power could soon unleash something impressive. Of course I am just challenging your intuition, mostly because it's a fun topic :) Who knows who's right...

I think these are overlapping intuitions. On one hand, there is the idea that given enough computing/data resources, something can be created that - regardless of how limited its domain of operation - is still a threat in unexpected ways. On the other hand is the idea that AIs which pose real threats - threats we are not capable of stopping - require a quantum leap forward in cognitive flexibility, if you will.

Although my POV is aligned with the latter intuition, I actually agree with the former, but consider the kinds of threats involved to be bounded in ways we can in principle control. Although in practice it is possible for them to do damage so quickly we can't prevent it. 

Perhaps my idea of intelligence is too limited. I am assuming that something capable of being a real threat will be able to generate its own ontologies, creatively model them in ways that build on and relate to existing ontologies, simulate and test those new models, etc., generate value judgments using these new models with respect to overarching utility function(s). It is suspiciously similar to human intelligence. The difference is that as an *artificial* intelligence with a different embodiment and different algorithms, the modeling it would arrive at could well be strikingly different from how we see the world, with all the attendant problems that could pose for us given its eventually superior computing power.
 
Another interesting/scary scenario to think about is the possibility of a self-mutating computer program proliferating under our noses until it's too late (and exploiting the Internet to create a very powerful meta-computer by stealing a few CPU cycles from everyone).

I think something like this could do a lot of damage very quickly, but by accident... in a similar way perhaps to the occasional meltdowns caused by the collective behaviors of micro-second market-making algorithms.  I find it exceedingly unlikely that an AGI will spontaneously emerge from a self-mutating process like you describe. Again, if this kind of thing were likely, or at least not extremely unlikely, I think it suggests that AGI is a lot simpler than it really is.
 
 
 
 
You're talking about an AI that arrives at novel solutions, which requires the ability to invent/simulate/act on new models in new domains (AGI).

Evolutionary computation already achieves novelty and invention, to a degree. I concur that it is still not AGI. But it could already be a threat, given enough computational resources.

AGI is a threat because its utility function would necessarily be sufficiently "meta" that it could create novel sub-goals. We would not necessarily be able to control whether it chose a goal that was compatible with ours. 

It comes down to how the utility function is defined. For Google Car, the utility function probably tests actions along the lines of "get from A to B safely, as quickly as possible". If a Google Car is engineered with evolutionary methods to generate novel solutions (would be overkill but bear with me), the novelty generated is contained within the utility function. It might generate a novel route that conventional map algorithms wouldn't find, but it would be impossible for it to find a solution like "helicopter the car past this traffic jam".

What prevents the car from transforming into a helicopter and flying is not the utility function but the set of available actions. I have been playing with evolutionary computation for some time now, and one thing I learned is not to trust my intuition about the real constraints implied by such a set of actions.

I was actually talking about contracting a helicopter ride, which seems easier :-)  The set of actions available to an AI is limited by the way it models the world. Without a capacity for intelligently expanding its world model, no AI is going to do anything outside of the domain it is defined in. Google Car won't ever think to contract a helicopter ride until either A) Google engineers program it to consider that as an option or B) Google engineers give the Car the ability to start modelling the world on its own terms. If B, then it could be a long time before the Car discovers what a helicopter is, what it's capable of, how it could procure one, etc. The helicopter example is a bad one, actually, because it's a solution you or I can easily conceive of, so it seems mundane or easy. 

Terren

meekerdb

Aug 27, 2014, 3:49:47 PM
to everyth...@googlegroups.com
On 8/27/2014 4:53 AM, Terren Suydam wrote:
You're talking about an AI that arrives at novel solutions, which requires the ability to invent/simulate/act on new models in new domains (AGI).

Evolutionary computation already achieves novelty and invention, to a degree. I concur that it is still not AGI. But it could already be a threat, given enough computational resources.

AGI is a threat because its utility function would necessarily be sufficiently "meta" that it could create novel sub-goals. We would not necessarily be able to control whether it chose a goal that was compatible with ours. 

On the other hand we're not that good at choosing goals for ourselves - e.g. ISIS has chosen the goal of imposing a ruthless religious tyranny.

Brent

Terren Suydam

Aug 27, 2014, 4:58:31 PM
to everyth...@googlegroups.com
The quality of the goal system is not what defines intelligence (though it may suffice to define wisdom).

Terren

Platonist Guitar Cowboy

Aug 27, 2014, 5:11:41 PM
to everyth...@googlegroups.com
The legitimacy of proof and evidence (e.g. for a set of cool algorithms concerning AI, more computing power, big data, etc.) is an empty question to ask outside a specified theory. It's like some alien questioning whether the rules of soccer on earth are "valid in an absolute sense".

Are we after freedom from contradictions? Completeness? Utility function, what are the references, where is the ultimate list?

ISTM Gödel's work has more to say about AI and monkey rock throwing theologies than we might be inclined to assume.
 

Telmo Menezes

Aug 28, 2014, 7:30:20 AM
to everyth...@googlegroups.com
On Wed, Aug 27, 2014 at 8:16 PM, Terren Suydam <terren...@gmail.com> wrote:

On Wed, Aug 27, 2014 at 12:21 PM, Telmo Menezes <te...@telmomenezes.com> wrote:

On Wed, Aug 27, 2014 at 1:53 PM, Terren Suydam <terren...@gmail.com> wrote:

The space of possibilities quickly scales beyond the wildest imaginings of computing power. Chess AIs are already better than humans, because they more or less implement this approach, and it turns out you "only" need to compute a few hundred million positions per second to do that. Obviously that's a toy environment... the possibilities inherent in the real world aren't even enumerable according to some predefined ontology (i.e. the kind that would be required to specify a minimax-type AI).

Ok, but of course minimax was also a toy example. Several algorithms that already exist could be combined: deep learning, Bayesian belief networks, genetic programming and so on. A clever combination of algorithms plus the still ongoing exponential growth in available computational power could soon unleash something impressive. Of course I am just challenging your intuition, mostly because it's a fun topic :) Who knows who's right...

I think these are overlapping intuitions. On one hand, there is the idea that given enough computing/data resources, something can be created that - regardless of how limited its domain of operation - is still a threat in unexpected ways. On the other hand is the idea that AIs which pose real threats - threats we are not capable of stopping - require a quantum leap forward in cognitive flexibility, if you will.

Agreed.
 

Although my POV is aligned with the latter intuition, I actually agree with the former, but consider the kinds of threats involved to be bounded in ways we can in principle control. Although in practice it is possible for them to do damage so quickly we can't prevent it. 

Perhaps my idea of intelligence is too limited. I am assuming that something capable of being a real threat will be able to generate its own ontologies, creatively model them in ways that build on and relate to existing ontologies, simulate and test those new models, etc., generate value judgments using these new models with respect to overarching utility function(s). It is suspiciously similar to human intelligence.

I wonder. What you describe seems like the way of thinking of a person trained in the scientific method (a very recent discovery in human history). Is this raw human intelligence? I suspect raw human intelligence is more like a kludge. It is possible to create rickety structures of order on top of that kludge, by a process we call "education".
 
The difference is that as an *artificial* intelligence with a different embodiment and different algorithms, the modeling it would arrive at could well be strikingly different from how we see the world, with all the attendant problems that could pose for us given its eventually superior computing power.

Ok.
 
 
Another interesting/scary scenario to think about is the possibility of a self-mutating computer program proliferating under our noses until it's too late (and exploiting the Internet to create a very powerful meta-computer by stealing a few CPU cycles from everyone).

I think something like this could do a lot of damage very quickly, but by accident... in a similar way perhaps to the occasional meltdowns caused by the collective behaviors of micro-second market-making algorithms.

Another example is big societies designed by humans. 
 
 I find it exceedingly unlikely that an AGI will spontaneously emerge from a self-mutating process like you describe. Again, if this kind of thing were likely, or at least not extremely unlikely, I think it suggests that AGI is a lot simpler than it really is.

This is tricky. The Kolmogorov complexity of AGI could be relatively low -- maybe it can be expressed in 1000 lines of lisp. But the set of programs expressible in 1000 lines of lisp includes some really crazy, counter-intuitive stuff (e.g. the universal dovetailer). Genetic programming has been shown to discover relatively short solutions that are better than anything a human could come up with, precisely because they are counter-intuitive.
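
For anyone who hasn't met the universal dovetailer: the trick is just a scheduling pattern. A minimal sketch, with a deliberately trivial stand-in for a real interpreter over Gödel-numbered programs:

from itertools import count

def dovetail(step):
    # At stage n, give one more execution step to each of programs 0..n.
    # Every program, even a non-halting one, eventually receives
    # unboundedly many steps, despite there being infinitely many programs.
    states = {}
    for n in count():
        for i in range(n + 1):
            states[i] = step(i, states.get(i))

def toy_step(i, state):
    # Stand-in interpreter: "program i" just counts upward from i.
    return i if state is None else state + 1

# dovetail(toy_step)  # never returns, by design

The scheduler is a handful of lines; the craziness lives in what the enumerated programs can be.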
 
 
 
 
 
You're talking about an AI that arrives at novel solutions, which requires the ability to invent/simulate/act on new models in new domains (AGI).

Evolutionary computation already achieves novelty and invention, to a degree. I concur that it is still not AGI. But it could already be a threat, given enough computational resources.

AGI is a threat because its utility function would necessarily be sufficiently "meta" that it could create novel sub-goals. We would not necessarily be able to control whether it chose a goal that was compatible with ours. 

It comes down to how the utility function is defined. For Google Car, the utility function probably tests actions along the lines of "get from A to B safely, as quickly as possible". If a Google Car is engineered with evolutionary methods to generate novel solutions (would be overkill but bear with me), the novelty generated is contained within the utility function. It might generate a novel route that conventional map algorithms wouldn't find, but it would be impossible for it to find a solution like "helicopter the car past this traffic jam".

What prevents the car from transforming into a helicopter and flying is not the utility function but the set of available actions. I have been playing with evolutionary computation for some time now, and one thing I learned is not to trust my intuition about the real constraints implied by such a set of actions.

I was actually talking about contracting a helicopter ride, which seems easier :-)  The set of actions available to an AI is limited by the way it models the world. Without a capacity for intelligently expanding its world model, no AI is going to do anything outside of the domain it is defined in. Google Car won't ever think to contract a helicopter ride until either A) Google engineers program it to consider that as an option or B) Google engineers give the Car the ability to start modelling the world on its own terms.

Ok, but the Google engineers might be giving the system more freedom than they assume.

Telmo.
 
If B, then it could be a long time before the Car discovers what a helicopter is, what it's capable of, how it could procure one, etc. The helicopter example is a bad one, actually, because it's a solution you or I can easily conceive of, so it seems mundane or easy. 

Terren


Telmo Menezes

Aug 28, 2014, 7:33:29 AM
to everyth...@googlegroups.com
Agreed.

Telmo Menezes

Aug 28, 2014, 7:37:32 AM
to everyth...@googlegroups.com
Or maybe we are not very good at choosing goals for societies.

Telmo.
 

Brent

Terren Suydam

Aug 28, 2014, 4:05:02 PM
to everyth...@googlegroups.com
On Thu, Aug 28, 2014 at 7:30 AM, Telmo Menezes <te...@telmomenezes.com> wrote:

Although my POV is aligned with the latter intuition, I actually agree with the former, but consider the kinds of threats involved to be bounded in ways we can in principle control. Although in practice it is possible for them to do damage so quickly we can't prevent it. 

Perhaps my idea of intelligence is too limited. I am assuming that something capable of being a real threat will be able to generate its own ontologies, creatively model them in ways that build on and relate to existing ontologies, simulate and test those new models, etc., generate value judgments using these new models with respect to overarching utility function(s). It is suspiciously similar to human intelligence.

I wonder. What you describe seems like the way of thinking of a person trained in the scientific method (a very recent discovery in human history). Is this raw human intelligence? I suspect raw human intelligence is more like a kludge. It is possible to create rickety structures of order on top of that kludge, by a process we call "education".
 

I don't mean to imply formal learning at all. I think this even applies to any animal that dreams during sleep (say). Modeling the world is a very basic function of the brain, even if the process and result is a kludge. With language and the ability to articulate models, humans can get very good indeed at making them precise and building structures, rickety or otherwise, upon the basic kludginess you're talking about.
 
I think something like this could do a lot of damage very quickly, but by accident... in a similar way perhaps to the occasional meltdowns caused by the collective behaviors of micro-second market-making algorithms.

Another example is big societies designed by humans. 

Big societies act much more slowly. But they are their own organisms; we don't design them any more than our cells design us. We are not really that good at seeing how they operate, for the same reason we find it hard to perceive how a cloud changes through time.
 
 
 I find it exceedingly unlikely that an AGI will spontaneously emerge from a self-mutating process like you describe. Again, if this kind of thing were likely, or at least not extremely unlikely, I think it suggests that AGI is a lot simpler than it really is.

This is tricky. The Kolmogorov complexity of AGI could be relatively low -- maybe it can be expressed in 1000 lines of lisp. But the set of programs expressible in 1000 lines of lisp includes some really crazy, counter-intuitive stuff (e.g. the universal dovetailer). Genetic programming has been shown to discover relatively short solutions that are better than anything a human could come up with, precisely because they are counter-intuitive.

I suppose it is possible, and maybe my estimate of its likelihood is too low. All the same, I would be rather shocked if AGI could be implemented in 1000 lines of code. And no cheating - each line has to be less than 80 chars ;-)  Bonus points if you can do it in Arnold.

T

LizR

Aug 28, 2014, 6:09:37 PM
to everyth...@googlegroups.com
Lots of games come with AI :-)



LizR

Aug 28, 2014, 6:14:18 PM
to everyth...@googlegroups.com
PS "Arnold" is hilarious. I recognised quite a few quotes ... but where was this one?

ENDLESS LOOP - To crush your enemies, see them driven before you, and to hear the lamentation of their women.

Platonist Guitar Cowboy

Aug 28, 2014, 6:30:01 PM
to everyth...@googlegroups.com
On Fri, Aug 29, 2014 at 12:14 AM, LizR <liz...@gmail.com> wrote:
PS "Arnold" is hilarious. I recognised quite a few quotes ... but where was this one?

ENDLESS LOOP - To crush your enemies, see them driven before you, and to hear the lamentation of their women.
That line still makes me laugh every time I bump into it.

Stephen Paul King

Aug 29, 2014, 12:59:06 AM
to everyth...@googlegroups.com, cdemo...@yahoo.com
Are our fears of AI running amuck and killing random persons based on unfounded assumptions?

Stephen Paul King

Aug 29, 2014, 1:25:08 AM
to everyth...@googlegroups.com, cdemo...@yahoo.com
" "If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. "

  Sanity is not a common property of crowds; we are not considering "wisdom" but the actual observed behaviors of humans in large groups. If we define "wise" behavior as that which does not generate higher entropy in its environment, crowds, more often than not, tend not to be wise.

   If an AI were to emerge from the interactions of many computers, would it be expected to be "sane"? What is sanity anyway?

  Another question is: Would AI have a view of the universe that can be matched up with ours? If not, how would we expect it to "see the world" that it interacts with? Our worlds and that of AI may be disjoint!





--

Kindest Regards,

Stephen Paul King

Senior Researcher

Mobile: (864) 567-3099

Step...@provensecure.com

 http://www.provensecure.us/

 


Chris de Morsella

Aug 29, 2014, 3:16:51 AM
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of Stephen Paul King

 

Are our fears of AI running amuck and killing random persons based on unfounded assumptions?

 

Perhaps, and I see your point.

However, I am going to try to make the following case:

If we take AI as some emergent networked meta-system, arising in a non-linear, fuzzy, non-demarcated manner from pre-existing (increasingly networked) proto-AI smart systems (plus vast repositories), such as already exist… and then drill down through the code layers – through the logic (DNA) – embedded within and characterizing all those subsystems, and factor in all the many conscious and unconscious human assumptions and biases that exist throughout these deeply layered systems… I would argue that what could emerge (and, given the trajectory, will emerge fairly soon, I think) will very much have our human fingerprints sewn all the way through its source code, its repositories, its injected values. At least initially.

I am concerned by the kinds of “values” that are becoming encoded in sub-system after sub-system, when the driving motivation for these layered, complex, self-navigating, increasingly autonomous systems is to create untended killer robots as well as social data mining smart agents to penetrate social networks and identify targets. If this becomes the major part of the code base from which AI emerges, then isn’t that a fairly good reason to be concerned about the software DNA of what could emerge? If the code base is driven by the desire to establish and maintain a system characterized by highly centralized and vertical social control and deep data mining, defended by an army increasingly composed of autonomous mobile warbots… isn’t this a cause for concern?

But then -- admittedly -- who really knows how an emergent machine-based (probably highly networked) self-aware intelligence might evolve; my concern is the initial conditions (algorithms etc.) we are embedding into the source code from which an AI would emerge.

Telmo Menezes

Aug 29, 2014, 6:23:33 AM
to everyth...@googlegroups.com
Arnold is excellent! :)
I raise you Piet:
 

T

Telmo Menezes

Aug 29, 2014, 6:28:39 AM
to everyth...@googlegroups.com
On Fri, Aug 29, 2014 at 7:25 AM, Stephen Paul King <Step...@provensecure.com> wrote:
" "If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. "

  Sanity is not a common property of crowds; we are not considering "wisdom" but the actual observed behaviors of humans in large groups. If we define "wise" behavior as that which does not generate higher entropy in its environment, crowds, more often than not, tend not to be wise.

   If an AI were to emerge from the interactions of many computers, would it be expected to be "sane"? What is sanity anyway?

I would argue that sanity in the social sense is essentially a religious construct (even of zero-god religions).
 

  Another question is: Would AI have a view of the universe that can be matched up with ours? If not, how would we expect it to "see the world" that it interacts with? Our worlds and that of AI may be disjoint!

This ends up being explored in the movie Her (which very pleasantly surprised me). But I won't spoil it for those who haven't watched it.

Telmo.

Terren Suydam

Aug 29, 2014, 10:38:06 AM
to everyth...@googlegroups.com
On Fri, Aug 29, 2014 at 6:23 AM, Telmo Menezes <te...@telmomenezes.com> wrote:



On Thu, Aug 28, 2014 at 10:05 PM, Terren Suydam <terren...@gmail.com> wrote:

On Thu, Aug 28, 2014 at 7:30 AM, Telmo Menezes <te...@telmomenezes.com> wrote:

Although my POV is aligned with the latter intuition, I actually agree with the former, but consider the kinds of threats involved to be bounded in ways we can in principle control. Although in practice it is possible for them to do damage so quickly we can't prevent it. 

Perhaps my idea of intelligence is too limited. I am assuming that something capable of being a real threat will be able to generate its own ontologies, creatively model them in ways that build on and relate to existing ontologies, simulate and test those new models, etc., generate value judgments using these new models with respect to overarching utility function(s). It is suspiciously similar to human intelligence.

I wonder. What you describe seems like the way of thinking of a person trained in the scientific method (a very recent discovery in human history). Is this raw human intelligence? I suspect raw human intelligence is more like a kludge. It is possible to create rickety structures of order on top of that kludge, by a process we call "education".
 

I don't mean to imply formal learning at all. I think this even applies to any animal that dreams during sleep (say). Modeling the world is a very basic function of the brain, even if the process and result is a kludge. With language and the ability to articulate models, humans can get very good indeed at making them precise and building structures, rickety or otherwise, upon the basic kludginess you're talking about.
 
I think something like this could do a lot of damage very quickly, but by accident... in a similar way perhaps to the occasional meltdowns caused by the collective behaviors of micro-second market-making algorithms.

Another example is big societies designed by humans. 

Big societies act much more slowly. But they are their own organisms; we don't design them any more than our cells design us. We are not really that good at seeing how they operate, for the same reason we find it hard to perceive how a cloud changes through time.
 
 
 I find it exceedingly unlikely that an AGI will spontaneously emerge from a self-mutating process like you describe. Again, if this kind of thing were likely, or at least not extremely unlikely, I think it suggests that AGI is a lot simpler than it really is.

This is tricky. The Kolmogorov complexity of AGI could be relatively low -- maybe it can be expressed in 1000 lines of lisp. But the set of programs expressible in 1000 lines of lisp includes some really crazy, counter-intuitive stuff (e.g. the universal dovetailer). Genetic programming has been shown to discover relatively short solutions that are better than anything a human could come up with, precisely because they are counter-intuitive.

I suppose it is possible, and maybe my estimate of its likelihood is too low. All the same, I would be rather shocked if AGI could be implemented in 1000 lines of code. And no cheating - each line has to be less than 80 chars ;-)  Bonus points if you can do it in Arnold.

Arnold is excellent! :)
I raise you Piet:


Wow, Piet is awesome, so creative - thanks for the link! However, it's probably not the best for color-blind folks like myself. 

Terren

Stephen Paul King

Aug 30, 2014, 11:35:17 PM
to everyth...@googlegroups.com
Hi Chris,

  Here is the thing. Does not the difficulty in creating a computational simulation of the brain in action give you pause? Why are we assuming that the AI will have a "mind" (program) that can be parsed by humans?

   AFAIK, AGI (following Ben Goertzel's convention) will be completely incomprehensible to us. If we are trying to figure out its "values", what could we do better than to run the thing in a sandbox and let it interact with "test AIs"? Can we "prove" that it is intelligent?

   I don't think so! Unless we could somehow "mindmeld" with it and the mindmeld resulted in a mutual "understanding", how could we have a proof? But melding minds together is a hard thing to do....






LizR

Aug 30, 2014, 11:54:42 PM
to everyth...@googlegroups.com
I think the only test we have available for consciousness etc. (for computers or people) is the good old Turing test. Once our AI starts killing off astronauts because they may interfere with its main mission (I was always with HAL on this one; what exactly was the point of those humans, again?), that looks like a good point to stop arguing the finer details and start pulling out the memory cubes.

Chris de Morsella

Aug 31, 2014, 1:17:03 AM
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of Stephen Paul King
Sent: Saturday, August 30, 2014 8:35 PM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

Hi Chris,

 

  Here is the thing. Does not the difficulty in creating a computational simulation of the brain in action give you pause?

 

Difficult, yes; impossible, I don't think so. A simulation of the human brain need not have the same scale as an actual human brain (after all, it is a model). For example, statistically well-bounded statements can be made about many social behavioral outcomes based on relatively small sampling sets. This also applies to the brain. A model could have a small fraction of the real brain's complexity and scale and yet produce pretty accurate results.
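
To put a rough number on that (the rates below are invented for illustration): the standard error of a sampled proportion shrinks like 1/sqrt(n), so even a tiny sample yields a well-bounded estimate.

import random

random.seed(1)
# A "population" with a true rate of 30% (value chosen arbitrarily).
population = [random.random() < 0.3 for _ in range(1_000_000)]

n = 1_000                             # sample just 0.1% of the population
sample = random.sample(population, n)
p_hat = sum(sample) / n
stderr = (p_hat * (1 - p_hat) / n) ** 0.5
print(f"estimate {p_hat:.3f} +/- {1.96 * stderr:.3f} (95% CI)")
# prints roughly: estimate 0.30 +/- 0.03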

Of course it is complex for us to imagine today… the human brain is, after all, vastly parallel, with an immense number of connections: hundreds of trillions. Even within a single synapse (one of a very large number of synapses) there is a world of exquisite molecular-scale complexity, and it seems multi-channeled to me.

However, it is also true that the global networked meta-cloud (the dynamic process driven interconnected cloud of clouds operating over the underlying global scale physical and technological infrastructure) is also scaling up to immense numbers of disparate computational elements with thousands of trillions of network vertices.

Perhaps I don’t understand the thrust of this statement? Why should it give me pause?

The brain is a magnificent bit of biology, an admirably compact, hyper-energy-efficient computational engine of unequaled parallelism. Yes, we agree.

On the other hand, the geometric growth rates of informatics capacity – in all dimensions: storage, speed, network size, traffic, cross-talk, numbers of cores, memory, capacity of the various pipes… you name it – are also literally exploding in scale. And on the level of fundamental understanding, we are establishing a finer and finer grained understanding of the brain and how it works – dynamically, in real time – and doing so from many angles and scales of observation (from the macro down to the electro-chemical molecular machinery of a single synapse). There are major initiatives in figuring out (at least at the macro scale) the human brain connectome. The micro-architecture of the brain (at the scale of a single arrayed column, usually around six layers deep) is also being better understood, as are the various sensorial processing, memory, temporal, decisional and other brain algorithms.

A huge exciting challenge certainly… but for my way of thinking about this, not a cause for pause, rather a call to delve deeper into it and try to put it all together.

 

 

Why are we assuming that the AI will have a "mind" (program) that can be parsed by humans?

 

Who is assuming that? I was arguing that the code we create today will be the DNA of what emerges, by virtue of being the template from which subsequent development emerges. Are you saying that our human prejudices, assumptions, biases, needs, desires, objectives, habits, ways of thinking… that all this assortment of hidden variables is not influencing the kind of code that is written? That the hundreds of millions of lines of code written by programmers – mostly living and working in just a small number of technological centers operating on planet earth – that all of this vast output of code is somehow unaffected by our humanness, by our nature?

Personally I would find that astounding and think it would seem rather obvious that in fact it is very much influenced by our nature and our objectives and needs.

I am not assuming anything by making the statement that whatever does emerge (assuming a self-aware intelligence does emerge) will have emerged out of a primordial soup that we cooked up, and will have had its roots and beginnings in a code base of human creation, created for human ends and objectives, with human prejudices and modes of thinking literally hard-coded into the mind-boggling numbers of services, objects, systems, frameworks and what have you that exist and are now all connecting up into non-locational dynamic cloud architectures.

 

   AFAIK, AGI (following Ben Goertzel's convention) will be completely incomprehensible to us. If we are trying to figure out its "values", what could we do better than to run the thing in a sandbox and let it interact with "test AIs"? Can we "prove" that it is intelligent?

We don't know what it will turn out to become, but we can say with certainty that it will emerge from the code, from the algorithms, from the physical chip architectures, network architectures, etc. that we have created. This is clearly an a priori assumption if we are speaking about human-spawned AI – it has to emerge from human creation (unless we are speaking of alien AI, of course).

We cannot even prove that we are intelligent or that we even exist. We do not even know what we are. We think we think, but measurements of brain activity indicate the thinking has already happened before we have had the thought we thought we thunk!

Mere narrators of our minds we are. We do not even understand how we think… or where we get our values. For example, it is becoming clear that our gut flora and fauna have more influence over our moods and desires than was previously realized… how much of "our" thoughts and decisions are really just the human host executing on the microbial desires and decisions of the five pounds or so of biological complexity in our guts?

 

   I don't think so! Unless we could somehow "mindmeld" with it, and the mindmeld results in a mutual "understanding", how could we have a proof? But melding minds together is a hard thing to do…

 

In thirty to forty years we may begin to converge as humans become increasingly cyborgs… and I am being very conservative. Apple is about to unleash smart watches and Google has its glasses… already people are thinking of having their biometrics wired up to a networked monitoring service… people born blind are being given rudimentary artificial vision. Nano-scale molecular machinery techniques and self-assembling systems approaches are pushing the scale down to levels where informatics may soon become incorporated throughout the body and the brain itself.

My question for you is: how much longer do you think we will remain recognizably human? Twenty years, fifty years… a hundred, perhaps? I just don't see us stopping at some arbitrary wall unless our technology itself is collapsed by our collapsing resource base… or unless (it has been argued) there is a point at which increasingly complex systems begin to fail. That argument has merit too. But computer architecture, for example, scales out instead of scaling up in complexity: multi-core architectures keep each single core's complexity within manageable bounds.

I assume nothing (or at least make an attempt)… we very much do live in interesting times… on this I think we can agree.

Chris de Morsella

unread,
Aug 31, 2014, 1:30:18 AM8/31/14
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of LizR
Sent: Saturday, August 30, 2014 8:55 PM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

I think the only test we have available for consciousness etc. (for computers or people) is the good old Turing test. Once our AI starts killing off astronauts because they may interfere with its main mission (I was always with HAL on this one; what exactly was the point of those humans, again?) that looks like a good point to stop arguing the finer details and start pulling out the memory cubes.

 

AI might also accelerate its development at such breakneck speed that it very rapidly loses all interest in us, our planet, this galaxy, this particular underlying "physical reality" (whatever that may turn out to be) and exits our perceived universe into some other dimension beyond our reach or comprehension.

LizR

unread,
Aug 31, 2014, 1:36:47 AM8/31/14
to everyth...@googlegroups.com
On 31 August 2014 17:30, 'Chris de Morsella' via Everything List <everyth...@googlegroups.com> wrote:

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of LizR
Sent: Saturday, August 30, 2014 8:55 PM
To: everyth...@googlegroups.com


Subject: Re: AI Dooms Us

 

I think the only test we have available for consciousness etc. (for computers or people) is the good old Turing test. Once our AI starts killing off astronauts because they may interfere with its main mission (I was always with HAL on this one; what exactly was the point of those humans, again?) that looks like a good point to stop arguing the finer details and start pulling out the memory cubes.

 

AI might also accelerate its development at such breakneck speed that it very rapidly loses all interest in us, our planet, this galaxy, this particular underlying "physical reality" (whatever that may turn out to be) and exits our perceived universe into some other dimension beyond our reach or comprehension.

 
Yes indeed, like the children in "Childhood's End", the neutron star beings in "Dragon's Egg" or the human race falling into the technological singularity in "Marooned in Realtime".

However it's possible they might at least feel enough gratitude to upload us, perhaps into a zoo...

Or then again they might have a more Dalek like attitude...

LESTERSON: I want to help you.
DALEK: Why?
LESTERSON: (like a Dalek) I am your servant.
DALEK: We do not need humans now.
LESTERSON: Ah, but you wouldn't kill me. I gave you life.
DALEK: Yes, you gave us life.
(It exterminates him.)

Chris de Morsella

unread,
Aug 31, 2014, 1:55:12 AM8/31/14
to everyth...@googlegroups.com

(It exterminates him.)

 

Classic ☺

 

John Clark

unread,
Aug 31, 2014, 12:27:10 PM8/31/14
to everyth...@googlegroups.com
On Thu, Aug 28, 2014 at 7:30 AM, Telmo Menezes <te...@telmomenezes.com> wrote:

 > The Kolmogorov complexity of AGI could be relatively low -- maybe it can be expressed in 1000 lines of lisp.

That is not a crazy idea because we know for a fact that in the entire human genome there are only 3 billion base pairs. There are 4 bases so each base can represent 2 bits, there are 8 bits per byte so that comes out to just 750 meg, and that's enough assembly instructions to make not just a brain and all its wiring but an entire human baby. So the instructions MUST contain wiring instructions such as "wire a neuron up this way and then repeat that procedure exactly the same way 917 billion times". And there is a huge amount of redundancy in the human genome; if you used a file compression program like ZIP on that 750 meg you could easily put the entire thing on half a CD, not a DVD, not a Blu-ray, just an old-fashioned steam-powered vanilla CD.
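To spell the arithmetic out (a back-of-the-envelope sketch in Python; the figures are the same rounded ones used above):

    base_pairs = 3_000_000_000   # ~3 billion base pairs in the human genome
    bits_per_base = 2            # 4 possible bases -> log2(4) = 2 bits each
    total_bytes = base_pairs * bits_per_base // 8   # 8 bits per byte
    print(total_bytes)           # 750000000 -> 750 meg, about one CD's worth

ZIP-style compression exploiting the genome's redundancy is what shrinks that to the "half a CD" above.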

  John K Clark




meekerdb

unread,
Aug 31, 2014, 2:14:47 PM8/31/14
to everyth...@googlegroups.com
But the baby learns a lot as it grows, incorporating information from its environment. I think this would need to be counted in the Kolmogorov complexity, even if the initial learning program was relatively small.

Brent

Chris de Morsella

unread,
Aug 31, 2014, 3:18:54 PM8/31/14
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of John Clark
Sent: Sunday, August 31, 2014 9:27 AM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

On Thu, Aug 28, 2014 at 7:30 AM, Telmo Menezes <te...@telmomenezes.com> wrote:

  John K Clark

 

Just want to point out that the process of DNA expression is highly dynamic and is multi-factored (including environmentally driven epigenetic feedback). This is especially so during embryogenesis, an unfolding, choreographed developmental switching process that is controlled by epigenetic programming (methylation/demethylation and other mechanisms). The mammalian genomes undergo very extensive genomic reprogramming during embryogenesis.

DNA is not a direct, single-layered, single-meaning instruction set, encoded and fixed. The expression of hereditary information is a multi-layered, dynamic and epigenetically influenced (driven) process, and this is especially true during embryogenesis. The same strand of DNA, depending on the dynamic action of a large number of transcription factors (there are 2600 identified proteins in the human genome that contain DNA-binding domains, and it is thought that 10% of our genome is involved in encoding this large family of transcription factors), can encode very different mRNA, result in different outcomes and be used for different purposes.

It is – IMO – necessary to understand DNA as an encoded bundle of potential instructions that becomes actualized through a highly dynamic transcription process.

The actual expression of the underlying DNA blueprint is best understood as a dynamic, environmentally and developmentally influenced process. DNA is arranged… and re-arranged (read forwards or backwards, with coding sections turned on or off) in a large number of different ways.

-Chris




meekerdb

unread,
Aug 31, 2014, 5:06:12 PM8/31/14
to everyth...@googlegroups.com
On 8/31/2014 12:18 PM, 'Chris de Morsella' via Everything List wrote:

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of John Clark
Sent: Sunday, August 31, 2014 9:27 AM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

On Thu, Aug 28, 2014 at 7:30 AM, Telmo Menezes <te...@telmomenezes.com> wrote:

 

 > The Kolmogorov complexity of AGI could be relatively low -- maybe it can be expressed in 1000 lines of lisp.


That is not a crazy idea because we know for a fact that in the entire human genome there are only 3 billion base pairs. There are 4 bases so each base can represent 2 bits, there are 8 bits per byte so that comes out to just 750 meg, and that's enough assembly instructions to make not just a brain and all its wiring but an entire human baby. So the instructions MUST contain wiring instructions such as "wire a neuron up this way and then repeat that procedure exactly the same way 917 billion times". And there is a huge amount of redundancy in the human genome; if you used a file compression program like ZIP on that 750 meg you could easily put the entire thing on half a CD, not a DVD, not a Blu-ray, just an old-fashioned steam-powered vanilla CD.

  John K Clark

 

Just want to point out that the process of DNA expression is highly dynamic and is multi-factored (including environmentally driven epigenetic feedback). This is especially so during embryogenesis, an unfolding, choreographed developmental switching process that is controlled by epigenetic programming (methylation/demethylation and other mechanisms). The mammalian genomes undergo very extensive genomic reprogramming during embryogenesis.

DNA is not a direct, single-layered, single-meaning instruction set, encoded and fixed. The expression of hereditary information is a multi-layered, dynamic and epigenetically influenced (driven) process, and this is especially true during embryogenesis. The same strand of DNA, depending on the dynamic action of a large number of transcription factors (there are 2600 identified proteins in the human genome that contain DNA-binding domains, and it is thought that 10% of our genome is involved in encoding this large family of transcription factors), can encode very different mRNA, result in different outcomes and be used for different purposes.

It is – IMO – necessary to understand DNA as an encoded bundle of potential instructions that becomes actualized through a highly dynamic transcription process.

The actual expression of the underlying DNA blueprint is best understood as a dynamic, environmentally and developmentally influenced process. DNA is arranged… and re-arranged (read forwards or backwards, with coding sections turned on or off) in a large number of different ways.

-Chris


But all the information for that back-and-forth, on-and-off switching of transcription factors, binding domains, etc. has to be either encoded in the DNA or extracted from the environment.

Brent

Chris de Morsella

unread,
Aug 31, 2014, 5:40:03 PM8/31/14
to everyth...@googlegroups.com



From: meekerdb <meek...@verizon.net>
To: everyth...@googlegroups.com
Sent: Sunday, August 31, 2014 2:06 PM
Sure… of course, but the epigenetic overlay, which is an external factor, adds a layer of indirection and complexity to the process that is only beginning to be understood and that can affect the outcome of genetic expression in fundamental ways. Even within DNA expression there are a whole lot of feedback systems at work that alter the expression actually produced.
Any system that can be influenced by external feedback mechanisms is far more complex than a purely directed one.

LizR

unread,
Aug 31, 2014, 8:24:39 PM8/31/14
to everyth...@googlegroups.com
This is enough information to build a general purpose conscious being, it would appear, but a baby is only born with some fairly simple instinctive behaviour (plus the adolescent gains some more instinctive behaviour at puberty). Even the visual cortex, which is probably not conscious and probably comes out roughly similar in most people, is created by trial and error. The neocortex must be even more so, to the Nth degree. Hence you have 750 meg of data (or whatever the figure is) that builds an infant, then you have a world which educates them.

As per what I was saying about Watson (or whatever it's called), the baby needs to be immersed in an environment in order to develop any form of consciousness beyond the rudimentary raw feels provided by nature - that is, it needs to be educated by interaction with the environment, and with other people (i.e. assimilate culture).

Russell Standish

unread,
Sep 1, 2014, 12:35:11 AM9/1/14
to everyth...@googlegroups.com
On Mon, Sep 01, 2014 at 12:24:37PM +1200, LizR wrote:
>
> As per what I was saying about Watson (or whatever it's called), the baby
> needs to be immersed in an environment in order to develop any form of
> consciousness beyond the rudimentary raw feels provided by nature - that
> is, it needs to be educated by interaction with the environment, and with
> other people (i.e. assimilate culture).
>

This actually supplies a good reason for why we should find ourselves
in a regular, lawlike universe. We can get by with a smaller genome,
and learn the rest of the stuff that makes up our mental life, which
is a more likely scenario (even evolutionarily speaking) than having a
large genome directly encoding our knowledge.

Of course, that is only possible if in fact the environment is regular
enough to be learnable.

Cheers

--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au

Latest project: The Amoeba's Secret
(http://www.hpcoders.com.au/AmoebasSecret.html)
----------------------------------------------------------------------------

LizR

unread,
Sep 1, 2014, 12:37:25 AM9/1/14
to everyth...@googlegroups.com
On 1 September 2014 16:36, Russell Standish <li...@hpcoders.com.au> wrote:
On Mon, Sep 01, 2014 at 12:24:37PM +1200, LizR wrote:
>
> As per what I was saying about Watson (or whatever it's called), the baby
> needs to be immersed in an environment in order to develop any form of
> consciousness beyond the rudimentary raw feels provided by nature - that
> is, it needs to be educated by interaction with the environment, and with
> other people (i.e. assimilate culture).
>

This actually supplies a good reason for why we should find ourselves
in a regular, lawlike universe. We can get by with a smaller genome,
and learn the rest of the stuff that makes up our mental life, which
is a more likely scenario (even evolutionarily speaking) than having a
large genome directly encoding our knowledge.

Of course, that is only possible if in fact the environment is regular
enough to be learnable.

That seems to make sense. Do you mean that observers can only evolve in a universe with a lawlike environment, because it isn't "worth" evolving brains if the environment is unpredictable? (Assuming one even could, in that case...)

meekerdb

unread,
Sep 1, 2014, 3:05:17 AM9/1/14
to everyth...@googlegroups.com
On 8/31/2014 9:36 PM, Russell Standish wrote:
> On Mon, Sep 01, 2014 at 12:24:37PM +1200, LizR wrote:
>> As per what I was saying about Watson (or whatever it's called), the baby
>> needs to be immersed in an environment in order to develop any form of
>> consciousness beyond the rudimentary raw feels provided by nature - that
>> is, it needs to be educated by interaction with the environment, and with
>> other people (i.e. assimilate culture).
>>
> This actually supplies a good reason for why we should find ourselves
> in a regular, lawlike universe. We can get by with a smaller genome,
> and learn the rest of the stuff that makes up our mental life, which
> is a more likely scenario (even evolutionarily speaking) than having a
> large genome directly encoding our knowledge.
>
> Of course, that is only possible if in fact the environment is regular
> enough to be learnable.

So that's why Amoeba dubia has a genome 200x bigger than ours? It must live in a very
irregular environment.

Brent

Telmo Menezes

unread,
Sep 1, 2014, 6:45:24 AM9/1/14
to everyth...@googlegroups.com
One of the main problems with genetic programming is bloat control: how to prevent the system from generating larger and larger programs. This is not an easy problem to solve. A friend of mine did a PhD just on this topic. One of the tricky things that happens is that, even if you discourage bloat with naive approaches -- e.g. have program size negatively impact fitness -- you are fighting adaptation. Program chunks can find ways to survive, for example by making their own removal too destructive.
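As a minimal sketch of that naive approach (Python, with toy stand-ins; none of these names come from any particular GP library), where fitness is task performance minus a cost per node:

    import random

    def tree_size(program):
        # Count nodes in a program represented as a nested tuple,
        # e.g. ('add', ('mul', 'x', 'x'), 'x') has 5 nodes.
        if not isinstance(program, tuple):
            return 1
        return 1 + sum(tree_size(child) for child in program[1:])

    def task_performance(program):
        # Toy stand-in for actually running the program on test cases.
        return random.random()

    def penalized_fitness(program, alpha=0.01):
        # Parsimony pressure: every extra node costs alpha fitness points.
        return task_performance(program) - alpha * tree_size(program)

The failure mode described above is that selection can defeat the penalty: bloated chunks survive whenever removing them hurts task performance more than alpha charges for them.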

In nature there are natural incentives against bloat. Mutations in gene transmission are a big one: the larger the (meaningful) DNA, the more likely it is that it will fail to transmit due to mutational noise in the channel.

On the other hand, if organisms become too good at fighting the noise in the channel they lose adaptability. So we get into "evolution of evolvability" issues.

So it's not so surprising that all sorts of weird fluctuations are seen in nature, such as the case of a single-celled organism with a much larger genome than our own. I don't think we have the tools to fully understand this level of complexity yet.

Telmo.
 

Brent



Telmo Menezes

unread,
Sep 1, 2014, 6:49:33 AM9/1/14
to everyth...@googlegroups.com
Agreed, but this is precisely what makes the AGI scenario scary. Imagine this potentially simple algorithm (similar to the one encoded in our DNA) being able to bootstrap itself with the information available on the Internet. Now imagine it has access to computational resources that make it 1000x faster than the average human brain…
 


John Clark

unread,
Sep 1, 2014, 10:13:05 AM9/1/14
to everyth...@googlegroups.com
On Sun, Aug 31, 2014 at 2:14 PM, meekerdb <meek...@verizon.net> wrote:
 >>> The Kolmogorov complexity of AGI could be relatively low -- maybe it can be expressed in 1000 lines of lisp.

>> That is not a crazy idea because we know for a fact that in the entire human genome there are only 3 billion base pairs. There are 4 bases so each base can represent 2 bits, there are 8 bits per byte so that comes out to just 750 meg, and that's enough assembly instructions to make not just a brain and all its wiring but an entire human baby. So the instructions MUST contain wiring instructions such as "wire a neuron up this way and then repeat that procedure exactly the same way 917 billion times". And there is a huge amount of redundancy in the human genome; if you used a file compression program like ZIP on that 750 meg you could easily put the entire thing on half a CD, not a DVD, not a Blu-ray, just an old-fashioned steam-powered vanilla CD.
> But the baby learns a lot as it grows,

And the AI will also learn as it grows. I'm sure Telmo wasn't foolish enough to suggest that 1000 lines of lisp is all the information an AI would ever need to know, but maybe, just maybe, it might be enough to form a seed that could turn into a mind that could learn and grow without limit.

  John K Clark



Stephen Paul King

unread,
Sep 1, 2014, 11:57:16 AM9/1/14
to everyth...@googlegroups.com
Hi Brent,

   Have you seen any studies of "Amoeba dubia" that look into what its genome is expressing? http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2933061/ seems to suggest to me the possibility that the genome is acting as a "brain"!





Stephen Paul King

unread,
Sep 1, 2014, 12:01:54 PM9/1/14
to everyth...@googlegroups.com
Hi Telmo,

  Access to resources seems to allow only for reproduction and continuation. For an AGI to "act on the world" it has to be able to use those resources in a manner that implies it can "sense the world" that it exists within. This seems to be a catch-22 situation. ISTM that if a computation has no means to model itself as existing in a world or the equivalent, how would it ever operate as if it did in the first place?
Blind clock-work....?
    



John Clark

unread,
Sep 1, 2014, 12:42:48 PM9/1/14
to everyth...@googlegroups.com
On Sun, Aug 31, 2014 at 3:18 PM, 'Chris de Morsella' via Everything List <everyth...@googlegroups.com> wrote:

> Just want to point out that the process of DNA expression is highly dynamic and is multi-factored

Yes it certainly is, but however dynamic the DNA information is it's still just 750 meg (actually it's much less than that considering the massive amount of redundancy in our genome). And Telmo's 1000 lines of lisp would also have to be highly dynamic.  

> The mammalian genomes undergo very extensive genomic reprogramming during embryogenesis.


And where did the information about how to do that reprogramming come from? From the original 750 meg.

 

This is especially so during the process of embryogenesis, an unfolding developmental choreographed switching process that is controlled by epigenetic programming (methylation /demethylation and other mechanisms).


Methylation means that occasionally a Methyl group might be added to one of the DNA bases; a base either has a Methyl group or it does not, so it's still digital. There are 4 bases, so AT MOST each of the 3 billion bases would represent 3 bits instead of 2, and the information content would increase from 750 Meg to 1.12 Gig; with a file compression program like ZIP you could still fit all of it on a CD.
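Spelling that out the same way (a sketch; same rounded figures as above):

    import math

    # 4 bases x 2 methylation states = 8 distinguishable symbols per position
    bits_per_base = math.log2(4 * 2)                 # = 3.0 bits
    total_bytes = 3_000_000_000 * bits_per_base / 8
    print(total_bytes / 1e9)                         # ~1.125, i.e. the 1.12 Gig above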

But in reality Epigenetic information is pretty clearly of minor importance compared with the DNA sequence information, so I doubt it would even cause it to increase to 751 Meg. And the evidence that Epigenetic heredity exists for more than one generation is very meager.
 

DNA is not a direct, single-layered, single-meaning instruction set, encoded and fixed.


You can assign as many layers of meaning to it as you like, but nothing can change the fact that you could put all the information in the entire human DNA genome on an old-fashioned CD and still have enough room on it for a Beatles album from 1965.
 
>  The same strand of DNA, depending on the dynamic action of a large number of transcription factors

A transcription factor is just a protein that binds to specific DNA sequences. And where did the information come from to know what sequence of amino acids will build that very important protein? From the original 750 Meg of course.

> It is – IMO – necessary to understand DNA as [...]


I'm not saying that understanding how 750 Meg of DNA information manages to produce a human being will be easy, and figuring out how Telmo's 1000 lines of lisp work will not be easy either, but I am saying that's all the information there is.

  John K Clark
 

Stephen Paul King

unread,
Sep 1, 2014, 1:03:56 PM9/1/14
to everyth...@googlegroups.com
Hi John,

   Hold it! Where is the information about the physical system required to run that 750 Meg of information contained? I think that it is a mistake to assume that Nature builds information sets that have nothing at all to do with the particulars of the hardware. My reasoning here is that it is the "hardware" that is acted upon to select the fittest genome (if we follow Dawkins' line), and that most of the DNA code is made up of instructions to create this or that sequence of hydrocarbons, a.k.a. proteins, sugars and peptides.

   We should *not* think of the computational aspect of living systems as blind to hardware, à la computational universality.



John Clark

unread,
Sep 1, 2014, 1:54:53 PM9/1/14
to everyth...@googlegroups.com
On Mon, Sep 1, 2014 at 1:03 PM, Stephen Paul King <Step...@provensecure.com> wrote:

> Hold it! Where is the information about the physical system required to run that 750 Meg of information contained?

DNA contains information on how to make stuff but it doesn't actually do anything; only proteins and RNA do things. DNA by itself just sits there, and a 1000-line lisp program printed out onto 100 pages of paper would just sit there too; both need hardware to run on. To run the 750 meg DNA program a cell needs Mitochondrial RNA, Transfer RNA, Messenger RNA, and several thousand protein enzymes. And to answer your question, every single bit of information needed to make all those different types of RNA and all those different types of proteins is contained in that original 750 Meg; it's equivalent to the program containing not only itself but also all the information needed to make the computer it runs on. And if that reminds you a little of the chicken-or-the-egg problem, welcome to the club; it has caused origin-of-life theorists no end of problems. Did DNA, which knows what to do but can't do it, come first, or did proteins, which can do things but don't know what to do, come first?

  John K Clark



meekerdb

unread,
Sep 1, 2014, 2:29:33 PM9/1/14
to everyth...@googlegroups.com
The most popular theory is that RNA came first.

Brent

Chris de Morsella

unread,
Sep 1, 2014, 2:45:31 PM9/1/14
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of John Clark
Sent: Monday, September 01, 2014 9:43 AM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

On Sun, Aug 31, 2014 at 3:18 PM, 'Chris de Morsella' via Everything List <everyth...@googlegroups.com> wrote:

 

> Just want to point out that the process of DNA expression is highly dynamic and is multi-factored

 

Yes it certainly is, but however dynamic the DNA information is it's still just 750 meg (actually it's much less than that considering the massive amount of redundancy in our genome). And Telmo's 1000 lines of lisp would also have to be highly dynamic.  

 

Amazing, isn't it? The elegance of self-assembling processes that can do so much with so little input. I doubt 1000 lines of computer code is a large enough initial instruction set even for a highly self-generating system. Maybe a few million lines of code might do it though, if it were code that generated other code and so forth in a cascading process similar to embryogenesis in eukaryotes.

> The mammalian genomes undergo very extensive genomic reprogramming during embryogenesis.

 

And where did the information about how to do that reprogramming come from? From the original 750 meg.

 

Much of it did, certainly. But some also comes from the environment, i.e. from an external source. The outcome of embryogenesis is affected by epigenetic influences that alter what genetic information is expressed and also, crucially, when (at what point in the sequences of expression) it occurs. These external epigenetic programming instructions are completely outside that original bundle of genetic information.


 

This is especially so during embryogenesis, an unfolding, choreographed developmental switching process that is controlled by epigenetic programming (methylation/demethylation and other mechanisms).


Methylation means that occasionally a Methyl group might be added to one of the DNA bases; a base either has a Methyl group or it does not, so it's still digital. There are 4 bases, so AT MOST each of the 3 billion bases would represent 3 bits instead of 2, and the information content would increase from 750 Meg to 1.12 Gig; with a file compression program like ZIP you could still fit all of it on a CD.

But the fact is that the external epigenetic information is not available until run-time. It exists outside the organism, in its environment.

But in reality Epigenetic information is pretty clearly of minor importance compared with the DNA sequence information, so I doubt it would even cause it to increase to 751 Meg. And the evidence that Epigenetic heredity exists for more than one generation is very meager.

 

I do not agree that the understanding and quantification of epigenetic influences on human development (especially during embryogenesis) is as settled, or as minor in consequence, as you state. There is evidence, for example, that it persists for three generations in studies of cigarette smokers' progeny, and I have read studies that point to high stress in one generation resulting in epigenetic hereditary outcomes in subsequent generations. Even with identical twins, as they grow and live through their different lives, their originally identical DNA diverges in its expressed outcome due to epigenetic effects.

DNA is not a direct, single-layered, single-meaning instruction set, encoded and fixed.

 

You can assign as many layers of meaning to it as you like, but nothing can change the fact that you could put all the information in the entire human DNA genome on an old-fashioned CD and still have enough room on it for a Beatles album from 1965.

 

And 32 or so fundamental constants define (fix, quantify) all the laws of our universe. Amazing complexity can emerge from simple initial conditions.

 

>  The same strand of DNA, depending on the dynamic action of a large number of transcription factors

 

A transcription factor is just a protein that binds to specific DNA sequences. And where did the information come from to know what sequence of amino acids will build that very important protein? From the original 750 Meg of course.

From that original bundle of genetic code + environmental influences. 90% of the living things in a human body DO NOT have human DNA (not by weight, of course, but by census)… our behavior, our desires, our decisions, our thoughts, dreams, cravings, fears… our volition… are at least in part driven by these other, non-human organisms (especially the huge, diverse community of microorganisms living in our guts).

The kind of flora and fauna we have in our guts in many ways determines who we are, what we think and what we desire. It affects our well-being (or lack of it), our emotions and our goals. This genetic information is not part of the human DNA, but humans have coevolved with these communities of microorganisms, and many of them play important (perhaps vital) roles in our Darwinian fitness.

The information that triggers a whole slew of effects resulting in a changed outcome for the organism could very well have originated in some microorganism inhabiting that individual's gut. Our immune system especially seems to have co-evolved to work symbiotically with many different species of microorganisms.

We require a vast library of CDs to live healthy lives… not just our DNA CD, but all the DNA CDs of the thousands of organisms that a healthy human animal requires (or greatly benefits from having within it). We are not isolated organisms, apart from the many other cohabiting organisms that journey through life living inside our bodies.

> It is – IMO – necessary to understand DNA as [...]

 

I'm not saying that understanding how 750 Meg of DNA information manages to produce a human being will be easy, figuring out how Telmo's 1000 lines of lisp works will not be easy either, but I am saying that's all the information there is.

  John K Clark

 

I agree that it is amazingly compact. We may differ on where we draw the line. I do not see a single human (or other eukaryote) only in terms of its own DNA + epigenetic meta-programming over the DNA base, but also in terms of the ecosystem that exists within.  Both the beneficial and the parasitic species within us hugely affect our lives – as they do with every multi-cellular species we know about.

We are walking, talking ecosystems, with biotic auras as unique as fingerprints (in fact forensic science is beginning to study this as a potential investigative tool).

-Chris

 

Stephen Paul King

unread,
Sep 1, 2014, 5:03:14 PM9/1/14
to everyth...@googlegroups.com
Hi John,

   The chicken or the egg problem is not hard to solve; just figure out how to get something that is a little bit like both and has an evolution path into one or the other... 
   But you're missing my point here. There is an already existing environment of physical stuff and interactions that is required for the expression of the information associated with a genome. That is what makes up a "world" for the genome, and very little if any of it is encoded in that 750 Meg.
   Maybe I should have been a bit more clear in my earlier posts.



Stephen Paul King

unread,
Sep 1, 2014, 5:05:24 PM9/1/14
to everyth...@googlegroups.com
Hi Brent,

  DNA, RNA, whatever. Does information care how it is expressed? Semantics... :-) I seem to be mostly agreeing with Chris' reasoning in his latest post.



Stephen Paul King

unread,
Sep 1, 2014, 5:52:47 PM9/1/14
to everyth...@googlegroups.com
Hi Chris,

   A colleague of mine has found a few possible examples of "self-assembling code", but they are not strings of bits; they are better described as a form of topological object. They are based on a different model of computation:
http://chorasimilarity.wordpress.com/2014/09/01/quines-in-chemlambda/



Chris de Morsella

unread,
Sep 1, 2014, 6:44:10 PM9/1/14
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of John Clark
Sent: Monday, September 01, 2014 10:55 AM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

On Mon, Sep 1, 2014 at 1:03 PM, Stephen Paul King <Step...@provensecure.com> wrote:

 

> Hold it! Where is the information about the physical system required to run that 750 Meg of information contained?

 

DNA contains information on how to make stuff but it doesn't actually do anything; only proteins and RNA do things. DNA by itself just sits there, and a 1000-line lisp program printed out onto 100 pages of paper would just sit there too; both need hardware to run on. To run the 750 meg DNA program a cell needs Mitochondrial RNA, Transfer RNA, Messenger RNA, and several thousand protein enzymes.

Don’t forget to mention the ribosomes.

 

And to answer your question, every single bit of information needed to make all those different types of RNA and all those different types of proteins is contained in that original 750 Meg; it's equivalent to the program containing not only itself but also all the information needed to make the computer it runs on. And if that reminds you a little of the chicken-or-the-egg problem, welcome to the club; it has caused origin-of-life theorists no end of problems. Did DNA, which knows what to do but can't do it, come first, or did proteins, which can do things but don't know what to do, come first?

  John K Clark

 

Can a single complex multi-cellular organism be understood or defined completely without also viewing it in its larger multi-species context?

Within our own selves, we are not alone! And we do not function in life on our own, either. Our living bodies are thriving, diverse communities of microorganisms as well. Without all of that externally stored DNA and all those dynamic interactions with these other co-evolved organisms, would we even be able to survive for long? We certainly cannot live without them and remain in good health.

Without accounting for all the services the beneficial micro-flora and fauna provide us, and without adding this externally reposited DNA into the tally of information needed to produce a healthy human individual, we are looking at just the tip of the genetic and biological iceberg.

-Chris

 

To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.

Stephen Paul King

unread,
Sep 1, 2014, 7:02:41 PM9/1/14
to everyth...@googlegroups.com
Excellent point, Chris. Entities do not exist in isolation from each other... We have to include the "world" or "environment" of an entity when we consider it in our models and reasonings. 
   I wonder how an AGI will develop a model of its world, and what kind of world it would be.



Chris de Morsella

unread,
Sep 1, 2014, 7:54:40 PM9/1/14
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of Stephen Paul King
Sent: Monday, September 01, 2014 4:03 PM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

Excellent point, Chris. Entities do not exist in isolation from each other... We have to include the "world" or "environment" of an entity when we consider it in our models and reasonings. 

   I wonder how an AGI will develop a model of its world, and what kind of world it would be.

I read a study in the science news lately finding that within a period even as brief as 24 hours, the biotic aura of a person or of a familiar group of people will completely take over and colonize the environment of a hotel room. We drag a microscopic jungle with us wherever we go, and in the spots we inhabit, our fingerprint-specific micro-biota sets up shop.

There are more than fifty known species of microorganisms that have evolved to live on human tooth enamel (and similar numbers for dogs, cats, rabbits, crocodiles…). And that is just the enamel surface; we have not even hit the gum line, where there is a veritable population explosion and many more microorganisms.

We live and breathe, bathed in a living biotic planetary soup. Our bodies are like sieves, filled by a still poorly understood micro-biotic ecosystem that interacts with our own body's cells in many ways, both beneficial and parasitic.

The reductionist view that sees an organism in isolation from its environment (including its inner environment) misses the mark; it fails to capture the dynamic, living reality that we are walking, talking ecosystems… each one of us… with veritable jungles living deep inside us. Our lives are shared lives.

Life, or perhaps living systems, involves multiple actors.

-Chris

Chris de Morsella

unread,
Sep 1, 2014, 8:32:12 PM9/1/14
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of Stephen Paul King
Sent: Monday, September 01, 2014 2:53 PM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

Hi Chris,

 

   A colleague of mine has found a few possible examples of "self-assembling code", but they are not strings of bits; they are better described as a form of topological object. They are based on a different model of computation:

http://chorasimilarity.wordpress.com/2014/09/01/quines-in-chemlambda/

 

Software systems are increasingly composed of services (which make use of other services, which call into yet other services, and so on). In the ecosystem of cloud-facing services, those that perform well will tend to rise and become incorporated (often in a late-binding manner, through a process called dependency injection) into assemblies of multiple services and internal logic that are themselves increasingly exposed as yet other services.

Meta-systems, composed of loosely coupled archipelagos of distinct areas of responsibility and roles, linked together in the cloud through dynamic queues, are taking off. Large systems such as Netflix rely heavily on this architecture.

IMO this is an architecture in which a form of digital Darwinian evolution can occur more easily than in traditional application models, with the services being the organisms and the cloud being the ecosystem. As the adoption of dependency injection increases and systems become more late-bound, with the better exemplars of specific services (logging, monitoring and alarming, for example) being injected into live systems (often without even needing to bring them down), best-of-breed pressures will begin to drive the service organisms to evolve into more effective and better options.

Stephen Paul King

unread,
Sep 1, 2014, 8:32:27 PM9/1/14
to everyth...@googlegroups.com
Hi Chris,

   Exactly! I recall David Bohm speaking about "interpenetration" in this sense. My current work is on computational environments, and I am surprised at how little research I can find in this area.

Stephen Paul King

unread,
Sep 1, 2014, 8:38:10 PM9/1/14
to everyth...@googlegroups.com
Hi Chris,

   I agree. What we see in the current development is, literally, evolution - I would not say that it is "Darwinian" per se as it is not smooth or continuous. It looks more like a punctuated equilibrium over many interacting asynchronous systems. What I don't see is an analogue of a genome, such that the Dawkins model is supported.

I just recently found talks on "dependency injection". Please tell me more!

Chris de Morsella

unread,
Sep 1, 2014, 11:08:00 PM9/1/14
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of Stephen Paul King
Sent: Monday, September 01, 2014 5:38 PM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

Hi Chris,

 

   I agree. What we see in the current development is, literally, evolution - I would not say that it is "Darwinian" per se as it is not smooth or continuous. It looks more like a punctuated equilibrium over many interacting asynchronous systems. What I don't see is an analogue of a genome, such that the Dawkins model is supported.

 

I just recently found talks on "dependency injection". Please tell me more!

 

Also known as inversion of control. Essentially it involves programming against interfaces, the interface being the contract. How the service implementing the contract goes about doing so is an internal matter; what matters to the client is that the contract is honored and the given service is performed. Complex systems are assemblages of simpler systems: file systems, data repositories, messaging systems, and so on. These systems can be composited together using interfaces and abstractions; instead of returning a concrete implementation of something, a component can return anything that fulfills the shared contract.

Late-binding dependency injection is a means of supplying, at the late, deployed, run-time phase, a configured set of libraries (perhaps sitting behind other endpoints and so forth) that will implement the required interface and provide the needed service. The consuming program need not worry about how a given dependency will be fulfilled; that is the injected library's responsibility.

Using a combination of programming behind the abstraction of interfaces and IoC containers (frameworks that perform late-binding dependency injection to fulfill service needs), a program can free itself from any particular implementation and smoothly evolve to other, better implementations, as long as its contracts (i.e. its defined, implemented interfaces) can be fulfilled.
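A minimal sketch of the pattern in Python (the class names are invented for illustration; real IoC containers automate the final wiring step from configuration rather than doing it in code):

    from abc import ABC, abstractmethod

    class Logger(ABC):
        # The interface is the contract.
        @abstractmethod
        def log(self, message: str) -> None: ...

    class ConsoleLogger(Logger):
        # One concrete implementation; how it honors the contract is internal.
        def log(self, message: str) -> None:
            print(message)

    class OrderService:
        # The dependency is injected; this class never names a concrete logger.
        def __init__(self, logger: Logger) -> None:
            self._logger = logger

        def place_order(self, item: str) -> None:
            self._logger.log(f"order placed: {item}")

    # The "container" (here just one line of wiring code) decides, as late as
    # deployment or run time, which implementation fulfills the contract.
    service = OrderService(ConsoleLogger())
    service.place_order("widget")

Swap ConsoleLogger for any other Logger implementation and OrderService never changes; that substitutability is what lets the better service "organisms" displace the worse ones.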

Stephen Paul King

unread,
Sep 1, 2014, 11:12:50 PM9/1/14
to everyth...@googlegroups.com
Hi Chris,

   Could we discuss this further outside of the group?

Chris de Morsella

unread,
Sep 2, 2014, 12:03:35 AM9/2/14
to everyth...@googlegroups.com

Sure

Pierz

unread,
Sep 2, 2014, 7:21:59 AM9/2/14
to everyth...@googlegroups.com
I have to say I find the whole thing amusing. Tegmark even suggested we should be spending one percent of GDP trying to research this terrible threat to humanity and wondered why we weren't doing it. Why not? Because, unlike global warming and nuclear weapons, there is absolutely no sign of the threat materializing. It's an absolutely theoretical risk based on a wild extrapolation. To me the whole idea of researching defences against a future robot attack is like building weapons to defend ourselves against aliens. So far, the major threat from computers is their stupidity, not their super-intelligence. It's the risk that they will blindly carry out some mechanical instruction (think of semi-autonomous military drones) without any human judgement. Some of you may know the story of the Russian commander who prevented World War III by overriding protocol when his systems told him the USSR was under missile attack. The computer systems f%^*ed up, he used his judgement and saved the world. The risk of computers will always be their mindless rigidity, not their turning into HAL 9000. Someone on the thread said something about Google face recognition software exhibiting behaviour its programmers didn't understand and they hadn't told it to do. Yeah. My programs do that all the time. It's called a bug. When software reaches a certain level of complexity, you simply lose track of what it's doing. Singularity, shmigularity.

On Tuesday, August 26, 2014 5:05:04 AM UTC+10, Brent wrote:
Bostrom says, "If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. And then maybe wait another generation or two just to make sure that we hadn't overlooked some flaw in our reasoning. And then do it -- and reap immense benefit. Unfortunately, we do not have the ability to pause."

But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to produce a pause.

Brent

On 8/25/2014 10:27 AM

Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor 



Terren Suydam

unread,
Sep 2, 2014, 9:03:16 AM9/2/14
to everyth...@googlegroups.com

One day, a printout of this email will be found among the post-apocalyptic wreckage by one of the few remaining humans, and they will enjoy the first laugh they've had in a year.

Just kidding. I have no idea how to calibrate this threat. I'm pretty skeptical, but some awfully smart people are seriously concerned about it.

Terren

spudb...@aol.com

unread,
Sep 2, 2014, 9:23:23 AM9/2/14
to everyth...@googlegroups.com
We have to have a real success to get people emotionally geared up for AI threat remediation. It's not SKYNET that threatens us, it's ourselves. Plus, the ruling elites, whose politics are now Progressive, are not motivated to deal with such a problem. We don't have international unrest protesting any technical issue. Last month, as predicted, it was the Israel-Gaza war; next month, (maybe) the Ukraine war (no protests here!). To wit, rulers or ruled, we do not have a mind-set for problem resolution, worldwide. It was ever thus, but now it is hurting our species, or starting to.

It is an inherent problem that many thinkers do not wish to involve politics in technical discussions, and I sympathize with that. However, because of human nature, it's difficult to separate the two. This is not Planet Vulcan, and humans are not always rational actors. Radar, the jet engine, the rocket, satellites, nuclear power: all came from war, a very emotional process indeed!


-----Original Message-----
From: Pierz <pie...@gmail.com>
To: everything-list <everyth...@googlegroups.com>
Sent: Tue, Sep 2, 2014 7:22 am
Subject: Re: AI Dooms Us

I have to say I find the whole thing amusing. Tegmark even suggested we should be spending one percent of GDP trying to research this terrible threat to humanity and wondered why we weren't doing it. Why not? Because, unlike global warming and nuclear weapons, there is absolutely no sign of the threat materializing. It's an absolutely theoretical risk based on a wild extrapolation. To me the whole idea of researching defences against a future robot attack is like building weapons to defend ourselves against aliens. So far, the major threat from computers is their stupidity, not their super-intelligence. It's the risk that they will blindly carry out some mechanical instruction (think of semi-autonomous military drones) without any human judgement. Some of you may know the story of the Russian commander who prevented World War III by overriding protocol when his systems told him the USSR was under missile attack. The computer systems f%^*ed up, he used his judgement and saved the world. The risk of computers will always be their mindless rigidity, not their turning into HAL 9000. Someone on the thread said something about Google face recognition software exhibiting behaviour its programmers didn't understand and they hadn't told it to do. Yeah. My programs do that all the time. It's called a bug. When software reaches a certain level of complexity, you simply lose track of what it's doing. Singularity, shmigularity.

On Tuesday, August 26, 2014 5:05:04 AM UTC+10, Brent wrote:
Bostrom says, "If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. And then maybe wait another generation or two just to make sure that we hadn't overlooked some flaw in our reasoning. And then do it -- and reap immense benefit. Unfortunately, we do not have the ability to pause."

But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to produce a pause. Repeating, the fault lies not in AI, but in ourselves, Horatio. 


Brent

On 8/25/2014 10:27 AM

Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor 



John Clark

unread,
Sep 2, 2014, 9:57:47 AM9/2/14
to everyth...@googlegroups.com
On Mon, Sep 1, 2014 at 2:45 PM, 'Chris de Morsella' via Everything List <everyth...@googlegroups.com> wrote:

 

>>  Amazing, isn't it? The elegance of self-assembling processes that can do so much with so little input.


Yes, very amazing! 
 

> I doubt 1000 lines of computer code is a large enough initial instruction set even for a highly self-generating system.

 
My intuition would say that you're right but my intuition would also say that 750 Meg is not enough information to make a human baby, and yet we know for a fact that it is. So I must conclude that our intuition is not very good in matters of this sort. 

>>> The same strand of DNA, depending on the dynamic action of a large number of transcription factors

>> A transcription factor is just a protein that binds to specific DNA sequences. And where did the information come from to know what sequence of amino acids will build that very important protein? From the original 750 Meg of course.
 
> From that original bundle of genetic code + environmental influences.

I don't know what you're talking about. What a protein can do is a function of its shape, and the shape of a transcription factor, just like any other protein, is entirely determined by its amino acid sequence; that is entirely determined by the Messenger RNA sequence, and that is entirely determined by the DNA sequence in the genome. Proteins with the same amino acid sequence always fold up in exactly the same way, at least under all environmental conditions found inside living cells. Yes, if environmental conditions are very extreme (if things are very hot, or the pH is super high or super low) the protein can become denatured and fold up into weird, useless shapes, but such conditions are always fatal to life, so that's irrelevant. Under all lifelike conditions proteins always fold up in the exact same way.
 
> 90% of the living things in a human body DO NOT have human DNA (not by weight, of course, but by census)
 
Our primary interest around here is the brain and what the brain does: mind. And I don't see the relevance bacteria have to that. But if you want to include the genome of E. coli, that's fine; there's plenty of unused space on that CD for it.
 
> The kind of flora and fauna we have in our guts in many ways determines who we are, what we think and what we desire.

So the key to consciousness and the factor that determines our personal identity lies in our poo?  

> It affects our well-being

So would an inflamed toenail, but I don't think an investigation of that affliction will bring much enlightenment on the nature of intelligence or consciousness.  

> I do not see a single human (or other eukaryote) only in terms of its own DNA + epigenetic meta-programming over the DNA base, but also in terms of the ecosystem that exists within.

That is where we differ, and I think that is your fundamental error: you believe you must understand everything before you can understand anything. In other words you do what is becoming increasingly fashionable these days, you reject reductionism in spite of it having worked so spectacularly well during the last 400 years. I don't.

 John K Clark



John Clark

unread,
Sep 2, 2014, 10:24:07 AM9/2/14
to everyth...@googlegroups.com
On Mon, Sep 1, 2014 at 5:03 PM, Stephen Paul King <Step...@provensecure.com> wrote:

> The chicken or the egg problem is not hard to solve; just figure out how to get something that is a little bit like both and has an evolution path into one or the other.

That's why origin of life theorists turned to RNA, it can store information, not as well as DNA but it can store it. And unlike DNA which always has the same shape (a double helix) RNA can fold up into complex shapes so it can catalyze chemical reactions; it can't do it as well as proteins can but it can do it. The RNA world would be far simpler than our current world but it would still be pretty complex, so something even simpler probably came before it. Graham Cairns-Smith has a theory about that involving clay.
 
 > But you're missing my point here. There is an already existing environment of physical stuff and interactions that is required for the expression of the information associated with a genome.

And that existing environment of physical stuff came from the organism's parents, and the information on how to build that environment came from the organism's grandparents and .....

  John K Clark


John Clark

unread,
Sep 2, 2014, 10:35:14 AM9/2/14
to everyth...@googlegroups.com
On Mon, Sep 1, 2014 at 6:43 PM, 'Chris de Morsella' via Everything List <everyth...@googlegroups.com> wrote:

>  Can a single complex multi-cellular organism be understood or defined completely without also viewing it in its larger multi-species context?

Nothing can be understood completely but by looking at things in isolation we can usually understand things pretty well; good thing too because if we had to understand everything before we understood anything we'd be totally ignorant about everything. That's why I like reductionism so much.

  John K Clark

 

Telmo Menezes

unread,
Sep 2, 2014, 11:51:25 AM9/2/14
to everyth...@googlegroups.com
On Mon, Sep 1, 2014 at 6:01 PM, Stephen Paul King <Step...@provensecure.com> wrote:
Hi Telmo,

  Access to resources seems to only allow for reproduction and continuation. For an AGI to "act on the world" it has to be able to use those resources in a manner that implies that it can "sense the world" that it exists within. This seems to be a catch-22 situation. ISTM that if a computation has no means to model itself as existing in a world or the equivalent, how would it ever operate as if it did in the first place?
Blind clock-work....?

Hi Stephen,

So what I'm proposing is in agreement with John Clark's clarification: the 1000 lines of lisp would define an algorithm that could learn whatever it needed from its environment. Consider the possibilities afforded by an environment consisting of some computer connected to the Internet in 2014. Banking systems. Hiring people around the world to do things. 99% of all the books ever written. The wikipedia. Security cameras. Email.

Sure, the AGI would live in an alien world in some sense, but this alien world has a very large and rich interface with our own. I believe the AGI would have enough information and available actions to form a very good theory of self. No?
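To make the "small seed, rich environment" point concrete, here is a deliberately trivial sketch (mine, nothing like a real AGI design): a few lines of generic trial and error that discover a fact about their environment that was never written into the agent's own code:

```python
import random

class Environment:
    """Stub standing in for the real world (for an AGI: the 2014 Internet)."""
    def reward(self, action):
        return -abs(action - 0.7)   # hidden fact: 0.7 is the best action

def seed_agent(env, steps=10000):
    # Generic hill climbing: no domain knowledge, just try-and-keep-what-works.
    best, best_r = 0.0, float("-inf")
    for _ in range(steps):
        candidate = best + random.gauss(0, 0.1)
        r = env.reward(candidate)
        if r > best_r:
            best, best_r = candidate, r
    return best

# Converges near 0.7 -- knowledge extracted from the environment,
# not coded into the agent.
print(seed_agent(Environment()))
```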

Btw, the possibility of the Amoeba DNA acting as a brain is really interesting, thanks for that. For a long time I have had the intuition that the computer we use to host our own mind includes gene expression mechanisms. Why wouldn't it? Neurotransmitters appear to be able to influence gene expression (e.g. http://www.researchgate.net/publication/7146079_Gene_expression_is_differentially_regulated_by_neurotransmitters_in_embryonic_neuronal_cortical_culture/links/0fcfd50a3f2cb43ce7000000) and, conversely, neurotransmitters are ultimately expressed by genes. Evolution doesn't seem to care so much about good software development practices, namely modularity...

I used to be a big believer in biologically-inspired AI efforts. Paradoxically, the more I learn about biology the more I suspect this is not such a good idea. Untangling the mess that evolution created to instantiate AGIs is perhaps incidental to writing an AGI algo.

Cheers
Telmo.

Telmo Menezes

unread,
Sep 2, 2014, 11:58:29 AM9/2/14
to everyth...@googlegroups.com
Hi Pierz,


On Tue, Sep 2, 2014 at 1:21 PM, Pierz <pie...@gmail.com> wrote:
I have to say I find the whole thing amusing. Tegmark even suggested we should be spending one percent of GDP trying to research this terrible threat to humanity and wondered why we weren't doing it. Why not? Because, unlike global warming and nuclear weapons, there is absolutely no sign of the threat materializing. It's an absolutely theoretical risk based on a wild extrapolation. To me the whole idea of researching defences against a future robot attack is like building weapons to defend ourselves against aliens. So far, the major threat from computers is their stupidity, not their super-intelligence. It's the risk that they will blindly carry out some mechanical instruction (think of semi-autonomous military drones) without any human judgement. Some of you may know the story of the Russian commander who prevented World War III by overriding protocol when his systems told him the USSR was under missile attack. The computer systems f%^*ed up, he used his judgement and saved the world. The risk of computers will always be their mindless rigidity, not their turning into HAL 9000. Someone on the thread said something about Google face recognition software exhibiting behaviour its programmers didn't understand and they hadn't told it to do. Yeah. My programs do that all the time. It's called a bug. When software reaches a certain level of complexity, you simply lose track of what it's doing. Singularity, shmigularity.

So I take it you're not a computationalist?

Cheers
Telmo.
 



Bruno Marchal

unread,
Sep 2, 2014, 12:40:59 PM9/2/14
to everyth...@googlegroups.com
On 25 Aug 2014, at 21:04, meekerdb wrote:

Bostrom says, "If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. And then maybe wait another generation or two just to make sure that we hadn't overlooked some flaw in our reasoning. And then do it -- and reap immense benefit. Unfortunately, we do not have the ability to pause."

But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to produce a pause.

I agree. ISIS, Hamas, Muslim Brotherhood, etc. 

It is not Islam, in my current opinion, but I read the Hamas charter, and, well, I have only read one half of the Quran so far, and it is hard to interpret (I do think Hamas is inconsistent with the surah of the poets and the surah of the table), but I have read "Mein Kampf" in its entirety, and the Hamas charter extends Mein Kampf; indeed those guys work hard and patiently to produce a pause, maybe one more millennium of obscurity.

Religions are like drugs: the more you repress them, the more solid they get. The Christian era is already a consequence of the attempt by the Romans to eradicate Christianity from the empire; we know the result. We can't win a war against Islam, but we can still win a war against nazism (a program of eliminating categories of people).


We had better not confuse the pseudo-Muslim nazis with the Muslims. 



On 8/25/2014 10:27 AM

Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor 




The probability is still higher that Human Intelligence May Doom Us more efficaciously.

We never know what we do, in terms of the long run. 

Ah! I told you that we should not have left the Ocean!

...

People want to be "politically correct" and give the same voice to the just and the unjust. 
If you define the just as the one who does not impose his will on others, and the unjust as the one who does, you can understand that there is a polarity where political correctness is wrong and dangerous. Voltaire made the beautiful (but dangerous) statement "I disagree with what you say, but I am willing to die for your right to say it". He should have added the proviso that this holds only so long as you don't tell lies about me or my friends, or use words or weapons to limit my liberty and will. 

Freedom of expression is freedom of hypothesis/theories and duty of explanations and consultations.

Of course, we have lived with the tolerance of authoritative arguments in theology and human science since 523 AD. It is "our" tradition.

If theology had never left the academy, today there would be "Nobel Prizes" or other Fields Medals in theology, and I guess there would be unanimity that anyone killing in the name of the one having no name would be mocked as the very one lacking faith, and blaspheming.

If we remain as bad in human and fundamental science (meaning really: so ignorant of the difficulties and questions, and so able to deny evidence, or to confuse A -> B with B -> A), then it is pretty clear we will be just as bad in machine science, to our detriment. If we don't recognize ourselves in them, they will not recognize us either. Many futures are possible. I would suggest trying the harm reduction path. 

Bruno






Richard Ruquist

unread,
Sep 2, 2014, 1:26:58 PM9/2/14
to everyth...@googlegroups.com
On Tue, Sep 2, 2014 at 12:40 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:

On 25 Aug 2014, at 21:04, meekerdb wrote:

Bostrom says, "If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. And then maybe wait another generation or two just to make sure that we hadn't overlooked some flaw in our reasoning. And then do it -- and reap immense benefit. Unfortunately, we do not have the ability to pause."

But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to produce a pause.

I agree. ISIS, Hamas, Muslim Brotherhood, etc. 

It is not Islam, in my current opinion, but I read the Hamas charter, and, well, I have only read one half of the Quran so far, and it is hard to interpret (I do think Hamas is inconsistent with the surah of the poets and the surah of the table), but I have read "Mein Kampf" in its entirety, and the Hamas charter extends Mein Kampf; indeed those guys work hard and patiently to produce a pause, maybe one more millennium of obscurity.

Religions are like drugs: the more you repress them, the more solid they get. The Christian era is already a consequence of the attempt by the Romans to eradicate Christianity from the empire; we know the result.

Bruno,

According to Harvard scholars the Romans invented Christianity to keep the Jews in check.

meekerdb

unread,
Sep 2, 2014, 1:40:26 PM9/2/14
to everyth...@googlegroups.com
On 9/2/2014 9:40 AM, Bruno Marchal wrote:

On 25 Aug 2014, at 21:04, meekerdb wrote:

Bostrom says, "If humanity had been sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figured out how to do so safely. And then maybe wait another generation or two just to make sure that we hadn't overlooked some flaw in our reasoning. And then do it -- and reap immense benefit. Unfortunately, we do not have the ability to pause."

But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to produce a pause.

I agree. ISIS, Hamas, Muslim Brotherhood, etc. 

It is not Islam, in my current opinion, but I read the Hamas charter, and, well, I have only read one half of the Quran so far, and it is hard to interpret (I do think Hamas is inconsistent with the surah of the poets and the surah of the table), but I have read "Mein Kampf" in its entirety, and the Hamas charter extends Mein Kampf; indeed those guys work hard and patiently to produce a pause, maybe one more millennium of obscurity.

Religions are like drugs: the more you repress them, the more solid they get. The Christian era is already a consequence of the attempt by the Romans to eradicate Christianity from the empire; we know the result.

I don't think that's right.  The Romans were quite tolerant of varied religions. It wasn't Roman repression that caused the Christians to sack the Museum and murder Hypatia in Alexandria.  It was Christian intolerance and a drive to stamp out every vestige of the Greek and Roman paganism - including their science and art.  Christian theologians emphasized faith; curiosity and reason led to sin.   You see the same fanaticism in the Taliban and now ISIS.


We can't win a war against Islam, but we still can win a war against nazism (program of eliminating categories of people).


We should better not confuse the pseudo-muslim nazis from the muslims.

It would be easier to tell them apart if the muslims would unite against the pseudo-muslims.  Since they allegedly greatly outnumber them, they should easily squash ISIS without the need of western intervention.  But perhaps they don't see them as "pseudo"; maybe they see them as fellow sunnis bringing true religion to the yazidis as Mohamed did, by conquest.

Brent

LizR

unread,
Sep 2, 2014, 7:19:18 PM9/2/14
to everyth...@googlegroups.com
On the subject of AI dooming us, at least we have John Mikes' benevolent aliens looking out for us. Unless their aim was to get the AIs ... but why not build one themselves? (Come to think of it why not build US themselves?)


Stephen Paul King

unread,
Sep 2, 2014, 7:31:18 PM9/2/14
to everyth...@googlegroups.com
What if the aliens are AI?


On Tue, Sep 2, 2014 at 7:19 PM, LizR <liz...@gmail.com> wrote:
On the subject of AI dooming us, at least we have John Mikes' benevolent aliens looking out for us. Unless their aim was to get the AIs ... but why not build one themselves? (Come to think of it why not build US themselves?)



LizR

unread,
Sep 2, 2014, 8:03:18 PM9/2/14
to everyth...@googlegroups.com
On 3 September 2014 11:31, Stephen Paul King <Step...@provensecure.com> wrote:
What if the aliens are AI?

In that case they were built by someone else. 

Stephen Paul King

unread,
Sep 2, 2014, 8:43:05 PM9/2/14
to everyth...@googlegroups.com
Hi LizR,

   My point about Aliens being AGI is simple. A sufficiently advanced alien civilization may very likely have had a Singularity of its own in the past and what survived are the machines! 

   We forget that the Turing test is merely a test for an ability to deceive humans.... 

"In that case they were built by someone else. "

   I don't think that AI works like that, now that I am thinking about it. One could take the ID argument seriously and reach that conclusion. I don't think that an AGI can be "designed" any more than you and I were designed.
   OTOH - following the ID concept for a bit longer - intelligent entities can create conditions and environments within which AGI can evolve. I submit that we will be just as unable to fathom the operations of the "mind" of an AGI as we are of each other's minds. This "unfathomability" is an inherent property of a mind. It is the inability to predict its behavior exactly. 

   My "proof" - if I should call it that - is a bit technical. It involves an argument based on the ability of pair of computers to simulate each others behavior and to have the simulations predicted by another computer. If one computer X could exactly simulate another computer Y, then it is easy to show that X could include Y as a sub-algorithm of some kind and thus X would be able to "inspect" arbitrary content of the mind of B. 

   Is this correct so far?





LizR

unread,
Sep 2, 2014, 9:49:01 PM9/2/14
to everyth...@googlegroups.com
On 3 September 2014 12:43, Stephen Paul King <Step...@provensecure.com> wrote:
Hi LizR,

   My point about Aliens being AGI is simple. A sufficiently advanced alien civilization may very likely have had a Singularity of its own in the past and what survived are the machines! 

Agreed. 

   We forget that the Turing test is merely a test for an ability to deceive humans.... 

I hadn't forgotten that, though I'm not sure of the relevance in context. But anyway, to a sufficiently advanced AI a human being might not count as a "person", in that their behaviour is more or less predictable. "It almost fooled me, but it turned out to be just another DNA robot pretending to be sentient..."

"In that case they were built by someone else. "

   I don't think that AI works like that, now that I am thinking about it. One could take the ID argument seriously and reach that conclusion. I don't think that an AGI can be "designed" any more than you and I are not designed.

I said built, not designed. The hardware itself is designed, and built, but the AI that lives inside it is something else again (the same is true of brains, of course - our offspring aren't designed ... despite our best efforts). A good fictional example is HAL in 2001 who was built, as hardware, and then the software was trained - brought up as much as possible like you would a child (hence Dr Chandra and "Daisy, Daisy".)

By definition, AFAIK, an artificial intelligence runs on hardware that was built. That's the distinction that makes it "artificial" - supposedly, though it may turn out to be a non-distinction if we find that circuits can be created that grow dynamically as they learn, like neurons - there are such things, as recently mentioned on this forum. At that point the "built" distinction will go out the window, I imagine.
 
   OTOH, -Following the ID concept for a bit longer - intelligent entities can create conditions and environments within which AGI can evolve. I submit that we will be just as unable to fathom the operations of the "mind" of an AGI as we are of each other's minds. This "unfathomability" is an inherent property of a mind. It is the inability to predict exactly its behavior. 

Agreed. In particular, we can't predict our own behaviour.

   My "proof" - if I should call it that - is a bit technical. It involves an argument based on the ability of pair of computers to simulate each others behavior and to have the simulations predicted by another computer. If one computer X could exactly simulate another computer Y, then it is easy to show that X could include Y as a sub-algorithm of some kind and thus X would be able to "inspect" arbitrary content of the mind of B. 

   Is this correct so far?

Yes, I think it's similar to the halting problem; you can "Godelise" it. We exhibit this ourselves: we can't model our own behaviour to sufficient accuracy to predict it, except approximately. (Some people think this is what we mean by Free Will, though I'd rather not open that can of worms myself.)
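The diagonal trick is easy to sketch (an illustrative toy of mine, not a proof): given any claimed halting predictor, you can build a program that asks the predictor about itself and then does the opposite:

```python
def make_contrarian(predictor):
    # Build a program that defeats `predictor` by doing the opposite of
    # whatever the predictor claims it will do.
    def contrarian():
        if predictor(contrarian):   # predicted to halt?
            while True:             # ...then loop forever
                pass
        # predicted to loop forever? ...then just halt
    return contrarian

def optimist(program):
    return True                     # a (doomed) predictor: "everything halts"

c = make_contrarian(optimist)
print(optimist(c))                  # True -- yet calling c() would never return
```

The same construction works against any predictor you plug in, which is the sense in which no machine can fully "inspect" another machine that can consult it.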

Stephen Paul King

unread,
Sep 2, 2014, 11:09:21 PM9/2/14
to everyth...@googlegroups.com
Hi LizR,

  But here is the thing: the hardware to run AGI already exists! From what I have gathered so far in my research it is a sufficiently complex and dynamic network. The AGI, AFAIK, is a "software" machine. It does not need particular hardware, it just needs the functions that are required to exist and to be sequentiable properly.



LizR

unread,
Sep 2, 2014, 11:23:24 PM9/2/14
to everyth...@googlegroups.com
On 3 September 2014 15:09, Stephen Paul King <Step...@provensecure.com> wrote:
Hi LizR,

  But here is the thing: the hardware to run AGI already exists! From what I have gathered so far in my research it is a sufficiently complex and dynamic network. The AGI, AFAIK, is a "software" machine. It does not need particular hardware, it just needs the functions that are required to exist and to be sequentiable properly.

That may well be true. But of course the hardware has to be connected up correctly, there has to be enough storage connected, and it has to have the right software. The last is the trickiest part, I imagine (unless "Dial F for Frankenstein" is correct and you merely have to connect enough stuff together...)

PS I'm not sure what sequentiable means, by the way. Wiktionary isn't being any help.

Stephen Paul King

unread,
Sep 2, 2014, 11:43:46 PM9/2/14
to everyth...@googlegroups.com
Right, the connections have to be correct, but there is a weird trick here. Recall how an encrypted message can appear to be random noise? There is a form of computation that would look like noise if one were only looking at some subset of the network that is running a distributed computation. If that distributed computation is an AGI, we would never know it is there and neither would it know we are here.
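A toy demonstration of the "looks like noise" point (a one-time pad, my own illustration, nothing specific to AGI):

```python
import os

message = b"some internal state of a distributed computation"
pad = os.urandom(len(message))                     # random key, used once
ciphertext = bytes(m ^ p for m, p in zip(message, pad))

print(ciphertext.hex())                            # statistically flat: looks like noise
plaintext = bytes(c ^ p for c, p in zip(ciphertext, pad))
assert plaintext == message                        # structure visible only with the pad
```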



Stephen Paul King

unread,
Sep 2, 2014, 11:45:59 PM9/2/14
to everyth...@googlegroups.com
Hi LizR,

  Sequentiable means that the correct sequence of operations occurs. Information is sensitive to orderings after all. 101001010010 is not the same number as 010010100101, even though both contain exactly the same bits.
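A two-line check, just for illustration:

```python
# Same multiset of bits, different order, different number.
print(int("101001010010", 2))  # 2642
print(int("010010100101", 2))  # 1189
```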



LizR

unread,
Sep 2, 2014, 11:57:23 PM9/2/14
to everyth...@googlegroups.com
On 3 September 2014 15:43, Stephen Paul King <Step...@provensecure.com> wrote:
Right, the connections have to be correct, but there is a weird trick here. Recall how an encrypted message can appear to be random noise? There is a form of computation that would look like noise if one where only looking at some subset of the network that is running a distributed computation. If that distributed computation is an AGI, we would never know it is there and neither would it know we are here.

I suspect it would look like noise as far as I am concerned anyway. I'm not sure if this gets us any closer to an AGI, however. Presumably you still need to set it up correctly to start with (or something does....eg a long period of learning). Or are you saying that there may be AIs around already unaware of our existence and vice versa?

LizR

unread,
Sep 2, 2014, 11:59:26 PM9/2/14
to everyth...@googlegroups.com
On 3 September 2014 15:45, Stephen Paul King <Step...@provensecure.com> wrote:
Hi LizR,

  Sequentiable means that the correct sequence of operations occurs. Information is sensitive to orderings after all. 101001010010 is not the same number as 00100110001....

Is it a real word? (Personally I'd go for "correctly ordered" or "in the right order" rather than "sequentiable properly".)

Stephen Paul King

unread,
Sep 3, 2014, 12:15:28 AM9/3/14
to everyth...@googlegroups.com
Hi LizR,

   Yes, I am saying that there may be AIs around already, unaware of our existence and vice versa! Cultures, languages, religions, etc. all have behaviors that we would associate with entities that are to some degree "self-aware", in that there are "self-replication" behaviors associated with them - see Dawkins's The Extended Phenotype. Humans are quite capable of playing, for a sufficiently expressive language, the role that silicon hardware plays for software...



Stephen Paul King

unread,
Sep 3, 2014, 12:16:17 AM9/3/14
to everyth...@googlegroups.com
Modulo decryption....



LizR

unread,
Sep 3, 2014, 12:31:33 AM9/3/14
to everyth...@googlegroups.com
On 3 September 2014 16:15, Stephen Paul King <Step...@provensecure.com> wrote:
Hi LizR,

   Yes, I am saying that  there may be AIs around already unaware of our existence and vice versa! Cultures, languages, religions, etc. all have the behaviors that we would associate with entities that are to some degree "self-aware" in that there are "self-replication" behaviors associated - See Dawkin's The extended Phenotype - Humans are quite capable of becoming members of a sufficiently expressive language as silicon hardware...

While not disagreeing with this general idea (it's rather intriguing) I'm not sure "self-aware" is entailed by "self-replicating" ?! "The Extended Phenotype", iirc (it's a long time since I read it, which was around when it first came out) posits that the environment of an organism gets entrained by its genes, effectively - or to put it another way, that the body doesn't end at the skin (or bark, etc). That seems kind of the inverse of the "AGI hypothesis" ?

LizR

unread,
Sep 3, 2014, 12:32:37 AM9/3/14
to everyth...@googlegroups.com
PS I have to go in a minute to meet my other half to attend this...

Chris de Morsella

unread,
Sep 3, 2014, 1:35:18 AM9/3/14
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of John Clark
Sent: Tuesday, September 02, 2014 6:58 AM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

On Mon, Sep 1, 2014 at 2:45 PM, 'Chris de Morsella' via Everything List <everyth...@googlegroups.com> wrote:

 

>>  Amazing isn’t it. The elegance of self-assembling processes that can do so much with so little input.

 

Yes, very amazing! 

 

> I doubt 1000 lines of computer code is a large enough initial instruction set even for a highly self-generating system.

 

My intuition would say that you're right but my intuition would also say that 750 Meg is not enough information to make a human baby, and yet we know for a fact that it is. So I must conclude that our intuition is not very good in matters of this sort. 

 

Point taken, but then a human baby is a plastic template for the individual to emerge in; a fully formed adult human only develops after decades of experience and learning. All of that living experience and cultural learning is not contained inside that DNA bundle.

I disagree with your conclusion that epigenetic effects are of minor consequence. For example a fetus developing in a low stress nourishing environment will during embryogenesis – or so I am arguing – develop into a different human being than that same child in an alternate universe where it is exposed to high stress and low nourishment. The very rapid unfolding sequence of DNA choreographed events that occurs during embryogenesis will unfold in a different manner in each instance.

Time will tell… eventually (and perhaps soon) I believe we will crack this code operating over the code. That, I believe, provides life with a key ability to respond - in a genetic manner - to rapidly mutating environments. It is an extra mechanism that works hand in hand with DNA, switching expression on and off… selecting from alternate expressions.

 

 

>>> The same strand of DNA, depending on the dynamic action of the large number of transcription factors

 

>> A transcription factors is just a protein that binds to specific DNA sequences. And where did the information come from to know what sequence of amino acids will build that very important protein? From the original 750 Meg of course.

 

> From that original bundle of genetic code + environmental influences.

 

>>I don't know what you're talking about.

 

I am talking about epigenetic, environmentally driven processes acting to control - to a degree - which regions of DNA get expressed, and how these regions finally get transcribed into mRNA, in what is a highly dynamic process occurring within the cell’s nucleus, with much snipping and splicing taking place on the underlying original copy of the DNA being transcribed. Out of this process emerges the resulting mRNA that is ultimately sent out to the ribosomes (where further front-line editing may be taking place).

 

What a protein can do is a function of it's shape, and the shape of a transcription factor, just like any other protein, is entirely determined by its amino acid sequence, and that is entirely determined by the Messenger RNA sequence, and that is entirely determined by the DNA sequence in the genome. Proteins with the same amino acid sequence always fold up in exactly the same way, at least under all environmental conditions found inside living cells. Yes if environmental conditions are very extreme, if things are very hot or the pH is super high or super low  the protein can become denatured and fold up into weird useless shapes, but such conditions are always fatal to life so it's irrelevant. Under all lifelike conditions proteins always fold up in the exact same way.   

 

> 90% of the living things in a human body DO NOT have human DNA not by weight of course but by census

 

Our primary interest around here is the brain and what the brain does, mind; and I don't see the relevance bacteria have to that. But if you want to include the genome of E coli  that's fine, there's plenty of unused space on that CD for it.

 

There are a heck of a lot more species inhabiting our guts than just one or two species of E coli. They perform many services, including - it is being discovered - working very closely with our own immune systems to warn their human host of the presence of pathogens.

 

> The kind of flora and fauna we have in our guts in many ways determines who we are, what we think and what we desire.

 

So the key to consciousness and the factor that determines our personal identity lies in our poo?  

If you want to characterize your digestive process by what is defecated out as waste I think you must not have a good grasp of what the digestive process is all about. It is our primary interface with the external world. It is the interface where we absorb the external world into our bodies internal world. It even has its own tiny frontline brain – the enteric nervous system.

You think that the cravings for sugary foods, and the depression that often occurs in sugar-addicted people when they do not feed their habit, are purely human in origin, and that the candida yeast that such persons are often infested with has absolutely no role in this? There are numerous amazing animal studies that prove that parasite species can control the behavior of their hosts - even to the extent of making their hosts engage in behavior that is designed to get them eaten, as certain parasitic species do to insects they have infected (in the Amazon I believe), making the host insects climb to the exposed tops of the leaves where they become easy prey for birds of the species that is the next host in that parasite's life cycle.

Just because a thought pops up in the brain does not mean that the mind is the executive actor at the root of the desire or emotion. Parasites have evolved very sophisticated chemical signaling that they use to influence their hosts.

> It affects our well-being

So would an inflamed toenail, but I don't think a investigation of that affliction will bring much enlightenment on the nature of intelligence or consciousness.  

 

Apples and oranges. The internal chemical signaling that parasites engage in to harness a host's immune system or affect its mental state is the evolved mechanism by which these parasitic species have learned to control their hosts. An inflamed toenail is a wound, and the pain is the organism's own nervous system response.

As I said apples and oranges.

> I do not see a single human (or other eukaryote) only in terms of its own DNA + epigenetic meta-programming over the DNA base, but also in terms of the ecosystem that exists within.

That is where we differ and I think that is your fundamental error, you believe you must understand everything before you can understand anything, in other words you do what is becoming increasingly fashionable these days, you reject reductionism in spite of it having worked so spectacularly well during the last 400 years. I don't.

John K Clark

If you need to saw a piece of lumber, don't use a hammer. Just because one tool - reductionism - has had spectacular success in increasing our understanding (and I am not denying that it has) does not mean that it is always the appropriate tool for the job.

In understanding the workings of complex multi-actor systems, the reductionist approach has not produced spectacular results. A systems approach is required as well - to complement the understanding of the parts with a different kind of understanding of the dynamic working of the whole.

Surely this is important for something like understanding consciousness and self-aware intelligence.

 -Chris

 

 


meekerdb

unread,
Sep 3, 2014, 7:54:45 AM9/3/14
to everyth...@googlegroups.com
On 9/2/2014 10:35 PM, 'Chris de Morsella' via Everything List wrote:

 

 


I disagree with your conclusion that epigenetic effects are of minor consequence. For example a fetus developing in a low stress nourishing environment will during embryogenesis – or so I am arguing – develop into a different human being than that same child in an alternate universe where it is exposed to high stress and low nourishment. The very rapid unfolding sequence of DNA choreographed events that occurs during embryogenesis will unfold in a different manner in each instance.


The rapidity of unfolding isn't really relevant to the information required.  Just think what you're saying from an information standpoint.  At the most simplistic level, stress (whatever that is for a fetus?) and nourishment are two bits.  Realistically it's maybe a half dozen bits.  For those few bits to make a significant difference in the baby can only happen if those significant differences are already encoded in the DNA and the epigenetic bits are just "picking out" one from another.  Given the vagueness of things like "stress" it's hard to see how they can be factors distinguishable from random effects like cosmic rays.  Are there more epigenetic effects in Denver and Mexico City?


Time will tell… eventually (and perhaps soon) I believe we will crack this code operating over the code. That I believe provides life with a key ability to respond – in a genetic manner to rapidly mutating environments. It is an extra mechanism that works hand I hand with DNA  switching expression on and off… selecting from alternate exressions.


Implying there are alternate expressions already coded in that 750Mb.


 

 


> 90% of the living things in a human body DO NOT have human DNA not by weight of course but by census

 

Our primary interest around here is the brain and what the brain does, mind; and I don't see the relevance bacteria have to that. But if you want to include the genome of E coli  that's fine, there's plenty of unused space on that CD for it.

 

There are a heck of lot more species inhabiting our guts than just one or two species of E coli. They perform many services, including it is being discovered working very closely with our own immune systems to warn their human host of the presence of pathogens.


Yet "bubble boys" that are born with dysfunctional immune systems and are kept in sterile environments seem perfectly human.


 


> I do not see a single human (or other eukaryote) only in terms of its own DNA + epigenetic meta-programming over the DNA base, but also in terms of the ecosystem that exists within.

That is where we differ and I think that is your fundamental error, you believe you must understand everything before you can understand anything, in other words you do what is becoming increasingly fashionable these days, you reject reductionism in spite of it having worked so spectacularly well during the last 400 years. I don't.

John K Clark

If you need to saw a piece of lumber don’t use a hammer. Just because one tool – reductionism has had spectacular success in increasing our understanding (and I am not denying that it has) does not mean that it is always the appropriate tool to use for the job.

In understanding the workings of complex multi-actor systems reductionist approach has not produced spectacular results. A systems approach is required as well – to complement the understanding of the parts with a different kind of understanding of the dynamic working of the whole.


A systems approach is not a holistic approach.  It's still reduction plus synthesis.  I'd say holism has never produced any results.  What people sometimes cite as holistic is really abstracting the parts of a system in a different way, e.g. thermodynamics just worked with controllable variables and neglected molecules.



Surely this is important for something like understanding consciousness and self-aware intelligence.


Yet, on Bruno's theory, consciousness is a binary attribute, all-or-nothing.  Intelligence has degrees and is no doubt relative to environment and context, but not consciousness. (Although I disagree with Bruno on this, I think it may be a semantic difference, since we seem to agree that there are qualitatively different kinds of consciousness.)

Brent

Chris de Morsella

unread,
Sep 3, 2014, 12:32:48 PM9/3/14
to everyth...@googlegroups.com

 

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On Behalf Of meekerdb
Sent: Wednesday, September 03, 2014 4:54 AM
To: everyth...@googlegroups.com
Subject: Re: AI Dooms Us

 

On 9/2/2014 10:35 PM, 'Chris de Morsella' via Everything List wrote:

 

 


I disagree with your conclusion that epigenetic effects are of minor consequence. For example a fetus developing in a low stress nourishing environment will during embryogenesis – or so I am arguing – develop into a different human being than that same child in an alternate universe where it is exposed to high stress and low nourishment. The very rapid unfolding sequence of DNA choreographed events that occurs during embryogenesis will unfold in a different manner in each instance.


The rapidity of unfolding isn't really relevant to the information required. 

 

It is not the rapidity – though embryogenesis is certainly rapid. What I am pointing out is that epigenetic mechanisms alter which DNA actually gets expressed. Instead of sequence A, sequence B becomes expressed.

 

Just think what you're saying from an information standpoint.  At the most simplistic level, stress (whatever that is for a fetus?) and nourishment are two bits.  Realistically it's maybe a half dozen bits.  For those few bits to make a significant difference in the baby can only happen if those significant differences are already encoded in the DNA and the epigenetic bits are just "picking out" one from another. 

 

Exactly – but one can look at the underlying original information in the DNA as also being, amongst other things, a dictionary of coding stretches. Epigenetics (and the OS of the DNA itself, for that matter, which until recently was dismissively called junk DNA) would function in a similar manner, selecting already existing words from a dictionary in order to assemble them into meaningful sentences and paragraphs. Is the information content of the resulting final output reducible to the word definitions contained in the dictionary?

It is all in the sequencing. No doubt – and it is clearly so – an organism's DNA itself contains this kind of coding that switches sequences on and off, but it also seems to be true that epigenetic factors – that is, factors external to the DNA itself – can also control the expressed sequence.
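A toy sketch of that dictionary picture (my own illustration; the "exons" are made up): the stored words are fixed, but which subset gets spliced together determines the product:

```python
# Same stored "dictionary" of exon words; the information that distinguishes
# the two products lives in the choice of splice pattern, not in the exons.
exons = ["MK", "TA", "LV", "GS", "PR"]           # hypothetical exon "words"
variant_a = (0, 1, 3)                            # one splice pattern
variant_b = (0, 2, 3, 4)                         # another pattern, same gene

print("".join(exons[i] for i in variant_a))      # MKTAGS
print("".join(exons[i] for i in variant_b))      # MKLVGSPR
```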

 

Given the vagueness of things like "stress" it's hard to see how they can be factors distinguishable from random effects like cosmic rays.  Are there more epigenetic effects in Denver and Mexico City?

Given the subtlety with which they operate, epigenetic effects are for the most part discerned using statistical means to find correlations between expressed phenotypes and common conditions. For example, the recent study that showed there are inherited epigenetic effects on the grandchildren of grandmothers who smoked tobacco while their children were developing in their wombs.



Time will tell… eventually (and perhaps soon) I believe we will crack this code operating over the code. That, I believe, provides life with a key ability to respond - in a genetic manner - to rapidly mutating environments. It is an extra mechanism that works hand in hand with DNA, switching expression on and off… selecting from alternate expressions.


Implying there are alternate expressions already coded in that 750Mb.


 

 


> 90% of the living things in a human body DO NOT have human DNA not by weight of course but by census

 

Our primary interest around here is the brain and what the brain does, mind; and I don't see the relevance bacteria have to that. But if you want to include the genome of E coli  that's fine, there's plenty of unused space on that CD for it.

 

There are a heck of lot more species inhabiting our guts than just one or two species of E coli. They perform many services, including it is being discovered working very closely with our own immune systems to warn their human host of the presence of pathogens.


Yet "bubble boys" that are born with dysfunctional immune systems and are kept in sterile environments seem perfectly human.

 

And die soon after they leave their bubble. Survival of the fittest does not work in binary terms – yes/no. Say a randomly distributed group A has just a 1% survival advantage over another group B; they begin with a 50/50 distribution in the population, but over many generations selective pressure will favor group A and it will begin to spread and become predominant.

A human with a healthy biotic meta-immune system in their gut has a higher chance of surviving than a human with a ravaged and parasite-infested gut microbial flora & fauna. Candida leads to cancer (statistically speaking).
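A quick simulation of that compounding (toy numbers of my own: relative fitnesses 1.01 vs 1.00, starting 50/50):

```python
# Standard replicator update: a 1% fitness edge, compounded per generation.
fA, wA, wB = 0.5, 1.01, 1.00
for gen in range(1501):
    if gen % 500 == 0:
        print(f"generation {gen}: frequency of A = {fA:.3f}")
    fA = fA * wA / (fA * wA + (1 - fA) * wB)
# generation 0: 0.500 -> generation 500: ~0.993 -> A has effectively taken over
```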



> I see a single human (or other eukaryote) not only in terms of its own DNA plus the epigenetic meta-programming over that DNA base, but also in terms of the ecosystem that exists within it.

That is where we differ, and I think that is your fundamental error: you believe you must understand everything before you can understand anything. In other words, you do what is becoming increasingly fashionable these days: you reject reductionism, in spite of its having worked so spectacularly well during the last 400 years. I don't.

John K Clark

If you need to saw a piece of lumber, don't use a hammer. Just because one tool – reductionism – has had spectacular success in increasing our understanding (and I am not denying that it has) does not mean that it is always the appropriate tool for the job.

In understanding the workings of complex multi-actor systems, the reductionist approach has not produced spectacular results. A systems approach is required as well, to complement the understanding of the parts with a different kind of understanding of the dynamic working of the whole.


A systems approach is not a holistic approach. It's still reduction plus synthesis. I'd say holism has never produced any results. What people sometimes cite as holistic is really abstracting the parts of a system in a different way; e.g., thermodynamics just worked with controllable variables and neglected molecules.

 

I am an advocate of using the best tool for the job. A systems approach does make use of reductionism, but it adds something different, a different optic perhaps. It looks at the dynamically functioning whole (while also relying on the best understanding of the fundamental parts). So yes, I agree with you, the systems approach is not antithetical to the reductionist approach. It is a different focus and viewpoint… a stepping back to study the dynamic processes as they interact within the system being studied.



Surely this is important for something like understanding consciousness and self-aware intelligence.


Yet, on Bruno's theory, consciousness is a binary attribute, all-or-nothing. Intelligence has degrees and is no doubt relative to environment and context, but consciousness is not. (Although I disagree with Bruno on this, I think it may be a semantic difference, since we seem to agree that there are qualitatively different kinds of consciousness.)

Brent

 

Consciousness seems more like a fuzzy emergent phenomenon to me, with no hard binary line between not being present and being present. For example, for the sake of argument, suppose it is absent in a single ant; when one steps back to look at the ant colony as a whole, however, and at how adaptive and intelligent its behavior can be, the colony as an entity seems rather more conscious.

Consciousness seems very much to be an emergent phenomenon – IMO.

John Clark

unread,
Sep 3, 2014, 12:38:10 PM9/3/14
to everyth...@googlegroups.com
On Wed, Sep 3, 2014 at 7:54 AM, meekerdb <meek...@verizon.net> wrote:

> on Bruno's theory, consciousness is a binary attribute, all-or-nothing. Intelligence has degrees

If that is true (and I'm not saying it is) then we can immediately conclude that Bruno's theory is wrong because we know for a fact from personal experience that consciousness does come in degrees just as intelligence does.

  John K Clark 


Quentin Anciaux

unread,
Sep 3, 2014, 12:41:58 PM9/3/14
to everyth...@googlegroups.com
2014-09-03 18:38 GMT+02:00 John Clark <johnk...@gmail.com>:
On Wed, Sep 3, 2014 at 7:54 AM, meekerdb <meek...@verizon.net> wrote:

> on Bruno's theory, consciousness is a binary attribute, all-or-nothing. Intelligence has degrees

If that is true (and I'm not saying it is) then we can immediately conclude that Bruno's theory is wrong because we know for a fact

And we know for a fact you're just plain stupid and never read Bruno whatsoever... Because he said there is a degree... but also that either you're conscious (with a degree) or you're not conscious; it is either 0 or > 0 (at any degree). There is no "unconsciousness" with a degree...

But coming from Liar Clark, this is normal behavior.

Quentin
 
from personal experience that consciousness does come in degrees just as intelligence does.

  John K Clark 





--
All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

John Clark

unread,
Sep 3, 2014, 1:56:19 PM9/3/14
to everyth...@googlegroups.com
On Wed, Sep 3, 2014  'Chris de Morsella' via Everything List <everyth...@googlegroups.com> wrote:

>a human baby is a plastic template for the individual to emerge in


And those 1000 lines of Lisp are a plastic template for the Jupiter Brain to emerge in.


> All of that living experience and cultural learning is not contained inside that DNA bundle.


Obviously. And everything the adult Jupiter Brain knows wasn't contained in those 1000 lines of Lisp, but that was the seed that got things going.


> I disagree with your conclusion that epigenetic effects are of minor consequence.


Are you saying that random environmental conditions in the womb are necessary to build a brain? I don't see how that could work, but a machine has just as much access to the environment as a fetus does. Or are you saying the conditions are not random at all, but planned and deliberate? Well, I don't believe in God.


> The very rapid unfolding sequence of DNA-choreographed events that occurs during embryogenesis will unfold in a different manner in each instance.


DNA doesn't fold or unfold; the most it does is during reproduction, when it temporarily turns from a double helix into 2 single helices, which soon turn into 2 double helices. It's protein that's the champion folder, and the same sequence of amino acids always folds into the same shape under all lifelike conditions; it's a good thing too, because if the way proteins fold wasn't consistent and reliable, life would be impossible.

It is an extra mechanism that works hand in hand with DNA, switching expression on and off… selecting from alternate expressions.


Yes, but computer code has been doing that, switching subroutines on and off, for more than 60 years.  Tell me something of fundamental importance that meat can do but silicon can't.
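For what it's worth, the analogy is easy to make concrete. Here is a minimal Python sketch (all names invented for illustration) of condition-dependent dispatch, the software counterpart of a transcription factor switching an expression program on or off in response to the cell's environment:

# Minimal sketch: condition-dependent dispatch, analogous to a transcription
# factor enabling one expression program and silencing another.
def stress_response():
    return "express heat-shock proteins"

def normal_metabolism():
    return "express housekeeping proteins"

def regulate(environment):
    # The "transcription factor": read the environment, switch the routine.
    if environment.get("temperature_c", 37.0) > 40.0:
        return stress_response()
    return normal_metabolism()

print(regulate({"temperature_c": 42.0}))  # express heat-shock proteins
print(regulate({"temperature_c": 37.0}))  # express housekeeping proteins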
 
> how these regions finally get transcribed into mRNA, in what is a highly dynamic process

And when a Lisp program gets run it's a highly dynamic process. Tell me something of fundamental importance that meat can do but silicon can't.
 

>> So the key to consciousness and the factor that determines our personal identity lies in our poo?  

> If you want to characterize your digestive process by what is defecated out as waste, I think you must not have a good grasp of what the digestive process is all about. It is our primary interface with the external world, the interface where we absorb the external world into our body's internal world. It even has its own tiny frontline brain – the enteric nervous system.

So I guess the answer to my question is yes.

>>> It affects our well-being

>> So would an inflamed toenail, but I don't think an investigation of that affliction will bring much enlightenment on the nature of intelligence or consciousness.

 

> Apples and oranges

are both trees.

> Just because one tool – reductionism – has had spectacular success in increasing our understanding (and I am not denying that it has) does not mean that it is always the appropriate tool for the job.

Reductionism may not always work but holism NEVER works.

  John K Clark