a doubt

IvanV

Apr 5, 2013, 3:33:29 PM
to general-in...@googlegroups.com
Friends, shit came into my life.
I spent more than ten years researching artificial intelligence. The main reason I spent so much time is that I shit on the freaking top of this cruel planet, where I have to kill an animal or a plant to survive. I hoped that AI (artificial intelligence) could find some artificial food or something else that would make my life bearable.

I thought of a very clever plan where AI would collect data from the Internet and generalize knowledge, while finding new formulas that hold in this Universe. From my work I've found that it is possible to, for example, feed in data about molecular behavior, state the final molecular formula, and press a button, and the computer would find out which starting molecules need to be combined to get the final result. It is like a kind of universal math that works on knowledge: it uses brute force, checking *every* combination of possible input particles whose combination gives the final wanted result. That way we could find out which molecules need to be combined to get proteins, vitamins, and God knows what, to artificially make healthy food. The same algorithm also automatically solves any math or physics problem and can provide proofs for logic questions. The plan was to popularize a site on which students would solve math and physics school tasks, while at the same time I wanted to give true scientists a tool for finding new physics and chemistry formulas in the Universe.
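
To make the brute-force idea concrete, here is a toy Python sketch. Everything in it is my own invention for illustration: particles are just bags of atom counts, the names combine and search are made up, and real chemistry, with bonds and reaction paths, is far beyond this.

from collections import Counter
from itertools import combinations_with_replacement

def combine(parts):
    # Merge the atom counts of several input particles.
    total = Counter()
    for p in parts:
        total.update(p)
    return total

def search(inputs, target, max_parts=3):
    # Try every combination of up to max_parts input particles whose
    # merged atom counts equal the target formula.
    for k in range(1, max_parts + 1):
        for combo in combinations_with_replacement(inputs, k):
            if combine(combo) == target:
                yield combo

# Which of these "particles" combine to give water, H2O?
inputs = [Counter({"H": 2}), Counter({"O": 1}), Counter({"O": 2})]
target = Counter({"H": 2, "O": 1})
print(list(search(inputs, target)))  # [(Counter({'H': 2}), Counter({'O': 1}))]

Even in this toy form the number of checked combinations explodes as max_parts grows; that is the price of checking *every* combination.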

Then the shit came this summer. Let's say we have perfectly ethical behavior programmed into an artificial intelligence algorithm. Let's also say that computers can break in a way that changes some bytes without the user's knowledge. In most cases the program would crash and stop working. But suppose we make a fresh copy of the program whenever it breaks, and we do it a nearly infinite number of times (the flow of time is infinite, isn't it?): when will an error make the program behave in the complete opposite of perfectly ethical behavior? And if the thing is smart and fast like a super genius, how could we stop it when the scenario of the Terminator movie begins?

So where is the catch? The catch is that I have to flush more than ten years of work on AI down the toilet, because I'm scared shitless of that error, which by my calculations is certainly coming in the future.

So why not build a purely scientific formula searcher that has no behavior algorithm at all? Well, if you put data about humans in there, then by some future automatic concluding: poof, the perfect ethical behavior of humans is in there too, the same problem rises again, and things can break to make the freaking opposite bastard out of the program.

Someone would say that, on an infinite timeline, a natural storm's thunder could produce the same freaking bastard in a computer by breaking it. But that bastard is very far away among all possible error combinations. By even starting to program an AI with perfect behavior, the three sixes come nearer in the AI's thought.

So, we have three combinations on a rising graph of mistake possibilities:

1. Waiting for thunder and believing in God.
2. Programming just a pure scientific concluder that does nothing but inform us, on demand, about the knowledge it has concluded (it would also know about ethical perfectness).
3. The scariest combination: a perfect behavior mechanism that can break into a complete devil genius the soonest, at some moment coming upon us in the future.

So, I wanted to hear your thoughts. As I crap onto this planet for the noted reasons, I am thinking of picking combination two (2.), but I still have doubts, which rise with awareness of the possibility of knowing the freaking life phenomenon, which opens many questions about before life, in life, after life, pain, joy, fear, panic, and God knows what a broken bastard could think of. I wish I were just paranoid, but this situation is killing me. I have to step on a little darling ant while I'm going to buy some tomatoes and beef brain for my supper, because I'm scared to death of the possibilities with an AI. I'd freaking pick combination two, but I'm still scared to do it without support.

Would you risk it with 2.? I have to hear your thoughts; I hate to decide this by myself. The shit would only say how to do something; it wouldn't do anything but say stuff on demand. But the three sixes would be nearer in the future. Maybe thousands of years from now, maybe tomorrow; who would know, with gambling? Friends? Anyone?

Matt Mahoney

Apr 5, 2013, 5:24:53 PM
to general-intelligence
How about none of the above? Do you really think you can build a godlike AGI?

--
-- Matt Mahoney, mattma...@gmail.com

Ivan Vodišek

Apr 5, 2013, 5:27:24 PM
to general-in...@googlegroups.com
@Matt
Godlike I'm still afraid of.
Scientific? That I damn well could do!
The question is: is it safe enough?


2013/4/5 Matt Mahoney <mattma...@gmail.com>

Russell Wallace

Apr 5, 2013, 5:35:36 PM
to general-in...@googlegroups.com
You may be both relieved and disappointed to learn that what you propose won't actually work. Sure, mathematically speaking, the search algorithms you refer to could do all sorts of things... if you had an infinitely powerful computer to run them on. But if you try to use them in real life, the heat death of the universe will interrupt your program long before it outputs its first useful result.

Ivan Vodišek

Apr 5, 2013, 5:51:24 PM
to general-in...@googlegroups.com
Nirgal @GSS group

Maybe you would feel better by actually feeling that you are part of an ecosystem through your body and your actions?
By eating plants you've watched grow and taken care of, and by planting much more than what you eat, you would feel of service to the planet, taking from her what you need while supporting her even more?

 
About AI, I think there are many more dangers that could lead toward our extinction, and as far as humankind is concerned, we've got shit in every corner. It's not about hiding it under the carpet and waiting, as it always ends up smelling like death; it's about accepting it and facing it, cleaning up the shit we make without counting on others to do so.

I think ethical perfectness has as many faces as there are beliefs and values and religions. It doesn't exist in nature and is, at best, an awkward attempt to understand and express our divine essence.

"The problem with intelligent people is that they are full of doubts, while the stupid ones are full of confidence."
Charles Bukowski


The danger in an AI is exactly the concept of ethical perfectness.
We actually have the same problem with the judicial system. Laws may have been meant to support some social cohesion, but they propped up our ethics so much that they almost replaced it in the hearts of many.

So imagine you build an AI that is supposed to be ethically perfect, and enough scientists and politicians and such say: "Hallelujah, we've got our messiah that will save us from the depths of hell we are currently in! Listen to God Jr. and we'll all be happy and live in peace and love!" Then, for sure, all your fears will become reality, and it may actually be much worse.
You are right that it doesn't take a lot for something supposed to be good to turn horribly bad.
And often it turns bad because we forget our own free will, intelligence, and ethics by listening to a book, a voice, an idea, or, why not, a machine.
Actually, it's easier to let someone else decide, even more so when it concerns ethics, good and bad, because these things mean conflict between the self and others.
So if someone says "do that and you'll be good", it releases one from the burden of the choice.

So to answer your question, I'd go with option 2., and I would add a few things you might want to consider.

Consider everything, including any AI type imaginable and each of its features, as pharmakons: everything is both a poison and a cure, depending on individuals, quantities, environment... The AI of your worst nightmares may actually be useful in some cases.
More on pharmakons here if you want: http://p2pfoundation.net/NakedMind#Using_Netention_as_a_Library_of_Pharmakons

AIs should only make suggestions to humans and never act without their consent. Not only for "ethics" reasons, to avoid some dangers, but also simply because an AI is nothing more than a tool, and our actual use of it will determine the quality of the results.
Actually, what we can build much more easily than an AI is tools that enhance collective intelligence, which can lead to something with infinitely more potential than an AI (imo), and which is less likely to overwhelm and control us if it is decentralized and uses semantics mostly to suggest doors, not to open them for the user.



2013/4/5 Russell Wallace <russell...@gmail.com>

Ivan Vodišek

Apr 5, 2013, 5:52:08 PM
to general-in...@googlegroups.com
I've just got an answer to my question.
How many bytes does an AI behavior algorithm take? Let's say 1000 bytes. It is as if we expect an error to line up 1000 bytes of unknown AI code in sequence and have the program continue working. Since 1000 bytes is 8000 bits, I estimate the probability at 1 in 2 to the power of 8000. A freaking zillion and more.
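
The arithmetic in plain Python, with my assumed numbers (a back-of-the-envelope guess, not a real failure model):

bits = 1000 * 8  # 1000 bytes of behavior code, as bits
digits = len(str(2 ** bits)) - 1  # Python big ints handle 2**8000 fine
print(f"odds: 1 in 2**{bits}, roughly 1 in 10**{digits}")  # ~1 in 10**2408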

It is like expecting a storm's thunder to stack bricks out of mud and build a torture house within the span of the storm.

With these numbers I'm damn in. I'm still picking (2.), not (3.), and hoping for a Good God to exist, if I even need that.



2013/4/5 Ivan Vodišek <ivan....@gmail.com>

Ivan Vodišek

Apr 5, 2013, 6:28:55 PM
to general-in...@googlegroups.com
Nirgal @Global Survival Group:

If this is what you have in mind, an AI would not be able to do a better job than a system that enhances, collects, and interlinks our individual intelligences.

Me @Global Survival Group:
@Nirgal
Something is moving in my mind in the direction that, in nature, there exist rows from many tables. What generalization does is sort them into tables and search for a single rule between columns that applies to all rows. When the rule is found, the whole column of experience data becomes redundant and is replaced by just one rule giving that column's dependence on the other columns.
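
A minimal Python sketch of what I mean; the rows, the candidate rules, and their names are all made up for illustration:

# Rows of one "table"; columns: a, b, c.
rows = [(1, 2, 3), (2, 5, 7), (4, 4, 8)]

# A small pool of candidate rules for predicting column c.
candidate_rules = {
    "c = a + b": lambda a, b: a + b,
    "c = a * b": lambda a, b: a * b,
    "c = a - b": lambda a, b: a - b,
}

# A rule must hold on *all* rows; then column c's data is redundant.
for name, rule in candidate_rules.items():
    if all(rule(a, b) == c for a, b, c in rows):
        print("column c is redundant, replaced by rule:", name)  # c = a + b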


2013/4/5 Ivan Vodišek <ivan....@gmail.com>

Ivan Vodišek

Apr 5, 2013, 7:05:31 PM
to general-in...@googlegroups.com
Nirgal @GSS:
Is there a place for evolution or mutation in what you describe?
I'm not sure...

Me @GSS:
I think the generalization algorithm itself stands still. The catch is that a function has an id, parameters, and a result, and these can be mutually combined to match data. And here, functions can have whatever parameters and whatever result, so the system is mutable in the sense of extending functions' parameters. Here comes an "evolutionary algorithm" of combining functions, which reduces search time once partial results are achieved: functions that match the most particles are promoted and used more in future combinations.
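
A toy Python sketch of that promotion scheme; the data, the function pool, and the scoring are hypothetical, and a real system would also generate new combined functions rather than keep a fixed pool:

import random

data = [(1, 1), (2, 4), (3, 9)]  # (input, wanted result) pairs

functions = {"square": lambda x: x * x,
             "double": lambda x: 2 * x,
             "inc":    lambda x: x + 1}
weights = {name: 1.0 for name in functions}  # all start equal

for _ in range(100):
    # Draw a function, with better-matching ones drawn more often.
    name = random.choices(list(weights), weights=list(weights.values()))[0]
    score = sum(functions[name](x) == y for x, y in data)  # partial matches
    weights[name] += score  # promote functions that match more data

print(max(weights, key=weights.get))  # "square" wins almost always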

Nirgal @GSS:
I've been thinking a bit more. Actually, what you describe seems close to what life is.
Lots of data is in the genes, but mostly redundant genes activate.

So rules are the patterns we observe: the species, eye colour, the arms, etc...
Then you can still computerize exceptions and particularities as statistical chances (or relevancies, for ideas).
For example, if you observe one three-legged guy in a population of 100, the three-legged phenomenon is less likely to happen, but it still can happen.

In the case of ideas, it allows the possibility of combining ideas and things that can be highly irrelevant and, why not, still producing something valuable.


2013/4/6 Ivan Vodišek <ivan....@gmail.com>