
Constrained reasoning and the AI alignment problem


pataphor

Oct 24, 2023, 3:26:22 AM
The ongoing siege and genocide in Gaza kind of threw me off IRC, at
least for a while. It's not the first time this has happened, but
anyway, to escape indolence I'm back here again, to share my somewhat
lonely perspective.

Recently I came across a video of a speech by one of the subsidizers of
artificial intelligence research, and of its associated problems. He
supposedly graduated with good grades from a philosophy program, a
thing which always piques my interest, especially when such people also
manage to become successful in the material world.

Unfortunately, after watching the beginning of the video, and later the
part where he distances himself from the thing he helped create, I
could not escape the impression that he was barely able to communicate,
so my focus shifted to trying to understand what could have happened to
a supposedly brilliant philosopher to make him end up like this.

Was he high? Did he end up in some sycophantic environment entirely
lacking the necessary interrogation that once pushed him towards the
higher levels, reducing him to some idiosyncratic savant, spewing
dog whistles that only dedicated fans could still pretend to understand?

Finally an important realization came to me: if this was the
environment AI was created in, wouldn't it be reasonable to look for
the constraints that the subsidized would have to operate under?

If one assumes that AI itself can do either good or bad, but realizes
that the mega corporations currently driving the effort are bad, and
yet that this is where all one's money comes from, wouldn't one's
argumentation be limited to advocating for an AI pause, at least for
people who wouldn't want to end up at the foot of the ladder?

This idea seemed to line up very well with my earlier, more intuitive
doubts about the way the questions about AI alignment were posed, a bit
like "do you want an AI pause? yes/no".

That framing flies past any relevant plan for how one would want one's
AI future to look, and specifically with which hopefully trustworthy
partners, after thoroughly relating it to the current state of the
world and the supposed mitigation of ongoing genocides, global warming,
imperial ideologies and nuclear brinkmanship, and, not least, ideas
about how to preserve general purpose computing, privacy and free
speech.

It's kind of ironic that these things were now vaguely addressed by a
libertarian billionaire who was himself partly responsible for creating
the problems, by only listening to people who validated, or at least
did not criticize, his progressively hypocritical stance.
