[thanks for throwing this dog a bone.]
The trouble with A.I. is that it isn't one thing. It's a direction before it's
any one thing. It's a methodology seeking an application. It's the idea
of the magic bullet, a plaything for the smartest of smart programmers
to earn their merit badge as A.I. programmers. Without a clear idea, A.I.
becomes whatever we believe it to be.
We have one idea of A.I. as a machine taught to learn. Even when we
aren't clear on the rules, these machines, with enough examples, will
create their own rules, or find rules we never knew existed. We marvel
at this ability, as if it were a form of alchemy. Having created these
A.I., or 'smart fit' machines, it seems fine to just let them loose
on the data.
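The point can be made concrete with a minimal sketch. The data and function names below are hypothetical, and real systems fit far richer models, but the principle is the same: no rule is written in by a programmer; one is induced from examples.

```python
# A minimal sketch of a machine "finding its own rule" from examples.
# Hypothetical data and names; real learners fit far richer models.

def learn_threshold(examples):
    """Pick the cutoff that best separates the two labels.

    examples: list of (value, label) pairs, label is 0 or 1.
    Returns the threshold with the fewest misclassified examples.
    """
    candidates = sorted(v for v, _ in examples)
    best_t, best_errors = None, len(examples) + 1
    for t in candidates:
        errors = sum(
            1 for v, label in examples
            if (v >= t) != (label == 1)  # predict 1 when value >= t
        )
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Hypothetical data: low values labelled 0, high values labelled 1.
data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (9, 1)]
print(learn_threshold(data))  # prints 6 -- a cutoff no one wrote down
```

The machine "discovers" the cutoff of 6 from the examples alone, which is the whole of the alchemy.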
Artificial intelligence: the euphemism is to be taken at face value,
for as we accept this idea of intelligence, we cultivate an
entirely different order of intelligence. Instead of intelligence as
reasoning, as rationality, or what in former times was called wisdom, we
accept the computationally sound as intelligent.
It seems to me that bound up in this idea of intelligence is an allegiance
to the past. Since it is based on a model taken from past data, it
must mean a duty to the status quo. This is A.I. bound to a model,
with humanity expected to adhere to that model. Humanity is perceived
as erring whenever it does not follow the prudent fit. Its idea of
humanity then becomes one of predicting your next move, your
intentions, your will.
It's what is otherwise called 'static intelligence', or, to use Daniel
Kahneman's terms, an example of 'System 1' thinking. It's based
on associations. It is derivative, pandering to our own cognitive
flaws, e.g., a bias to believe and to confirm. It is always
of the data.
Of course there is that older computing maxim, born of the 'data
processing' age, which still applies even here to A.I.: Garbage in,
Garbage out.
All this may suit those seeking mechanized solutions, but you have
to wonder at the ultimate cost where the great mass of humanity is
'processed' by machines, and there are no independent minds to
question the summary claims of those machines. Those summaries are
read as meta.
Or worse, where we limit our perception of the person to what the
machine was taught to encode as the person. Or worse yet, where
we cheat the data, massaging it ahead of the machine's
processing, since these machines have no idea of falsehood
and won't question what they receive. Once again:
Garbage in, Garbage out.
Or worse, where the person is repeatedly processed as so much data,
with no one in the loop to note the effects of this. This too
is A.I.: issuing orders that determine the actions of its human
subordinates, who now have no way to ask the question.
As ever, those who invent this future on a fancy won't be around
to see its effects. They'll postulate the theory, for others
to observe. You'll find division and perception based on status: those
on one side of these systems, the creators, who won't know what
happens next. The other side of these Silicon Valley dreams is
never to be discussed.
To put it frankly, we don't have enough examples of these
systems failing, and so there is no will to be critical, let alone
cynical. It's like religion, where there is too much invested
in the order which comes from the belief for us to doubt it.
And so we look the other way, or pray we won't succumb to the
same fate. What we don't understand, we accept.
A.I. suits the idealized. It suits the perfect, or the sanitized.
It will serve us with the best fit, as it was designed to, as if
we all existed as discrete projections, part of the same continuum
from which these model-based programs take their fit. These A.I.
programs will serve us the best fit even when that fit is
no match at all. We absolutely demand this, and these systems
won't ever say no to that demand.
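A minimal sketch of "best fit, even when no fit", with hypothetical data: a nearest-match lookup always returns *something*, because it has no notion of "none of these apply".

```python
# A best-fit system never says no: it returns the nearest profile,
# however distant. Data and names here are hypothetical.

def best_fit(profiles, person):
    """Return the profile closest to the person, no matter how far."""
    return min(profiles, key=lambda p: abs(p - person))

profiles = [10, 20, 30]          # the model's known categories
print(best_fit(profiles, 21))    # prints 20 -- a reasonable match
print(best_fit(profiles, 500))   # prints 30 -- no match at all, served anyway
```

Nothing in the function can express "this person fits none of my categories"; the demand for an answer is built into its return type.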
The worst aspect of this A.I. future is the lack of accountability.
Humans act knowing they are accountable, but what can be said of these
A.I.? There is no one person to blame or to take responsibility when
an A.I. makes a mistake.
As cogs in the machine, cogs responding to what A.I. decrees, few
will have a sense of the bigger picture. Cause and effect are no
longer relevant; the two are kept separate. This idea of
intelligence has no sense of time or timing, order or
sequence; instead one finds A.I. actions based on correlations
or outcome bias. Ironically, these A.I. systems won't see the part
played by their own presence. Cause and effect.
A.I., like GM, can mean whatever we want it to mean. Good or bad.
All that matters is the methodology. The method comes before its function.