On Tuesday, 22 November 2022 at 15:25:53 UTC-4, HRM Resident wrote:
> A conversational algorithm feeds on labeled conversations, and can be
> trained to converse on any topic, from philosophy to the meaning of life
> to customer support situations or, even if we want, to start flirting
> and end up moaning in ecstasy.
So it would pass the Turing test. :)
> Which is why it makes no sense to suggest that an algorithm "has
> developed consciousness," that it "fears being turned off," that it
> "feels lonely," or that it "cares about humanity", and is basically
> indulging in the anthropomorphization of a technology that is neither
> conscious nor will be any time soon. It's like the RCA dog thinking his
> master is inside the gramophone.
Well, maybe, maybe not. I've been anthropomorphizing you for some
time. Am I wrong to do this? If I disassemble you, will I find anything more than
protons, neutrons and electrons? It follows that these elementary components
can be assembled to be *you*. And you claim to be conscious, creative
and caring, yet you are but a machine. Riddle me that.
> A computer is only executing algorithms. Believing it's akin to
> your mind is a reflection of some fear, or ignorance of the subject, or
> perhaps, wishful thinking that we are now living in a science fiction
Computers as we know them, yes, but there is more than one way
to make computing devices.
> That is not a good thing, because it means we may well be wary of
> other technologies with huge potential, or believe that technologies
> that are far from developed are already here. Consciousness is an
> extremely complex mechanism, and machine learning researchers are doomed
> to fail.
Maybe it is, and maybe it is fairly simple. One recent idea is that
consciousness is the memory of what the brain did a half second ago,
based on the observation that the brain is assembling the instructions
for an act before we "will" the act.
> Stop comparing the brain with a supposedly conscious machine, and
> instead stay focused on the application of statistics capable of making
> a machine perform advanced automation applied to more and more tasks.
Oh, I get it. I've been misinterpreting your "I don't know" chant. It should
be "I don't know and I don't want to know and we should quit trying
to know". I don't know why you apparently fear finding out about
things we don't already know.
> The former is nothing more than a manifestation of some unfounded
> fear, pointless questions and science-fiction stories. The latter, on
> the other hand, is the key to the next leap forward in productivity and,
> possibly, to a revolution in the way we will understand work in the
> future. Which would be no bad thing, and certainly doesn't require
> machines with consciousness.
So fundamental knowledge is not worth pursuing; only applied
research is worthwhile. Are you a LWA?
> HRM Resident