AI has traditionally
been represented in Hollywood as an existential threat to humanity.
Predictably, the counter-trend chose to depict AIs as homo humane – moral beings dedicated to humanity’s welfare, or at the very least, choosing not to interfere. In doing so, however, we are merely projecting human psychological quirks onto their minds; “minds” that are fundamentally non-human.
To be sure, nobody
really knows how consciousness arises (or even how to define it, strictly
speaking). Increasing neural complexity along the evolutionary path, at one
point, led to rudimentary self-awareness. Instead of running on a series of involuntary reflexes, the animal could now take cognizance of its surroundings and actively use its neural prowess to manipulate them, avoiding danger and obtaining food – the stepping stone towards intelligence. The trait shortened the turnaround time natural selection needed to produce critical survival behaviour that had suddenly become indispensable. What would have taken generations to program into the animal for it to mindlessly execute could now be partially improvised by the animal itself, making consciousness a chief candidate for optimization through natural selection. Origins aside, it is clear that our modern consciousness is an emergent phenomenon, the result of millions of years of evolutionary tweaking.
Similarly, our psychological impulses, too, have been shaped by innumerable and largely untraceable evolutionary factors. Given that we cannot possibly hope to recreate all the factors that led to our current psychological state, there is absolutely no reason to assume that an AI which somehow becomes self-aware will conform to any of our expectations of behaviour. Why, for example, would it want to create something, follow orders, or have even an iota of curiosity? Why would it even want to survive? A consciousness brought about by artificial means, possibly lacking any survival instinct, cannot “obviously” be expected to have any desires at all.
All of this assumes that the self-awareness we are used to is a distinct something that can be reached through multiple paths, rather than something bound to a very specific evolutionary
process. Even if that were the case, would we even call something that doesn’t want to survive conscious? Doesn’t our very idea of consciousness hinge upon our perception of free will? If that is an illusion, might not consciousness be one too? Are all our efforts, then, directed at making machines delusional? And how long will a “true” AI take to realize this?