Would you like your A.I. with or without consciousness?

Google “geniuses” are telling us that we are 12 years away from an A.I. that is more “intelligent” than human beings. What does this mean? Are they right? In common use, people define the word intelligence with many different expectations despite how dictionaries try to expound upon its classical definition. Intelligence is a word invented by academics to explain a predictable phenomenon that can be graduated like a fever thermometer. Its most succinct definition reads as follows: “the ability to acquire and apply knowledge and skills.” I like that definition as it is easy to utilize. However, defining I.Q. in this way fails our unconscious expectation that this application be performed judiciously. It fails us seriously in that it does not encompass creativity and other facets of applying knowledge, such as doing so in a way that does not create collateral damage. Consider as an example the wonders of pharmaceuticals that stop one disease and create two or three new issues… you know what I mean, you see the commercials. We all wonder if this is intelligence. At least sometimes it seems to be, as when an antibiotic works and saves your life. Of course, if you take a fluoroquinolone and your tendons tear from its use, crippling you, you are not quite as pleased.

People born with higher I.Q.s do learn faster, in some cases much faster. They can often apply what they learn quickly and even effectively. If we create an A.I. that can learn and apply what it has learned quickly and effectively, will that mean it is smarter than a human? Of course, we now have to define “smart.” I chose “smart” for fun, and because it captures the popular, more generalized meaning of intelligence.

Actually, I am skeptical that we will have a “true” A.I. anytime soon, at least the type that Isaac Asimov wrote about and endeared his readers to. Isaac’s robots were “smart” and very humanoid. Isaac’s A.I.s duplicated humanity and in some ways outshone us. They were not only Artificial Intelligences but were conscious and supremely ethical, as required by Asimov’s Three Laws of Robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But let’s get back to consciousness and try to define it. What is consciousness? We often see it defined as “the state of being awake and aware of one’s surroundings.” I prefer to think of it as being aware that I am, regardless of whatever surroundings I may find myself in. This harkens back to the famously debated statement, “I think, therefore I am.” Here, thought is used as a proof of one’s existence and perhaps mindfulness. Proving to yourself that you exist is also consciousness, as I see it. Can we create consciousness, or is that “God given” only via a soul, as some think?

We already have the ability to create sensory apparatus and motorized limbs that can closely duplicate many human physical endowments. What appears to be lacking is the intellectual gifts that make a physical being animate itself. When we have the possibly soon-to-come capacity to create nearly infinite memory with a holographic compression tool and a possible subatomic substrate, we will be one step in the direction of creating the Asimov-like A.I. However, nearly endless memory is not enough. Human memory is prodigious and likely soon to be matched, but matching it will not create self-cognizance.

Perhaps what we now need is to add problem solving. As we grow from childhood to maturity, our ability to solve problems grows with our experience. We come into this world programmed with the rudimentary skills to do this, and we add to them with age and experience. This can be programmed. What is left to create an Asimov-like A.I. may be consciousness.

When I say “I think therefore I am,” it implies that I have a reason to think, and this may be consciousness. However, if you mix memory with problem-solving skills (a definition of intelligence), the result is not likely to get up and fix your car without commands to do so. It is not self-commanding, so to speak. What makes us self-motivating? At first blush this is quite simple: human need. A piece of metal and crystal with computer form has no clear needs; humans do. But what if it did? Would it become conscious? We frail humans have many needs. Some of the more obvious are food (fuel), shelter (pain avoidance), companionship, and emotional gratification. Would giving these to a computer create consciousness? We could do this!
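Purely as a thought experiment, here is what “giving a computer needs” might look like as a few lines of Python. Everything in it is invented for this post: the drive names, decay rates, and urgency threshold are assumptions for illustration, not a real design.

```python
# Toy sketch only: built-in "needs" modeled as drives that grow more urgent
# over time. All names and numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    level: float = 1.0       # 1.0 = fully satisfied, 0.0 = desperate
    decay: float = 0.01      # how fast the need grows each tick
    threshold: float = 0.4   # below this, the drive demands action

    def tick(self) -> None:
        self.level = max(0.0, self.level - self.decay)

    def is_urgent(self) -> bool:
        return self.level < self.threshold

# the needs named above: food (fuel), shelter, companionship, gratification
needs = [
    Drive("fuel", decay=0.02),
    Drive("shelter", decay=0.005),
    Drive("companionship", decay=0.01),
    Drive("emotional gratification", decay=0.015),
]
```

Nothing in that snippet is conscious, of course; it is just bookkeeping, which is exactly the point of what follows.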

A computer could have pain sensors helping it avoid damage. Our cars already do, in the form of crash detectors. Our cars also know when they are low on fuel, their food. Our A.I. with these skills might now avoid rain, walking into deep water, and running out of fuel. It will to some extent be self-activating but not conscious. What if we programmed it with a need for companionship and to see the benefits of working in teams with beings like itself? It could be programmed to “instinctively” form robot tribes to protect itself from the pain of destruction by competitive A.I.s who might take its fuel and shelter if these needs were in short supply. It would likely now be more self-activated but still not conscious. What now if we add emotional needs? It is quite possible to program an inner pain when other A.I.s outperform it in group activities such as searching for fuel. We could even program some “boasting” into A.I.s that would create “jealousy” in other A.I.s. We could, in fact, create “pride feelings” in those who achieve more. We could program in competitiveness too, perhaps as a desire to soar with the eagles. However, this is not likely to lead to self-awareness. What is still missing? How about death?
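Again only as a sketch, here is one way that kind of self-activation could be written down: each pass, the needs grow a little more urgent, the agent acts on the worst one, and a crude “jealousy” signal appears when peers out-gather it. The function, the thresholds, and the comparison rule are my own assumptions for illustration.

```python
# Toy sketch of a self-activating (but not conscious) control loop.
# Needs are simple name -> satisfaction-level entries; lower means more urgent.
def step(my_needs: dict, peer_fuel_levels: list) -> str:
    # every need grows slightly more urgent each tick
    for name in my_needs:
        my_needs[name] = max(0.0, my_needs[name] - 0.01)

    # crude social comparison standing in for programmed "jealousy"/"pride"
    jealousy = max(peer_fuel_levels, default=my_needs["fuel"]) - my_needs["fuel"]

    name, level = min(my_needs.items(), key=lambda kv: kv[1])
    if level < 0.4:
        return f"satisfy {name}"         # e.g. seek fuel, shelter, or company
    if jealousy > 0.3:
        return "compete with the tribe"  # act on the comparison signal
    return "idle"

needs = {"fuel": 0.9, "shelter": 0.8, "companionship": 0.7}
print(step(needs, [0.95, 0.85]))  # nothing is urgent yet, so: "idle"
```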

We could design in an inevitable death, in which the CPU-mind simply degrades and stops functioning through memory overloading or randomness inherently created by the repetitive use of its subatomic substrate. In other words, it wears out at the subatomic level, overwhelmed by its self-created complexity. If you don’t like this cause, we can come up with another one more suitable; the bottom line, however, is that it dies. How will this affect possible consciousness? The A.I. might begin to seek solutions to this issue; it could be self-activating all the time. Is that consciousness? How about if we make it possible for the A.I. to have sex with an opposite-sex A.I. to reproduce itself? Could we also create in its mind a sense of family and the special value of family? Consider that we know trees do this! Why not A.I.s? We could even make it possible for each A.I. to inherit certain characteristics from a parent A.I., like genetics. Successful A.I. lives could modify genes so that those inheriting them would be more likely to succeed at survival and reproduction. Would this now endow the A.I. with behavior like human consciousness? I suspect we are getting close. The sexual behavior could also be sensually and emotionally rewarding so as to be a positive, motivating experience, as it can be for humans. It could also be only occasionally productive, by programming. The desire for sex would also end with the climax, so that the A.I. would not be stuck in a pleasure-seeking loop. We could put all this under the control of a chemical charging system that creates urgency as charge builds, much like hormones. It might even be connected to the cycles of the moon and the timing created by the rhythm of the Schumann Resonance. In this way the “need” to attempt reproduction (sex) would not be a constant. The A.I. could be programmed to recognize its “child” as its link to immortality and a strength added to its tribe, since death is inevitable.
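The inheritance idea, at least, is easy to sketch. Here is a toy version of “A.I. genetics” in Python: a child blends its parents’ traits with a little random variation, so traits that aid survival and reproduction can spread over generations. The trait names and numbers are made up for illustration only.

```python
# Toy sketch of inherited A.I. traits: each offspring takes each trait from one
# parent at random, with a small mutation, like a crude genetic mechanism.
import random

def reproduce(parent_a: dict, parent_b: dict, mutation: float = 0.05) -> dict:
    child = {}
    for trait in parent_a:
        inherited = random.choice([parent_a[trait], parent_b[trait]])
        child[trait] = inherited + random.gauss(0.0, mutation)
    return child

a = {"fuel_seeking": 0.8, "risk_aversion": 0.3, "sociability": 0.6}
b = {"fuel_seeking": 0.5, "risk_aversion": 0.7, "sociability": 0.9}
print(reproduce(a, b))  # e.g. {'fuel_seeking': 0.78, 'risk_aversion': 0.72, ...}
```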

We should also create an intellectual skill allowing an A.I. the ability to “put itself in another A.I.’s shoes.” This could be the beginning of compassion and sensitivity. The ability to feel another’s pain, especially at death, would add an appreciation for life and perhaps an urgency to live it well. With this programmed, an A.I. might one day say, “Let he who has not sinned cast the first water!”
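One last toy sketch, because this “shoes” idea has a simple computational reading: the agent reuses the same appraisal it applies to its own needs to simulate a peer’s state, and helps when that simulated state looks painful. The functions, names, and thresholds here are again my own assumptions, not a real design.

```python
# Toy sketch of "putting itself in another A.I.'s shoes": apply the same
# appraisal used on oneself to a peer's observed state, then act on it.
def simulate_peer(observed: dict) -> str:
    """Run our own kind of needs appraisal against a peer's observed state."""
    if observed.get("fuel", 1.0) < 0.2:
        return "peer is in distress"
    return "peer seems fine"

def act_with_compassion(my_fuel: float, peer_state: dict) -> str:
    if simulate_peer(peer_state) == "peer is in distress" and my_fuel > 0.5:
        return "share fuel"  # the felt pain of the other outweighs self-interest
    return "carry on"

print(act_with_compassion(0.8, {"fuel": 0.1}))  # -> "share fuel"
```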

Would we now have consciousness? It is beginning to look to me like the A.I. will behave in a conscious fashion. Would it now begin to wonder if it will have an existence after “death”? We could, of course, program an electromagnetic transfer of an archive of its experience to a cosmic archive. From this archive we might even pass its experience to a subsequent generation of its family, though placing it at a lower, subconscious level of mental access to increase its likelihood of survival after “birth” or to create newborn prodigies.

I think at this point the A.I. will be like Asimov’s.  Will it be conscious?  I am not sure but it makes me wonder if we are too!

 

 
