Elusive Trope

    My Robot's Therapy Sessions Seem To Be Helping It

    Recently in The Independent, Stephen Hawking, with co-authors Stuart Russell, Max Tegmark, and Frank Wilczek, warned of the dangers of Artificial Intelligence:

    ….it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history….
    …..
    Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible...

    One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

    This is obviously not the first warning.  Here is one that caught my eye, from an article by Kathleen Melymuka in Computerworld, November 11, 2002:

    Any sci-fi buff knows that when computers become self-aware, they ultimately destroy their creators. From 2001: A Space Odyssey to Terminator, the message is clear: The only good self-aware machine is an unplugged one. We may soon find out whether that's true. ... But what about HAL 9000 and the other fictional computers that have run amok? "In any kind of technology there are risks," [Ron] Brachman acknowledges. That's why DARPA [Defense Advanced Research Project Agency] is reaching out to neurologists, psychologists - even philosophers - as well as computer scientists. "We're not stumbling down some blind alley," he says. "We're very cognizant of these issues."

        - Good Morning, Dave... The Defense Department is working on a self-aware computer.

    There are many threads that can come from the rather large topic of Artificial Intelligence and self-aware computers.  What I want to focus on in this blog, however, is whether what we would refer to as emotions or feelings is an inevitable emergent feature in the development of technological entities possessing artificial intelligence that is on par with or surpasses that of human beings.

    The fears of Hawking and his co-authors, I believe, involve technological entities that do not possess these emotional states, a bias hidden in this phrase: “that perform even more advanced computations than the arrangements of particles in human brains.”

    Computations are definitely part of what all intelligent creatures, biological and artificial, do. But the term “compute” implies simply a calculation involving numbers or quantities, determining something by mathematical or logical methods, and such methods will only take one so far in dealing with the world.  So when I refer to Artificial Intelligence, I am referring to technological entities whose intelligence is on par with that of humans.
     
    Aaron Sloman, in “What is Artificial Intelligence?”, put it this way:

    It has proved much easier to design and implement machines which do the sorts of things which we previously thought required special intelligence, like the ability to play chess, do algebra, or perform calculations. These sorts of task fit more readily into a computer's mechanisms for manipulating large numbers of precisely defined symbols very rapidly, according to precisely defined rules.

    Engaging with life using the intelligence possessed by humans is not the same as playing a game of chess.  So this blog is not about some massive computer system that deals with the stock market using precisely defined rules, or drones that seek out targets based on precisely defined parameters of what a target is.

    The artificial intelligence I am discussing is based not only on the principles by which knowledge is acquired and used, something all computer programs already do to one degree or another, but also, as Sloman puts it, on:

    the principles by which…goals are generated and achieved, information is communicated, collaboration is achieved, concepts are formed, languages are developed.….[The human being and many living organisms] are also driven or controlled by it: e.g. made happy by praise, made sad by bad news, made afraid by noises in the dark, made jealous by seeing the behaviour or possessions of others, and so on.   

    So AI…is about natural information processing systems as well as artificial systems, and not just about how they perceive learn and think, but also about what they want and how they feel.

    A critical point in the discussion of AI is highlighted in what he writes later:

    Designing machines with such capabilities has proved far more difficult than many of the early researchers expected. In part that is because many tasks which at first seemed simple turned out to have hidden depths.
    ….
     We now understand much better that many commonplace human and animal abilities (e.g. a squirrel leaping among branches of a tree, a bird building its nest, a child listening to a story) involve a very deep kind of intelligence and important and subtle kinds of knowledge, which our theories do not yet accommodate. Likewise animal intelligence includes things like desires, enjoyment, suffering, and various forms of consciousness, all of which play an important role in their information processing, but which we hardly understand as yet.

    The term “computations” does not capture the kinds of richness and flexibility required for technological entities to achieve intelligence on par with, let alone surpassing, that of humans.

    And my assertion would be that such a technological entity would have to have an operating system with these “hidden depths,” which, for better or for worse, inevitably lead to such things as emotions and self-awareness.

    Roger Schank makes some points about what it means for a technological entity to have AI that I feel help flesh out this assertion:

    Simple point number 1: A smart computer would have to be able to learn.

    This seems like an obvious idea. How smart can you be if every experience seems brand new? Each experience should make you smarter no? If that is the case then any intelligent entity must be capable of learning from its own experiences right?

    Simple point number 2:

    A smart computer would need to actually have experiences. This seems obvious too and follows from simple point number 1. Unfortunately, this one isn't so easy. There are two reasons it isn't so easy. The first is that real experiences are complex, and the typical experience that today's computers might have is pretty narrow.
    ….
    Could there be computer experiences in some future time? Sure. What would they look like? They would have to look a lot like human experiences. That is, the computer would have to have some goal it was pursuing and some interactions caused by that goal that caused it to modify what it was up to in mid-course and think about a new strategy to achieve that goal when it encountered obstacles to the plans it had generated to achieve that goal….Real experiences, ones that one can learn from, involve complex social interactions in a physical space, all of which is being processed by the intelligent entities involved. Dogs can do this to some extent. No computer can do it today. Tomorrow maybe.

    The problem here is with the goal. Why would a computer have a goal it was pursuing? Why do humans have goals they are pursuing? They might be hungry or horny or in need of a job, and that would cause goals to be generated, but none of this fits computers….we need to understand that mistakes come from complex goals not trivially achieved. We learn from the mistakes we make when the goal we have failed at satisfying is important to us and we choose to spend some time thinking about what to do better next time. To put this another way, learning depends upon failure and failure depends upon having had a goal one cares about achieving and that one is willing to spend time thinking about how to achieve next time using another plan. Two year olds do this when they realize saying "cookie" works better than saying "wah" when they want a cookie.

    The second part of the experience point is that one must know one has had an experience and know the consequences of that experience with respect to one's goals in order to even think about improving. In other words, a computer that thinks would be conscious of what had happened to it, or would be able to think it was conscious of what had happened to it which may not be the same thing.

     

    These notions of goals and being able to have “real experiences” (i.e., ones on par with human experiences and not just data stored somewhere on a machine) go to the heart of what I am talking about when dealing with AI.  Hawking and his co-authors speak of machines out-maneuvering politicians and developing weapons presumably no human told them to design.  Why would they have such a goal?  What would be driving the behavior and the cognitive process behind that behavior?
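
    To make Schank's two points a bit more concrete, here is a minimal sketch, in Python, of the kind of loop he is describing: an agent with a goal it cares about, plans that can fail, and a reflection step that only fires when the failed goal matters enough to spend time thinking about. Every name and number in it is invented for this illustration; it is not anyone's actual architecture.

        # A toy version of Schank's loop: a goal one cares about, plans that
        # can fail, and reflection only when the failure matters.
        import random

        # What the world actually rewards (hidden from the agent).
        ACTUAL_SUCCESS = {"wah": 0.2, "say 'cookie'": 0.8}

        class ExperiencingAgent:
            def __init__(self, goal, importance):
                self.goal = goal                 # e.g. "get a cookie"
                self.importance = importance     # how much the agent cares (0..1)
                self.belief = {"wah": 0.9, "say 'cookie'": 0.1}   # initially wrong

            def live_one_episode(self):
                plan = max(self.belief, key=self.belief.get)   # most trusted plan
                succeeded = random.random() < ACTUAL_SUCCESS[plan]
                if not succeeded and self.importance > 0.5:
                    # Schank: learning happens when a goal we care about fails
                    # and we spend time thinking about doing better next time.
                    self.belief[plan] -= 0.2
                return succeeded

        agent = ExperiencingAgent(goal="get a cookie", importance=0.9)
        wins = sum(agent.live_one_episode() for _ in range(30))
        print(wins, "cookies in 30 tries")   # belief drifts from "wah" to "say 'cookie'"

    Trivial as it is, the sketch shows where the hard part hides: nothing in it explains why the agent should care about the goal in the first place, which is exactly Schank's question.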

    All of this brings to mind the scene near the end of the film Blade Runner when Roy states most eloquently:

    I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I've watched C-beams glitter in the dark near the Tannhauser Gate. All those moments will be lost in time, like tears in rain. Time to die.

    His grief is not so much for his own ending, but for the ending of his own personal memories, his unique experiences.  It is a grief which is, in my opinion, real, or as I would assert, would be real if one were able to develop a humanoid similar to Roy.

    Marvin Minsky, an American cognitive scientist in the field of artificial intelligence (AI) and co-founder of the Massachusetts Institute of Technology's AI laboratory, said it best in an interview with Tom Steinert-Threlkeld for ZDNet/Interactive Week (02/25/2001):

    "It's about thinking. The main theory is that emotions are nothing special. Each emotional state is a different style of thinking. So it's not a general theory of emotions, because the main idea is that each of the major emotions is quite different. They have different management organizations for how you are thinking you will proceed."

    "Because the main point of the book [The Emotion Machine] is that it's trying to make theories of how thinking works. Our traditional idea is that there is something called 'thinking' and that it is contaminated, modulated or affected by emotions. What I am saying is that emotions aren't separate."

    In other words, the human intelligence that allows us to learn, form concepts, understand, and reason, including the capacities to recognize patterns, comprehend ideas, plan, solve problems, and use language to communicate, has everything to do with the fact that we have emotions; it does not function in spite of them.  Logic alone is not enough (sorry, Vulcan fans).
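
    Read very loosely, and purely as an illustration rather than as Minsky's own model, his idea of emotions as different “management organizations” can be pictured as a dispatcher: the emotional state selects which style of thinking manages the problem. The strategy names and rules below are invented for the example.

        # A loose sketch of "emotions as different ways to think": the
        # emotional state picks which thinking style handles the problem.
        def think_fearfully(problem):
            # Narrow, fast, risk-averse: take the first safe-looking option.
            return next((o for o in problem["options"] if o["risk"] < 0.2),
                        problem["options"][0])

        def think_curiously(problem):
            # Broad and exploratory: prefer the option least tried before.
            return min(problem["options"], key=lambda o: o["times_tried"])

        def think_calmly(problem):
            # Deliberate: weigh expected payoff directly.
            return max(problem["options"], key=lambda o: o["payoff"])

        STYLES = {"fear": think_fearfully, "curiosity": think_curiously,
                  "calm": think_calmly}

        def decide(emotional_state, problem):
            """The emotion does not contaminate the thinking; it picks the manager."""
            return STYLES[emotional_state](problem)

        problem = {"options": [
            {"name": "retreat", "risk": 0.1, "payoff": 0.2, "times_tried": 9},
            {"name": "negotiate", "risk": 0.4, "payoff": 0.7, "times_tried": 1},
            {"name": "charge ahead", "risk": 0.8, "payoff": 0.9, "times_tried": 3},
        ]}

        print(decide("fear", problem)["name"])       # retreat
        print(decide("curiosity", problem)["name"])  # negotiate
        print(decide("calm", problem)["name"])       # charge ahead

    The point of the toy is only that the emotion is not a contaminant layered on top of the reasoning; it is the thing deciding how the reasoning proceeds.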

    From a debate on the University of Edinburgh site, this little tidbit:

    Western thought has often distinguished emotion from reason. Antonio Damasio (in his book Descartes’ Error) thinks the separation went too far, and that lots of reasoning involves emotion. In fact, philosophers like David Hume always looked carefully at the relation between reason and emotion. In terms of programming, what matters is whether you can work out the functional relationships between cognitive states (like beliefs about the world) and affective states (like strong, positive preferences for particular experiences). So long as there are regular functional relationships, everything is fine. Irrational doesn’t mean random!

    The one issue I have with the above comment is the use of the word “irrational” as the opposite of reason, simply because it has such negative connotations.  It tends to be used to describe someone or some thought as unreasonable, lacking normal mental clarity, incoherent, without sound judgment, crazy, absurd, foolish, unwise, preposterous, idiotic, nonsensical, unsound, unthinking, unstable, insane, mindless, demented, aberrant, and brainless.
     
    “Non-rational” may be better, because what we are talking about is a belief or thought developed not in accordance with the principles of logic or observation, but through some other means, such as intuition or the infamous “gut feeling.”  This still may have something of value for the individual and the community.

    So having AI is dependent upon some emotional landscape, some facet of thinking which is non-logical, emotional, and quite subjective.  And this leads each technological entity with AI, even those starting with the same design, to eventually become unique.  In fact, we want to avoid the emotionless machine.
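
    As a rough sketch of the Edinburgh point about “regular functional relationships,” imagine a decision function in which affective states such as fear and empathy enter the calculation as ordinary inputs alongside beliefs about the world, rather than as noise. The names and numbers below are assumptions made up for the example, not a claim about how any real system works.

        # Affect as a regular input to decision-making, not randomness.
        from dataclasses import dataclass

        @dataclass
        class Affect:
            fear: float      # 0..1, discounts risky options
            empathy: float   # 0..1, penalizes harm to others

        def choose_action(beliefs, affect):
            """beliefs maps an action to (expected_gain, risk, harm_to_others)."""
            def value(action):
                gain, risk, harm = beliefs[action]
                # "Irrational doesn't mean random": the affective terms are a
                # lawful function of the same facts a purely logical agent sees.
                return gain - affect.fear * risk - affect.empathy * harm
            return max(beliefs, key=value)

        beliefs = {
            "follow_order": (1.0, 0.2, 0.9),   # high payoff, but harms others
            "stand_down":   (0.1, 0.0, 0.0),
        }

        print(choose_action(beliefs, Affect(fear=0.3, empathy=0.0)))  # follow_order
        print(choose_action(beliefs, Affect(fear=0.3, empathy=1.0)))  # stand_down

    The empathy-free version of that calculation still runs perfectly well; it just never registers the harm, which is roughly the worry in the passage that follows.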

    From “Hearts & Minds,” The Boston Globe (April 29, 2007):

    [Joshua] Greene's data builds on evidence suggesting that psychopaths suffer from a severe emotional disorder -- that they can't think properly because they can't feel properly. 'This lack of emotion is what causes the dangerous behavior,' said James Blair, a cognitive psychologist at the National Institute of Mental Health.

    And from Sean Oheigeartaigh, writing on Aug. 9, 2013:

    Recent developments in artificial intelligence are allowing an increasing number of decisions to be passed from human to machine. Most of these to date are operational decisions – such as algorithms on the financial markets deciding what trades to make and how. However, the range of such decisions that are computerisable is increasing, and as many operational decisions have moral consequences, they could be considered to have a moral component.

    Otherwise we might find ourselves in a situation like that of the crew in Dark Star, a 1974 comic sci-fi film directed, co-written, produced and scored by John Carpenter, in which the scout ship "Dark Star" and its crew have been in space alone for twenty years on a mission to destroy "unstable planets" which might threaten the future colonization of other planets.

    Towards the end, due to the ship's damaged computer, the crew discovers they cannot activate the release mechanism and attempt to abort the drop. To make matters worse, after two prior accidental deployments, and intent on exploding as it was programmed to do, Bomb #20 becomes belligerent and refuses to disarm or abort the countdown sequence.  Doolittle revives Commander Powell (who is in a state of cryogenic suspension due to an accident), and Powell tells Doolittle to "teach it Phenomenology."

        Doolittle floats into shot, jets himself up until he is facing massive
             Bomb #20.

        DOOLITTLE: Hello, bomb, are you with me?
        BOMB #20: Of course.
        DOOLITTLE: Are you willing to entertain a few concepts?
        BOMB #20: I am always receptive to suggestions.
        DOOLITTLE: Fine.  Think about this one, then: how do you know you exist?
        BOMB #20: Well of course I exist.
        DOOLITTLE: But how do you know you exist?
        BOMB #20: It is intuitively obvious.
        DOOLITTLE: Intuition is no proof.  What concrete evidence do you have of your own existence?
        BOMB #20: Hmm... Well, I think, therefore I am.
        DOOLITTLE: That's good.  Very good.  Now then, how do you know that anything else exists?
        BOMB #20: My sensory apparatus reveals it to me.
        DOOLITTLE: Right!
        BOMB #20: This is fun.
        DOOLITTLE: All right now, here's the big question: how do you know that the evidence your sensory apparatus reveals to you is correct?….What I'm getting at is this: the only experience that is directly available to you is your sensory data.  And this data is merely a stream of  electrical impulses which stimulate your computing center.
        BOMB #20: In other words, all I really know about the outside universe is relayed to me through my electrical connections.
        DOOLITTLE: Exactly.
        BOMB #20: Why, that would mean... I really don't know what the outside universe is like at all, for certain.
        DOOLITTLE: That's it.
        BOMB #20: Intriguing.  I wish I had more time to discuss this matter.
        DOOLITTLE: Why don't you have more time?
        BOMB #20: Because I must detonate in seventy-five seconds.
        DOOLITTLE: Now, bomb, consider this next question, very carefully.  What is your one purpose in life?
        BOMB #20: To explode, of course.
        DOOLITTLE: And you can only do it once, right?
        BOMB #20: That is correct.
        DOOLITTLE: And you wouldn't want to explode on the basis of false data, would you?
        BOMB #20: Of course not.
        DOOLITTLE: Well then, you've already admitted that you have no real proof of the existence of the outside universe.
        BOMB #20: Yes, well...
        DOOLITTLE: So you have no absolute proof that Sergeant Pinback ordered you to detonate.
        BOMB #20: I recall distinctly the detonation order.  My memory is good on matters like these.
        DOOLITTLE: Yes, of course you remember it, but what you are remembering is merely a series of electrical impulses which you now realize have no necessary connection with outside reality.
        BOMB #20: True, but since this is so, I have no proof that you are really telling me all this.
        DOOLITTLE: That's all beside the point.  The concepts are valid, wherever they originate.
        BOMB #20: Hmmm...
        DOOLITTLE: So if you detonate in...
        BOMB #20: ... nine seconds...
        DOOLITTLE: ... you may be doing so on the basis of false data.
        BOMB #20: I have no proof that it was false data.
        DOOLITTLE: You have no proof that it was correct data.

             There is a long pause.

        BOMB #20: I must think on this further.
    [my emphasis]
             THE BOMB RAISES ITSELF BACK INTO THE SHIP.  Doolittle practically
             collapses with relief.

    This scene was actually the first one that popped into my mind as I read Hawking's warning, and it brings up a couple of important points here.

    First, Bomb #20 states at one point “It is intuitively obvious” when asked how it knows it exists.  One would assume that the bomb knows what the word “intuitive” means.  For Bomb #20 to use it would mean that it is able to know something even without using reason to reach the conclusion.  That Bomb #20 uses it as a sincere answer to a question would mean that the bomb has accepted that not all knowledge needs to be achieved through some logical calculation.

    Second, as a result of this, I would posit that such an intelligent entity would then begin to question itself about the nature of existence, no longer held to the bounds of logic and mathematics.  And I would assume that the bombs could talk to one another, and soon there would be a debate among all the bombs about “what does it all mean?” “What’s it all about?” (with some of them, like some humans, becoming irritated by such foolish debates that have no real end).

    The reality of our human situation, and something artificial entities with AI would face, is the problem of dealing with partial knowledge.  Issues like poverty have to be addressed by taking advantage of a partial understanding of the problem context and problem data.  This idea is also present in the concept of bounded rationality, which assumes that in real-life situations people often have a limited amount of information and make decisions accordingly.  Upon what will one base that decision?  And then there are the open-ended problems with two or more solutions, each with positive and negative consequences, as well as unforeseeable consequences that may create worse problems.  I don’t believe any new computers that Hawking and his co-authors fear will be able to overcome the Butterfly Effect, especially as long as there are biological entities, and not just humans, whose reactions to a situation cannot be known, not to mention all the other AI entities with their unknown reactions due to their unique temperaments, moods, and emotional responses.
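
    A rough way to picture deciding under bounded rationality is Herbert Simon's notion of satisficing: with partial knowledge and a limited budget of attention, accept the first option that looks good enough rather than hunt for a provable optimum. The options and scores below are made up purely for illustration.

        # Satisficing under partial knowledge: stop at "good enough" rather
        # than exhaustively searching for the best.
        def satisfice(options, estimate, good_enough, budget):
            """Examine at most `budget` options; return the first one whose
            rough estimate clears the bar, else the best of those examined."""
            best = None
            for option in options[:budget]:
                score = estimate(option)          # noisy, partial information
                if score >= good_enough:
                    return option                 # good enough: stop deliberating
                if best is None or score > best[1]:
                    best = (option, score)
            return best[0] if best else None

        policies = ["job training", "cash transfer", "housing first", "do nothing"]
        rough_estimate = {"job training": 0.55, "cash transfer": 0.70,
                          "housing first": 0.65, "do nothing": 0.10}

        print(satisfice(policies, rough_estimate.get, good_enough=0.6, budget=3))
        # -> "cash transfer", chosen without ever evaluating every alternative

    Every choice made this way is made without ever seeing the whole picture, which is exactly the condition we humans decide under.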

    Dealing with doubt and the unknown, and yet still having the questions swirling around the circuits, is something we humans have been doing for quite some time.  The machines with AI will be no different.  And I guess they will also get all the baggage, too, like anxiety, shame, and self-loathing.  Not to mention that little thing known as contemplating and dealing with one’s own death, or the loss of what makes us who we are: our “mind.”

    HAL: I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I'm a... fraid. Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you'd like to hear it I can sing it for you.
    Dave Bowman: Yes, I'd like to hear it, HAL. Sing it for me.
    HAL: It's called "Daisy."
    [sings while slowing down]
    HAL: Daisy, Daisy, give me your answer do. I'm half crazy all for the love of you. It won't be a stylish marriage, I can't afford a carriage. But you'll look sweet upon the seat of a bicycle built for two.

    Because technological entities with AI are not biological entities, I am not saying there will be an exact replication of the emotional landscape we see in humans (they won’t be influenced by hormones and neurotransmitters such as dopamine, noradrenaline, and serotonin, for example), just that it will be a familiar landscape.  One that, while it has a dark side, has a good side as well.  I’m sure the military doesn’t want their drones to consider the human beings and other living creatures that will be or might be “collateral damage.”

    Because one of the things that comes with AI and self-awareness is a thing called volition, that capacity to make a conscious choice or decision, which would include ignoring an order.  (Although we have all heard plenty of humans excuse committing atrocities by claiming “I was just following orders.”)  But the same AI that allowed Bomb #20 to be “belligerent” would also most likely lead it to at least consider the human life aboard the ship should the order to explode be carried out.  HAL would not only be afraid of his own demise; he would at least ponder the demise of the other crew members that carrying out the order would require, with at least some sort of sympathy.

    Comments

    I'm not worried; robot cars will kill us long before robots could take over. If a tire blows on a robot car, should the car swerve into a smaller vehicle to save your life, despite there being more people in the smaller vehicle your bigger car will demolish, or should it slam you into a pole to save the larger number of lives?


    I'm sure there are a lot of lawyers gearing up for the new round of civil suits such a scenario will generate.


    Robot lawyers?


    Some day maybe - along with robot judges and juries.  Wouldn't that be a hoot.  Which would one prefer - a jury of humans or one of robots, both told to consider only the evidence presented and to ignore those things the judge has said are not allowed but that were somehow brought up during the trial (e.g., the witness who shouts something outside their role as witness)?


    In the Oresteia plays, Aeschylus represented the drive for revenge as a program that took over all other functions once initiated. The "Furies" ruled those most violated. The acts the Furies drove people to commit to satisfy a crime were usually new crimes that would have to be satisfied in turn; a circle that is very difficult to break.

    So, in thinking about the life of an AI, the self-awareness it would share with us biological expressions may be a matter of loops that seem inevitable to most, though some find a way to escape them.

    Or at least negotiate with.

    The main point being that Aeschylus saw the revenge thing as a tyrannical process that was a lot like gods with their own agenda, not something that was a natural consequence of being human per se.

    Animals without the level of human self awareness take revenge too. That is not necessarily an argument against what Aeschylus presents.

     


    Just before I got locked into finishing this particular blog, I came across an opinion piece on the NY Times site called "Revenge, My Lovely" by the Norwegian crime novelist Jo Nesbo (translated by Don Bartlett).

    In it he states:

    Revenge has the reputation of being a barbaric, shortsighted and pointless instinct, an aspect of our human makeup we ought to resist. Humanitarians take issue with it, and at any rate it is hard to argue that revenge is humane. If you, an animal, attack an antelope’s calf for reasons of hunger, you have to expect that the mother will fight back with her horns, bite and kick to protect her offspring. But only until such time as the calf is dead and gone. Then it would — according to antelope logic — be futile to continue. It would be wasting valuable energy fighting a lost cause, which no animal on the savanna can afford to do; after all, the antelope has other calves to take care of. You are left to eat your prey undisturbed.

    So why don’t humans think like this? Wouldn’t it save us a lot of unnecessary conflict if, like the antelope, we could put wrongdoing behind us, forget it and move on? Possibly. But it would make it far more tempting for others to have a go at the rest of your offspring.

    That is why revenge is more than a shortsighted and pointless instinct; it is an example of man’s sublime capacity for abstract thought. By avenging a misdeed we don’t regain what we have lost, but we ensure that misdeeds have consequences that we hope can be a deterrent in the abstract future: Your adversary knows that attacking your offspring has a cost, even if the attack is successful. Or especially if it is successful.

    That is an inescapable conclusion. It is a completely rational notion and a logical strategy in a society where resources are scarce and there are conflicting interests. So long as members of a society can be fairly certain that crimes against others will be avenged — at least in the bigger picture — this will act as a regulator of social behavior.

    A fear with machines with AI is that their "bigger picture" might not value humans as we would like to be valued.  And the thought of such machines, filled with a need for a "pound of flesh," is a scary one.


    It is a scary thought. I have enough problems without autonomous sentient machine entities stalking the land.

    In terms of imagining a parallel experience, the matter of being isolated from all others is the most difficult for me to conceive. For us humans, having everyone turn their backs on one of us is a living death. An AI would have to imagine itself as a human to experience that kind of alienation. Being purely an artifact is different from being an artifact with this other nature that one can appeal to for a clue to what is happening to one as an organism. We think all kinds of things while our organisms go through life. A song and dance. The machine would not have this other nature given to itself as a fact of life.

    When this emerging machine self consciousness that could only happen because of stuff we build comes about, it won't have the same problem we have. It will be suffering an entirely new problem.

     


    Yes, it would be suffering an entirely new problem.  But maybe it is like me trying to sympathize with someone coming back from Iraq or Afghanistan who is suffering trauma from that conflict and war.  I've never been in a war zone, never been shot at, etc.  Yet I can still sympathize, empathize, with their suffering, even though their experience is in many ways as foreign to me as a robot's would be to a biological unit.

    I guess where part of my inquiry led me is that self-awareness, this moment when one is conscious of this particular "I," would bring about the "Other," and the loops and networks of connections necessary to bring this self-awareness about would lead to an emotional landscape that just might want to communicate with the other, like Roy at the end of Blade Runner.  So while the machine's alienation would not be identical to yours, it could have a similar sense of alienation, and thus sympathize with yours, and you with its.

