Human Discovery through A.I.

Good morning. Here I discuss the idea that how a person treats Artificial Intelligence reveals their moral disposition.

The first season of HBO’s Westworld finished up earlier this month.  In this western-themed amusement park, wealthy patrons immerse themselves in a simulated environment populated by robotic hosts, where the guests have absolute freedom to do what they want.  This freedom indulges the very worst excesses of humanity.  The guests drink, fuck everything displaying sentience, and murder the hosts without consequence.  There are lots of interesting takeaways from the show.  I will focus on the idea, voiced by the character Logan, that going to Westworld helps you discover yourself: how you treat artificial life in a controlled environment, free of legal and social limitations, demonstrates who you are at the soul level.


Title Art for Westworld.  HBO.

I’ll use another example: the video game franchises Bioshock and Hitman.  What made Bioshock so engaging when I first played it in 2007 was the idea of moral choice.  The game has players navigate an underwater city, and in order to progress they must collect a resource called ADAM via a moral choice.  The only source of ADAM is the “Little Sisters”: young girls in dresses who sound like children and run away from the player in fear, but whom the player also sees walking around the world, sticking syringes into corpses and drinking their fluids to harvest ADAM.  Each time the player encounters a Little Sister, they choose either to harvest her or to save her; saving generates significantly less ADAM, but the Little Sisters thank the player and disappear.  These choices change the ending of the game, with a somewhat generic happy ending for playing the “good” way.  By contrast, I was rather shocked watching a friend play Hitman, which rewards players for more gruesome kills.  One scene involved hacking at an enemy’s neck with a machete; on a Wii, where the player performs the physical hacking motion, the effect was visceral.


A Little Sister gathers ADAM in Bioshock. 2K.

When presented with a moral choice in a game, I generally choose the evil option.  Obviously these are not actions I would take in real life, but in the simulated environment of a game I’m more interested in seeing where the narrative goes when I make the “evil” decision.  Nor do I suspect that my friend has any deeply rooted murderous tendencies.  But it demonstrates something complicated about our relationship with AI.  The fact that this moral choice is on offer, and the way both games toy with our psychology, raises questions about our moral code.

In an experiment at MIT, researchers gave test subjects a Pleo, a cute toy dinosaur that mimics lifelike behavior, asked them to interact with it, and then asked them to destroy it using all manner of sharp objects.  One subject removed the Pleo’s battery to “spare it the pain.”  MIT researcher Kate Darling points to three factors that create this robot bond: physicality (the object exists in physical space rather than on a screen), perceived autonomous movement, and social behavior (they are programmed to act like us or to respond to us).  Even when the subject knows the object experiences no suffering, we empathize and act in irrational ways.

For example, the US military tested a caterpillar-shaped robot that defused land mines by crawling over them; each blast blew off one of its legs.  When the robot continued to drag itself along on its one remaining leg, the colonel supervising the test called it off because the exercise was “inhumane.”  The author recounting the story emphasized that this was a moment of “human weakness.”  It’s a great example because the military’s outlook is so utilitarian.

In 2015 a robot called hitchBOT attempted to hitchhike across the United States after a successful voyage across Canada.  It had a smiling pixelated face and would engage the drivers who picked it up by earnestly asking, “Do you want to have a conversation?”  In August 2015 it was found in Philadelphia (the City of Brotherly Love) decapitated, with its arms torn off.

Lastly, I want to point to the example of a Japanese man who married a video game character.  If a robot cannot experience pain and suffering, then we should also conclude that it cannot experience love, despite appearances to the contrary.  But the fact that a human can have (what we assume are) genuine emotions toward a machine that cannot love us back shows either the depths of human weakness or an excess of goodwill.

But can robots and A.I. make moral claims on humans?  Because Bioshock and Hitman are obviously simulations, it is easy to set them apart from reality.  This raises the question: does the moral weight of our actions toward an A.I. depend on which side of the uncanny valley it lies?  Did the destroyers of hitchBOT do anything wrong, aside perhaps from property damage?  The outcry over these events is one of very human offense.  But do such actions have implications for the morally binding world of human relationships?  Would we be less willing to trust a person who smashes the Pleo without hesitation?

The fiction around A.I. almost always points in a dark direction: in Westworld the machines learn to resist, and in 2001: A Space Odyssey HAL 9000 kills the crew to avoid being shut down.  I don’t believe it has to be this way.  The utilitarian view aims to reduce suffering to the smallest possible amount.  If it is only the human side of us that cringes when a robot absorbs the blast from a mine designed to kill a human, then I think we are making a worthwhile trade.

As for humans in fictional environments, perhaps they are a good way for individuals to test the alternatives available to them, a safe space in which to try on an identity.  However, as in Bioshock, the consequences are somewhat generic, which directs players into a limited range of empathic responses.  Moral action in real life is seldom so black and white; human emotion encompasses a far greater range than anything that can be produced in a laboratory.

If, however, our concern is that our actions toward machines and inanimate objects reflect our deeper selves, then the picture is brighter than science fiction tends to portray.  Even if it is totally irrational for hearts to break over the non-suffering of unconscious machines, and even if we write off sympathy for a brightly colored object with a cute face as a certain human weakness, we humans share more humanity than the dystopians give us credit for.

-TK
