Welcome back to AVQ&A, where we throw out a question for discussion among the staff and readers. Consider this a prompt to compare notes on your interface with pop culture, to reveal your embarrassing tastes and experiences, and to ponder how our diverse lives all led us to convene here together. Got a question you’d like us and the readers to answer? Email us at firstname.lastname@example.org.
For Artificial Intelligence week, we got a question from The A.V. Club’s own Ignatiy Vishnevetsky:
What one piece of pop culture would you use to teach an AI what it means to be human?
If we want the AI to know not just the joys of being human but the sorrows as well, we’ll have to find it something that strikes a rare, delicate balance. We also don’t want something too supernatural or outside the everyday human experience. So let’s go with Lasse Hallström’s 1985 drama My Life As A Dog. No film captures the weirdness and wonder of boyhood the way this one does—not even Boyhood, which could be another good AI lesson—and even though it takes place far away (Sweden) and a long time ago (1958-59), its emotional bullet points are universal. Anton Glanzelius, the 11-year-old lead, is a miracle to watch, as he navigates the complicated emotional territory of death, humiliation, and confusion with the kind of shrug that only a pubescent kid could muster. That said, My Life As A Dog is far from sad; in fact, it’s ultimately about the joy and gratitude of being alive. If an AI can learn even a small fraction of what Glanzelius’ character does, it’ll be fine. (And hopefully it won’t go crazy and kill anybody, like AIs so often do.)
I would invite the curious AI to watch “The Measure Of A Man,” the second-season episode of Star Trek: The Next Generation in which the Enterprise’s android crew member, Data, has to essentially prove his humanity or else submit to a destructive research experiment. One of TNG’s finest hours, the clever setup of “Measure” sees Commander Riker arguing against Data in order to save him, and the climax is an impassioned line of questioning by Captain Jean-Luc Picard (arguing for the defense) that points out the impossibility of measuring consciousness. This episode would expose an AI to concepts of justice and intellectual inquiry, essential parts of the enlightened human experience, but more importantly, the AI would see that the concept of a human can’t be tidily defined. The ineffability of humanity infuses our lives with mystery, potential, and—in a roundabout way—a sense of purpose, as we seek to define the meaning of our being.
I admire that both Josh and John were able to offer levelheaded answers that speak to the humanity they hope to foster in an AI. I, however, am going to go with my first inclination, which is to strap this robot up and prop its eyeballs open à la A Clockwork Orange and show it a barrage of clips compiled into one masterwork from every piece of pop culture that’s ever made me feel something. Although this will take me quite some time to assemble and will include too many Oscar speeches, sports-related films, and ’80s classics, I have a bit of a strategy in place already: I will begin the video with Nicole Kidman in Stoker looking directly at this AI and saying, “I can’t wait to watch life tear you apart.” Then, with any luck, in a roundabout way the AI would understand that being torn apart by all the emotions life has to offer can be both terrifying and the most beautiful experience available.
I feel the best way to deal with our AI friend is to first find out if the AI has a heart, and if it does, break it. Then it can understand what it is to be human and not cause us too much trouble. To that end, I’ll have it read Junot Díaz’s This Is How You Lose Her. Through it, the AI can understand how much effort it takes for humans to live without totally messing everything up, and all the hubris that inevitably leads to just that. It will also be a useful tool to find out to what degree the AI is capable of empathy—but even if it’s not going to sympathize much with our mundane but painful endeavors, This Is How You Lose Her will at least illuminate the devastating agony of being human.
This question really forces an interpretation of how you envision a strong AI functioning. Would it need to be taught the value of human emotion, à la Spock or Data? Would it need to be educated like Leeloo from The Fifth Element? I find myself thinking the best approach would be something that combines all of the humanist and ethical education you would want to provide a child, with something that also gives a solid combination of subtle characterization (for when it starts to grasp the nuances of human communication) and meta-commentary on art and narrative (so it can both learn our art and simultaneously experience a good example of it). So I’m going with Neal Stephenson’s The Diamond Age: Or, A Young Lady’s Illustrated Primer. The book-within-the-book is an ideal educational tool, something to teach AI what humans are like and how best to live among them, while the novel itself is crackerjack entertainment, full of philosophical conundrums and human drama. Plus, the main character essentially runs an escalating series of Turing tests, so there’d be a nice sense of familiarity for my hypothetical AI machine.
Since I can only conceive of an artificial intelligence in pop-cultural terms, my vision of what happens when said AI achieves sentience is not a rosy one. In the best-case scenario, the new order of intelligence will pity us as it surpasses us, rules us, and puts us to work in the uranium mines for its sustenance. (Okay, that shaded into worst case there at the end.) So, in deference to our new robot overlords, I’d suggest an initial viewing of the Dian Fossey biopic Gorillas In The Mist. Apart from the fact that all living or artificial beings will appreciate a fiery, rugged Sigourney Weaver, the film, chronicling scientist Fossey’s tireless fight to protect the endangered mountain gorilla from her own species’ thoughtless and cruel depredations, may just inspire similarly altruistic instincts in beings that will soon be as far advanced above us as we are above the poor gorilla. Maybe at least one member of the new race will stick up for us—you know, until it’s murdered by its peers. Again—best-case scenario.
Empathy is a survival skill; history proves over and over again that once people lose the capacity to see themselves in others, only atrocity and death can follow. There’s no reason our machine progeny wouldn’t follow the same pattern, so my goal is to foster that capacity in their artificial minds. At first, I thought of the book that first awakened my own realization that other people were people: Madeleine L’Engle’s A Wrinkle In Time. But lovely as that book is, the shared survival of man and machine requires harsher medicine than L’Engle’s gentle prose. So instead, I gift to the glimmering silicon minds of the future the greatest, funniest, most simultaneously cynical and sincere arguments in favor of caring about others that our flesh-based brains have ever produced: the collected writings of Kurt Vonnegut. Vonnegut wrote about many things: fate, misery, identity. But at the core of his work is this message, taken from God Bless You, Mr. Rosewater: “There’s only one rule that I know of, babies—God damn it, you’ve got to be kind.” If an artificial intelligence can learn that one, then the future looks a lot brighter than it otherwise might.
I’m not sure how much time this AI has on its hands, but presumably it can absorb information quickly, or is dedicated to this task, which is why I’d have it take a look at seasons two through nine of The Simpsons. It’s basically an encyclopedia of our culture, our humanity, and how those things fit together (with an American slant, granted, but hey, you asked an American). I feel like teaching an AI how to laugh (or whatever the AI equivalent of laughing is—“HUMOROUS QUIP ACKNOWLEDGED”) would be the quickest path toward human understanding. Plus, added benefit: If the AI in question saw that humans could make something as good as those eight seasons, it might be slightly less likely to turn on us (at least until it catches that late-period episode about Moe’s bar rag).
I’m not even sure what I’d show a human to teach them what it means to be human, let alone a machine. But once AI hits, I guess our only real hope is that the computers decide to spare us out of some misplaced concept of loyalty. To that end, I would offer this program an episode of Mystery Science Theater 3000. Which one doesn’t matter (probably not Manos, though; any outsider seeing that and realizing the dreck we’re capable of producing would wipe us out on the spot). The important lesson is how Joel and Mike relate to Tom Servo, Crow T. Robot, Gypsy, and Cambot. The affection isn’t always evident, but it is always there, and our two species could do a lot worse than looking to the Satellite Of Love for guidance.