I want to examine how technology changes our mental habits, but for now, we’ll be on firmer ground if we stick to what’s observably happening in the world around us: our cognitive behavior, the quality of our cultural production, and the social science that tries to measure what we do in everyday life. In any case, I won’t be talking about how your brain is being “rewired.” Almost everything rewires it, including this book.

The brain you had before you read this paragraph? You don’t get that brain back. I’m hoping the trade-off is worth it.

The rise of advanced chess didn’t end the debate about man versus machine, of course. In fact, the centaur phenomenon only complicated things further for the chess world—raising questions about how reliant players were on computers and how their presence affected the game itself. Some worried that if humans got too used to consulting machines, they wouldn’t be able to play without them. Indeed, in June 2011, chess master Christoph Natsidis was caught37 illicitly using a mobile phone during a regular human-to-human match. During tense moments, he kept vanishing for long bathroom visits; the referee, suspicious, discovered Natsidis entering moves into a piece of chess software on his smartphone. Chess had entered a phase similar to the doping scandals that have plagued baseball and cycling, except in this case the drug was software and its effect cognitive.

This is a nice metaphor for a fear that can nag at us in our everyday lives, too, as we use machines for thinking more and more. Are we losing some of our humanity? What happens if the Internet goes down: Do our brains collapse, too? Or is the question naive and irrelevant—as quaint as worrying about whether we’re “dumb” because we can’t compute long division without a piece of paper and a pencil?

Certainly, if we’re intellectually lazy or prone to cheating and shortcuts, or if we simply don’t pay much attention to how our tools affect the way we work, then yes—we can become, like Natsidis, overreliant. But the story of computers and chess offers a much more optimistic ending, too. Because it turns out that when chess players were genuinely passionate about learning and being creative in their game, computers didn’t degrade their own human abilities. Quite the opposite: they helped them internalize the game much more profoundly and advance to new levels of human excellence.

Before computers came along, back when Kasparov was a young boy in the 1970s in the Soviet Union, learning grand-master-level chess was a slow, arduous affair. If you showed promise and you were very lucky, you could find a local grand master to teach you. If you were one of the tiny handful who showed world-class promise, Soviet leaders would fly you to Moscow and give you access to their elite chess library, which contained laboriously transcribed paper records of the world’s top games. Retrieving records was a painstaking affair; you’d contemplate a possible opening, use the catalog to locate games that began with that move, and then the librarians would retrieve records from thin files, pulling them out using long sticks resembling knitting needles. Books of chess games were rare and incomplete. By gaining access to the Soviet elite library, Kasparov and his peers developed an enormous advantage over their global rivals. That library was their cognitive augmentation.

But beginning in the 1980s, computers took over the library’s role and bested it. Young chess enthusiasts could buy CD-ROMs filled with hundreds of thousands of chess games. Chess-playing software could show you how an artificial opponent would respond to any move. This dramatically increased the pace at which young chess players built up intuition. If you were sitting at lunch and had an idea for a bold new opening move, you could instantly find out which historic players had tried it, then war-game it yourself by playing against software. The iterative process of thought experiments—“If I did this, then what would happen?”—sped up exponentially.

Chess itself began to evolve. “Players became more creative and daring,” as Frederic Friedel, the publisher of the first popular chess databases and software, tells me. Before computers, grand masters would stick to lines of attack they’d long studied and honed. Since it took weeks or months for them to research and mentally explore the ramifications of a new move, they stuck with what they knew. But as the next generation of players emerged, Friedel was astonished by their unusual gambits, particularly in their opening moves. Chess players today, Kasparov has written, “are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t.”

Most remarkably, it is producing players who reach grand master status younger. Before computers, it was extremely rare for teenagers to become grand masters. In 1958, Bobby Fischer stunned the world by achieving that status at fifteen. The feat was so unusual it was over three decades before the record was broken, in 1991. But by then computers had emerged, and in the years since, the record has been broken twenty times, as more and more young players became grand masters. In 2002, the Ukrainian Sergey Karjakin became one at the tender age of twelve.38

So yes, when we’re augmenting ourselves, we can be smarter. We’re becoming centaurs. But our digital tools can also leave us smarter even when we’re not actively using them.

Let’s turn to a profound area where our thinking is being augmented: the world of infinite memory.

We, the Memorious

What prompts a baby, sitting on the kitchen floor at eleven months old, to suddenly blurt out the word “milk” for the first time? Had the parents said the word more frequently than normal? How many times had the baby heard the word pronounced—three thousand times? Or four thousand times or ten thousand? Precisely how long does it take before a word sinks in anyway? Over the years, linguists have tried asking parents to keep diaries of what they say to their kids, but it’s ridiculously hard to monitor household conversation. The parents will skip a day or forget the details or simply get tired of the process. We aren’t good at recording our lives in precise detail, because, of course, we’re busy living them.

In 2005, MIT speech scientist Deb Roy and his wife, Rupal Patel (also a speech scientist), were expecting their first child—a golden opportunity, they realized, to observe the boy developing language. But they wanted to do it scientifically. They wanted to collect an actual record of every single thing they, or anyone, said to the child—and they knew it would work only if the recording was done automatically. So Roy and his MIT students designed “TotalRecall,” an audacious setup that involved wiring his house with cameras and microphones. “We wanted to create,” he tells me, “the ultimate memory machine.”

In the months before his son arrived, Roy’s team installed wide-angle video cameras and ultrasensitive microphones in every room in his house. The array of sensors would catch every interaction “down to the whisper” and save it on a huge rack of hard drives stored in the basement. When Roy and his wife brought their newborn home from the hospital, they turned the system on. It began producing a firehose of audio and video: About 300 gigabytes per day, or enough to fill a normal laptop every twenty-four hours. They kept it up for two years, assembling a team of grad students and scientists to analyze the flow, transcribe the chatter, and figure out how, precisely, their son learned to speak.

They made remarkable discoveries. For example, they found that the boy had a burst of vocabulary acquisition—“word births”—that began around his first birthday and then slowed drastically seven months later. When one of Roy’s grad students analyzed this slowdown,1 an interesting picture emerged: At the precise moment that those word births were decreasing, the boy suddenly began using far more two-word sentences. “It’s as if he shifted his cognitive effort2 from learning new words to generating novel sentences,” as Roy later wrote about it. Another grad student discovered that the boy’s caregivers3 tended to use certain words in specific locations in the house—the word “don’t,” for example, was used frequently in the hallway, possibly because caregivers often said “don’t play on the stairs.” And location turned out to be important: The boy tended to learn words more quickly when they were linked to a particular space. It’s a tantalizing finding, Roy points out,4 because it suggests we could help children learn language more effectively by changing where we use words around them. The data is still being analyzed, but his remarkable experiment has the potential to transform how early-language acquisition is understood.

It has also, in an unexpected way, transformed Roy’s personal life. It turns out that by creating an insanely nuanced scientific record of his son’s first two years, Roy has created the most detailed memoir in history.

For example, he’s got a record of the first day his son walked. On-screen, you can see Roy step out of the bathroom and notice the boy standing, with a pre-toddler’s wobbly balance, about six feet away. Roy holds out his arms and encourages him to walk over: “Come on, come on, you can do it,” he urges. His son lurches forward one step, then another, and another—his first time successfully doing this. On the audio, you can actually hear the boy squeak to himself in surprise: Wow! Roy hollers to his mother, who’s visiting and is in the kitchen: “He’s walking! He’s walking!”

It’s rare to catch this moment on video for any parent. But there’s something even more unusual about catching it unintentionally. Unlike most first-step videos caught by a camera-phone-equipped parent, Roy wasn’t actively trying to freeze this moment; he didn’t get caught up in the strange, quintessentially modern dilemma that comes from trying to simultaneously experience something delightful while also capturing it on tape. (When we brought my son a candle-bedecked cupcake on his first birthday, I spent so much time futzing with snapshots—it turns out cheap cameras don’t focus well when the lights are turned off—that I later realized I hadn’t actually watched the moment with my own eyes.) You can see Roy genuinely lost in the moment, enthralled. Indeed, he only realized weeks after his son walked that he could hunt down the digital copy; when he pulled it out, he was surprised to find he’d completely misremembered the event. “I originally remembered it being a sunny morning, my wife in the kitchen,” he says. “And when we finally got the video it was not a sunny morning, it was evening; and it was not my wife in the kitchen, it was my mother.”

Roy can perform even crazier feats of recall. His system is able to stitch together the various video streams into a 3-D view. This allows you to effectively “fly” around a recording, as if you were inside a video game. You can freeze a moment, watch it backward, all while flying through; it’s like a TiVo for reality. He zooms into the scene of himself watching his son, freezes it, then flies down the hallway into the kitchen, where his mother is looking up, startled, reacting to his yells of delight. It seems wildly futuristic, but Roy claims that eventually it will be possible to do in your own home: cameras and hard drives are getting cheaper and cheaper, and the software isn’t far off either.

Still, as Roy acknowledges, the whole project is unsettling to some observers. “A lot of people have asked me, ‘Are you insane?’” He chuckles. They regard the cameras as Orwellian, though this isn’t really accurate; it’s Roy who’s recording himself, not a government or evil corporation, after all. But still, wouldn’t living with incessant recording corrode daily life, making you afraid that your weakest moments—bickering mean-spiritedly with your spouse about the dishes, losing your temper over something stupid, or, frankly, even having sex—would be recorded forever? Roy and his wife say this didn’t happen, because they were in control of the system. In each room there was a control panel that let you turn off the camera or audio; in general, they turned things off at 10 p.m. (after the baby was in bed) and back on at 8 a.m. They also had an “oops” button in every room: hit it, and you could erase as much as you wanted from recent recordings—a few minutes, an hour, even a day. It was a neat compromise, because of course one often doesn’t know when something embarrassing is going to happen until it’s already happening.

“This came up from, you know, my wife breast-feeding,” Roy says. “Or I’d stumble out of the shower, dripping and naked, wander out in the hallway—then realize what I was doing and hit the ‘oops’ button. I didn’t think my grad students needed to see that.” He also experienced the effect that documentarians and reality TV producers have long noticed: after a while, the cameras vanish.

The upsides, in other words, were worth the downsides—both scientific and personal. In 2007, Roy’s father came over to see his grandson when Roy was away at work. A few months later, his father had a stroke and died suddenly. Roy was devastated; he’d known his father’s health was in bad shape but hadn’t expected the end to come so soon.

Months later, Roy realized that he’d missed the chance to see his father play with his grandson for the last time. But the house had autorecorded it. Roy went to the TotalRecall system and found the video stream. He pulled it up: his father stood in the living room, lifting his grandson, tickling him, cooing over how much he’d grown.

Roy froze the moment and slowly panned out, looking at the scene, rewinding it and watching again, drifting around to relive it from several angles.

“I was floating around like a ghost watching him,” he says.

What would it be like to never forget anything? To start off your life with that sort of record, then keep it going until you die?

Memory is one of the most crucial and mysterious parts of our identities; take it away, and identity goes away, too, as families wrestling with Alzheimer’s quickly discover. Marcel Proust regarded the recollection of your life as a defining task of humanity; meditating on what you’ve done is an act of recovering, literally hunting around for “lost time.” Vladimir Nabokov saw it a bit differently: in Speak, Memory, he sees his past actions as being so deeply intertwined with his present ones that he declares, “I confess I do not believe in time.”5 (As Faulkner put it, “The past is never dead. It’s not even past.”)6

In recent years, I’ve noticed modern culture—in the United States, anyway—becoming increasingly, almost frenetically obsessed with lapses of memory. This may be because the aging baby-boomer population is skidding into its sixties, when forgetting the location of your keys becomes a daily embarrassment. Newspaper health sections deliver panicked articles about memory loss and proffer remedies, ranging from advice that is scientifically solid (get more sleep and exercise) to sketchy (take herbal supplements like ginkgo) to corporate snake oil (play pleasant but probably useless “brain fitness” video games). We’re pretty hard on ourselves. Frailties in memory are seen as frailties in intelligence itself. In the run-up to the American presidential election of 2012, the candidacy of a prominent hopeful, Rick Perry, began unraveling with a single, searing memory lapse: in a televised debate, when he was asked about the three government bureaus he’d repeatedly vowed to eliminate, Perry named the first two—but was suddenly unable to recall the third. He stood there onstage, hemming and hawing for fifty-three agonizing seconds before the astonished audience, while his horrified political advisers watched his candidacy implode. (“It’s over, isn’t it?” one of Perry’s donors asked.)7

Yet the truth is, the politician’s mishap wasn’t all that unusual. On the contrary, it was extremely normal. Our brains are remarkably bad at remembering details. They’re great at getting the gist of something, but they consistently muff the specifics. Whenever we read a book or watch a TV show or wander down the street, we extract the meaning of what we see—the parts of it that make sense to us and fit into our overall picture of the world—but we lose everything else, in particular discarding the details that don’t fit our predetermined biases. This sounds like a recipe for disaster, but scientists point out that there’s an upside to this faulty recall. If we remembered every single detail of everything, we wouldn’t be able to make sense of anything. Forgetting is a gift and a curse: by chipping away at what we experience in everyday life, we leave behind a sculpture that’s meaningful to us, even if sometimes it happens to be wrong.

Our first glimpse into the way we forget came in the 1880s, when German psychologist Hermann Ebbinghaus ran a long, fascinating experiment on himself.8 He created twenty-three hundred “nonsense” three-letter combinations and memorized them. Then he’d test himself at regular periods to see how many he could remember. He discovered that memory decays quickly after you’ve learned something: Within twenty minutes, he could remember only about 60 percent of what he’d tried to memorize, and within an hour he could recall just under half. A day later it had dwindled to about one third. But then the pace of forgetting slowed down. Six days later the total had slipped just a bit more—to 25.4 percent of the material—and a month later it was only a little worse, at 21.1 percent. Essentially, he had lost the great majority of the three-letter combinations, but the few that remained had passed into long-term memory. This is now known as the Ebbinghaus curve of forgetting, and it’s a good-news-bad-news story: Not much gets into long-term memory, but what gets there sticks around.
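If you want to play with Ebbinghaus’s figures yourself, here is a minimal Python sketch. It is purely illustrative: it encodes the retention numbers quoted above (treating “just under half” at the one-hour mark as 45 percent, which is an assumption) and estimates recall at other delays by interpolating between those points on a logarithmic time axis, an approximation chosen for convenience rather than a model Ebbinghaus himself proposed.

```python
import math

# Retention figures as reported in the passage (delay in hours, fraction recalled).
# The one-hour value is "just under half" in the text; 0.45 is used as a stand-in.
EBBINGHAUS_POINTS = [
    (1 / 3, 0.60),    # twenty minutes
    (1.0, 0.45),      # one hour (assumed value)
    (24.0, 0.33),     # one day ("about one third")
    (144.0, 0.254),   # six days
    (744.0, 0.211),   # roughly one month
]


def estimated_retention(hours):
    """Estimate recall after a given delay by interpolating the reported points
    on a log-time axis (forgetting looks roughly linear against the log of time)."""
    points = EBBINGHAUS_POINTS
    if hours <= points[0][0]:
        return points[0][1]
    if hours >= points[-1][0]:
        return points[-1][1]
    for (t0, r0), (t1, r1) in zip(points, points[1:]):
        if t0 <= hours <= t1:
            frac = (math.log(hours) - math.log(t0)) / (math.log(t1) - math.log(t0))
            return r0 + frac * (r1 - r0)
    return points[-1][1]


if __name__ == "__main__":
    for label, h in [("20 minutes", 1 / 3), ("1 hour", 1.0),
                     ("3 days", 72.0), ("2 weeks", 336.0)]:
        print(f"{label:>10}: ~{estimated_retention(h) * 100:.0f}% recalled")
```

Run as a script, it prints rough estimates for a few delays, for instance roughly 28 percent recall after three days, which sits plausibly between the one-day and six-day figures Ebbinghaus reported.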

Ebbinghaus had set himself an incredibly hard memory task. Meaningless gibberish is by nature hard to remember. In the 1970s and ’80s, psychologist Willem Wagenaar tried something a bit more true to life.9 Once a day for six years, he recorded a few of the things that happened to him on notecards, including details like where it happened and who he was with. (On September 10, 1983, for example, he went to see Leonardo da Vinci’s Last Supper in Milan with his friend Elizabeth Loftus, the noted psychologist). This is what psychologists call “episodic” or “autobiographical” memory—things that happen to us personally. Toward the end of the experiment, Wagenaar tested himself by pulling out a card to see if he remembered the event. He discovered that these episodic memories don’t degrade anywhere near as quickly as random information: In fact, he was able to recall about 70 percent of the events that had happened a half year ago, and his memory gradually dropped to 29 percent for events five years old. Why did he do better than Ebbinghaus? Because the cards contained “cues” that helped jog his memory—like knowing that his friend Liz Loftus was with him—and because some of the events were inherently more memorable. Your ability to recall something is highly dependent on the context in which you’re trying to do so; if you have the right cues around, it gets easier. More important, Wagenaar also showed that committing something to memory in the first place is much simpler if you’re paying close attention. If you’re engrossed in an emotionally vivid visit to a da Vinci painting, you’re far more likely to recall it; your everyday humdrum Monday meeting, not so much. (And if you’re frantically multitasking on a computer, paying only partial attention to a dozen tasks, you might only dimly remember any of what you’re doing, a problem that I’ll talk about many times in this book.) But even so, as Wagenaar found, there are surprising limits. For fully 20 percent of the events he recorded, he couldn’t remember anything at all.

Even when we’re able to remember an event, it’s not clear we’re remembering it correctly. Memory isn’t passive; it’s active.10 It’s not like pulling a sheet from a filing cabinet and retrieving a precise copy of the event. You’re also regenerating the memory on the fly. You pull up the accurate gist, but you’re missing a lot of details. So you imaginatively fill in the missing details with stuff that seems plausible, whether or not it’s actually what happened. There’s a reason why we call it “re-membering”; we reassemble the past like Frankenstein assembling a body out of parts. That’s why Deb Roy was so stunned to look into his TotalRecall system and realize that he’d mentally mangled the details of his son’s first steps. In reality, Roy’s mother was in the kitchen and the sun was down—but Roy remembered it as his wife being in the kitchen on a sunny morning. As a piece of narrative, it’s perfectly understandable. The memory feels much more magical that way: The sun shining! The boy’s mother nearby! Our minds are drawn to what feels true, not what’s necessarily so. And worse, these filled-in errors may actually compound over time. Some memory scientists suspect that when we misrecall something, we can store the false details in our memory in what’s known as reconsolidation.11 So the next time we remember it, we’re pulling up false details; maybe we’re even adding new errors with each act of recall. Episodic memory becomes a game of telephone played with oneself.

The malleability of memory helps explain why, over decades, we can adopt a surprisingly rewritten account of our lives. In 1962, the psychologist Daniel Offer asked a group12 of fourteen-year-old boys questions about significant aspects of their lives. When he hunted them down thirty-four years later and asked them to think back on their teenage years and answer precisely the same questions, their answers were remarkably different. As teenagers, 70 percent said religion was helpful to them; in their forties, only 26 percent recalled that. Fully 82 percent of the teenagers said their parents used corporal punishment, but three decades later, only one third recalled their parents hitting them. Over time, the men had slowly revised their memories, changing them to suit the ongoing shifts in their personalities, or what’s called hindsight bias. If you become less religious as an adult, you might start thinking that’s how you were as a child, too.

For eons, people have fought back against the fabrications of memory by using external aids. We’ve used chronological diaries for at least two millennia, and every new technological medium increases the number of things we capture: George Eastman’s inexpensive Brownie camera gave birth to everyday photography, and VHS tape did the same thing for personal videos in the 1980s. In the last decade, though, the sheer welter of artificial memory devices has exploded, so there are more tools capturing shards of our lives than ever before—e-mail, text messages, camera phone photos and videos, note-taking apps and word processing, GPS traces, comments, and innumerable status updates. (And those are just the voluntary recordings you participate in. There are now innumerable government and corporate surveillance cameras recording you, too.)

The biggest shift is that most of this doesn’t require much work. Saving artificial memories used to require foresight and effort, which is why only a small fraction of very committed people kept good diaries. But digital memory is frequently passive. You don’t intend to keep all your text messages, but if you’ve got a smartphone, odds are they’re all there, backed up every time you dock your phone. Dashboard cams on Russian cars are supposed to help drivers prove their innocence in car accidents, but because they’re always on, they have also wound up recording a massive meteorite entering the atmosphere. Meanwhile, today’s free e-mail services like Gmail are biased toward permanent storage; they offer such capacious memory that it’s easier for the user to keep everything than to engage in the mental effort of deciding whether to delete each individual message. (This is an intentional design decision on Google’s part, of course; the more they can convince us to retain e-mail, the more data about our behavior they have in order to target ads at us more effectively.) And when people buy new computers, they rarely delete old files—in fact, research shows that most of us just copy our old hard drives13 onto our new computers, and do so again three years later with our next computers, and on and on, our digital external memories nested inside one another like wooden dolls. The cost of storage has plummeted so dramatically that it’s almost comical to consider: In 1981, a gigabyte of memory cost roughly three hundred thousand dollars, but now it can be had for pennies.

We face an intriguing inversion point in human memory. We’re moving from a period in which most of the details of our lives were forgotten to one in which many, perhaps most of them, will be captured. How will that change the way we live—and the way we understand the shape of our lives?

There’s a small community of people who’ve been trying to figure this out by recording as many bits of their lives as they can as often as possible. They don’t want to lose a detail; they’re trying to create perfect recall, to find out what it’s like. They’re the lifeloggers.

When I interview someone, I take pretty obsessive notes: not only everything they say, but also what they look like, how they talk. Within a few minutes of meeting Gordon Bell, I realized I’d met my match: His digital records of me were thousands of times more complete than my notes about him.

Bell is probably the world’s most ambitious and committed lifelogger.14 A tall and genial white-haired seventy-eight-year-old, he walks around outfitted with a small fish-eye camera hanging around his neck, snapping pictures every sixty seconds, and a tiny audio recorder that captures most conversations. Software on his computer saves a copy of every Web page he looks at and every e-mail he sends or receives, even a recording of every phone call.

“Which is probably illegal, but what the hell,” he says with a guffaw. “I never know what I’m going to need later on, so I keep everything.” When I visited him at his cramped office in San Francisco, it wasn’t the first time we’d met; we’d been hanging out and talking for a few days. He typed “Clive Thompson” into his desktop computer to give me a taste of what his “surrogate brain,” as he calls it, had captured of me. (He keeps a copy of his lifelog on his desktop and his laptop.) The screen fills with a flood of Clive-related material: twenty-odd e-mails Bell and I had traded, copies of my articles he’d perused online, and pictures beginning with our very first meeting, a candid shot of me with my hand outstretched. He clicks on an audio file from a conversation we’d had the day before, and the office fills with the sound of the two of us talking about a jazz concert he’d seen in Australia with his wife. It’s eerie hearing your own voice preserved in somebody else’s memory base. Then I realize in shock that when he’d first told me that story, I’d taken down incorrect notes about it. I’d written that he was with his daughter, not his wife. Bell’s artificial memory was correcting my memory.

Bell did not intend to be a pioneer in recording his life. Indeed, he stumbled into it. It started with a simple desire: He wanted to get rid of stacks of paper. Bell has a storied history; in his twenties, he designed computers, back when they were the size of refrigerators, with spinning hard disks the size of tires. He quickly became wealthy, quit his job to become a serial investor, and then in the 1990s was hired by Microsoft as an éminence grise, tasked with doing something vaguely futuristic—whatever he wanted, really. By that time, Bell was old enough to have amassed four filing cabinets crammed with personal archives, ranging from programming memos to handwritten letters from his kid to weird paraphernalia like a “robot driver’s license.” He was sick of lugging it around, so in 1997 he bought a scanner to see if he could go paperless. Pretty soon he’d turned a lifetime of paper into searchable PDFs and was finding it incredibly useful. So he started thinking: Why not have a copy of everything he did? Microsoft engineers helped outfit his computer with autorecording software. A British engineer showed him the SenseCam she’d invented. He began wearing that, too. (Except for the days when he’s worried it’ll stop his heart. “I’ve been a little leery of wearing it for the last week or so because the pacemaker company sent a little note around,” he tells me. He had a massive heart attack a few years back and had a pacemaker implanted. “Pacemakers don’t like magnets, and the SenseCam has one.” One part of his cyborg body isn’t compatible with the other.)

The truth is, Bell looks a little nuts walking around with his recording gear strapped on. He knows this; he doesn’t mind. Indeed, Bell possesses the dry air of a wealthy older man who long ago ceased to care what anyone thinks about him, which is probably why he was willing to make his life into a radical experiment. He also, frankly, seems like someone who needs an artificial memory, because I’ve rarely met anyone who seems so scatterbrained in everyday life. He’ll start talking about one subject, veer off to another in midsentence, only to interrupt that sentence with another digression. If he were a teenager, he’d probably be medicated for ADD.
