Ribot quotes John Lennon: “You think you’re so clever and classless and free.” Americans in general like to think of themselves as having transcended economic categories and hierarchies, Ribot says, and artists are no exception. During the Great Depression artists briefly began to think of themselves as workers and to organize as such, amassing social and political power with some success, but today it’s more popular to speak of artists as entrepreneurs or brands, designations that further obscure the issue of labor and exploitation by comparing individual artists to corporate entities or sole proprietors of small businesses.

If artists are fortunate enough to earn money from their art, they tend to receive percentages, fees, or royalties rather than wages; they play “gigs” or do “projects” rather than hold steady jobs, which means they don’t recognize the standard breakdowns of boss and worker. They also spend a lot of time on the road, not rooted in one place; hence they are not able to organize and advocate for their rights.

What’s missing, as Ribot sees it, is a way to understand how the economy has evolved away from the old industrial model and how value is extracted within the new order. “I think that people, not just musicians, need to do an analysis so they stop asking the question, ‘Who is my legal employer?’ and start asking, ‘Who works, who creates things that people need, and who profits from it?’” These questions, Ribot wagers, could be the first step to understanding the model of freelance, flexible labor that has become increasingly dominant across all sectors of the economy, not just in creative fields.

We are told that a war is being waged between the decaying institutions of the off-line world and emerging digital dynamos, between closed industrial systems and open networked ones, between professionals who cling to the past and amateurs who represent the future. The cheerleaders of technological disruption are not alone in their hyperbole. Champions of the old order also talk in terms that reinforce a seemingly unbridgeable divide.

Unpaid amateurs have been likened to monkeys with typewriters, gate-crashing the cultural conversation without having been vetted by an official credentialing authority or given the approval of an established institution. “The professional is being replaced by the amateur, the lexicographer by the layperson, the Harvard professor by the unschooled populace,” according to Andrew Keen, obstinately oblivious to the failings of the professionally produced mass culture he defends.

The Internet is decried as a province of know-nothing narcissists motivated by a juvenile desire for fame and fortune, a virtual backwater of vulgarity and phoniness. Jaron Lanier, the technologist turned skeptic, has taken aim at what he calls “digital Maoism” and the ascendance of the “hive mind.” Social media, as Lanier sees it, demean rather than elevate us, emphasizing the machine over the human, the crowd over the individual, the partial over the integral. The problem is not just that Web 2.0 erodes professionalism but, more fundamentally, that it threatens originality and autonomy.

Outrage has taken hold on both sides. But the lines in the sand are not as neatly drawn as the two camps maintain. Wikipedia, considered the ultimate example of amateur triumph as well as the cause of endless hand-wringing, hardly hails the “death of the expert” (the common claim by both those who love the site and those who despise it). While it is true that anyone can contribute to the encyclopedia, their entries must have references, and many of the sources referenced qualify as professional. Most entries boast citations of academic articles, traditional books, and news stories. Similarly, social production does not exist quite outside the mainstream. Up to 85 percent of the open source Linux developers said to be paradigmatic of this new age of volunteerism are, in fact, employees of large corporations that depend on nonproprietary software.33

More generally, there is little evidence that the Internet has precipitated a mass rejection of more traditionally produced fare. What we are witnessing is a convergence, not a coup. Peer-to-peer sites—estimated to take up half the Internet’s bandwidth—are overwhelmingly used to distribute traditional commercial content, namely mainstream movies and music. People gather on message boards to comment on their favorite television shows, which they download or stream online. The most popular videos on YouTube, year after year, are the product of conglomerate record labels, not bedroom inventions. Some of the most visited sites are corporate productions like CNN. Most links circulated on social media are professionally produced. The challenge is to understand how power and influence are distributed within this mongrel space where professional and amateur combine.

Consider, for a moment, Clay Shirky, whose back-flap biography boasts corporate consulting gigs with Nokia, News Corp, BP, the U.S. Navy, Lego, and others. Shirky embodies the strange mix of technological utopianism and business opportunism common to many Internet entrepreneurs and commentators, a combination of populist rhetoric and unrepentant commercialism. Many of amateurism’s loudest advocates are also business apologists, claiming to promote cultural democracy while actually advising corporations on how to seize “collaboration and self-organization as powerful new levers to cut costs” in order to “discover the true dividends of collective capability and genius” and “usher their organizations into the twenty-first century.”34

The grassroots rhetoric of networked amateurism has been harnessed to corporate strategy, continuing a nefarious tradition. Since the 1970s populist outrage has been yoked to free-market ideology by those who exploit cultural grievances to shore up their power and influence, directing public animus away from economic elites and toward cultural ones, away from plutocrats and toward professionals. But it doesn’t follow that criticizing “professionals” or “experts” or “cultural elites” means that we are striking a blow against the real powers; and when we uphold amateur creativity, we are not necessarily resolving the deeper problems of entrenched privilege or the irresistible imperative of profit. Where online platforms are concerned, our digital pastimes can sometimes promote positive social change and sometimes hasten the transfer of wealth to Silicon Valley billionaires.

Even well-intentioned celebration of networked amateurism has the potential to obscure the way money still circulates. That’s the problem with PressPausePlay, a slick documentary about the digital revolution that premiered at a leading American film festival. The directors examine the ways new tools have sparked a creative overhaul by allowing everyone to participate—or at least everyone who owns the latest Apple products. That many of the liberated media makers featured in the movie turn out to work in advertising and promotion, like celebrity business writer Seth Godin, who boasts of his ability to turn his books into bestsellers by harnessing the power of the Web, underscores how the hype around the cultural upheaval sparked by connective technologies easily slides from making to marketing. While the filmmakers pay tribute to DIY principles and praise the empowering potential of digital tools unavailable a decade ago, they make little mention of the fact that the telecommunications giant Ericsson provided half of the movie’s seven-hundred-thousand-dollar budget and promotional support.35

We should be skeptical of the narrative of democratization by technology alone. The promotion of Internet-enabled amateurism is a lazy substitute for real equality of opportunity. More deeply, it’s a symptom of the retreat over the past half century from the ideals of meaningful work, free time, and shared prosperity—an agenda that entailed enlisting technological innovation for the welfare of each person, not just the enrichment of the few.

Instead of devising truly liberating ways to harness machines to remake the economy, whether by designing satisfying jobs or through the social provision of a basic income to everyone regardless of work status, we have Amazon employees toiling on the warehouse floor for eleven dollars an hour and Google contract workers who get fired after a year so they don’t have to be brought on full-time. Cutting-edge new-media companies valued in the tens of billions retain employees numbering in the lowly thousands, and everyone else is out of luck. At the same time, they hoard their record-setting profits, sitting on mountains of cash instead of investing it in ways that would benefit us all.

The zeal for amateurism looks less emancipatory—as much necessity as choice—when you consider the crisis of rising educational costs, indebtedness, and high unemployment, all while the top 1 percent captures an ever-growing portion of the surplus generated by increased productivity. (Though productivity has risen 23 percent since 2000, real hourly pay has effectively stagnated.)36 The consequences are particularly stark for young people: between 1984 and 2009, the median net worth for householders under thirty-five was down 68 percent while rising 42 percent for those over sixty-five.37 Many are delaying starting families of their own and moving back in with Mom and Dad.

Our society’s increasing dependence on free labor—online and off, from user-generated content to unpaid internships—is immoral in this light. The celebration of networked amateurism—and of social production and the cognitive surplus—glosses over the question of who benefits from our uncompensated participation online. Though some internships are enjoyable and useful, the real beneficiary of this arrangement is corporate America, which reaps the equivalent of a two-billion-dollar annual subsidy.38 And many of the digital platforms to which we contribute are highly profitable entities, run not for love but for money.

Creative people have historically been encouraged to ignore economic issues and maintain indifference to matters like money and salaries. Many of us believe that art and culture should not succumb to the dictates of the market, and one way to do this is to act as though the market doesn’t exist, to devise a shield to deflect its distorting influence, and uphold the lack of compensation as virtuous. This stance can provide vital breathing room, but it can also perpetuate inequality. “I consistently come across people valiantly trying to defy an economic class into which they were born,” Richard Florida writes. “This is particularly true of the young descendants of the truly wealthy—the capitalist class—who frequently describe themselves as just ‘ordinary’ creative people working on music, film or intellectual endeavors of one sort or another.”

How valiant to deny the importance of money when it is had in abundance. “Economic power is first and foremost a power to keep necessity at arm’s length,” the French sociologist Pierre Bourdieu observed. Especially, it seems, the necessity of talking honestly about economics.

Those who applaud social production and networked amateurism, the colorful cacophony that is the Internet, and the creative capacities of everyday people to produce entertaining and enlightening things online, are right to marvel. There is amazing inventiveness, boundless talent and ability, and overwhelming generosity on display. Where they go wrong is in thinking that the Internet is an egalitarian, let alone revolutionary, platform for our self-expression and development, that being able to shout into the digital torrent is adequate for democracy.

The struggle between amateurs and professionals is, fundamentally, a distraction. The tragedy for all of us is that we find ourselves in a world where the qualities that define professional work—stability, social purpose, autonomy, and intrinsic and extrinsic rewards—are scarce. “In part, the blame falls on the corporate elite,” Barbara Ehrenreich wrote back in 1989, “which demands ever more bankers and lawyers, on the one hand, and low-paid helots on the other.” These low-paid helots are now unpaid interns and networked amateurs. The rub is that over the intervening years we have somehow deceived ourselves into believing that this state of insecurity and inequity is a form of liberation.

3
WHAT WE WANT

Today it is standard wisdom that a whole new kind of person lives in our midst, the digital native—“2.0 people,” as the novelist Zadie Smith dubbed them. Exalted by techno-enthusiasts for being hyper-connected and sociable, technically savvy and novelty seeking—and chastised by techno-skeptics for those very same traits—this new generation and its predecessors are supposedly separated by a gulf that is immense and unbridgeable. Self-appointed experts tell us that “today’s students are no longer the people our educational system was designed to teach”; they “experience friendship” and “relate to information differently” than all who came before.1

Reflecting on this strange new species, the skeptics are inclined to agree. “The cyber-revolution is bringing about a different magnitude of change, one that marks a massive discontinuity,” warns the literary critic Sven Birkerts. “Pre-Digital Man has more in common with his counterpart in the agora than he will with a Digital Native of the year 2050.” It is not just cultural or social references that divide the natives from their pre-digital counterparts, but “core phenomenological understandings.” Their very modes of perception and sense making, of experiencing the world and interpreting it, Birkerts claims, are simply incomprehensible to their elders. They are different creatures altogether.2

The tech-enthusiasts make a similarly extreme case for total generational divergence, idolizing digital natives with fervor and ebullience equal and opposite to Birkerts’s unease. These natives, born and raised in networked waters, surf shamelessly, with no need for privacy or solitude. As described by Nick Bilton in his book I Live in the Future and Here’s How It Works, digital natives prefer media in “bytes” and “snacks” as opposed to full “meals”—defined as the sort of lengthy article one might find in the New Yorker magazine. Digital natives believe “immediacy trumps quality.”3

They “unabashedly create and share content—any type of content,” and, unlike digital immigrants, they never suffer from information overload. People who have grown up online also do not read the news. Or rather, we are told, for them the news is whatever their friends deem interesting, not what some organization or authoritative source says is significant. “This is the way I navigate today as well,” Bilton, a technology writer for the New York Times, proudly declares. “If the news is important, it will find me.”4 (Notably, Bilton’s assertion was contradicted by a Harvard study that found eighteen- to twenty-nine-year-olds still prefer to get their political news from established newspapers, print or digital, rather than from the social media streams of their friends.)5

These two poles of opinion typify an ongoing debate about the way technology is transforming a younger generation’s relationship to traditional cultural forms, a debate that gets especially vehement around the question of journalism’s future—a topic with the profoundest of implications for the public sphere and health of democracy. In the popular imagination, either the Internet has freed us from the stifling grip of the old, top-down mass media model, transforming consumers into producers and putting citizens on par with the powerful, or we have stumbled into a new trap, a social media hall of mirrors made up of personalized feeds, “filter bubbles,” narcissistic chatter, and half-truths. Young people are invoked to lend credence to both views: in the first scenario, they are portrayed as empowered and agile media connoisseurs who, refusing to passively consume news products handed down from on high, insist on contributing to the conversation; in the second, they are portrayed as pliant and ill-informed, mistaking what happens to interest them for what is actually important.

The fact that two hundred thousand undergraduates are now majoring in journalism in the United States—a number that has risen 35 percent over the past decade despite rising tuition costs and a rapidly shrinking job market—implies the possibility of a different situation altogether. Presumably, many of these students still see some utility in traditional journalism and hope to devote themselves to the cause of investigating things that matter at substantial length. The critic Lawrence Weschler turned melancholic when reflecting on the fate of students who take his popular course on the art of the long essay. “They come into my office crying hot tears,” he told me, “when they realize there’s nothing they’d rather do with their lives.”

Yet the likelihood of these students getting a job writing long assignments is slim to none, and that has as much to do with economic realities as with technological innovation or the rewiring of their brains and the attenuation of their attention spans—with opportunity, in other words, as much as inclination. If the economics of the Web favor aggregation and link baiting, shocking headlines and quickly consumable trifles, future media makers will inevitably produce exactly that.

The optimists on one side, the skeptics on the other, those who laud the next generation and those who scorn it—oddly, both camps end up making the same mistake. The imagination and ambitions of an entire cohort have been preemptively and presumptuously denied. The naysayers and the celebrants stand ready to write the obituary for human beings who look beneath the surface, who care about the world beyond their immediate surroundings, who pay attention to that which is complex and outside them. One camp applauds the caricature while the other chides it, but both agree that the emerging media landscape accurately reflects what digital natives want. Neither recognizes the persistence of individuals who do not conform to this mold, nor do they bother wondering how to carve out and sustain a cultural space in which a wider variety of capabilities might flourish.

A few days after the first massive pro-democracy demonstration in Egypt in early 2011, Andrew Burton got on a plane heading to Cairo. He landed late the night of February 1 and slept at the airport, rising at dawn to head into the city. Walking around Tahrir Square with his camera in hand, he found his morning went smoothly. He got his bearings and took pictures of protesters, who were friendly and welcoming. But when Burton headed out from his hotel later that afternoon, the press were reporting clashes between the pro-democracy activists and Mubarak’s supporters, many of whom were hired thugs and plainclothes cops. Moving through the crowd, Burton felt the tension twisting the air.

When Burton stopped to photograph a man painting slogans over antigovernment graffiti, he was grabbed from behind and his lens covered. He pulled away and, unsure of what to do, tried to head back to his hotel, but an angry crowd gathered around him and began to attack. A group of men rushed to his aid, taking most of the blows and pushing Burton down an alley until his back was up against an army tank. His shirt was ripped and strange hands plunged into his pockets. Then Burton felt someone get a grip under his armpits and lift him upward, dumping him into the tank, where he found himself surrounded by fourteen soldiers, all around his age and smiling. “They scooted over, and made a place for me to sit. Everything was quiet—the transition from an angry mob scene to a cramped interior happened very, very quickly,” he later recounted. “The soldiers were joking, laughing, making fun of me; they didn’t seem to care too much about what was going on outside.” He took cover for the next few hours, making small talk in broken English and sharing food. When things calmed down, a general flagged a taxi that took him back to his hotel.

Burton is an ambitious and talented young photographer, barely a year out of college when we met, someone who defies all the easy stereotypes of his generation. He makes “content,” has a Web site, and sends out social media updates, but for him these are ways of engaging deeply with the issues he cares about, a means of focusing on them, not flitting across the surface. The trip to Egypt was his first to a conflict zone, one made on his own dime in hopes that an outlet would pick up his photographs, which Bloomberg News eventually did. When we met a few weeks later, he was hatching plans to travel to Tunisia to document the nation’s first democratic election; he had no idea that within days he would be in Japan shooting the triple crisis of earthquake, tsunami, and nuclear meltdown that shook the region.

“I’m not much of an adrenaline junkie or a speed demon. There are a lot of war photographers who just get dare-devilish,” Burton says. Instead, as someone who minored in international relations and politics, he’s more interested in exploring social movements and the complex interactions between government and the governed; it just happens that conflict is how those issues most visibly manifest themselves. “But when I was in Egypt I missed a lot of that. I failed in that sense. Instead I got more nuanced, quiet moments that other photographers may have missed. It was day six, so I wasn’t going to get anything totally new,” he recalls with a hint of regret. “But there were little things, like the subway to Tahrir Square, which wasn’t running, which people had turned into a kind of makeshift dump. There was the whole financial district, which was totally shut down. Or I shot these mini-businesses that kept people fed or were set up to charge cell phones. Just how people got by. There’s a kind of industry pressure, or maybe it’s personal pressure, to get exciting images of conflict and violence, but they’re not always the most interesting.”

The industry to which he refers, Burton admits, is shrinking, and photojournalism, long an uncertain venture, is considered a dying profession by many. He sees himself and his peers trying to squeak through the door, holding on to threads. He knows veteran photographers who complain of new pressures, saying that magazines like National Geographic used to give them six months to a year to complete an assignment and that they now are expected to turn things over in a matter of weeks or a couple of months.

To most freelancers, even a few weeks of steady focus sounds luxurious, since the demands on them are even more intense. Burton self-financed his trips to the Beijing Olympics, Egypt, and Japan and was lucky to find news organizations, such as USA Today and the Associated Press, that licensed his photos after the fact, which meant he earned a couple of hundred bucks a day, allowing him to break even or make a small profit after expenses. (Due to dwindling budgets, established news organizations are turning to freelance writers as well as photographers to cover hazardous international beats, sometimes paying as little as seventy dollars for a story filed from the front lines.)6

On top of money problems are the personal risks that come with going solo in a crisis zone. “One photo editor told me to remember that even when I’m freelancing for an organization, no one will have my back. If I get shot, the editors buying my photos don’t help me because I’m not staff,” Burton says. Burton recently read The Forever War, journalist Dexter Filkins’s account of reporting from Baghdad in 2003 for the New York Times, which took out an insurance policy for Filkins and his fellow journalists that cost fourteen thousand dollars a month, not to mention the armored car that cost a quarter of a million dollars and the security adviser who cost a thousand dollars a day. Burton contrasts that with the story of João Silva, a photographer who was on contract with the paper (a position between freelance and staff) in Afghanistan. In 2010 he stepped on a land mine while accompanying American soldiers patrolling an area near the town of Arghandab and lost both his legs. Silva was fortunate that the Times volunteered to pay for his medical expenses, but the point is that the paper wasn’t required to.

Despite all the hype about the Web enabling people to cut out middlemen and fly solo, Burton made clear during our first conversation that his dream was getting a staff position with a wire service or a newspaper or simply securing some sort of institutional support. “We’re expected to be society’s eyes and ears,” Burton said, but fewer and fewer organizations can justify the expense. “I have really dark days, like when I go a week without getting any work and I just think, fuck this. I can’t do it. Realistically, it’s impossible. Will I be able to eat, have a savings account, have a family?” It’s a labor of love: “It has to be your passion.”

In the summer of 2013 Burton, to his great relief, squeaked through the door. He was offered a staff position at Getty Images, which he happily accepted. The job meant financial stability, health insurance, and the peace of mind that comes from knowing the organization would stand behind him should he run into trouble documenting something dangerous or controversial.

There are people who find other ways to make a living taking photographs, Burton acknowledged, though the alternatives to reportage make him deeply ambivalent. He’s got friends who shoot weddings or fashion spreads. “You can also work for an NGO,” he told me. “More people are doing that, which is basically the same thing as working for a corporation, but you don’t feel as bad about it.” And as is the case for all creative fields with business models in crisis, advertising, public relations, and other corporate projects beckon.

There’s a case to be made that Burton and others like him should content themselves with being hobbyists. To use an analogy dear to new-media thinkers, it’s as though they are trying to break into the buggy whip business when cars are flying off the assembly line. Anyone with a cell phone can take a picture and publish it online, and millions upon millions do, every day. It’s getting increasingly unrealistic, according to this line of thought, to expect to be hired to do something like making images, which are so ubiquitous.

In one possible future, people like Burton and me will be obsolete; we won’t need dedicated photographers and documentary filmmakers because everyone will simply chronicle their own lives, streaming it all for the world to watch. This may sound far-fetched, but consider that some of the most searing and powerful images of recent conflicts were not taken by professional photographers (like Robert Capa during World War II or Eddie Adams in Vietnam) but shot off the cuff. They were shocking candids injudiciously produced by perpetrators of violence as they tortured prisoners at Abu Ghraib or proudly flanked mutilated Afghan civilians, not compositions by outside observers.

In all their raw cruelty, these photos made the despicable aspects of war palpable in a way that the work of professional shooters fails to do. They were immediate, disorienting, and deeply disturbing. Similarly, we’ve been captivated by footage shot by civilians who happened to be on the scene during moments of political upheaval, terrorism, and natural disasters. The effect is often more authentic and gripping than anything an outsider could produce. Nonetheless, depending on idiocy (people’s misjudgments about how the images they produce and share will be received), ego (their conviction that their own lives are worth broadcasting), or chance (the odds that they happen to be standing beside the Hudson River when a plane lands in it or in the room when someone goes on a rampage) for our collective enlightenment is a risky proposition.

Most people would probably agree that there are things we need to see and situations where we can’t count on bystanders to point and shoot or guilty parties to incriminate themselves. Yet many influential new-media thinkers argue that the prospect should be eagerly embraced. In the future they anticipate, legacy news organizations will wither away and be replaced by a wired citizenry, collectively creating and rating user-generated content using collaborative filtering mechanisms, evolving a distribution system more inclusive and engaging than what came before. The line between reporter and reader will blur as a growing number of people create, curate, and circulate content. If journalism continues to exist as such, it will be less about going out gathering facts and reporting from the field and more about “curating” other people’s contributions and guiding a conversation, the focus shifting from content to the connections it produces.

Jeff Jarvis, a self-proclaimed Internet triumphalist, represents this strand of thinking taken to its logical extreme. He believes we are witnessing a massive epistemological shift, the veritable end of the Gutenberg era, with its dependence on print and corresponding emphasis on authorship, linearity, fixity, and closure. Digital technology, he says, disrupts such modes of knowing and the institutions that supported them, unleashing an information flow to which anyone can contribute, empowering the “people formerly known as readers,” and ushering in a democratized age of information gathering.

“We no longer need companies, institutions, or government to organize us,” Jarvis declares, adopting his standard insurgent tone. The future, Jarvis likes to say, is not institutional but entrepreneurial. The Burtons of the world should be able to go it alone, and if they can’t make it, it’s because they’re not innovative enough. (Jarvis, for all his blather, does not live by his own advice. Like many new-media thinkers, he’s employed by an academic department and publishes his books and articles through traditional channels. “Dog’s gotta eat,” he’s fond of saying.)
