Democracy and the Media (5/00)

Introduction:  Watchdogs and Lapdogs

In the late 1960s, shortly after I started work as a reporter for a Vermont daily newspaper, an angry reader complained about my bias in a letter to the editor. "I strongly doubt that he could cover the proceedings of a dog show without incorporating a message," wrote the critic.

I took it as a compliment at the time. And I still do.

Perhaps that’s why I was so pleased to join about 1000 other progressive media-makers in New York City for a Media and Democracy Congress in October 1998. For three days, journalists and activists from across the country gathered to examine the problems — concentration of ownership, the relentless slide into info-tainment, an avalanche of gossip and "news" people really can’t use — and also debate some potential solutions. It was certainly inspiring to be among colleagues and friends who aren’t afraid of the A-word — advocacy.

In the Watergate era, advocacy journalists were often our heroes. Even mass media, although already well on the way to their current degeneration, were still considered by many to be a potential part of the solution. Today, however, most people don’t trust reporters any more than politicians. In a 1998 Roper poll, 88 percent of respondents said corporate owners and advertisers improperly influence the press.

Of course, most journalists deny this, a lack of self-knowledge (or candor) that only makes matters worse. The fact that getting ahead too often means going along remains one of this profession’s most debilitating little secrets. The issue isn’t merely that fewer than ten media giants control the origination of most content, national distribution, and transmission into homes, or that the public is being set up for a commercial, pay-per-view world that will make notions about the Internet’s liberatory potential sound like science fiction — though much more could be said about both developments. It’s also how the national conversation about these and most other issues is defined by our media gatekeepers.

During a lively panel one evening, Nation columnist Christopher Hitchens noted that the word partisan is always used in a negative context, while bipartisan is offered as a positive solution. If that isn’t an endorsement of the one-party state, what is? Reporters don’t call Ronald Reagan or Bill Clinton liars, Hitchens noted, even though these are verifiable facts. But they do say the two are great communicators, which is merely a subjective opinion. The issue, he suggested, isn’t a lack of information — it’s all out there somewhere — but how most reporters think and how the news is constructed.

Which brings us to free markets and competition, the basic tenets of today’s corporate religion. Unfortunately, most journalists are its faithful missionaries. Just one example: When stories about utility deregulation describe it as a "movement to bring competition to the electric industry" that will "let consumers shop for the best deals," that’s a corporate sermon, not a fact. Yes, competition now. But oligopoly later.

Incidentally, the same kind of thing was said — when anything was mentioned at all — about the Telecommunications Act of 1996. But the actual result of that legislation was to reduce competition and sweep away consumer protections. You didn’t hear about it on commercial TV, but media mergers such as Disney/ABC/Capital Cities and TimeWarner/AOL would not have been possible without such corporate-friendly "reform."

We also didn’t hear much about the $70 billion giveaway of the digital TV spectrum, a prime example of corporate welfare. That gold rush began in April 1998, without a whisper in the press. Making the giants pay for this enormous new public resource could dramatically reduce the deficit, or easily fund public broadcasting and children’s TV. But instead, spectrum rights that allow broadcasters to launch up to six new channels each were handed out for free. The only "string" was some vague contribution to be determined at a later date.

Fortunately, there are still some real heroes at work in the press, reporters and activists who insist on being watchdogs rather than lapdogs. During the Media and Democracy Congress, at a ceremony in the Great Hall of New York’s historic Cooper Union, a dozen of them were honored during a lively final celebration. Among the recipients were Karl Grossman, who brought the problem of nukes and weapons in space to public attention — though the mainstream media still ignore this mega-story; Amy Goodman, producer of Pacifica Radio’s groundbreaking news magazine, Democracy Now!; writers Jim Ridgeway, Gary Webb (who broke the CIA-cocaine story), and New York Times columnist Bob Herbert; and In These Times publisher James Weinstein.

Workers at the feisty Detroit Sunday Journal received a "media hero" award for standing up to Gannett and Knight-Ridder, which locked out 2000 union workers in 1996. (Hmm, how did we miss hearing about that epic struggle?) And journalist Mumia Abu-Jamal, who sat on death row in Pennsylvania despite compelling evidence of his innocence, was given a standing ovation for his courage and compassion.

The Congress also looked at some potential solutions: new anti-trust laws to deal with the world of global media, a tax on advertising — including those millions in "soft" and "hard" political contributions, which mainly end up in the coffers of media corporations — to adequately fund public broadcasting and public access, corporate divestment of news divisions, and a ban on children’s advertising, to name but a few. But most people there were well aware that not much of that agenda would come to pass without a public groundswell. And that’s sure to be resisted (largely through omission) by the media giants. But at least we have some journalists who go beyond official doublespeak and independent media that bring people news the mainstream sources don’t see fit to broadcast or print.

To paraphrase an old conservative motto, advocacy in defense of democracy is no vice, and objectivity in the face of corporate tyranny is no virtue. Or, as one of my favorite muckrakers, Lincoln Steffens, once put it, "This is all very unscientific, but then, I am not a scientist. I am a journalist." At the conclusion of the Congress, filmmaker Michael Moore lumbered down the aisle of the Great Hall, looked out at an eager audience, and told them a "horrible" truth: His new film, The Big One, would be released by Miramax, which is owned by the evil empire itself — Disney.

"Why Disney?" someone shouted. "Why not?" he shot back, suggesting that maybe Disney understands something that most media "radicals" don’t get. Namely, that the vast majority of US media consumers are a lot more progressive than we assume. And as long as there’s an audience, Disney will take advantage of it. So, why shouldn’t we do the same, Moore seemed to say. At the time, he was also developing a late night TV show for Rupert Murdoch’s Fox network. But that obvious long shot didn’t come through.

Though somewhat derivative of his earlier, groundbreaking documentary, Roger & Me, Moore’s follow-up was quite entertaining and just as radical. During a 46-city tour for his book, Downsize This!, he met with underpaid Borders workers, confronted corporate flaks, talked with people in fast-food joints about downsizing, and finally got to go one-on-one with sweatshop multimillionaire Philip Knight of Nike.

Asked why no Nike sneakers were made in the US, Knight argued lamely that American workers just don’t want to make shoes. But even a video plea for jobs from workers in Flint, Michigan, Moore’s hometown, failed to persuade him. Here was one relentless — and often hilarious — filmmaker/advocate. But the bottom line in The Big One was that, even if you get into corporate headquarters, no one’s really home. By the way, the name of the film wasn’t a self-reference, though it seemed to fit. Rather, it was Moore’s tongue-in-cheek bid to rename the United States.

"Apathy is the curse of civilization," muckraking journalist George Seldes once wrote. On the other hand, the passion and engagement revealed in documentaries like this, and the best of independent media, may be civilization’s best hope. Despite what most TV and movie fare suggest, an automatic weapon isn’t the tool most people use to pursue justice or uncover the truth. More often than we think, they fight the system through creative, nonviolent acts of resistance — and sometimes they prevail.

Part One:  The Great Free Speech Robbery

About two centuries ago the leaders of the new United States of America struggled to create a document that was acceptable to the then semi-autonomous states. One of the new system’s primary architects, James Madison, expressed special concerns during these debates that, unless specifically protected, the rights of individual citizens would be vulnerable. Nevertheless, the US Constitution, as ratified in 1788, provided no explicit protection for rights such as free speech.

Undaunted, Madison continued to protest that the establishment of a democratic system wasn’t sufficient in itself to protect the rights of all to participate. On the contrary, he noted that "the invasion of private rights is chiefly to be apprehended not from acts of government contrary to the sense of its constituents but from acts in which the government is the mere instrument of the major number of the constituents."

Madison had come to believe that government "should be disarmed of powers which trench upon those particular rights (of speech, press and religion)." Here lay the basis for his campaign to add a Bill of Rights to the Constitution. Opponents argued that such additions were unnecessary, since the federal government had been given no power to suppress speech or other rights. Noah Webster, for example, argued that inclusion of inherent rights such as freedom of speech would be as ludicrous as asserting in the Constitution the right to hunt and catch fish on one’s own land, to eat and drink, or to sleep as one chooses. No doubt Webster would be shocked to find that several of these "rights" have since been called into question.

Not having the benefit of hindsight, Madison simply reminded his critics that "freedom of the press and rights of conscience, those choicest privileges of the people" were unguarded in the British constitution, and, as the new Americans knew well, were consistently violated. To avoid repeating Britain’s mistakes in its recently liberated colonies, he proposed that these fundamental rights be placed beyond the reach of government, the assumption being that an overbearing regime was the main threat to freedom. Madison’s basic assertion was that the free exchange of ideas was a personal right beyond the scope of government authority. He added to that his strong objection to majoritarian control of speech. The greatest danger to liberty, he argued, was to be found "in the body of the people, operating by the majority against the minority."

Despite such warnings and the much-touted protections provided by the First Amendment, however, various developments in the 19th and 20th centuries considerably undermined "those choicest privileges of the people." Possibly the most damaging was the revolutionary Supreme Court ruling, in the 1886 Santa Clara County v. Southern Pacific Railroad case, that corporations were persons within the meaning of the 14th Amendment.

The Amendment had been passed after the Civil War to assure that no state could abridge the privileges of citizens or deny equal protection under the law. It was the most significant legal change of the Reconstruction era, subsequently serving as a basis for more Supreme Court cases than almost any other provision. Ostensibly, the 14th Amendment was designed to protect the newly won freedom of Black Americans and make the first eight amendments to the Constitution applicable to the states. But the ambiguity of its language allowed the Supreme Court to interpret the law narrowly in terms of citizenship rights, while simultaneously extending "equal protection" to businesses. Ultimately, the Amendment was transformed into an important tool for vested interests.

In the heat of the industrial revolution, growing US corporations were eager to limit government involvement in their expansion plans. With the help of savvy laissez-faire lawyers, they began by using the new Amendment as a bar against social legislation. But thanks largely to Roscoe Conkling, the Santa Clara decision reached much further.

Conkling, a lawyer and leading Stalwart Republican whose presidential aspirations were thwarted in 1876, was a major force in US politics during this period. His feud with President Garfield over political appointments still raged when Garfield was assassinated by one of Conkling’s misguided supporters in 1881, putting Conkling protégé Chester Arthur in the White House.

When the Santa Clara case reached the Supreme Court, Conkling, who had helped write the 14th Amendment two decades earlier, persuaded the justices to accept his interpretation of what it meant. The drafting committee had "conspired" to extend equal protection to corporations, he said, by using the word "person" rather than "citizen." Though this conspiracy theory was later exposed as a fraud, the Court accepted his argument and broadened the application accordingly. The impact on subsequent rulings regarding "corporate speech rights" has been profound.

Almost a century later, after public pressure forced the US Congress to limit contributions and expenditures for political campaigns in 1971, the Supreme Court used free speech as a basis for striking down limits on spending. Such limits, they argued, would reduce the "quantity" of political speech. Defining the spending of money as a form of speech, the majority ruled that only contributions could be restricted. Despite the emergence of equality as a basic social value, it wasn’t to be applied to speech. Speakers without money might be "leveled up" slightly by limited access requirements, but wealthy speakers could not be "leveled down."

The Court also ruled that, since corporations have speech rights, they can’t be prohibited from spending money to influence the outcome of a vote, whether or not the outcome would directly affect them. In First National Bank of Boston v. Bellotti, the majority ruled that speech can’t be restricted simply because the source is either a corporation or a union. In his dissent, however, Justice White noted that the self-expression function of the First Amendment "is not at all furthered by corporate speech." If ideas aren’t a product of individual choice, he argued, constitutional protection can be limited, adding that "the restriction of corporate speech concerned with political matters impinges much less severely upon the availability of ideas to the general public than do restrictions upon individual speech." White also stated plainly that corporations are artificial entities created to make money, and gave weight to the widely recognized need "to prevent corporate domination of the political process." As long as freedom of expression was essentially protected, he saw no problem in some "curtailment of the volume of expression."

In his own dissent from the majority in that case, Justice William Rehnquist mentioned that Congress and at least 30 states felt that some restrictions on corporate political activities were justified. Concluding that the purposes of corporations can be fulfilled without the liberties of political expression, he quoted Chief Justice John Marshall, who defined a corporation as "an artificial being, invisible, intangible and existing only in contemplation of law."

Part Two: A Corporate Information Order

During the last century, particularly its final half, technological innovations made electronic media the dominant conveyors of basic information. Seeing and hearing truly became believing. Yet, despite the dangers posed by these powerful tools, ranging from the potential for manipulation of mass opinion and actions to the drowning out of smaller voices, the main response of most governments has been rules and regulations that are either downright discriminatory or merely ineffective.

What federal and state action has clearly failed to do is slow the consolidation of economic control. As a new century begins, nine corporate giants — General Electric, Sony, AT&T/Liberty Media, Disney, Time Warner, News Corporation, Viacom, Seagram, and Bertelsmann — own most of the global broadcast media, along with most major newspapers, magazines, and recording and film companies. Though broadcast and cable channels continue to multiply, and magazine racks are filled with colorful covers, the surface diversity masks increasingly centralized ownership of most output.

Beginning in the late 1970s, the Gannett Corporation, which came to own more than 90 daily newspapers (including USA Today), eight television stations, 15 radio stations, and production companies, positioned itself to become one of the giants in the print, database, and video markets. But this empire was soon dwarfed by Rupert Murdoch’s News Corporation, which not only acquired newspapers and 20th Century Fox with its vast motion picture archives, but also TV Guide and other publications purchased for $3 billion from Walter Annenberg’s Triangle Publications.

An equally awesome media conglomerate was created by the merger of Time, Inc., already well positioned in the global information marketplace, and Warner Communications. Then, as the year 2000 dawned, Time-Warner was itself merged with America Online, the leading Internet company. This $350 billion deal set off speculation that the world’s largest media and entertainment entity would revolutionize global communications.

Other examples include MCA, one of the media giants purchased in the 1990s by the Japanese, and Gulf + Western, which once listed Paramount Pictures and Simon & Schuster among its numerous holdings. In 1989, G+W decided to change its name to Paramount Communications and shed its non-media industries; the idea was to concentrate fully on winning the global communications race. But in 1994, Viacom, which owns movie houses, Blockbuster video, Spelling Entertainment, and cable channels like Showtime, Comedy Central, MTV, VH-1, USA, Nickelodeon, and Lifetime, bought Paramount for $10.4 billion. At the end of the decade, Viacom announced a merger with CBS, while MCI, which had already merged with WorldCom, announced a $122 billion deal to acquire Sprint, combining the second and third largest long distance companies.

In short, the 20th century concluded with an unprecedented surge of media and telecommunication mergers and acquisitions.

Until the emergence of the Internet, the most dynamic sector of the broadcasting industry was cable. In 1979, about 14 million US homes were wired; as of 1990 there were 53.9 million subscribers, almost two thirds of all homes with TV sets. But aside from a few bright spots, notably CNN’s "crisis" coverage and C-SPAN’s diligence, the proliferation of channels mainly meant more of the same — reruns, shopping, religious, and movie channels, and feature-length commercials. Despite the largesse of particular operators, the absence of an established "right to speak" on cable TV is especially disquieting in view of cable’s quasi-monopolistic position in most places. 

In less than 30 years the media environment of the US and, following its example, much of the world has been transformed. Television has turned political debate into a war of packaged sound bites. Blatant commercialism and violent cartoons have altered the perceptions and values of millions of children. Multinational companies and ad agencies mold consciousness, hammering in certain messages and suppressing others. Global management of information by a corporate elite that controls every step of the process now poses as great a threat to self-government as pollution does to the environment.

As the AOL-Time Warner merger suggests, even the prospect of a participatory renaissance ushered in by the Internet may be vastly overrated. According to media historian Robert McChesney, the major beneficiaries of the so-called Internet Age will be the investors, advertisers, and a handful of media, computer, and telecommunications corporations. AOL is already the largest Internet service provider in the US, and owns Netscape, one of the most widely used browsers among ‘Netizens’ worldwide. Time Warner’s extensive fiber-optic networks should give AOL a significant advantage: the ability to offer service 100 times faster than traditional phone lines.

In 1998, there were 120 million Internet users worldwide, and it has become increasingly obvious that this new medium is the fastest-growing tool of modern global communication. But less than three percent of the world’s people — mainly male, middle class, and fluent in English — are currently part of the new cyberculture. The US has more computers than the rest of the world put together. South Asia, home to nearly a quarter of humanity, has less than one percent of the world’s Internet users. Thus far, despite the use of computers and e-mail as tools to mobilize political action and promote progressive campaigns, the "information age" is generally shaping up like a new era of information imperialism.

Looking specifically at freedom of speech and the press in the US, many of the problems can be traced to a basic, obsolete notion about the source of the danger. Although government intrusions are far from irrelevant, they no longer constitute the primary threat; that honor must go to corporate entities, including the institutional media themselves, which have exploited basic rights and snuffed out the personal right to speak in the process. In their effort to guard against government abridgments of speech, Congress and the courts have left most citizens at the mercy of impersonal economic forces whose institutional autonomy and ability to widely disseminate their views have undermined diversity in the marketplace of ideas. Any voice that isn’t "amplified" through broadcast or print is unlikely to be audible. Or, to paraphrase a proverb, if you can’t be heard, have you actually spoken? 

The evolution of varying standards of speech protection for different modes of communication has given the government some leverage in negotiations with each. Overall, however, the promise of First Amendment protection has led to an assumption that economic entities are entitled to the same rights as human beings. It has even been argued that they are involved in speech that is more vital to democracy than the speech of individuals. The rationale has cut both ways, occasionally justifying refusal of access to the media and, less often, requiring media to air a message.

As a result of this fragmentation of speech rights, most people have had their freedom redefined and largely curtailed on the basis of the medium they wish to use. Those who operate outside the institutional media are consigned to the status of "listeners," "consumers," "audience," or occasionally "sources." Thus, the progressive mechanization of mass media, combined with economic centralization, has led to a system of mass communication that is largely impersonal and unresponsive. Freedom of speech has become an institutional right, and individual speakers have been turned into interchangeable objects. 

Part Three:   Taking Control of New Tools

At a "public interest" summit held in 1994, representatives from government, corporations and public interest groups assessed the emerging shape of what was then being called the National Information Infrastructure, or NII. Among the questions they asked — but didn’t fully answer, were, Who will build it? Who will pay for it? And who will be left out?

Although some participants were optimistic about the prospects for education and virtually universal access, others warned about "information apartheid," the further division of the US — and the world — into haves and have nots. Within a few years, this became known as the "digital divide," a euphemism as useful to the media establishment as "collateral damage" has been to the military. Legislation to prohibit "electronic redlining," and to reserve a portion of the space on advanced telecommunications networks for non-commercial uses, was already in the pipeline. But it was also fairly clear that the evolution of new technologies would likely be dominated by the same players who already owned the nation’s communication systems.

Rumblings about a new round of media mergers meanwhile provided a glimpse into the future. At the time, it looked highly possible that one or more of the three major TV networks in the US would soon change hands. General Electric already owned NBC and its cable spin-offs; had investments in Bravo, AMC, and even the Independent Film Channel; and was a major partner in the PrimeStar satellite operation. Now Time-Warner, already one of the world’s largest media empires, along with Viacom and the Walt Disney Co., was looking to purchase a network. Before long Disney bought ABC, and Viacom made its play for CBS.

In May 2000, regulators approved the Viacom-CBS merger, a $45 billion deal involving 38 TV stations and 162 radio stations, as well as movie studios, book publishing, theme parks, and cable channels MTV and Nickelodeon. Since the new company’s TV stations would reach over 40 percent of the national audience, more than current federal rules allow, the FCC gave the company a year to scale down its holdings to meet a 35 percent cap. It also faced the prospect of giving up Viacom-owned UPN, since media entities currently aren’t permitted to own two networks. But the FCC agreed to review the dual network ban, since Viacom claimed UPN would falter without a powerful parent corporation.

As Dennis Mazzocco noted in Networks of Power, deregulation of cable television, combined with more programming and distribution partnerships, means that "all of cable television will soon be controlled entirely by the same media conglomerates that presently dominate broadcasting." In the early 1990s, fewer than 20 players ran mass communication worldwide; by 2000 that number had shrunk to fewer than ten.

With the emergence of the "information superhighway," these same interests vied in the US with telephone companies for access to an enormous new market. New Jersey Bell asked the FCC for permission to link its new fiber optic system with companies offering "Video Dialtone" services. Clearly, cable TV, telephone systems and computer networks would soon look very different. Whether phone companies would even be subject to public access requirements wasn’t clear. On a "superhighway" built by AT&T, Time-Warner/AOL, Viacom, Capital Cities/ABC, MCI and Sprint, the options would be severely limited by the political biases and market mentality of the info-media behemoths.

Corporate media critics like Mazzocco have suggested using community-based pressure as a lever to bring the media under public control through non-profit, public foundations. Another approach is mounting a campaign to charge operators for their use of the airwaves — a usage fee based on net profits, which could be used in part to establish a viable alternative. Advertisers might also be taxed to pay for public services, as they are in Hawaii, New Mexico and Washington.

But the problem can’t really be solved by demanding money for an "alternative" to corporate media. Manipulation of global consciousness is a profound political, cultural, and technological problem that altruistic alternatives are unlikely to overcome on their own. Even if 20 percent of the space on US telecommunication networks is reserved for government and the public, the remaining 80 percent will be programmed by corporations and polluted with propaganda. Two-way communication may remain affordable, but contact with a larger public will remain elusive for all but a clever few.

As mentioned already, the main threat to free speech and self-government is control of information technology by enormous economic interests that use the media to manipulate what we think and buy. Thus, in order to open up mass media, we may have to claim them as essential public forums (see Part Four). Combining local pressure with state legislation and legal action based on constitutional rights, there is still hope for a truly "free marketplace of ideas" along the public information highway, and a chance to make that road as open and creative as possible.

Paradoxes of the Information Age

It’s no accident that George Orwell made the telescreen one of the primary symbols of a totalitarian society in 1984. Even when he wrote that prescient novel in 1948, the importance — and dangers — of telecommunications were already obvious. Today, information technologies are bringing rapid and fundamental change to almost every aspect of society. 

In his book, What Are People For?, Wendell Berry rejected the notion of computers as a liberatory tool, pointing to their cost, reliance on resource exploitation, and use of electrical energy. In Four Arguments for the Elimination of Television, Jerry Mander made an even more devastating critique: "Television produces such a diverse collection of dangerous effects — mental, physiological, ecological, economic, political; effects that are dangerous to the person and also to society and the planet — that it seems to me only logical to propose that it should never have been introduced, or once introduced, be permitted to continue."  

And yet, television, computers, and related information technologies also offer opportunities for global democratization and empowerment. During the late 1980s and early 1990s, for instance, VCRs served as revolutionary tools in Poland, fax machines helped open up politics and economics in the Soviet Union, and audio cassettes kept the hope of freedom alive in South Africa. Beginning in 1994, laptop computers helped secure international support for the Zapatista movement.

More recently, activists resisting globalization across borders have used the Internet and a new network of independent radio, electronic, and print outlets to start building a movement for global justice and democracy. In short, small, accessible, and affordable technologies can help people to challenge the "knowledge" monopoly of elites.

Perhaps the best guarantee that information will be used on behalf of humanity is to work for its free flow. That isn’t to say "more" is always "better." But repressive governments and elites are normally the first to oppose broad access to information. After all, information is power, and open societies are usually characterized by high per capita availability of televisions, telephones, and computer terminals.

Instant communication clearly opens up possibilities for social change. Like Gutenberg’s invention of moveable type, modern information processing creates at least the possibility of widespread information literacy. Moveable type took the printed word beyond the privileged few; telecommunications and computers could make information accessible to all. They might even help spur a shift in values from uniformity to diversity, from centralization to local democracy, and from organizational hierarchy to cooperative problem-solving units. 

But this depends largely on the growth of a social movement that promotes self-management of information, along with the cultivation of new skills. One of the main skills needed is the knack of making connections between disparate bits of information. Effective media organizers are often techno-generalists able to create knowledge out of large information flows, and also pattern-finders who work easily in a team environment. 

We have only begun to experience the Information Age. The personal computer revolution is only a few decades old. Even bigger changes lie ahead, some dangerous, others with liberating potential, some with both. Those who become "literate" can help harness new technology to extend freedom and meet the needs of the planet and humanity.

There are risks and drawbacks, of course. In addition to the potential for social isolation and misinformation linked to dependence on computers, production of components often makes use of extremely toxic chemicals such as chlorine, arsenic, and phosgene. Groundwater contamination by high tech companies has yielded dozens of Superfund clean-up sites and been implicated in pregnancy and childbirth complications. The high-tech industry is also the world’s largest single source of CFCs. Chronic exposure to low-frequency radiation from computer screens has been linked to increased incidence of cancer and other illnesses — one of the many stories suppressed by corporate media.

As researcher Ron Chepesiuk noted in a 1999 Toward Freedom expose, the $150 billion computer chip industry has been described as "the pivotal driver of the world economy." More than 900 plants are located in Arizona, Massachusetts, Virginia, Texas, New Mexico, Oregon, Vermont, and Idaho, and throughout Asia, Europe, Latin America, and the Caribbean. But prodigious growth has come with a hefty environmental price tag. Few industries use so many toxic chemicals in manufacturing their products. Producing 220 billion silicon chips a year currently requires the use of highly corrosive hydrochloric acid; metals such as arsenic, cadmium, and lead; volatile solvents like methyl chloroform, benzene, acetone, and trichloroethylene (TCE); and a number of very toxic gases.

As of 1999, Silicon Valley had the country’s largest number of EPA Superfund Priorities List sites (29), and more than 100 different contaminants had been detected in local drinking water. In the past, much of the liquid waste from chip making in Silicon Valley was stored in underground tanks, many of which leaked toxic waste into groundwater supplies. Toxic gas is also a problem. In 1992, for example, one San Jose neighborhood had to be evacuated after toxic smoke poured out of a local chip plant. (For more, see TF, Nov. 1999, "Toxic Chips.")

Meanwhile, intense global competition is accelerating the pace of change in the tools and materials used during the manufacturing process. In the 1970s, a new technology typically took six to eight years from research to full manufacturing. Today, the industry develops a new chip making process about every two to three years. Intel, the giant computer chip maker, reports that each of its factories makes an average of 30 to 60 significant changes in operations each year in order to ramp up production of new types of chips.

While hundreds of new chemicals are being introduced annually, adequate toxicological assessments almost never precede their introduction into manufacturing settings. You might even say that the workers are being used as guinea pigs. Many of the manufacturing processes take place in closed systems; even so, exposures to harmful substances can be difficult to detect unless monitored daily.

At the end of the 20th century, at least 127 new semiconductor fabrication plants were in various stages of planning and construction worldwide, with the total expenditure expected to exceed $115 billion. At the same time, environmental, workers’ rights, and human rights activists were beginning to detect serious health problems at semiconductor plants in foreign countries. In Taiwan, for example, 57 Filipinos, working at a Philips Electronics plant from July 1996 to December 1997, got sick. Five of them died, reportedly the result of a disease known as Stevens-Johnson Syndrome (SJS). The rest were fired.

It all makes one wonder: Are the potentials worth the price? And, can we use a technology while also pressing to change it? Hopefully, through decentralized access to information and global networking among activists, the use of computers and telecommunication devices can help promote social change and, at the same time, changes in production processes and in the use of technology itself. If not, digital imperialism, along with enormous health and social consequences, becomes even more likely.

Concentration of information and the emergence of high-tech sweatshop conditions would be tragic outcomes of this potentially revolutionary time. After all, these technologies at the very least permit cooperation, group action, global consciousness, and decentralized, small-scale production. They can increase our productivity and reduce our travel time. Perhaps they can even reduce the gap between the "in-the-knows" and "know-nots." 

Marshall McLuhan, prophet of the Information Age, once provided a hopeful and relevant diagnosis. "Our new environment compels commitment and participation," he wrote. "We have become irrevocably involved with, and responsible for, each other." Let’s hope he’s right.

Part Four:  Toward Media Self-Management

To truly reclaim equality and freedom in the "marketplace of ideas" — and along with both, the personal right of self-expression — we inevitably come to the issue of autonomy. Liberty of expression, widely valued for its contribution to the search for truth and the functioning of a self-governing society, involves a conscious choice by each person exercising this freedom. Without this basic form of self-management, democracy can’t exist. 

There’s really no such thing as total autonomy. Whether we acknowledge it or not, our existence is influenced by our bodily needs and impulses, cultural norms and values. Without air we perish, and without love we become the brutes that "enlightened" thinkers like Thomas Hobbes claimed we were. Yet autonomy is a real and powerful aspiration, pulling us toward self-sufficiency, moral courage, and the full development of our unique inner selves. It’s the quest for identity, the search for self-actualization that has been studied and debated by psychologists, theologians, and social theorists.

Philosopher Immanuel Kant saw autonomy as the spontaneous action of a mind molding experience and choosing goals. In political terms, it is self-government, the sovereignty of the group, community, or people. Autonomy doesn’t ignore or defy the needs of an organized society; rather, it is tied to the belief that social stability depends on diversity. And diversity must be channeled when necessary to prevent destructive fragmentation. In essence, autonomy incorporates the concepts of self-regulation and equilibrium. Any society that values equality and freedom must encourage the autonomous participation of its citizens. 

The original Greek idea of autonomy was self-rule. In more recent times, however, it has been stripped of its ethical content and defined simply as a form of independence, usually economic in nature, or as an institutional attribute. This is especially deceptive, since selfhood is very much linked both with individual competence and with a person’s claim to power within society. Libertarian philosopher Murray Bookchin relates this idea to the civic concept of self-management. "Self-rule applies to society as a whole," he writes. "Self-management is the management of villages, neighborhoods, towns, and cities. The technical sphere of life is conspicuously secondary to the social. In the two revolutions that open the modern era of secular politics — the American and French — self-management emerges in the libertarian town meetings that swept from Boston to Charleston and the popular sections that assembled in Parisian quartiers."

When people lack a sense of self-worth and dignity, however, pious talk about the value of self-government rings hollow. Citizens who don’t — or believe they don’t — have the right to self-expression and meaningful choice will not indefinitely remain active in democratic processes. In this context, we must ask whether it is mere coincidence that the era of growing media influence in the political process has also been a time of declining political participation. It is chic to conclude that people are simply "fed up" with politics. In Why Americans Hate Politics, E.J. Dionne, a journalist himself, defines the situation as a revolt against public debate that avoids real solutions to problems. From his privileged perch, Dionne apparently can’t see the possibility that what also turns off voters is being excluded from the debate. 

Courted by politicians, advertisers and pollsters solely as objects of persuasion, most people are left with the distinct impression that nothing they say could have much value or impact. The problem is that a sense of self-worth grows from successful social interactions. When self-expression fades as a personal right, so too does the belief in democratic self-government as a functioning reality. Thus, the failure to respect and support the autonomy that underpins freedom of speech has become a major source of eroding faith in democratic government. 

In place of personal autonomy, a new value has been promoted over the last several decades — institutional autonomy. The progressive mechanization and centralization of social and political affairs has combined with the notion that institutions, whether corporations, unions, or special interest groups, can claim rights once reserved for individuals. Economic entities, particularly in the US, demand protection of their speech rights either as representatives of the public or because law grants them the status of "persons." In the case of the institutional media, the argument rests on their role as private guardians of the public interest. 

Most of these institutions proclaim a dedication to the preservation of diversity. And yet, without a wide variety of self-expressive speakers who bring a stream of new ideas into the marketplace, diversity becomes an illusion. Institutional autonomy instead creates a closed market in which ideas, like prices, are fixed.

Almost without noticing it, we have permitted the foundations of self-government to be undermined. 

And the way out? In my view, it lies in reasserting the personal right of self-expression. This begins with a clear-eyed view of our basic rights and the media’s actual purpose. The main function of the institutional media, most would agree, is the communication of information and ideas. Since the public nature of the media is also widely accepted, we can reasonably conclude that most individual speech isn’t inherently intrusive. Furthermore, both the print and electronic media describe their work as disseminating information and viewpoints that are necessary for self-government.

Approaches may vary, from reprinting press statements to investigating corruption. Still, all mass media enterprises capitalize on the image that they reflect the mood, sentiments, and activities of the public. And most of the time they depend on members of the public as their primary sources. In fact, without the public, what would there be to report? 

The mass media, in short, are mainly private enterprises with an essentially public function. The issue, then, is whether and how these institutions can be made more available to a wider variety of speakers. As long as the presence of more participants doesn’t prevent them from performing their basic functions, it is certainly a valid question to ask. 

Broadening access to quasi-public forums is clearly consistent with the spirit of the First Amendment. The real issue is whether it’s possible. As every student is told, Congress is prohibited from abridging freedom of speech and of the press. It isn’t prevented, however, from taking affirmative steps to enhance the ability of individuals to gain access to public forums. This need not mean that the rights of "listeners" conflict with the rights of the press as "speakers." Rather, we can guarantee that individual speech isn’t snuffed out because powerful media owners believe that only they, their employees, or their friends deserve access. 

The idea that government can take action to ensure relative equality in the ideas marketplace has been explored by the Supreme Court. In a 1972 case, Police Department of the City of Chicago v. Mosley, a man who had been picketing peacefully near a school to protest discrimination challenged a city ordinance that prohibited picketing within 150 feet of a school while classes were in session — unless the pickets were involved in a labor dispute.

The case posed this question: What is the relationship between equality and the First Amendment? Noting that the ordinance was a form of censorship based on subject matter, Justice Marshall wrote, "Above all else the First Amendment means that government has no power to restrict expression because of its message, its ideas, its subject matter, or its content." He went on to say that the principle involved was "equality of status in the field of ideas."

Mosley set out two complementary aspects of access to public forums. It prohibited government from judging the content of speech, and also provided ground rules for channeling expression based on "time, place and manner restrictions." In other words, any decision to exclude speakers should be the minimum necessary. For example, if a community wants to protect its children by restricting TV advertising on Saturday mornings, theoretically it can do so if the action is proven to be the "least means" of reducing the exposure of young people to socially destructive messages. Although the Court has found it difficult to set clear boundaries for access, the concepts of "least means" and "content neutrality" do provide a basis for setting limits on corporate and personal speech.

The issue isn’t whether each message makes a significant contribution to self-government. Rather, this approach assumes that all messages are valid expressions of individual autonomy, contributing to the speaker’s sense of self-worth. Media managers can set time, place and manner restrictions — rules dealing with length, distribution throughout the day or the publication, and repetition. Communication can even be barred at certain times, as long as the decision doesn’t discriminate based on content. 

Such access to dominant public forums might be called freedom of "amplified speech." In practical terms, it means expanded citizen access to newspapers, compensating for the virtually unlimited access afforded to corporations and other large institutions. The rights of reporters and editors — society’s "informed speakers" — should be brought into balance with the rights of non-journalists, possibly even enhancing the role of the press in checking abuses of power. Electronic media should also have the right to impose restrictions, but these must apply equally to wealthy and poor speakers, to those with views that agree with the owners and those with ideas they oppose. For the cable and Internet industries, amplified speech obviously involves more equal access and basic equipment for anyone who wants it. The articulate and technically knowledgeable will have an advantage at first, but experience will reduce disparities in time. 

In each case, the right applies only to individuals, since freedom of expression is fundamentally a personal right and freedom of the press really means the right of citizens to use various means of communication without prior restraint. Institutions shouldn’t be prohibited from issuing messages and opinions, but their speech ought to receive no special protection or treatment. 

There remains another big question: Who decides? Hopefully, the speakers themselves or their communities can make most of the choices. Every person should have the basic right to choose whether, when, and about what to speak. True participation can never be compulsory. On the other hand, the gates of a public forum shouldn’t be locked when someone wants to use it. Relying again on the "least means" test, most disputes can be resolved at the local level.

Such solutions will usually be less expensive and time-consuming. When local action is impractical, however, the next level of government will have to intervene. In any case, the goal is to find the way to leave future options open, in the community, across the nation, and around the world.

But if individuals and communities are to assert such rights, a new form of literacy will have to be cultivated. The right to self-expression has little value unless the message can be effectively conveyed and easily understood. This is a complex social issue, growing out of the technological revolution of the late 20th century, and must be addressed by all our institutions, particularly schools. Working as teachers and resource providers, along with local government and educational organizations, media institutions could be instrumental in developing a citizenry with the capacity for full self-expression. If the young are to become effective and self-regulated speakers, if they’re going to develop a sense of self-worth and make meaningful contributions to a self-governing society, true media literacy — not the false consumer consciousness so effectively promoted today — will have to become an intrinsic part of their education. This area of study includes a critical awareness of the role mass communication plays in society, techniques of speaking, writing, programming, and visual presentation, and an understanding of how media affect opinion formation and the democratic process. If access to the "ideas" marketplace is to be meaningful, skill development must start at an early age. 

Affirming Freedom

Speech is usually considered a negative right; that is, a restriction on government’s ability to restrain communication by the people or the press. Yet any fair analysis of contemporary problems reveals that the great threat today isn’t mainly government but instead the manipulation and abuse of media by giant institutions with enormous economic and information power at their disposal. Protecting free speech therefore requires affirmative action to re-open the marketplace of ideas. Failure to fulfill this responsibility leaves the power to inform and, ultimately, to censor, self-censor, and control in the hands of a few private interests.

Although institutional media claim special rights due to their important public function, they normally deny that they have a responsibility to keep their doors open. In the face of such hypocrisy, intervention is sorely needed if the right of self-expression is to have any real meaning in the years ahead. 

The survival of a free society depends ultimately on the actions of self-governing people. But people can’t manage their society, or their own lives, if they lack the sense of dignity that comes from exercising the right of self-expression. No government can guarantee democracy. No business can manufacture it. And the media can’t sell it. The best any of them can do is to keep the door open. If they simply do that, the vast potential of humanity will take care of the rest, and the promise of a self-governing society may yet be kept.

Cynics will complain that government can’t be trusted, or that humanity simply isn’t capable of self-rule. Sectarian ideologues, on the other hand, will say that all reforms are futile and the only way to transform society is through a disruptive (and inevitably destructive) break with the past. Both approaches carry the burden of despair, a loss of faith in the possibility of moving, day by day, toward a New World. What cynics and ideologues lack is hope, that richness of spirit essential for any lasting change.

Hope offers the prospect that communities can co-create a benign social order. It fills us with faith that people can discover their better selves. Hope fuels our most inspiring visions, and illuminates the path from here to there.

Greg Guma is the editor of Toward Freedom, author of The People’s Republic: Vermont and the Sanders Revolution and Passport to Freedom: A Guide for World Citizens, and a member of the National Writers Union.