Children of the ‘80s Never Fear: Video Games Did Not Ruin Your Life

Inside the ridiculous media panic that scared parents silly

In the early 1980s, spurred by the incredible popularity of Atari, Space Invaders and Pac-Man, everyone seemed to be talking about video games, if not obsessively playing them. A 1982 cover of Time magazine screamed “GRONK! FLASH! ZAP! Video Games are Blitzing the World!” If you turned on the radio that year, you’d likely hear “Pac-Man Fever,” a Top 40 hit by Buckner & Garcia. Children begged their parents to buy them an Atari for Christmas or to give them a few quarters to drop in Pac-Man’s coin slot. Hollywood movies like Fast Times at Ridgemont High presented the video arcade as a quintessential teenage hangout.

Decades later they give off a more innocent, retro-cool vibe, but when they were new, arcade video games were treated as objects of urgent fascination and concern. Kids regarded them as the ultimate playthings and competed to master them, to set the high score or the record for the longest time playing Asteroids. Some grown-ups enjoyed them too. But many in positions of authority feared the electronic amusements were harmful and wanted to ban them or regulate their use.

Other adult authorities saw video games not just as diversions or toys, but as essential tools for training young people for a future of high-tech, computerized work and leisure. A magazine story framed the issue as one of essential education in the technology of tomorrow: “Is it somehow more valuable to learn Missile Command than to learn English?”

In Atari Age: The Emergence of Video Games in America (MIT Press), Michael Newman traces the medium’s rise from ball-and-paddle games to hits like Space Invaders and Pac-Man.

This moment in the history of pop culture and technology might have seemed unprecedented, as computerized gadgets were just becoming part of the fabric of everyday life in the early ’80s. But we can recognize it as one in a predictable series of overheated reactions to new media, a series that goes back all the way to the invention of writing (which the ancients feared would spell the end of memory). There is a particularly American tradition of becoming enthralled with new technologies of communication, identifying in them the promise of future prosperity and renewed community. It is matched by a related American tradition of freaking out about the same objects, which are also figured as threats to life as we know it.

The railroad and the telegraph in the 19th century, and novel 20th-century technologies like the telephone, radio, cinema, television and the Internet, were all greeted by a familiar mix of high hopes and dark fears. In Walden, published in 1854, Henry David Thoreau warned that “we do not ride on the railroad; it rides upon us.” The technologies of both centuries were imagined to unite a vast and dispersed nation and to edify citizens, but they were also suspected of trivializing daily affairs, weakening local bonds and, worse yet, exposing vulnerable children to threats and hindering their development into responsible adults.

These expressions are often a species of moral outrage known as media panic: a reaction of adults to the perceived dangers of an emerging culture popular with children, which the parental generation finds unfamiliar and threatening. Media panics recur in a dubious cycle of lathering outrage, with grownups seeming not to realize that the same excessive alarmism has arisen in every generation. Novels of the 18th and 19th centuries were accused of confusing young women about the difference between fantasy and reality and of exciting their passions too much. In the 1950s, rock ’n’ roll was “the devil’s music,” feared for inspiring lust and youthful rebellion and for encouraging racial mixing. Dime novels, comic books and camera phones have all been objects of frenzied worry about “the kids these days.”

The popularity of video games in the ’80s prompted educators, psychotherapists, local government officeholders and media commentators to warn that young players were likely to suffer serious negative effects. The games would influence their aficionados in all the wrong ways. They would harm children’s eyes and might cause “Space Invaders Wrist” and other physical ailments. They would be addictive, like television or a drug. Games would inculcate violence and aggression in impressionable youngsters. Their players would do poorly in school and become isolated and desensitized. A reader wrote to The New York Times to complain that video games were “cultivating a generation of mindless, ill-tempered adolescents.”

The arcades where many teenagers played video games were imagined as dens of vice, of illicit trade in drugs and sex. Kids who went to play Tempest or Donkey Kong might end up seduced by lowlifes, spiraling into lives of substance abuse, sexual depravity and crime. Children hooked on video games might steal to feed their habit. Reports at the time claimed that video kids had vandalized cigarette machines, pocketing the quarters and leaving behind the nickels and dimes.

Nowhere was this more intense than in Mesquite, Texas, a suburb of Dallas where regulation of video arcades became a highly publicized legal affair. The city barred children under 17 from the local Aladdin’s Castle emporium unless accompanied by a parent or guardian. Officials also refused the arcade chain a license to open a new location in a shopping mall on the grounds that the owner was connected with “criminal elements.” Bally, the company that owned Aladdin’s Castle, filed suit against Mesquite. The case made its way through the courts until 1982, when the Supreme Court sent the matter back to the appeals court, effectively dodging an opportunity to establish young people’s right to play video games in arcades. In a New York City case of the same year, a court ruled that the municipality could regulate games to curb noise and congestion, finding that games were not a form of protected speech under the First Amendment.

Such cases, among others, were not really about banning or restricting access to video games, however much some adults despised them. Millions of gaming systems were in people’s homes by 1982, and no legal action could remove them. Rather, these efforts sought to regulate the behavior of America’s teenagers, who annoyed adults by hanging around, maybe skipping school, making fresh remarks at passersby, maybe attracting the wrong element, making noise, littering, maybe drinking or smoking dope, and basically being teenagers. Some towns, like Marlborough, Massachusetts, and Coral Gables, Florida, managed to keep arcade games out altogether; others, like Morton Grove, Illinois, prevented arcades from opening by enforcing ordinances that forbade businesses from operating more than a certain number of coin-operated machines.

There was a flip side to the freak-out about games and youth, a counterpoint to the panicked discourses that greeted the soaring popularity of the new amusements. Many commentators, particularly social scientists skeptical of the moralizing, sky-is-falling crowd, saw great potential benefits in video games, which they identified as cutting-edge technology. Many observers of American society in the 1970s and ’80s had recognized a large-scale shift from work in factories to work in offices, from manufacturing to knowledge and service labor. Among other technologies, electronics and particularly computers were facilitating this shift.

Video games were computerized playthings, often the first introduction to computers young people received, and the optimists maintained that they could provide a new form of training in the tools of tomorrow’s workplace. It was clear that children were learning from the games: how to master them, but also how to interact with digital electronics and computer interfaces. These were “powerful educational tools.” Some kids who were devoted to playing computer games might graduate to programming, turning the pastime into an introduction to writing software. Several news items in the early ’80s profiled kids who had programmed a video game at home and sold it, teaching themselves not just technical skills but entrepreneurship. A California teenager named Tom McWilliams, whose parents refused to buy him a computer of his own, sold his game Outpost for $60,000.

Somehow, a generation of teenagers from the 1980s managed to grow up despite the dangers, real or imagined, of video games. The new technology could not have been as powerful as either its detractors or its champions imagined. It’s easy to be captivated by novelty, but that captivation can blind us to the cyclical nature of youth media obsessions. Every generation fastens onto something that its parents find strange, whether Elvis or Atari. In every moment of media history, intergenerational tension has accompanied the emergence of new forms of culture and communication. Now we have smartphone addiction to panic about.

But while the gadgets keep changing, our ideas about youth and technology, and our concerns about young people’s development in an uncertain and ever-changing modern world, endure.

Michael Z. Newman is an associate professor at the University of Wisconsin-Milwaukee. His book, Atari Age: The Emergence of Video Games in America (MIT Press), was released in February 2017.
