I was recently at a brunch that included couples with young children when the hostess pulled out an attractive diversion for the kids: a mini-NES. To her surprise, all the 30-something fathers were just as excited as the kids, and the mass of young and old video game enthusiasts crowded around, offering tips on hooking it up to the TV.

The Nintendo Entertainment System entered the U.S. market in the mid-’80s and immediately took hold of the minds of American children, myself included. By the time the decade ended, I was just as familiar with the legendary worlds of Mario and Zelda as I was with those of Tolkien and Lewis, or the biblical stories of Samson and Delilah, Jacob and Esau. Excited to pass on this antique joy to today’s 10-year-olds, the other 30-somethings and I gathered around and conjured the original Super Mario Brothers on the screen.

But it didn’t turn out the way we thought.

“This is hard,” one boy said, glaring at his father after directing Mario into the mouth of a fire-breathing carnivorous plant. The most adept child, after almost making it to the end of the first stage, slipped into a bottomless pit and was set back to the beginning of the level. “You have to start all the way over?” he said, aghast. “That’s not fair.” With that, the children’s interest in the ancient relic deflated like a leaky whoopee cushion. We old folks were left standing around the TV screen, wondering what had happened.

As technology has developed and American culture has changed, so too has our relationship to video games. Understanding that change is important, because video games and other electronic entertainment have largely replaced literature as the primary nonathletic pastime for children. It’s not an overstatement to say that video games shape how the children who play them later see the world, just as literature shaped the minds of earlier generations. And the key to this influence, I think, lies not so much in the content of the games as in their structure.

The earliest video games were coin-operated cabinets played in arcades and bowling alleys. They were designed to be brutally hard in order to separate you from your quarters as efficiently as possible. There was also an element of public exhibition in playing them, as people would gather round to watch.

When video games moved into homes via the NES and other consoles, the structure began to change. Early games like Super Mario Brothers still had arcade roots, but because quarters weren’t a limiting factor, the games got longer and more difficult, and were filled with hidden “Easter eggs” and secrets that could only be unlocked through dedication and repeat play.

Parents were the primary buyers, and they were buying distraction for their children. The game cartridges advertised how many hours they would keep the brats entertained: 40, 60, 100 hours.

This business reality began to change how games were made. Video games became less challenging and more time-consuming. By the time games were connected to the internet in the mid-’90s, two new reward mechanisms were in place to keep people hooked: guaranteed rewards and uncertain ones.

Guaranteed rewards included a tiered leveling system based on accumulating “experience points” for time spent, as well as achievements for completing various tasks, which unlocked badges and special items that could be viewed and envied by other players online. Uncertain rewards included random “drops” of unique treasures or items that appeared about as frequently as a jackpot on a one-armed bandit in Las Vegas – and they cultivated the same addictive, obsessive behavior among players.

Games became less about skill than about the “grind” – a term gamers use today to describe the commitment required to collect in-game accolades and loot. Game developers also built shortcuts to these in-game digital bragging rights – all you needed was a parent’s credit card. This was the “conspicuous consumption” of economist Thorstein Veblen in full effect, and in the early ’10s the game developer Valve demonstrated how powerful it was by making its Team Fortress 2 game completely free while charging players for wacky hats and costumes for their online characters. The company made hundreds of millions selling these goods, which had no more real substance than screen pixels and numbers in a database.

So, you can imagine the mindset of the 10-year-olds I met earlier – quite used to the incentives bound up in today’s video games – upon encountering the original Super Mario Brothers. “What happens if I play?” Repeated death. “What’s the payoff?” Get better and die less; develop a feeling of competency by overcoming challenge. “What’s the point?” There is none; it’s just a brief respite from more important and edifying activities. “No thanks!”

One of the charms of early video games was their difficulty. It was a blessing: frustration would set in before long and, our need for distraction sated, we could move on to other things.

Today’s gaming is aimed at producing an addictive grind mentality in gamers. But there is a counter-trend in which brutally difficult, uncompromising games have found new appreciation: the gothic medieval fantasy series Dark Souls, and independent games like Getting Over It, in which players must maneuver a modern Diogenes, his lower half confined to a cauldron, up a mountainside with a hammer. The popular YouTube streamer PewDiePie comically almost lost his mind playing Getting Over It, abruptly ending his broadcasts after his pot-bound character tumbled down the mountain. The poet Yeats once observed, with some irony:

The fascination of what’s difficult
Has dried the sap out of my veins, and rent
Spontaneous joy and natural content
Out of my heart.

Maybe, for a fascination that can easily grow into an addiction, tremendous difficulty is a good thing.
