DANCING NEBULA

When the gods dance...

Sunday, July 14, 2013

WORD VIRUS: Video games make us all losers!


Video games make us all losers!


Modern video games are almost impossible to win. Does dying over and over create a new kind of artistic tragedy?


I was playing Patapon. Things were going well, but when I came to the desert, my tactics began to fail. I repeated the trusted sequence of button pushes, but my warriors continued to burn to death in the sun; I failed the level; I tried again. I could not glean from the game if my timing was off, if I was using the wrong sequence, or if something completely different was wrong. I put the game away; I returned to it; I put it away again. I did not feel too good about myself. I dislike failing, sometimes to the extent that I will refuse to play, but mostly I will return, submitting myself to a series of unhappy failures, once again seeking out a feeling that I deeply dread.
It is with some trepidation that I admit to my failures in Patapon, but I can fortunately share a story that puts my skills in a better light. I had been looking forward to Meteos for a long time, so I unwrapped it quickly and selected the main game mode. In a feat of gamesmanship (I believe), I played the game to completion on my very first attempt without failing even once. Naturally, this made me very angry. I put the game away, not touching it again for more than a year. (I have not been able to repeat this first performance.)

I dislike failing in games, but I dislike not failing even more. There are numerous ways to explain this contradiction, and I will discuss many of them in this book. But let us first consider the strangeness of the situation: every day, hundreds of millions of people around the world play video games, and most of them will experience failure while playing. It is safe to say that humans have a fundamental desire to succeed and feel competent, but game players have chosen to engage in an activity in which they are almost certain to fail and feel incompetent, at least some of the time. In fact, we know that players prefer games in which they fail. This is the paradox of failure in games. It can be stated like this:
1. We generally avoid failure.
2. We experience failure when playing games.
3. We seek out games, although we will experience something that we normally avoid.
This paradox of failure is parallel to the paradox of why we consume tragic theater, novels, or cinema even though they make us feel sadness, fear, or even disgust. If these at first do not sound like actual paradoxes, it is simply because we are so used to their existence that we sometimes forget that they are paradoxes at all. The shared conundrum is that we generally try to avoid the unpleasant emotions that we get from hearing about a sad event, or from failing at a task. Yet we actively seek out these emotions in stories, art, and games.


The paradox of tragedy is commonly explained with reference to Aristotle’s term catharsis, arguing that we in our general lives experience unpleasant emotions, but that by experiencing pity and fear in a fictional tragedy, these emotions are eventually purged from us. However, this does not ring true for games— when we experience a humiliating defeat, we really are filled with emotions of humiliation and inadequacy. Games do not purge these emotions from us—they produce the emotions in the first place.

The paradox is not simply that games or tragedies contain something unpleasant in them, but that we appear to want this unpleasantness to be there, even if we also seem to dislike it (unlike queues in theme parks, for example, which we would prefer didn’t exist). Another explanation could be that while we dislike failing in our regular endeavors, games are an entirely different thing, a safe space in which failure is okay, neither painful nor the least unpleasant. The phrase “It’s just a game” suggests that this would be the case. And we do often take what happens in a game to have a different meaning from what is outside a game. To prevent other people from achieving their goals is usually hostile behavior that may end friendships, but we regularly prevent other players from achieving their goals when playing friendly games. Games, in this view, are something different from the regular world, a frame in which failure is not the least distressing. Yet this is clearly not the whole truth: we are often upset when we fail, we put in considerable effort to avoid failure while playing a game, and we will even show anger toward those who foiled our clever in-game plans. In other words, we often argue that in-game failure is something harmless and neutral, but we repeatedly fail to act accordingly.

The reader has probably already thought of other solutions to the paradox of failure. I will discuss many possible explanations, and while I will propose an answer to the problem, the journey itself is meant to offer a new explanation of what it is that games do.
Players tend to prefer games that are somewhat challenging, and for a moment it can sound as if this explains the paradox— players like to fail, but not too much. Game developers similarly talk about balancing, saying that a game should be “neither too easy nor too hard,” and it is often said that such a balance will put players in the attractive psychological state of flow in which they become agreeably absorbed by a game. Unfortunately, these observations do not actually explain the paradox of failure—they simply demonstrate that players and developers alike are aware of its existence. I will be discussing the paradox mostly in relation to video games (on consoles, computers, handheld devices, and so on), but it applies to all game types, digital or analog. I will also be looking at single-player games (failure against the challenge of the game), as well as competitive multiplayer games (failure against other players).
During the last few years, failure has become a contested discussion point in video game culture. Since roughly 2006, we have seen an explosion of new video game forms, with video games now being distributed not only in boxes sold in stores, but also on mobile phones, as downloads, in browsers, and on social networks, as well as being targeted at almost the entire population, and designed for all kinds of contexts for which video games used to not be made. This casual revolution in video games is forcing us to rethink the role of failure in games: should all games be intense personal struggles that bombard the player with constant failures and frequent setbacks, or can games be more relaxed experiences, like a walk in the park? The somewhat anticipated response from part of the traditional video gaming community has been to denounce new casual and social games as too easy, pandering, simplistic, and so on. Yet, what has become clear is that both (a) many of the apparently simple games played by a broad audience are in actuality very challenging and (b) some traditional video game genres, especially role-playing games, all but guarantee players that they will eventually prevail. So failure is in need of a more detailed account, and we must begin by asking the simple question: what does failure do?
Consider what happens when we are stuck in the puzzle game Portal 2; we understand that we are lacking and inadequate (and more lacking and inadequate the longer we are stuck), but the game implicitly promises us that we can remedy the problem if we keep playing. Before playing a game in the Portal series, we probably did not consider the possibility that we would have problems solving the warp-based spatial puzzles that the game is based on—we had never seen such puzzles before! This is what games do: they promise us that we can repair a personal inadequacy—an inadequacy that they produce in us in the first place.
My argument is that the paradox of failure is unique in that when you fail in a game, it really means that you were in some way inadequate. Such a feeling of inadequacy is unpleasant for us, and it is odd that we choose to subject ourselves to it. However, while games uniquely induce such feelings of being inadequate, they also motivate us to play more in order to escape the same inadequacy, and the feeling of escaping failure (often by improving our skills) is central to the enjoyment of games. Games promise us a fair chance of redeeming ourselves. This distinguishes game failure from failure in our regular lives: (good) games are designed such that they give us a fair chance, whereas the regular world makes no such promises.

Games are also special in that the conventions around game playing are by themselves philosophies of the meaning of failure. The ideals of sportsmanship specifically tell us to take success and failure seriously but to keep our emotions in check for the benefit of greater causes. Sports philosopher Peter Arnold has identified three types of sportsmanship: (1) sportsmanship as a form of social union (the noble behavior in the game extending outside the game), (2) sportsmanship as a means in the promotion of pleasure (controlling our behavior to make this and future games possible), and (3) sportsmanship as altruism (players forfeiting a chance to win in order to protect another participant, for example).
This type of emotional control can be challenging for children (and others), and a good deal of material exists for explaining it. The book “Liam Wins the Game, Sometimes” teaches children how to deal with winning and losing in games. The author tells the child that it is acceptable to feel disappointed when losing, but unacceptable to throw a tantrum. “It is being a poor loser and it spoils the whole game. Others do not like playing with poor losers.” To be a sore loser is to make a concrete philosophical claim: that failure in games is straightforwardly painful, without anything to compensate for it. However, it is important to realize that poor losers are not chastised for showing anger and frustration, but for showing anger and frustration in the wrong way. Games, depending on how we play them, give us a license to display anger and frustration on a level that we would not otherwise dare express, but some displays will still be out of bounds, rude, or socially awkward. Contrary to the poor loser, the spoilsport who plays a game without caring for either winning or losing is making the statement that game failure is not painful at all.

The Uses of Failure: Learning and Saving the World

Though we may dislike failure as such, failure is an integral element of the overall experience of playing a game, a motivator, something that helps us reconsider our strategies and see the strategic depth in a game, a clear proof that we have improved when we finally overcome it. Failure brings about something positive, but it is always potentially painful or at least unpleasant. This is the double nature of games, their quality as “pleasure spiked with pain.”

This is why the question of failure is so important: it not only goes to the heart of why we enjoy games in the first place, it also tells us what games can be used for. Given that games have an undisputable ability to motivate players to meet challenges and learn in order to overcome failure, wouldn’t it be smart to use games to motivate players toward other more “serious” undertakings? It is commonly argued that the principles of game design can be applied to a number of situations in the regular world in order to motivate us: examples include designing educational games, giving employees points for their performance, giving shoppers points for checking in at specific locations, awarding Internet users with badges for commenting on Web site posts, and so on. This is a long-standing idea, which at the time of writing has resurfaced under the name of gamification. We therefore need to think more closely about why games work so well: at the very least, good games tend to offer well-defined goals and clear feedback. This gives us an objective measure of our performance, and allows us to optimize our strategies. If applying this to nongame situations sounds tempting, consider how the 2008 financial crisis was caused in part by large banks and financial institutions making their organizations too gamelike by giving employees the clear goal of approving as many loans as possible and punishing naysayers with termination. This was a case where the design that works so well inside games can be disastrous outside games, even if we think only of the well-being of the companies involved. Games, apparently, are not a pixie dust of motivation to be sprinkled on any subject. The underlying questions are therefore: When and how do games motivate us to overcome failure and improve ourselves? When is a game structure useful, and when is it detrimental? And most important: Is there a difference between failing inside and failing outside a game?

Inside and Outside the Game

Imagine that you are dining with some people you have just met. You reach for the saltshaker, but suddenly one of the other guests, let’s call him Joe, looks at you sullenly, then snatches the salt away and puts it out of your reach. Later, when you are leaving the restaurant, Joe dashes ahead of you and blocks the exit door from the outside. Joe is being rude—when you understand what another person is trying to do, it is offensive, or at least confrontational, to prevent that person from doing it.
However, if you were meeting the same people to play the board game Settlers, it would be completely acceptable for the same Joe to prevent you from winning the game. In the restaurant as well as in the game, Joe is aware of your intention, and Joe prevents you from doing what you are trying to do. At the restaurant, this is rude. In the game, this is expected and acceptable behavior. Apparently, games give us a license to engage in conflicts, to prevent others from achieving their goals. When playing a game, a number of actions that would regularly be awkward and rude are recast as pleasant and sociable (as long as we are not poor losers, of course).

Similarly, consider how the designer of a car, computer program, or household appliance is obliged to make sure that users find the design easy to use. At the very least, the designer is expected to help the driver avoid oncoming traffic, prevent the user from deleting important files, and not trick the user into selecting the wrong temperature for a wash. A fictional example shows what can happen if designers do not live up to this obligation: in Monty Python’s “Dirty Hungarian Phrasebook” sketch, a malicious author creates a fake Hungarian language phrasebook in which (among other things) a request for the way to the train station is translated into English as sexual innuendo. Chaos ensues. We expect neither phrasebook authors nor designers to act this way.
However, if you pick up a single-player video game, you expect the designer to have spent considerable effort preventing you from easily reaching your goal, all but guaranteeing that you will at least temporarily fail. (Designers are also expected to make some parts of a game easy to use.) It would be much easier for the designer to create a game in which the user only has to press a button once to complete it. But for something to be a good game, and a game at all, we expect resistance and the possibility of failure. Single- and multiplayer games share this inversion of regular courtesy, giving players license to work against each other where it would otherwise be rude, and allowing the designer to make life difficult for the player.

If we return to Joe, the rude dinner companion who denied you access to the salt and blocked the door, we could also imagine him performing the very same actions with a glimmer in his eye, smiling, and perhaps tilting his head slightly to the side. In this case, Joe is not trying to be rude, but playful, and you may or may not be willing to play along. By performing simple actions such as saying “Let’s play a game” or tilting our heads and smiling, we can change the expectations for what is to come. Gregory Bateson calls this meta-communication: humans and other animals (especially mammals) perform playful actions where, for example, what looks like a bite is understood to not be an actual bite. Such meta-communication is found in all types of play, but games are a unique type of structured play that allows us to perform seemingly aggressive actions within a frame where they are understood as not quite aggressive.

In the field of game studies, Katie Salen and Eric Zimmerman have described game playing as entering a magic circle in which special rules apply.  This idea of a separate space of game playing has been criticized on the grounds that there is no perfect separation between what happens inside a game, and what happens outside a game. That is obviously true but misses the point: the circumstances of your game playing, personality, mood, and time investment will influence how you feel about failure, but we nevertheless treat games differently from non-games, and we have ways of initiating play. We expect certain behaviors and experiences within games, but there are no guarantees that players, ourselves included, will live up to expectations.
The Gamble of Failure

“It’s easy to tell what games my husband enjoys the most. If he screams ‘I hate it. I hate it. I hate it,’ then I know he will finish it and buy version two. If he doesn’t say this, he’ll put it down in an hour.”

In quoting the spouse of a video game player, game emotion theorist Nicole Lazzaro shows how we can be angry and frustrated while playing a game, but that this frustration and anger binds us to the game. We are motivated to play when something is at stake. It seems that the more time we invest in overcoming a challenge (be it completing a game, or simply overcoming a small subtask), the bigger the sense of loss we experience when failing, and the bigger the sense of triumph we feel when succeeding. Even then, our feeling of triumph can quickly evaporate if we learn that other players overcame the challenge faster than we did. To play a game is to make an emotional gamble: we invest time and self-esteem in the hope that it will pay off. Not all players are willing to run the same amount of risk—some even prefer not to run a risk at all, not to play.

I am taking a broad view of failure here. Examples of failures include the GAME OVER screen of a traditional arcade game such as Pac-Man, the failure of a player to complete a level within sixty seconds, the failure to survive an onslaught of opponents, the failure to complete a mission in Red Dead Redemption, the failure to protect the player character in Limbo, the failure to win a tic-tac-toe match against a sibling, and the failure to win Wimbledon or the Tour de France. It can also be something as ordinary as the failure to jump to the next ledge in a platform game like Super Mario Bros., even when it has no consequences beyond having to try the jump again. Though on different scales, each of these examples involves the player working toward a goal, either communicated by the game or invented by the player, and the player failing to attain that goal. Depending on the goal of a given game, failures can result in either a permanent loss (such as when losing a match in a multiplayer game) or a loss of time invested toward completing or progressing in a game.
Certainly, the experience of failing in a game is quite different from the experience of witnessing a protagonist failing in a story. When reading a detective story, we follow the thoughts and discoveries of the detective, and when all is revealed, nothing prevents us from believing that we had it figured out all along. Through fiction, we can feel that we are smart and successful, and stories politely refrain from challenging that belief. Games call our bluff and let us know that we failed. Where novels and movies concern the personal limitations and self-doubt of others, games have to do with our actual limitations and self-doubts. However much we would like to hide it, our failures are plain to see for any onlooker, and any frustration that we indicate is easily understood by anyone who watches us.

This Game Is Stupid Anyway

“This Sport Is Stupid Anyway,” a New York Post headline proclaimed following the US soccer team’s exit from the 2010 World Cup. Fortunately, we have ways of denying that we care about failure. We can dismiss a game as poorly made or even “stupid,” and we understand this type of defense to be so childish that we will use it only half-jokingly as in the New York Post headline. This is an opportunistic “theory” about the paradox of failure: that failure in a specific game is unimportant, because it requires only irrelevant skills (if any).

Having failed in Patapon, I searched for “Patapon desert” and learned that I needed a “JuJu,” a rain miracle which I did not recall having ever heard of. To my great relief, the search yielded more than 150,000 hits—I was not the only player to suffer from this problem, and I could safely conclude that the problem lay with the game, certainly not with me. Our experience of failure strongly depends on how we assign the blame for failing. In psychology, attribution theory explains that we try to attribute events to certain causes. Harold H. Kelley distinguishes among three types of attributions that we can make in an event involving a person and an entity.


Person: The event was caused by personal traits, such as skill and disposition.

Entity: The event was caused by characteristics of the entity.

Circumstances: The event was due to transient causes such as luck, chance, or an extraordinary effort from the person.
If we receive a low grade on a school test, we can decide that this was due to (1) person—personal disposition such as lack of skill, (2) entity—an unfair test, or (3) circumstance—having slept badly, having not studied enough. This maps well to common explanations for failure in video gaming: a player who loses a game can claim to be bad at this specific game or at video games in general, claim that the game is unfair, or dismiss failure as a temporary state soon to be remedied through better luck or preparation.

I blamed Patapon: I searched for a solution, and I used the fact that many players had experienced the same problem as an argument for attributing my failure to a flaw in the game design, rather than a flaw with my skills. As it happens, we are a self-serving species, more likely to deny responsibility when we fail than when we succeed. A technical term for this is motivational bias, but it is also captured in the observation that “success has many fathers, but failure is an orphan.” After numerous attempts at this section of Patapon, I was relieved to be allowed to be furious at the game, which I could now declare to be so poorly designed that it was not worth my time. I put the game back in its box, only returning to it months later. While we dislike feeling responsible for failure, we dislike even more strongly games in which we do not feel responsible for failure (a variation on the fact that we do not want to fail in a game, but we also do not want not to fail). The times I denied responsibility for failure in Patapon and stopped playing, I precluded the possibility that I would eventually cross the desert and complete the game. By refusing the emotional gamble of the game, I was acting in a self-defeating way; by refusing to exert effort in order to progress in the game, I was shielding myself from possible future failures. According to one theory, our fear of failure leads to procrastination: we perform worse than we should in order to feel better about our poor performance.

Still, should we accept responsibility for failure, the question becomes this: does my in-game performance reflect skills or traits that I generally value? Benjamin Franklin notably declared chess to be a game that contains important lessons: “The game of Chess is not merely an idle amusement. Several very valuable qualities of the mind, useful in the course of human life, are to be acquired or strengthened by it, so as to become habits, ready on all occasions . . . we learn by Chess the habit of not being discouraged by present appearances in the state of our affairs, the habit of hoping for a favourable change, and that of persevering in the search of resources.” If we praise a game for teaching important skills, as Franklin does here, we must accept that failing in it will imply a personal lack of the same important skills. That is a question to ask about every game: does this game expose our important underlying inadequacies, or does it merely create artificial and irrelevant ones? If a game exposes existing inadequacies, then we must fear how it reveals our hidden flaws. If, rather, a game merely creates new, artificial inadequacies, they are easier to shrug off.

Every failure we experience in a game is torn between these two arguments pulling in opposite directions: we can think of game failure as normal, a type of failure that genuinely reflects our general abilities and therefore is as important as any out-of-game failure. However, we can also think of it as deflated: the importance of any failure is automatically deflated when it occurs inside a game, since games are artificial constructs with no bearing on the regular world. My point is not that these two arguments are true or false, as much as that games work by making these contradictory views available to us: failure really does matter to us, as can be witnessed in the way we try to avoid failure while playing and in the way we sometimes react when we do fail. At the same time, we use deflationary arguments to protect our self-esteem when we fail, and this gives games a kind of lightness and freedom that allows us to perform to the best of our ability, because we have the option of denying that game failure matters.

The Meaning of the Art Form

Even if we often dismiss the importance of games, we also discuss them, especially the games that we call sports, as something above, something more pure than, everyday life. In professional sports, games are often framed as something noble, something that truly reveals the best side of humans, something larger than life—think only of movies like “Chariots of Fire,” or the cultural obsession with athletes. In soccer, the Real Madrid–Barcelona rivalry continues to be played out with a layer of meaning that goes back to the Franco era. In baseball, the New York Yankees and the Boston Red Sox have competed for over a century, and every match between the two teams is seen through that lens and adds to that history. This extends beyond games involving physical effort. For example, the legendary 1972 World Chess Championship match in Reykjavik between US player Bobby Fischer and Soviet Boris Spassky was understood as an extension of the Cold War. These examples demonstrate that we routinely understand games as more important, more glorious, and more tragic than everyday life.

Outside the realm of sports, late eighteenth-century German philosopher Friedrich Schiller went so far as to declare play central to being human: “Man plays only when he is in the full sense of the word a man, and he is only wholly Man when he is playing.” In the 1930s, Dutch play theorist Johan Huizinga noted this duality between our framing of games as either important or frivolous, by describing play as “a free activity standing quite consciously outside ‘ordinary’ life as being ‘not serious’, but at the same time absorbing the player intensely and utterly.” We can talk about games as either carved-off experiences with no bearing on the rest of the world or as revealing something deeper, something truly human, something otherwise invisible.

This type of discussion, of whether game failures, and games by extension, are significant, has been applied to every art form. All humans consume artistic expressions from music through storytelling to the visual arts. We may share the intuition that the arts are fruitful, inspirational, and important, yet it is hard to demonstrate or measure such positive effects. In “The Republic,” Plato famously denied the poet access to his ideal society “because he wakens and encourages and strengthens the lower elements in the mind to the detriment of reason.” Compare this with the continued idealization of art as a privileged way of understanding the world. Games share this predicament with other art forms: we may sense that they are important, that they give access to something profound; it is just that we have no easy way to prove that. Games are activities that have no necessary tangible consequences (though we can negotiate to play for concrete consequences—money, doing the dishes, etc.). This lack of necessary tangible consequences (productive, negative, or positive) defines games, but it can also make them seem frivolous. Yet it is precisely because games are not obviously necessary for our daily lives that we can declare them to be above the banality of our simpler, more mundane needs.

Video games have by now celebrated their fiftieth anniversary, while games in general have been around for at least five thousand years. The first decade of this century saw the appearance of the new field of video game studies, including conferences, journals, and university programs. The defense of video games (as of most things) tends to grow from personal fascination. I enjoy video games; I feel that they give me important experiences; I associate them with wide-ranging thoughts about life, the universe, and so on. This is valuable to me, and I want to understand and share it. From that starting point, video game fans have so far focused on two different arguments for the value of video games:
1. Video games can do what established art forms do. In this strategy, the fan claims that video games can produce the same type of experiences as (typically) cinema or literature produce. Are video games not engaging like “War and Peace” or “The Seventh Seal”? The downside to this strategy is that it makes video games sound derivative: if we only argue that video games live up to criteria set by literature or cinema, why bother with games at all?

2. Video games transcend established categories. In this strategy, the fan can argue that since we already have film, why should video games aspire to be film? It follows that we need to identify and appraise the unique qualities of video games. In its most austere form, this can become an argument for identifying a “pure” game that should be purged of influences from other art forms, typically by banishing straightforward narrative from game design. The softer version of this argument (which happens to be my personal position) states that video games should try to explore their own unique qualities, while borrowing liberally from other art forms as needed.

Again, these are theories that we use to explain our experiences. When I play video games, I do experience something important, profound. Video games are for me a space of reflection, a constant measuring of my abilities, a mirror in which I can see my everyday behavior reflected, amplified, distorted, and revealed, a place where I deal with failure and learn how to rise to a challenge. Which is to say that video games give me unique and valuable experiences, regardless of how I would like to argue for their worth as an art form, as a form of expression, and so on. I hope to bring the experience and the arguments closer to each other.

Two Types of Failure (and Tragedy)

In my earlier book “Half-Real,” I argued that nonabstract video games are two quite different things at the same time: they are real rule systems that we interact with, and they are fictional worlds that the game cues us into imagining. For example, to win or lose a video game is an actual, real event determined by the game rules, but if we succeed by slaying a night elf, that adversary is clearly imaginary. As players, we switch between these two perspectives, understanding that some game events are part of the fictional world of the game (Mario’s girlfriend has been kidnapped), while other game events belong to the rules of the game (Mario comes back from the dead after being hit by a barrel). This also means that there are two types of failure in games: real failure occurs when a player invests time into playing a game and fails; fictional failure is what befalls the character(s) in the fictional game world.

Real Failure

Like tragedy in theater, cinema, and literature, failure makes us experience emotions that we generally find unpleasant. The difference is that games can be tragic in a literal sense: consider the case of French bicycle racer Raymond Poulidor, who between 1962 and 1976 achieved no less than three second places and five third places in the Tour de France, but in his career never managed to win the race. Tragic.
On the other hand, if I fail to complete one level of a small puzzle game on my mobile phone because I have to get off at the right subway stop, we probably would not describe this as tragic. Not because there is any structural difference between the two situations—Poulidor and I both tried to win a game, and we both failed. We had both invested some time in playing, we had both made an emotional gamble in the hope that we would end up happy, and we both experienced a sense of loss when failing. Yet it is safe to say that Poulidor made a larger time investment and a larger emotional gamble than I did.

Playwright Oscar Mandel’s traditional but often-cited definition of tragedy explains the difference between Poulidor and me: “A work of art is tragic if it substantiates the following situation: a protagonist who commands our earnest goodwill is impelled in a given world by a purpose, or undertakes some action, of a certain seriousness and magnitude; and by that very purpose or action, subject to the same given world, necessarily and inevitably meets with great spiritual or physical suffering” (my emphasis). We reserve the idea of tragedy for events of some magnitude: my failing at a simple puzzle game does not qualify as tragic, but Poulidor’s failed lifetime project of winning the Tour de France does.

Games are meaningful not simply by representing tragedies, but on occasion by creating actual, personal tragedies. In “The Birth of Tragedy,” Nietzsche discusses the notion that tragedy adds a layer of meaning to human suffering, that art “did not simply imitate the reality of nature but rather supplied a metaphysical supplement to the reality of nature, and was set alongside the latter as a way of overcoming it.” Though I am of a more optimistic temperament than Nietzsche was, I believe that there is a fundamental truth to this idea. Not in the naïve romantic sense that tragic themes are required for art to be valuable, but in the sense that painful emotions in art (such as games) give us a space for contemplating the very same emotions. To some it may be surprising to hear that video games provide a space for contemplation at all, but it is probably more obvious when we consider that video games are part of an at least five-thousand-year history of games. Games, in turn, are often ritualistic, repeatable, and laden with symbolic meaning. Think only of Chess, or Go, or the Olympics. Or, casting an even wider net, play theorist Brian Sutton-Smith has proposed that play is fundamentally a “parody of emotional vulnerability”: that through play we experience precarious emotions such as anger, fear, shock, disgust, and loneliness in transformed, masked, or hidden form.

Fictional Failure

That was the real, first-person aspect of failure. We are real-life people who try to master a game, but most video games represent a mirror of our performance in their fictional worlds—they ask us to make things right in the game world by saving someone or fighting for self-preservation. For example, the game Mass Effect 2 lets the player steer Commander Shepard through a series of missions, protecting Shepard from harm and attempting to save the galaxy. The goals of the player are thus aligned with the goals of the protagonist; when the player succeeds, the protagonist succeeds. In games with no single protagonist, the player is typically asked to guard the interests of a group of people, a city, or a world.

The question is, can we imagine video games where this is inverted, such that when the player is successful, the protagonist fails? In the early 2000s, this seemed obviously impossible. As fiction theorist Marie-Laure Ryan put it, who would want to play “Anna Karenina,” the video game? Who would want to spend hours playing in order to successfully throw the protagonist under a train? At the time, I also believed that such a game was inconceivable. But only a few years later, there were games in which players had to do exactly that—kill themselves. Some of these were parodic games that openly subverted player expectations. Others were tragic in a traditional sense (SPOILER ALERT): Red Dead Redemption at first seems to let the player be a common video game hero, but the game can in fact be completed only by sacrificing the protagonist in order to save his family.

Director Steven Spielberg has argued that video games will only become a proper storytelling art form “when somebody confesses that they cried at level 17.” This is surely too simple: any checklist for what makes a work of art good will necessarily miss its mark, and works created to tick off the boxes in such a list are rarely worthy of our attention. Ironically, the fact is that players often do cry over video games, but mostly over losing important matches in multiplayer games, being expelled from their guild in World of Warcraft, and so forth. Players report crying over some single-player games such as Final Fantasy VII. Note that tragic endings in games are not interesting because they magically transform video games into a respected art form, but because they show that games can deal with types of content that we thought could not be represented in this form. Tragic game endings appear distressing due to the tension between the success of the player and the failure of a game protagonist, but this distress can give us a sense of responsibility and complicity, creating an entirely new type of tragedy.

Excerpted from “The Art of Failure: An Essay on the Pain of Playing Video Games” by Jesper Juul. Copyright 2013 MIT Press. All rights reserved.
Jesper Juul is assistant professor at the New York University Game Center. He is the author of Half-Real: Video Games between Real Rules and Fictional Worlds and A Casual Revolution: Reinventing Video Games and Their Players, both published by the MIT Press.

Saturday, July 13, 2013

Landscapes of emergency: militarizing public space


by ROAR Collective on July 10, 2013
This short documentary reveals the undeclared state of emergency that casts its shadow over the functions of public space in crisis-ridden Athens today.

By the team behind the City at a Time of Crisis research project.
Relying upon the readings of two lawyers, this short documentary attempts a passage through the dark landscapes that the new dogma of public security leaves in its wake. It chooses to view the crisis as a way of managing urban everydayness; as a way of managing it militarily. It comprises a thematic intervention-deflection as part of The Space That Remains, a research strand of the project The City at a Time of Crisis.
Yet through its deflecting characteristics it simply reaffirms the initial fears that led to the creation of this research strand. In other words, it confirms that the space that remains is ever-lessening and that the state of emergency educates us to live, in the end, with this loss.

Produced by Ross Domoney, Christos Filippidis and Dimitris Dalakoglou
Filmed and edited by Ross Domoney
Research by Christos Filippidis
Script editing by Dimitris Dalakoglou and Christos Filippidis
Special thanks to Eleni Vradi



Josué July 10, 2013 at 12:53
 
I think we need to focus on changing the minds of those who comply with the “ruler elite” – that is, those who sell their human values for promises of money and a higher place in society. I mean, police and soldiers, of course. Without them, without their weapons, the rulers are nothing and basically incapable of doing whatever they want with us. As much as I would like to achieve this through peace and understanding, I think this will not be easy and will probably lead to some sort of war between the people and these mercenaries (yes, that’s what they are by definition). Anyway, the people can get weapons too and those mercenaries also have something to lose, like us – I’m sure they want to stay alive and healthy, as they want their families to be safe – like us. It’s just that the people are still afraid and in a way respect authority. That is, until they have nothing else to lose. Once you reach that point, well, you’re pretty much fucked. Unfortunately, I think that will have to happen so that “they” understand that there is a limit to the abuses you can force on someone. And then there will be peace and true freedom.
Very much agree with Josue above.

The police presence in and around central Athens is extraordinary – on foot, on motorbikes, in cars and on the streets. The areas with high concentrations of refugees are especially heavily policed on their boundaries, that is, at the key entry points to the neighbourhoods. It is very reminiscent of the north of Ireland at the height of the ‘troubles’ and reminded me very much of the West Bank during the second intifada.

But as even the merest glimpse of the mainstream media indicates, this is not peculiar to Athens, even if it is an extreme case. States throughout the world seem increasingly to resort to violent and aggressive policing of their people when they take to the streets to protest injustice. Whether in London, Frankfurt, New York or Athens, the militarised police now look pretty much the same irrespective of place. Similar uniforms, weapons, clubs, chemicals, masks and so on. I know little about international policing but it seems to me that there must be considerable interaction between the different forces, shared training, intelligence etc.
Repression and violence do not win the hearts and minds of the people. Yet it seems that ruling elites do not care anymore and that they will opt for repression every time.
Many commentators have concluded in the past that such a hard tactic is counter-productive in the longer term. But the past few years suggest that more and more states are prepared to take this route with little or no heed for the consequences.
Derek July 11, 2013 at 20:03
 
Oppression is not a mistake, nor is it an accident. It is planned and serves a purpose. And oppression will not end because of polite, reasoned, or rational arguments. Historically though, it is as Josue says; the masses won’t figure this out or do anything about it until we are all well and truly fucked. Too bad we can’t examine our history and figure this out sooner.

Friday, July 5, 2013

Photo evidence: Maiden flight for China’s second J-20 stealth fighter prototype, May 16, 2012


Posted by David Cenciotti in: China
 
After a series of high-speed taxi tests with the nosewheel off the ground and subsequent use of the drag chute, here are the first images published on the Chinese forums showing the second J-20 stealth fighter prototype performing its first flight at Chengdu.


Image credit: http://club.mil.news.sina.com.cn/

Egypt’s revolution: between the streets and the army


by Jerome Roos on July 2, 2013
Egypt’s revolution will never be complete until the authoritarian neoliberal state is finally dismantled. Only the power of the streets can do this.

Morsi is trembling. Two days after millions of Egyptians took to the streets to once again demand the downfall of the regime, the Muslim Brotherhood looks weaker and more isolated than ever. On Monday, the grassroots Tamarod campaign that kicked off the mass protests gave Morsi 24 hours to step down and threatened an indefinite wave of civil disobedience if he failed to comply. The army quickly joined in, giving the government a thinly-veiled 48-hour ultimatum to “meet the people’s demands”.

Since then, at least six government ministers have jumped ship, with rumors doing the rounds earlier on Tuesday that the entire cabinet had resigned. To further compound the pressure on Morsi, the army command released spectacular footage showing Sunday’s mass mobilizations from the bird’s eye view of the military helicopters that circled over Cairo carrying Egyptian and army flags — set to bombastic music, patriotic slogans and incessant chants of “Out! Out! Out!” directed at the President and Muslim Brotherhood.

On Tuesday morning, government officials, opposition leaders and the military command were all quick to deny that the army’s statements and actions were indications of an impending military coup — even though one of Morsi’s advisors had earlier gone off script and argued that the office of the Presidency did regard the army’s ultimatum as such. Still, Tamarod organizers and opposition leaders have unambiguously welcomed the army’s stance in the hope that its secular command will take their side and “gently” nudge the Islamists from power.

Many of those in the streets also seem to be broadly supportive of an army intervention. Every time one of the military helicopters flew over Tahrir, the people would greet it with loud cheers, chanting that “the people and the military are one hand”. Still, the hardcore activists who have struggled ceaselessly to defend their revolution over the past two-and-a-half years remember the lies and brutalities of the military junta that they themselves helped to push from power, and continue to call for total liberation: “No Mubarak, No Military, No Morsi!”

Meanwhile, reactionary elements from the Mubarak regime are staging a come-back. First of all, despite Morsi’s appointment of Al-Sisi as commander-in-chief, the army’s top-brass is still full of Mubarak-era appointments that continue to wield enormous power behind the scenes, not least through their vast economic empire. Apart from this, there is still Mubarak’s unreformed security apparatus — including the police — who despise the Islamists and have refused to protect their premises and headquarters from being ransacked by the protesters. Yet these are the same policemen who killed, tortured and maimed even peaceful protesters during the first uprising of 2011.

This cacophony is further complicated by the two main sources of support that Morsi can still count on: first the popular support base of the Muslim Brotherhood itself, which continues to mobilize in defense of their President and which will refuse to let him be pushed out without putting up a fight; and second the Obama administration, which has just pledged its support for the “democratic” process, undoubtedly to preserve its overarching goal of maintaining regional stability and defending Israeli interests. Morsi hopes that the army won’t take action without the express approval of the US, on whose support he can still count. The question is: for how much longer?

The Clash of Coalitions

The main lesson we can draw from this historic episode is that revolutions are never clean-cut events undertaken by an easily-identifiable revolutionary subject, but always complex processes of inherently chaotic social struggle in which different elite factions vie for power and legitimacy, with the revolutionary multitude itself often caught in between them, at times allying itself with one side or another. Revolutions are almost always made by complex coalitions, and such coalitions may shift dramatically over time, partly out of ideological differences but mostly as a result of diverging economic interests. The Egyptian Revolution is no different in this respect.

For some, this inherently chaotic situation is a reason to urge restraint. The latest editorial pieces by The Guardian are particularly reactionary in this regard. First, the paper argued that the revolution is “on the brink of self-destruction” as a result of internecine struggles; then it urged protesters to exercise the “wisdom of the street” and demobilize in order to focus on meaningful economic reform first and the revolution’s promises of social justice and real democracy later; now its Middle East editor Ian Black writes that, “for all the drama, sacrifices and high-flown aspirations of the Egyptian revolution, the army remains the ultimate arbiter of power.”

Such media commentaries are not only riven with reformist fear but also hopelessly simplistic in their analysis of the extant social forces and the complex power struggles going on between them. While there is clearly a moment of truth in the statement that the army remains the ultimate arbiter of power in Egypt, it also needs to be observed that the army is far from omnipotent. It knows it cannot rule by itself and is therefore bound to join one coalition or another. In the end, the army remains utterly dependent on three critical power resources:
  1. The $1.3 billion in military aid it receives from the US every year (and therefore continued US approval of its actions, which in turn hinges crucially upon the army’s commitment to the Camp David Peace Accords);
  2. The “privileged position” it derives from the economic empire it has built up over the decades, which is deeply integrated into the US military-industrial complex and which is being harmed significantly by investor fears over continued social unrest;
  3. The popular legitimacy that can only be provided by a sense of calm in the streets.
Clearly, these critical power resources of the Egyptian military stand in constant conflict with one another. The army’s need for popular legitimacy constantly runs up against the elite’s continued pandering to US and Israeli interests, as well as the enormous wealth its leadership has acquired over the decades. This is why the army constantly needs to radiate an aura of patriotism that claims to align the military command with the wishes of the people and the goals of the revolution; even if these wishes and goals are in many ways in direct opposition to the army’s social dominance and its unaccountable “autonomous” role within the state apparatus.

The Power of the Streets

It is one thing to claim that the army is the ultimate arbiter of power; it is quite another to recognize that the streets have become a power unto themselves in the contemporary political constellation in Egypt. It is easy (and convenient) to forget that the 1.5-year rule of the Supreme Council of the Armed Forces (SCAF) following Mubarak’s ouster was itself driven out by social rebellion over the army’s brutal practices of torture and repression, its illegitimate influence over state institutions, and its enormous privileges in terms of economic wealth and power. The SCAF realized that its rule was eroding its base of popular legitimacy, which in turn threatened its economic interests. In order to preserve its position of social dominance, therefore, it called elections knowing that the Muslim Brotherhood would win, and that the military command would have to enter into an uneasy coalition combining the secular army’s privileged political and economic position with the cultural hegemony of Islamism.

But the deepening economic crisis meant that even a heavy dose of Islamist rhetoric could not maintain a stable hegemony. The state’s fiscal and monetary position rapidly deteriorated in the wake of the 2011 uprising, with the Central Bank’s reserves depleting, interest rates on sovereign debt spiking up, and foreign exchange shortages feeding into currency depreciation and rising prices of crucial imports like food and fuel. Recent months have witnessed vast fuel shortages, which clearly hit the poorest hardest. This has caused even religious Egyptians who initially supported the Muslim Brotherhood to turn their backs on Morsi and join the Rebellion campaign that kick-started the ongoing second uprising. The army now once again finds itself in a situation where the legitimacy upon which its privileged position depends is being eroded by the implosion of the Muslim Brotherhood. It simply had to shift sides.
What we are witnessing, therefore, is not so much a military coup as an internal rearrangement between different elite factions. While the Brotherhood was hoping to create a Muslim-led ruling class in the vein of Erdogan’s Islamic neoliberalism in Turkey, the leadership of the army still hopes to preserve the privileges it obtained under three successive military dictatorships from Nasser to Sadat to Mubarak. In this game of clashing and constantly shifting coalitions, a military-dominated government is unlikely. The military knows that neither the streets nor the US will let it rule alone. To preserve its privileged position, it will probably try to enter into a coalition with its logical ideological ally: the secular opposition, likely to be led by Mohamed El-Baradei. The opposition itself, however, remains poorly organized and thoroughly divided. It is therefore unlikely that a new round of elections or even a technocratic transition government will do much to stabilize the crisis-ridden Egyptian state.
Ultimately, this crisis cannot be successfully resolved until the authoritarian neoliberal state that was built up by Mubarak in collaboration with global capital, the IMF and successive US governments, is fully dismantled. However complex and fraught with obstacles this process may be, the engine behind the revolution is now unmistakable: without the power of the streets, Egypt would continue to be ruled by authoritarian madmen, whether their names are Mubarak, Morsi or the Military. If the state and the elites who control it are forced to move, they do so not of their own volition but because yet another grassroots rebellion forces them to. As Comrades from Cairo just wrote in an open letter published by ROAR, what Egypt now needs is not the fall of another president or regime — but the fall of the system as such. Only the fearless and continued struggle of the streets can bring this revolution to a successful conclusion.

Statement from Edward Snowden in Moscow


by ROAR Collective on July 2, 2013

WikiLeaks published a statement by Edward Snowden in Moscow, where the whistleblower is in involuntary exile following the revocation of his passport.

Originally posted by WikiLeaks on Monday, July 1, 21:40 UTC.

One week ago I left Hong Kong after it became clear that my freedom and safety were under threat for revealing the truth. My continued liberty has been owed to the efforts of friends new and old, family, and others who I have never met and probably never will. I trusted them with my life and they returned that trust with a faith in me for which I will always be thankful.

On Thursday, President Obama declared before the world that he would not permit any diplomatic “wheeling and dealing” over my case. Yet now it is being reported that after promising not to do so, the President ordered his Vice President to pressure the leaders of nations from which I have requested protection to deny my asylum petitions.
This kind of deception from a world leader is not justice, and neither is the extralegal penalty of exile. These are the old, bad tools of political aggression. Their purpose is to frighten, not me, but those who would come after me.

For decades the United States of America have been one of the strongest defenders of the human right to seek asylum. Sadly, this right, laid out and voted for by the U.S. in Article 14 of the Universal Declaration of Human Rights, is now being rejected by the current government of my country. The Obama administration has now adopted the strategy of using citizenship as a weapon. Although I am convicted of nothing, it has unilaterally revoked my passport, leaving me a stateless person. Without any judicial order, the administration now seeks to stop me exercising a basic right. A right that belongs to everybody. The right to seek asylum.
In the end the Obama administration is not afraid of whistleblowers like me, Bradley Manning or Thomas Drake. We are stateless, imprisoned, or powerless. No, the Obama administration is afraid of you. It is afraid of an informed, angry public demanding the constitutional government it was promised — and it should be.

I am unbowed in my convictions and impressed at the efforts taken by so many.

Edward Joseph Snowden
Monday 1st July 2013

You’ve Got Plenty Of Time


Yes, you do.

I’ve caught myself whispering this in my own ear frequently over the last few days. And it worked. It worked because it is true, even though everything seems to indicate just the opposite. Let me explain.

I am currently in the process of moving, and it is amazing to see what little, seemingly unimportant things can trigger stress. In retrospect, they seem silly and ridiculous. The common cause behind them, however, is anything but. The cause is that I feel hurried to finish so that I can finally settle down.

I want my new place to be finished as soon as possible. Add to this the fact that I am a perfectionist when it comes to things I deem important, and the combination is like oil and water: the two don’t mix, no matter how hard I stir.

I don’t remember where I read this, or if I made it up myself, but it was something along the lines of ‘the common denominator of stress is the felt experience of not having enough time’. Damn, isn’t that observation spot on?
The more we worry about time, the more time is lost to worrying.
Every single time I got annoyed that something wasn’t going as intended, the will I tried to impose on the world proved futile; I felt rushed and powerless. I didn’t allow myself enough space to try again calmly, which perpetuated the feeling of constant hurry. I felt the friction of my own ego.

Why? Because I had the felt experience of not having enough time.
But, I did! And I still do! I’ve got plenty of time!
Yet, in the midst of this big task I selectively forgot that this was the case. Even worse, the possibility did not even occur to me.

So I started to direct my attention to the fact that nothing required any acute action. There was no lion chasing me, no child that needed rescuing. I was not freezing to death, nor was I starving. I had a full belly and a roof over my head. In the grand scheme of things, I had plenty of time.

This perspective changed the way I felt immediately. When things didn’t go my way, I whispered in my own ear, “Martijn, just relax, buddy, no need to rush.”

So remind yourself, whenever you feel stressed, that there is no need to hurry, ’cause you’ve got plenty of time!


Stars and Swipes Forever?




July 1, 2013
THIS WEEK

Peter Dreier, one of America’s top progressive political scientists, has caught the list bug of late. Dreier has a new book out, The 100 Greatest Americans of the 20th Century: A Social Justice Hall of Fame, and last week he chimed in with a list of this summer’s top 15 books on “what ails America and how to fix it.”

In the list’s third slot, we’d like to note, sits the new book by Too Much editor Sam Pizzigati, The Rich Don’t Always Win: The Forgotten Triumph over Plutocracy that Created the American Middle Class, 1900-1970.

Dreier's list also gives a shout-out to 99 to 1, the latest inequality probe from Chuck Collins, a moving force behind Too Much's ongoing publication.

Americans live, Dreier reminds us, in the only major nation without guaranteed paid time off. But most Americans still manage to find some time every summer to relax. If you're looking for a good read for one of those relaxing moments, check out Dreier’s list. And if you’d like to get a richer taste of The Rich Don’t Always Win, just click your way to the intro chapter online.





GREED AT A GLANCE

America’s pension picture has, in modern times, never looked grimmer. The nation’s private sector hosted 112,000 pension plans 30 years ago. The current total: about 30,000. Half the American people, notes one recent study, have $4,500 or less in their retirement accounts. Now for the good news — if you happen to be John Hammergren, the CEO of drug distributor McKesson. The 54-year-old Hammergren can now collect, if he chooses to retire, a pension worth $159 million. That may be, analysts believe, the largest pension in U.S. corporate history. Hammergren’s regular take-home hasn’t been too shabby either. His annual paychecks have averaged over $50 million for the last seven years . . .

A word to the wise: Think twice before you buy that first yacht. The upkeep can be a killer. A paint job alone can run $1 million, points out luxury yachting expert Rupert Connor. Many yacht owners, Connor also notes, pay another $100,000 for a special protective coating that typically adds two years to a paint job’s life. But costs this awesome aren’t fazing Charles Zhang, one of China’s top high-tech executives. Zhang recently shelled out tens of millions of dollars for the rights to sell yachts from the UK-based luxury boat-maker Sunseeker to China’s ultra rich. Zhang already has his own Sunseeker, and he sees an intense yacht hunger among his fellow Chinese deep pockets. For these wealthy, one analyst observed last week in the South China Morning Post, Rolls-Royces and Bentleys simply “don’t feel new” any more . . .

Luxury yachts, of course, don’t feel “new” either — in lands where the uber rich have a much longer history than they do in China. What does rate as “new” to the global I’ve-seen-it-all set? That would have to be the private supersonic jet. This particular animal doesn’t yet exist, but one specialty aircraft maker, the Nevada-based Aerion, has been working to bring private luxury supersonic transit to life. Aerion has been promising to have the world’s first supersonic private plane in service by 2020. Last month, a setback. Aerion officials acknowledged that 2020 will likely come and go with no supersonic private plane. Billionaires will apparently have to wait a little longer to save three hours on a transatlantic flight. The finished plane, once available, will run somewhere north of $80 million.







Quote of the Week

“We need a secretary of commerce who will represent the interests of working Americans and their families, not simply the interests of CEOs and large corporations.”
U.S. Senator Bernie Sanders (I-Vermont), the only senator to vote against the cabinet nomination of billionaire business executive Penny Pritzker, June 25, 2013

PETULANT PLUTOCRAT OF THE WEEK

These should be happy days for Silicon Valley’s Sean Parker, the 33-year-old Spotify billionaire who just tied the knot at a $10 million Big Sur wedding. But the wedding has turned into a PR disaster for Parker. California environmental officials found that his Lord of the Rings-themed nuptials had plopped a variety of structures, including a dance floor for over 300 guests, amid old-growth redwoods and an endangered-fish stream. Parker ended up paying $2.5 million to settle the mess — and he’s now blaming the resort that rented him the unpermitted space. The whole episode, says the Atlantic’s Alexis Madrigal, perfectly reflects the basic Silicon Valley corporate mindset: “Dream big, privatize the previously public, pay no attention to the rules, build recklessly, enjoy shamelessly, invoke magic, and then pay everybody off.”




IMAGES OF INEQUALITY


Drivers of high-status luxury cars, recent research shows, will violate traffic laws and endanger pedestrians far more readily than drivers of ordinary vehicles. Psychologist Paul Piff describes this and related research — on the impact of wealth on people’s behavior — in an engaging new 10-minute video from PBS.



Web Gem

Subsidy Tracker: Why battle for customers in the marketplace, today’s CEOs have learned, when you can extort lush subsidies out of state and local governments? Since 2008, reports Good Jobs First, states and localities have awarded firms ranging from Sears to Samsung 240 subsidies worth at least $75 million. Subsidy Tracker identifies just which companies have pocketed the most.

PROGRESS AND PROMISE

The legislation that created the federal minimum wage, the Fair Labor Standards Act, turned 75 last week — with an unexpected cause for celebration. A flagrant loophole around minimum-wage protection may finally be closing. In recent weeks, a federal court has ruled against employers that treat unpaid interns as regular employees, and a new advocacy group, Intern Justice, has begun filing lawsuits against firms that continue to use interns to replace entry-level workers. Unpaid internships, even if managed as legitimate mentoring experiences, end up reinforcing inequality, notes analyst Tim Noah, since wealthier kids can more easily afford “to work free of charge.”



Take Action
on Inequality

By law, intern experiences need to benefit interns first, not the employers who bring them on. Help end the exploitation of intern labor. Learn more at Intern Justice





Stat of the Week

Since 1965, the gap between the compensation of America’s top corporate chief executives and the pay of average workers has grown fourteenfold, the Economic Policy Institute calculates.

IN FOCUS

Getting Past Stars and Swipes Forever
Back in 1776, public-spirited patriots emerged from the ranks of colonial America's privileged. But our corporate elite today seems to offer up only thieving, tax-dodging parasites. Why such a contrast?

Almost ten generations have come and gone since 1776. Yet the giants of 1776 still fascinate us. Books about Benjamin Franklin, Thomas Jefferson, and George Washington still regularly dot our best-seller lists.

What so attracts us to these “founding fathers,” these men of means who put their security, their considerable comfort, at risk for a greater good? Maybe the contrast with what we see all around us.

Today's men of means display precious little selfless behavior. Our CEOs, bankers, and private equity kingpins remain totally fixated on their own corporate and personal bottom lines. They don’t lead the nation. They steal from it.

So who can blame the rest of us for daydreaming about a time when a significant chunk of our elite showed a real sense of responsibility to something grander than the size of their individual fortunes?

Actually, suggests a new book from University of Michigan sociologist Mark Mizruchi, we don’t have to go back to 1776 to find Americans of ample means who cared about “the needs of the larger society.” We had this sort of elite, he argues in The Fracturing of the American Corporate Elite, a half-century ago.

Many of America’s major corporate leaders, Mizruchi writes, spent the years right after World War II engaged in public-spirited debate over how best to put the Great Depression behind us and build a prosperity that worked for everyone.

These corporate leaders didn’t try to gut the social safety net the New Deal of the 1930s had created. They supported efforts to stretch this safety net even wider. In the postwar years, major corporate executives helped expand Social Security and increase federal aid to education six-fold. They even accepted high federal income tax rates on high incomes — their incomes.

Mizruchi takes care not to go overboard here. Corporate leaders of the mid-20th century regularly did do battle, at various times and on various issues, with unions and other groups that spoke directly for average Americans.

But these corporate leaders also did display, notes Mizruchi, “an ethic of responsibility.” They compromised. They tried to offer solutions. They behaved, on the whole, far more admirably than the union-busters, tax-dodgers, and bailout artists who top America’s biggest banks and corporations today.

What explains why our corporate elite behaved so much better a half-century ago? Mizruchi explores a variety of factors. In the 1950s and 1960s, for one, our corporate elite had to share the political center stage with a strong and vital labor movement. Today’s corporate leaders face a much weaker labor presence.

This weaker labor presence has allowed wealth and power to concentrate ferociously at America’s economic summit. We have become, over recent decades, a fundamentally much more unequal nation.

This inequality, in turn, may be the key to understanding why corporate leaders a half century ago much more resembled the elite of 1776 than our own contemporary corporate movers and shakers. In both 1776 and a half-century ago, our most financially fortunate found themselves in relatively equal societies.

On the eve of the American Revolution, researchers have recently documented, England’s 13 American colonies had a much more equal distribution of income and wealth than the nations of Europe.

In the years right after World War II, the United States enjoyed a similar epoch of relative equality. Corporate CEOs in the 1950s only made 20 to 30 times what their workers made, not the 200 and 300 times more, on average, that top corporate execs routinely take in today.

In both 1776 and 1976 America, the top 1 percent overall took less than 10 percent of the nation’s income. The top 1 percent share today, as economist Emmanuel Saez details, is running at over double that level, at 20 percent.

Did this relative equality of revolutionary America and America right after World War II help shape how elites interacted with their societies?

That certainly seems plausible. More equal societies, after all, have narrower gaps between those at the economic summit and everyone else. The narrower the gap in any society, the easier it becomes for all — elite and average alike — to feel invested in their society and share a sense of responsibility for its future.

The takeaway for our Fourth of July, 2013 edition? If we want to rekindle that spirit of 1776, not just daydream about it, our course stands clear. We need to create a more equal America.



New Wisdom
on Wealth

Timothy Noah, Spielberg test: Why the One Percenters don’t deserve twice as much, MSNBC, June 24, 2013. Today’s rich turn out to be no more deserving of their wealth than yesterday’s.

Marshall Poe, Growing Apart: A Political History of American Inequality, Big Ideas, June 25, 2013. An interview with historian Colin Gordon on his interactive new look at America's great divide.

Thomas Edsall, What if We’re Looking at Inequality the Wrong Way? New York Times, June 26, 2013. Conservatives are cheering a new analysis of U.S. income distribution. But this new analysis doesn't add up. A Dean Baker addendum.

Jesse Eisinger, Ixnay on ‘Say on Pay,’ Pro Publica, June 26, 2013. Why having shareholders take advisory votes on CEO pay has proved such an ineffective check on executive excess.

Tiffany Hsu, Top restaurant CEOs paid 788 times minimum wage, data show, Los Angeles Times, June 28, 2013. Chief execs at the nation’s top eateries make more in a morning than the average minimum-wage cook or dishwasher earns in an entire year.

Mark Engler, Should There Be a Maximum Wage? Nation of Change, June 30, 2013. An income cap that limited executive pay to no more than a multiple of worker pay would motivate CEOs to augment the pay of their janitors.

NOTABLE

Understanding Our Revolutionary Roots
Harvey Kaye, Thomas Paine and the Promise of America. New York: Hill and Wang, 2005. 326 pp.

Thomas Paine, the great pamphleteer of the American Revolution, wanted to see the nation he helped create become a place where “the poor are not oppressed, the rich are not privileged.” Throughout the revolutionary years and beyond, writes historian Harvey Kaye in this indispensable and engaging guide to Thomas Paine’s life and thought and legacy, America's first great egalitarian thinker would display “a disdain for excessive wealth” and “a recognition of the critical connection between affluence and distress.” In our own distressingly unequal times, these pages offer inspiration — and an ideal read for the Fourth of July.



 

Happy Birthday, Milton Glaser

The Iconic Designer on Art, Money, Education, and the Kindness of the Universe

“If you perceive the universe as being a universe of abundance, then it will be. If you think of the universe as one of scarcity, then it will be.”

Born 84 years ago today, Milton Glaser — legendary mastermind of the famous I♥NY logo, author of delightful and little-known vintage children’s books, notorious notebook-doodler, modern-day sage of art and purpose — is celebrated by many as the greatest graphic designer alive. From How to Think Like a Great Graphic Designer (public library) — the same fantastic anthology of conversations with creative icons that gave us Paula Scher’s slot machine metaphor for creativity and Massimo Vignelli on intellectual elegance, education, and love — comes a fascinating and remarkably heartening conversation that reveals the inner workings of this beautiful mind and beautiful spirit.



What E. B. White has done for writing — “A writer has the duty to be good, not lousy; true, not false; lively, not dull; accurate, not full of error. He should tend to lift people up, not lower them down,” he memorably asserted — Glaser has done for the visual arts, a legacy Debbie Millman captures beautifully in the introduction to the interview:
While other great designers have created cool posters, beautiful book covers, and powerful logos, Milton Glaser has actually lifted this age he inhabits. Because of his integrity and his vision, he has enabled us all to walk on higher ground, and it is that for which we should be especially grateful.
In fact, this ethos is reflected in Glaser’s timeless addition to history’s finest definitions of art:
Work that goes beyond its functional intention and moves us in deep and mysterious ways we call great work.

Glaser shares the wonderful and sweetly allegorical story of how he became an artist:
The story of how I decided to become an artist is this: When I was a very little boy, a cousin of mine came to my house with a paper bag. He asked me if I wanted to see a bird. I thought he had a bird in the bag. He stuck his hand in the bag, and I realized that he had drawn a bird on the side of a bag with a pencil. I was astonished! I perceived this as being miraculous. At that moment, I decided that was what I was going to do with my life. Create miracles.
His early childhood, in fact, was a petri dish for his genesis as an artist. He recounts another memory that presaged his gift for welcoming not-knowing in order to know life more richly as the muse of his mastery, a skill that would become the guiding principle of his creative ethos:
I was eight years old, and I had rheumatic fever. I was at home and in bed for a year. In a certain sense, the only thing that kept me alive was this: Every day, my mother would bring me a wooden board and a pound of modeling clay, and I would create a little universe out of houses, tanks, warriors. At the end of the day, I would pound them into oblivion and look forward to the next day when I could recreate the world.
[…]
I think that, to some degree, this is part of my character as a designer: To keep moving and not get stuck in my own past. This is what I try very hard to do.
I think at that moment in my life, I found a peculiar path: To continually discard a lot of the things that I knew how to do in favor of finding out what I didn’t. I think this is the way you stay alive professionally.
In the context of discussing those early memories, however, Glaser offers an important disclaimer about the limitations of our memory and its imperfections:
Memory is treacherous; you can’t depend on it. It is basically always recreated to reinforce your anxiety or to make yourself look better, but whatever actually did happen is totally susceptible to subjective interpretation. I absolutely don’t trust my memory.
Glaser seconds Alan Watts’s timeless wisdom on profit vs. purpose and gets to the heart of how to find your purpose so you can worry less about money:
I never had the model of financial success as being the reason to work. When I was at Push Pin, none of the partners made enough money to live on. It took ten years for us to make as much as a junior art director in an agency. We were making $65 a week! But money has never been a motivating force in my work. I am very happy to have made enough money to live as well as I do, but I never thought of money as a reason to work. For me, work was about survival. I had to work in order to have any sense of being human. If I wasn’t working or making something, I was very nervous and unstable.
Echoing Frank Lloyd Wright’s aphorism that “an expert is a man who has stopped thinking because ‘he knows,’” Glaser rejoices in the glory of keeping the internal fire of learning ever-ablaze:
That is a great feeling: when you feel the possibility of learning. It’s a terrible feeling to feel you can’t learn or have reached the end of your potential.
Touching on Sister Corita Kent’s 10 rules for learning and Bertrand Russell’s commandments for teachers, Glaser — a revered educator himself — goes on to offer an articulate vision for what the art of education really means:
What you teach is what you are. You don’t teach by telling people things.
[…]
I believe that you convey your ideas by the authenticity of your being. Not by glibly telling someone what to do or how to do it. I believe that this is why so much teaching is ineffective. … Good teaching is merely having an encounter with someone who has an idea of what life is that you admire and want to emulate.
Echoing Rilke’s counsel to live the questions, Richard Feynman’s advocacy of allowing for doubt, John Keats’s insistence on the power of “negative capability”, and Anaïs Nin’s faith in the richness of living with ambiguity, Glaser reflects on the immutable impermanence of everything, the very thing he once intuited in his childhood experience of sculpting and destroying his modeling clay creations:
There is no security in the world, or in life. I don’t mind living with some ambiguity and realizing that eventually, everything changes.
But the most powerful aspect of Glaser’s ethos, one all the more necessary as a lifeboat amidst today’s flood of cynicism, is his unrelenting optimism — an essential antidote to the zero-sum-game mentality of success that plagues so much of our modern thinking:
If you perceive the universe as being a universe of abundance, then it will be. If you think of the universe as one of scarcity, then it will be. And I never thought of the universe as one of scarcity. I always thought that there was enough of everything to go around — that there are enough ideas in the universe and enough nourishment.
In extending this conviction to the most tender aspiration of the human heart, our longing to belong, he echoes Ted Hughes’s poignant reflection on our inner child and adds to literary history’s most beautiful definitions of love:
Do you perceive you live your life through love or fear? They are very different manifestations. My favorite quote is by the English novelist Iris Murdoch. She said, “Love is the very difficult understanding that something other than yourself is real.” I like the idea that all that love is, is acknowledging another’s reality.
Acknowledging that the world exists, and that you are not the only participant in it, is a profound step. The impulse towards narcissism or self-interest is so profound, particularly when you have a worry of injury or fear. It’s very hard to move beyond the idea that there is not enough to go around, to move beyond that sense of “I better get mine before anybody else takes it away from me.”