Category Archives: Seth On The Arts

Seth presents opinions about how others present their artistic wares.


In the preface to The Picture of Dorian Gray, Oscar Wilde claims:

“There is no such thing as a moral or immoral book. Books are well written, or badly written. That is all.”

I’m not sure if everyone quoted below would agree with Mr. Wilde.

Two film critics, Frances Ryan of The Guardian and Scott Jordan Harris of Slate, have recently scourged the propensity—illustrated by the award-winning film, The Theory of Everything—of able-bodied actors portraying disabled people in film. While both critics acknowledge that, in this particular case, the casting of able-bodied Eddie Redmayne to play Stephen Hawking may have been a logistical necessity (since the story covers Hawking’s life both before and after ALS disrupted his mobility), the two intrepid accusers nevertheless contend that the film represents a collage of Hollywood injustices against disabled people.

I think it is a worthwhile discussion and I do empathize with how daunting it must be for disabled actors to find work. Nevertheless, while both critics’ arguments are provocative and useful starting points for this moral discussion, their accusatory presentations seem to ignore the muddiness of these moral waters.

I’ve thus broken down their arguments into five categories to try to distinguish their philosophical baggage from their more interesting cases for change.

(1) Portrayals of disabled people on screen by able-bodied people cost disabled actors roles.

“Like many other disabled people,” Harris says, “I have often argued that disabled characters should, wherever possible, be played by disabled actors. When disabled characters are played by able-bodied actors, disabled actors are robbed of the chance to work in their field.”

I think this is a legitimate concern (although I wouldn’t use the harsh metaphor of “robbery,” which suggests that such roles are intrinsically the rightful property of disabled actors). Logistically, disabled people cannot currently play able-bodied people in live-action films. Thus, if they’re going to work in the industry, they must be able to get some of the roles depicting disabled people. And, since such opportunities may exist at a lower per-capita rate, such performers—already generally beset by disadvantages in life—have extra trouble finding work, too.

This is unfortunate, and so it seems reasonable to me that, as Harris suggests, all other relevant things being equal (or close to), directors should cast disabled people in the roles of disabled people.

The trouble is that all other relevant things are often not equal. For instance:

(A) Most roles about people with disabilities involve a spectrum between having the disability and not. Such a role, then, can only be played by an actor who can take on the full range of movements covered in the character’s story. As acknowledged by our bold writers, The Theory of Everything is such a film, and so they admit that it may be forgiven on that basis.

(B) Sometimes the best available actor for the part does not have the disability that is being portrayed. All people, after all, have multiple dimensions to them, including their physical, intellectual, and emotional states. While a person with a disability may on the surface look the most like the person to be portrayed, they may not on a deeper level possess the desired connection to the character. To pigeonhole disabled characters by limiting those playing them to be equally disabled actors is to suggest that disabled people are essentially disabled, when that is, in fact, just one of many facets of their identity.

One of my sisters puts this point more eloquently.

“’Disability,’” she says, “is not a binary state; there are all sorts of points on the spectrum.  And a happy and well-adjusted [person living in a wheelchair] might not have access to specific feelings… that a standard issue person who’s struggled with depression has, that might be required in certain roles.”

In my viewing experience, the greatest actor I’ve ever seen is Daniel Day-Lewis. If I were a director of any movie about anything, and Mr. Day-Lewis were available, I would cast him in the lead role, whether the character were a disabled man, a four-legged robot, or a two-year-old learning to talk. While it may be the case that actors win awards more often than chance when they transform their physical dimensions, they can usually only successfully do so if they are also able to transform their emotional dimension to the point that we believe they are a different person (not just a different mobility level). Nobody, in my opinion, does that better than Daniel Day-Lewis, and so to not cast him for a role just because he lacks certain physical characteristics would be an affront to the art of filmmaking.

(C) Sometimes the disability in question can impede the person’s ability to portray someone else.

Ryan highlighted the casting of non-autistic actor, Dustin Hoffman, to play the role of an autistic person in Rain Man. Well, correct me if I’m wrong, Wikipedia, but does not autism generally impede one’s emotional ability to relate to the outside world? Thus, unless we think all autistic people are the same, it might be challenging for most autistic people to emotionally connect with and insightfully portray a different autistic person.

Meanwhile, a person with a physical disability that is different from the person whom they are playing may struggle—by virtue of their own particular mobility issues—with capturing those of someone else.

(D) Sometimes, the disability is so specific that few actors with a severe disability could conceivably play the role.

Both critics referenced My Left Foot, the film in which Daniel Day-Lewis portrayed a man with cerebral palsy who could only control his left foot, and yet did so with artistic dexterity. How many actors with a significant disability, I wonder, would have had the unique combination of ability and disability available to convincingly render that particular set of traits?

(E) Audiences are less likely to see a movie that does not star a well-known actor.

Unfortunately, in the case of the film industry, as Ryan acknowledges, the ability of actors to draw a crowd does seem to be a vital part of their work. Without predictably large audiences, most production companies won’t invest in projects, and so the promise of a previously approved actor is more likely to satisfy their bottom-line requirements. This further compounds the problem for disabled actors. Without those first roles, they cannot build their stock in audiences’ familiarity-craving minds, and so they never gain easier access to second roles. Most actors attempt to circumvent this problem by making big first impressions in smaller roles, but since there are also relatively few small roles available for disabled actors, they are once again stuck in a doubly daunting position.

Nevertheless, in spite of the clear disadvantage here for disabled actors, there isn’t an obvious solution to it. Requiring directors to always impose a disability symmetry between actors and roles would surely—by the capitalistic nature of film-making—result in fewer movies being made about people with disabilities.

(2) We wouldn’t accept black people being portrayed by white people, so we should similarly restrict able-bodied people from portraying disabled people.

“While ‘blacking up’ is rightly now greeted with outrage,” Ryan says, “‘cripping up’ is still greeted with awards. Is there actually much difference between the two? In both cases, actors use prosthetics or props to alter their appearance in order to look like someone from a minority group. In both cases they often manipulate their voice or body to mimic them. They take a job from an actor who genuinely has that characteristic, and, in doing so, perpetuate that group’s under-representation in the industry. They do it for the entertainment of crowds who, by and large, are part of the majority group.”

This is a powerful and worrying argument. Expecting acting outfits to limit their roles to actors of similar physical characteristics would be practically and artistically daunting if applied to all cases. Our conventional moral wisdom has made an exception that doesn’t allow white people to portray black people because of the expired but embarrassing theatrical tradition of “blackface,” in which white and sometimes black actors wore black makeup and portrayed black people as a cartoonish collection of stereotypes. Without that ugly past, restricting people of one race from portraying another would be as arbitrary as restricting a young person from portraying an old person, or a person with or without glasses from taking on the opposite. In a perfect world without racism, race is as meaningless as shoe size.

However, because theatre history treated black people like puppets in vaudeville shows, blackface has understandably become synonymous, in most people’s minds, with racism. But this hard-earned convention of artistic restriction can have unfortunate consequences, too; consider how high school drama departments must feel limited only to the stories and casting decisions that happen to match the skin colours of their performers. This may mean they don’t put on a play about Nelson Mandela because they don’t have someone of the right race to play the lead. Such troubling artistic restrictions ought not to be seen as intrinsically righteous such that they are automatically justified in all situations where a minority group has suffered.

While noting the similarity between race and disability is understandable, we must also consider the differences before consenting to the additional artistic restriction that Ryan suggests. In the film world, now that blackface has been relegated to its uncomfortable place in infamy, and many black actors have found their way to prominence, it is hard to imagine any professional director having difficulty casting the part of any black character. There will never be an issue with discovering an actor who can play all physical states in a character’s trajectory.

Moreover, unlike mobility, age, and even gender, race effectively never fluctuates between states. Thus the consequence of restricting white people from portraying black people in film and professional theatre is essentially just a philosophical injunction, which rarely (I assume) has practical artistic repercussions. However, applying the same restrictions to disability would likely have serious consequences in terms of the frequency of stories told about people with disabilities.

(3) Able-bodied performances of disabled people cost the latter the right to portray themselves on screen.

Ryan argues:

“When it comes to race, we believe it is wrong for the story of someone from a minority to be depicted by a member of the dominant group for mass entertainment. But we don’t grant disabled people the same right to self-representation.”

That is a dangerous justification for restricting art. Condemning “blackface” is not, or should not be, about self representation; it is, or should be, about attempting to undermine a specific historical insult. That is all. Limiting roles to people with equivalent backgrounds for its own sake is a scary idea. No one has the right to require performers, writers, and directors to have lived similar lives, and/or come from an equal demographic, to those whom they portray. Artistic freedom would certainly be damned if that were a legitimate demand.

“When disabled characters are played by able-bodied actors…” adds Harris, “the disabled community is robbed of the right to self-representation onscreen. Imagine what it would feel like to be a woman and for the only women you ever saw in films to be played by men. Imagine what it would feel like to be a member of an ethnic minority and for the only portrayals of your race you ever saw in films to be given by white people. That’s what it’s like being a disabled person at the movies.”

I find this to be a disturbing essentialist argument. Again, no group has an intrinsic right to a role simply because they match up in one particular characteristic. Stephen Hawking is not only a man suffering with ALS; he is also somewhat of a smart guy, heterosexual, and lacking in perfect vision. So must the person who portrays him also be a mathematical genius who likes ladies, but can’t read his own theories without glasses?

Of course not. Hawking is not the property of any of those groups. He is a collaboration of many characteristics, and should be portrayed by the person who can capture, without necessarily possessing, the widest variety of them at the same time.

(4) Portrayals of disabled people by the able-bodied are inauthentic because they are just impersonations.

Contends Harris:

“The ultimate ambition of David Oyelowo’s performance [in Selma] as Martin Luther King, Jr. is to express the reality of black life and black history in a way that resonates with those within the black community and educates those outside it. The ultimate ambition of Eddie Redmayne’s performance as Stephen Hawking is to contort his body convincingly enough to make other able-bodied people think ‘Wow! By the end I really believed he was a cripple!’ Our attitudes to disability should have evolved past the stage when this mimicry is considered worthy of our most famous award for acting.”

I wonder if Harris has any evidence for his claims regarding Selma’s (racially divided) pedagogical intentions. Setting that strange contention aside, I can understand how a disabled person might feel annoyed by someone acting in the way that they have to suffer. Nevertheless, I think the argument to restrict performances for that reason is insufficient because, in the end, all acting is “mimicry.”

During the Harris-approved portrayal of Martin Luther King, Oyelowo is impersonating King’s then state of mind, circumstances, clothing, hair style, mannerisms, and voice. Moreover, returning us to the other side of the analogy, even if a disabled person had played Hawking, the chance that such a person would be someone also tortured by ALS at the same rare rate of progression is remote, so they too would have had to mimic the physicist’s movements at some point in the film.

All acting performances are mostly make-believe. I cannot imagine any coherent line between acceptable simulation and mimicry.

(5) The direction and writing of stories about disabled people are inauthentic unless done by disabled people.

“Even if we accept,” Harris explains, “that Redmayne should get a pass to play Hawking, we are still left with a film that excludes disabled people while pretending to speak for them. The Theory of Everything is based on a book by an able-bodied person, adapted by an able-bodied screenwriter, and directed by an able-bodied director, and it stars able-bodied actors. DuVernay’s egregiously under-nominated Selma, burns with authenticity about black experiences because it was made by members of the black community, not by members of the community that has historically oppressed them. In contrast, The Theory of Everything flickers weakly with truisms that can be mistaken for insight only by people who are not disabled, because it was made by—and for—people who are not disabled.”

Evidence is required for these inflammatory claims. For instance, who says The Theory of Everything purports to speak for disabled people? Maybe it wanted to speak for physicists, or, more likely, for no one, and just wanted to tell a good story. More importantly, though, if it’s the case that disabled people are usually better at directing stories about disabled people than their able-bodied cousins, then—artistically speaking—disabled people ought indeed to get the jobs more often on the basis of their superior merits. But we should never force such a generalization into all cases.

The best antidote for bad storytelling is not to criticize the physical characteristics of the storytellers, but instead to criticize their work. If such complaints about failed authenticity are legitimate, then perhaps The Theory of Everything is unworthy of its many award nominations. As a movie critic, Harris ought to point out which aspects of the film rang shallow (a single example of a false truism obvious to disabled people would be helpful). Then the next producers of a film about a disabled person might feel more obligated to get it right, and so perhaps would find themselves hiring a disabled director who seemed to have a better understanding of the issues the film was intended to illuminate.

However, to claim a lack of authenticity just by definition of the particular physical characteristics of the people involved is not only bigoted, but will again provoke the natural consequence of reducing the number of stories told about disabled people.

Consider the case where an influential and successful able-bodied writer or director is contemplating their next project: if, by Harris’s essentialist philosophy, he or she is barred from creating stories about disabled people—sorry, Herman Melville, Captain Ahab is off limits to you!—then surely we’ll have even fewer roles available for disabled actors.

Moreover, such a result may put pressure on disabled writers and directors to only tell stories about disabled people—since no one else is allowed to—even though some filmmakers with disabilities may want to tell other stories, too.

In conclusion, I refer us all again to the preface to The Picture of Dorian Gray, in which Oscar Wilde claims:

“The artist is the creator of beautiful things. To reveal art and conceal the artist is art’s aim.”

Once again, I’m not sure if everyone quoted above would agree with Mr. Wilde.


The notion that the sun shines and the rain pours is part of Big Sun’s weather propaganda.


III: T-SHIRT OF A RANT (you are here)

You may recall my revolutionary rant against sun-biased weather journalism. I’m delighted to report that some of my leading fans (two of my sisters) bought me a t-shirt of support (derived from a noble t-shirt performance artist on the IT Crowd). Resistance may not be futile, after all.

May the clouds be with us all!


III: T-SHIRT OF A RANT (you were just here)

CBC, NOW PRINCIPLE-FREE I: CBC Radio Celebrates Pre-Formance Art

CBC Radio’s Editorial policy is clear:

(1) CBC Radio promises to tell every story from the perspective of truth and justice, and

(2) CBC Radio endeavours to alter their definition of truth and justice depending on who the players are in each story.

A few weeks ago I listened to an interview by CBC Q alternate host Gill Deacon with performance artist, Heather Cassils, which landed a thorn in my paw that I haven’t been able to remove.

I should admit—before I begin my ranting attempt to extricate my irritation—that I am uneducated and often unkind in my viewing of performance art. I instinctively find it to be bogus, in part because it seems wild and meaningless, but also because of the way the artists themselves seem to hide from explaining their work. Infuriating responses such as, “What does my work mean to you?” leave me rolling my eyes. It is a tendency that invades all art forms, I’m sure: poetry, sculpture, and abstract painting being also among the most guilty, not necessarily because they are inherently meaningless art forms, but because their cultural worlds have promoted subjectivity at the expense of comprehensive analysis.

Studies suggest that wine connoisseurs will think a drink tastes better if they are told it costs more; similarly, I suspect, some devotees of performance art and sculpture will more highly value a work if it is not limited by legible communication. It is an exchange that benefits both sides, as the artist is able to either randomly or simplistically put their confusing whims on a canvas, call it the workings of a soul in turmoil, and wait for the grand interpretations to come in. “What does the work mean to you?” is a question that allows the greatness of the piece to not be restricted by the merits/intentions of the artist, but instead be (unwittingly?) manufactured by the imaginations and contemplations of the beholders. So, while the artists get to create work without the necessity of substance, their interpreters get to freely express their wild (sometimes brilliant) analyses without fear of contradiction from the source.

But Heather Cassils, in her interview, did not annoy me by this standard artistic babbling. Instead, I was disconcerted to find her straightforward and articulate. However, while my inner critic was not able to mock her for hiding from artistic analysis, it was placated by the fact that her work, unfettered by ambiguity, seemed shockingly simple to be receiving Q’s attention.

Ms. Cassils had been asked by Los Angeles Contemporary Exhibitions to produce a work that paid homage to the history of performance in Southern California. The artistic dynamo then searched the archives and found a 1972 photographic sculpture by feminist artist Eleanor Antin, who had starved herself for seventy-two days and taken pictures of herself “wasting away” to portray social expectations put on women.

(While this may have been a worthwhile feminist conversation to engage in regarding Western culture and how it seems to glamorize thin femininity to the point that girls may feel pressured to stay lean by any means, I wondered at this point in the interview whether such blatant artwork added anything new or helpful to the 1972 discussion. I would be surprised, that is, if such a heavy-handed and simple artistic rendering of this standard feminist argument provoked a change in any entrenched minds. But maybe at the time it was a revelatory point. Moreover, at least the artwork in this case was transparent and communicating directly with its audience.)

In response, Cassils wanted to make her own point through changing her body, but instead of a feminist criticism of how society misuses the female body, she wanted to “empower” women through a show of strength. Already a fitness trainer herself, she hired professional bodybuilding experts to help her load as much muscle onto her physique as possible in six months. The result was an appearance that, to her apparent delight, baffled conventional gender guidelines, as people had trouble wrapping their eyes around a woman looking similar to a well-muscled man. As a result, she says she was mocked by strangers and challenged to arm wrestling matches.

While I admire her strength (literally and figuratively), and recognize the pain she must have gone through to achieve this result, her product once again seems boring to me. Yes, with extra work, women can acquire muscle, too, and our brains—so used to large muscles primarily highlighting male bodies—will be surprised and perhaps disconcerted. Yet has Cassils taught us anything profound that we couldn’t have achieved from a few moments’ contemplation (or looking at female bodybuilders)?

But my biases are showing. According to Cassils, at one of her shows, a person approached her and said that, if he had seen her ten years before, he would have made different (presumably healthier) choices with his body. So, simple as it may seem on its surface, perhaps Cassils’s particular rendering can intuitively provoke some troubled observers to see themselves from a new (psychologically helpful) perspective.

The thorn that landed in my paw, however, was not Cassils’s presentation, but was derived from her interpretation of her own work. When asked about the experience of overloading her body, Cassils admitted that—while she had intended it to be empowering—it was, in fact, uncomfortable, explaining that:

“…the regime of the act of creating that transformation became very rigid: I couldn’t leave the city, I had to eat every three hours, the workouts became gruelling, I lost flexibility, I couldn’t do any kind of heart rate training, and so it became difficult to walk up stairs because I had twenty-three pounds of extra meat hanging off my body… and so something that I had initially thought would be this empowering thing became this oppressive thing.”

“So,” she ought to have concluded in reference to the ‘wasting away women’ metaphor that had first inspired her, “my artistic result makes me wonder if Western society also puts pressure on men to imprison themselves in a painful, obsessive exercise regimen that may eventually break their over-muscled bodies.”

Nope. Instead, the pains she felt while increasing her “masculinity” were not observed through the same lens that had told us how hard it was to be “feminine.”

Nor did the interviewer ask a question that would bring this obvious conclusion to the forefront. I suppose I can’t blame the artist or the interviewer. We live in a culture that rarely acknowledges that there may be painful pressures experienced by men that parallel those felt by women. Anorexia is considered a disease (or a form of cultural murder, according to some feminists), while excessive steroid use is a sign of men’s obsession with power. Cultural analysts rarely acknowledge that boys might feel pressured by images of shirtless large-muscled male superheroes in the same way that we think girls are influenced by images of uber-thin women in tiny clothing.

(I recall the Special K ad campaign a few years ago that tried to tease women out of their body image concerns through a series of vignettes of fictionalized men, such as a truck driver or a Harley Davidson rider, concerned with their bodies, and saying unexpected lines such as, “I just wish I could fit into my skinny jeans again.” These phrases from men were meant to be comical since it was far from how we see men seeing themselves. The ad concluded with a message that “Men don’t obsess about these things. Why do we?” This was a ridiculous and offensive assertion that did not consider the possibility that many men do aggressively scrutinize their own physiques, but they don’t express it as openly or in the same way that women do.)

It seems to me that part of what could make Cassils’s performance art interesting is that she is experimenting with her body to see what happens. I don’t like this style of body manipulation (why do something so unhealthy for a philosophical point that, my simple brain thinks, could just as easily be made through an essay or a drawing?). However, I would respect Cassils’s exploration if she had held herself to her experimental results. The fact that she ignored the unambiguous conclusion that being overly masculine might hurt, too, demonstrates that she was not going to deviate from her feminist argument, regardless of the results. Thus, Cassils’s message, in addition to lacking profound insight, does not possess an openness to discovery that would have justified its living in Cassils’s experimental medium. But at least now the thorn is out of my paw.

Here’s a look at the above-mentioned Special K ad campaign. It’s handy because, whether your bias matches mine (that modern Western society minimizes male body issues) or feminists’ (that modern society puts more body pressure on women than men), the ad can serve your purpose.

Last night I saw 12 Years A Slave (based on the memoir of the same name by Solomon Northup). While I think the film is both significant (as it takes on the rare task of telling a story about American slavery, circa 1841-1852, from a slave’s perspective), and moving (as it grabs any human with a morsel of empathy by the throat through its detailed imagining of the daily suffering of American slaves), I don’t think it is a great movie, for two reasons:

(1) All of the slaves in the film, even those who were never educated, speak with a poetic prose that is almost Shakespearian.

Thus, while the barbaric reality of their situation drew my imagination into their painful past, their fantastical linguistic prowess pulled me back out. It seemed to me that the screenwriter (John Ridley) and director (Steve McQueen) wanted to convince us that, even though the slaves were uneducated and could not read or write, they were still intelligent beings. Of course they were, but surely the writer can illustrate intelligence by other means (perhaps by the clever use of tools, and/or ideas, and/or an ability to manipulate situations and/or people). It seems to me that the writer and director do not think that their audience is smart enough to recognize subtler symptoms of active minds.

(2) The story—as seems to be the convention of any movie that wants to be seen as sophisticated these days—is told out of order.

12 Years a Slave begins by taking us to one of Solomon Northup’s most painful moments as a slave, before flashing back to his origins as a free man, where we watch him for a few fleeting scenes. Then, our protagonist’s tale jumps between various spots in the narrative until finally resting in the main arena of the story. Why do modern directors fear the linear so much? The convention of bounce-around storytelling has become so prevalent that even the superhero movie Man of Steel (2013) thought it was important to go for a mixed timeline. I understand that sometimes non-linear sequencing can benefit the drama if:

(A) Telling the story out of order allows different perspectives to be presented one at a time. This way, the audience is learning about new characters, or pieces of the puzzle, as the significance of the events grows, as in Vantage Point (2008), or, most impressively, in The Debt (2010). In the latter case, the story gives us a look at future events that will later turn out to not be as they seem, and so when the back story reveals the secret, the future plot and the past plot collide beautifully. 12 Years a Slave, however, does not unravel puzzle pieces of the tale in this fashion; instead, we know most of the story in the first few scenes.

(B) Instead of one long narrative, the movie presents a collage of tales. As a result, such a film is sometimes divided into several narrative strands that overlap in time, and so technically happen out of order; however, each tale has its own linear coherence that is not significantly altered by the slight knowledge of the future given by the other stories (for instance, Pulp Fiction (1994)). However, 12 Years a Slave focusses on one character throughout, and thus does not reap the benefits of such collage-based storytelling.

(C) In special cases of character development, having the protagonist initially seen without their background context is later supplemented by flashbacks that enhance our understanding of them. This device allows us to realize that there is more to them, and perhaps humanity in general, than we realized (for instance, American History X (1998) and the TV series Lost (2004-2010)). The creators of 12 Years a Slave could have opted for this option and done it effectively had they begun with the protagonist as a slave and then inserted flashbacks that gradually revealed he was not always one. Alas, they did not.

(D) The story is told from a future perspective such that we know the final result but are spared the details until they occur. I think this device is particularly effective in cases where the outcome is common knowledge, so the circumstances of how it happens are more interesting than the ending, such as stories about a famous historical event or figure, for instance, Titanic (1997) and Amadeus (1984). 12 Years a Slave could have used this option, as the title already implies the conclusion of the film, but, instead of telling us the entire story from a future perspective, the film merely mixes our hero’s past events into one murky stew.

(E) The movie is actually about time travel, so an out-of-order sequence is part of the narrative, for instance, Star Trek IV: The Voyage Home (1986) and Back to the Future (1985).

Despite the successful offerings above, movies that effectively tell stories out of order are relatively rare, in my opinion. In most cases, the trick of giving away future events damages the drama because it diminishes the film’s ability to provoke curiosity and fear about what’s to come.

In the case of 12 Years a Slave, the story begins with Solomon Northup already living in one of his most hostile southern environments during his time as a slave. The movie then slides backwards to his time in the north as a prosperous free man; his kidnapping, then, is not nearly as scary to us as it should be because we already know how bad it will get.

Moreover, even the kidnapping is not told in order, and so instead of luring us into the protagonist’s happy past at first, before shocking us with a moment-by-moment depiction of what went wrong, we are knocked back and forth between his future and his past such that we never settle into the joy of his initial happiness. The horror of his change in circumstances, therefore, is not nearly as well articulated as it would have been had the story simply been told in order.

Furthermore, the downward trajectory of Northup’s life as a freeman-turned-slave—while rendered with brutal and effective detail—is again not as powerful as it could be because we know, from the original flash ahead, how bad it will get.

I think I understand why directors believe in this pinball-style storytelling: along with wanting to capitalize on the strange perception that all smart stories are non-linear, they believe

(A) that the audience needs to be constantly shown the contrast between the good times and the bad times, or the thematic relationship between past and present, by putting them side by side, so that we can fully empathize with the distinction, and

(B) that flashbacks to past events during the action will remind us of where the character has been such that we will appreciate their current motives.

If I’m right about any of these director and writer motivations, I think movie makers need to have more faith in both their stories and their audiences. A worthwhile and well-articulated story doesn’t need to remind the audience of the significance of current events (in contrast with the past) as we will have been on the journey with the protagonist, and so will feel the significance of the change instinctively; moreover, if the characters are drawn well, we will understand their motivations without needing the director to constantly point at them for us as though we’re primary school children.

In short, I implore directors and writers to trust their stories instead of leaning on this condescending and over-used gimmick.


WARNING: The following entry features two seemingly unrelated babbles, but I hope they will come together in the end.


I have recently made a pact with myself to read the novels of Charles Dickens. I met him as a kid, when my dad read Great Expectations to me, which may have given me a false expectation of the writer since my dad, along with my mom who read books to my siblings and me, is one of the greatest readers aloud of books that history has ever known. Both of my parents provide pathos in their tone that enlivens the spirit of every character. My particular favourite was the lawyer with the thick fingers, Mr. Jaggers, to whom my dad’s voice delivered a confidence and intelligence that would have left Perry Mason jealous. I then read Hard Times in university (at the instruction of a professor), which I think must be one of Dickens’s few concise works, as it didn’t take long to get through. I recall it being humorous, in spite of its dark themes, but embarrassingly I don’t remember much about the story, so from it alone I still cannot claim to have verified Dickens’s greatness.

So, this year, I decided to take on A Tale of Two Cities, in part because it is so well introduced by Dr. Frasier Crane in the excellent sit-com, Cheers:

FRASIER (reading to his less literate bar buddies): “It was the best of times, it was the worst of times—”

NORM: Wait, whoa, whoa, whoa. Which was it?

FRASIER: Just stay tuned, Norm. “It was the age of wisdom; it was the age of foolishness. It was the epoch of belief; it was the epoch of incredulity.”

CLIFF: Boy, this Dickens guy really liked to cover his butt, didn’t he?

—but also because I wanted to have the work read to me as an audiobook during my many free times on transit, and the audio version for A Tale of Two Cities was available at a good price on my local internet.

The book, I quickly discovered, would have been more appropriately given the label of “Hard Times,” both for its characters, and for its reader (listener), as there are many passages of description that baffled my mind. Upon two or three listenings of the bulkiest sections, however, I understood most of it, and whenever the characters spoke to each other, the story soared. Each person in the narrative has a distinct character (and voice provided by the amazing narrator, Peter Batchelor, who proves himself to be a worthy Dickens-reading understudy for my dad) as their lives mingle together with both the nuance of a true story and the unexpected turns of a mystery novel. Dickens’s puzzle pieces fit so well together in service to the grand story, and yet all of the characters act as autonomous beings, never wavering from their individual motivations.

The finale of the Tale arrived in my ears as I jogged the New Westminster sea wall; with a cool wind in my face, I was stunned as each of the characters collided into a perfect heart-palpitating conclusion. I was forced to come to the following determination: Charles Dickens is the greatest novelist whom I have met so far.

After the tale was done, I dialed up the audiobook store again, and selected David Copperfield, both because it was selling at a good price and because my new friend and narrator, Peter Batchelor, would be supplying his voice again.

I was warned, upon this choice, though, that I might find it to be aggravating because, in the novel, Dickens apparently spends much of his time telling stories from the past in the present tense. Uh oh.


I have been ranting (in my non-blog life) for a while now about the omnipresent usage of the present tense to describe events that happened in the past. I understand that, when telling a story, rendering it in the present tense can sometimes create the impression that the narrator and listener are experiencing it as it happens. However, the trend has turned into a requirement in the media. One of my two radio stations, CBC, insists on utilizing the present tense in all of its documentaries to the point that, when experts join the discussion to give their belated perspective on events, it is often confusing which parts of the discussion are current and which are past. Moreover, interviewers often don’t even give their witnesses the option of using the correct tense.

INTERVIEWER: So what are you thinking when you first see the dragon?

SCIENTIST: Well, I’m thinking: that’s the biggest rhinoceros I’ve ever seen!

INTERVIEWER: And when do you realize that you’re dealing with a dragon?

SCIENTIST: Well, I’m talking to my colleague, Dr. Expert about it, and she says that rhinoceroses don’t breathe fire, and so I realize I’m onto something. My rival Bernie McSkeptic says it’s the greatest discovery of the 20th century.

INTERVIEWER: Bernie the dragon skeptic was there, too?

SCIENTIST: No, he just said that now on his Facebook page. He’s listening to this interview.

INTERVIEWER: Oh, I see, well let’s get back to the story. I understand you’re worried that your puppy is going to be eaten by the dragon?

SCIENTIST: Oh, yes, he chases the dragon initially, but he escapes, and I’m totally relieved.


SCIENTIST: But then he gets eaten a few minutes later.

INTERVIEWER: Oh, I thought he survives?

SCIENTIST: He does… initially. And then he gets eaten.

All right, that’s enough. I realize I may have exaggerated the point a wee bit here, but the fact is: often, when listening to stories on the radio, or in a television documentary, it can actually become confusing at various moments in an interview whether the speaker is describing their current thoughts on a past incident or their past thoughts as they happened in the then-present.

Thus, I have come to the following demand: all media should desist from wielding this tool entirely, because they have proven incapable of reserving it for the particular instances where it might bring a specific tale extra significance. Instead, like underlining every word in a document, they use present-tense storytelling almost exclusively, and so the technique has lost both its power and its clarity.


David Copperfield begins with the phrase, “I am born,” which sets the tone for a novel that, although it is told from the perspective of a time long past its events, nevertheless dips into the memory of its protagonist, and so sometimes shares those memories from his perspective of re-living them.

Amazingly, though, ten chapters into this tale told in two tenses, not once has Dickens irritated me. The majority of the story is cheerfully described in the correct past tense, but occasionally the narrator zooms in on a sequence and gives a verbal snapshot of what he was feeling at the time of the event. The result is never confusing, but always clearly delineated as an exception. I, as a reader (listener), always know when the storyteller is providing a close-up memory, re-lived as though it is happening again in the present tense, and when he is panning out from the story and offering his long-distance perspective of the past.

And so I am tempted to reverse my call for a ban on the present tense in past-tense storytelling in the media. But not quite. Instead, I will now authorize the following middle ground: anyone in the media who possesses something near Dickens’s skill may use the present tense for past descriptions. For future reference, all others must stop immediately.


In 2009, the owners of Star Trek resurrected its franchise with a recalibrated young Captain Kirk and friends. It was an audacious and—I thought—brilliant effort. But New Yorker critic Anthony Lane, who had perhaps been passed over for a prestigious William Shatner biographer post, railed against the new Star Trek with scathing wit and predetermined unwillingness to consider what he had witnessed. Enter pre-SethBlogs to the rescue (SethBlogs was not yet born).

I wrote a review of Lane’s review, attempting to retaliate against the expert moviegoer by utilizing the same red herrings of empty cleverness that he had levied against his prey. I was pleased with the results: Star Trek was clearly vindicated by my Lane-style review of Lane. So I sent the double-edged piece on to The New Yorker, thinking they would surely be amused to print my whimsical retort against their top reviewer. Surprisingly, they did not reply to my cheerful submission.

Therefore, since SethBlogs was then just a glint in its founders’ (my sisters’) eyes, I was forced to dock the essay until a time when it could find a place to be free.

Well, in honour of the just-released sequel to the revamped Star Trek (now they’re going Into Darkness), I believe it is time to finally unleash my review of Anthony Lane’s review of Star Trek. For best results, I recommend first reading his prequel to mine. (Note: it’s a two-page review, so hit the “Next” button when you finish the first page.)

Read long and prosper.


There is a tendency in the blockbuster-movie universe to let the special effects do the talking: Star Trek: The Motion Picture did it in 1979 as it proudly forced us to look through far too many pictures of its baby, the shiny new Enterprise, as though too adorable for plot. Thirty years later, New Yorker reviewer Anthony Lane relies on a similar technique in his review of Star Trek the 11th.

With his quill set to stun, Mr. Lane reacts to the previously well-reviewed and well-attended new “prequel” Star Trek film by accusing its director, JJ Abrams, of exactly what I will charge him with:

“He gorges on cinema as if it were one of those all-you-can-eat buffets, piling his plate with succulent effects, whether they go together or not.”

Replace “cinema” with “review” and you have Lane’s tragic flaw.

Mr. Lane brings to our table a special demonstration of his ample authorial talents as he describes Star Trek with tasty metaphors (sketching the enemy Romulan ship as “a dozen Philippe Starck lemon squeezers”), along with humorous allusions to both ancient history (noting that the rivalry between Federation Captain James Tiberius Kirk and Romulan Captain Nero “suggests a delightful rerun of first-century imperial Rome… in zero gravity”), and, of course, nineteenth-century English literature (pointing out that Commander Chekov’s confusion between his “v”s and “w”s is “a tongue-slip that Dickens pretty much exhausted for comic value in The Pickwick Papers, but,” he says, “I guess the old jokes are the best”).

Lane also teases our literary palates by deftly accusing the film of anachronisms-to-be (“nice work, Jim,” he says, smirking at Kirk’s earthbound-Corvette, “getting hold of fossil fuel in the twenty-third century”), before filling us up with his main course, the rage-against-the-back-story (flogging it as “a device that, in the Hollywood of recent times, has grown from an option to a fetish”).

What a smorgasbord for the literary taste buds! Nevertheless, once one begins to chew through it, an inevitable question comes forward, “Where’s the beef?”

Mr. Lane’s ability to turn a Star Trek phrase against its purveyors is impeccable (“shields up,” he says in anticipation of a sequel prequel), and yet, after a full scan, I have not detected any substance (or, for the Trekkies among us, grey matter) in his argument.

Lane begins his essay by questioning the movie’s need to exist:

“What happened,” he laments, “to Star Trek? There it was, a nice little TV series, quick and wry, injecting the frontier spirit into the galactic void… It ran for three seasons, and then, in 1969, it did the decent, graceful thing and expired… Except that the story was slapped back to life and forced to undergo one warping after another… based on the debatable assumption that you can take a format designed to last fifty minutes and stretch it out to twice that length, then pray that the thinness doesn’t show. Believe me, it showed. One of the movies was about humpback whales.”

Whamo! That’s quite the impressive shot: eleven movies and four television series dismissed by one out-of-context reference. Lane refers, of course, to Star Trek IV: The Voyage Home (1986), my childhood favourite in the series because of its comedic placement of the characters in then present-day San Francisco where, upon leaving his cloaked spaceship in the park, Kirk remarked—for the trailers—“Everybody remember where we parked.” The whales were required because an earth-destroying whale collector was looking for them—and was unwilling to leave until they surfaced—but, unfortunately, unlike muscle cars, humpbacks had become extinct by the twenty-third century, and so Kirk and crew had to travel backwards in time to retrieve a pair of sample creatures as antidotes to the earth’s demise. I suppose this premise could be considered a smidge awkward, but in contrast with the Lane-approved original where go-go-booted she-aliens and mini-dress-wearing female officers reside, it seems rather tasteful (not to mention environmentally compassionate before its time).

Lane’s assault, though, is bigger than a squabble over points of plot: he seems to wonder, with a shake of his pen, at Star Trek’s imposition on cinema as though it’s a weed that nobody wants. But surely the critic is aware that people love this star-soaked universe: they watch it; they wear it; they marry to it.

I’m ready to stipulate that most literature would be best left to its original conclusion because a sequel will undermine its artistry, but the voyages of the starship Enterprise, while perhaps “quick and wry,” are no literary masterpiece whose profound conclusion would be forever tainted by a continuation. Gene Roddenberry’s first vision, as Lane aptly notes, “[manages] to touch on weighty themes without getting sucked into them and squashed”—Aye, aye, Captain! It was an optimistic playground of the mind wherein one was free to bounce around some thought-worthy scenarios. So shouldn’t the question of whether it continues be answered by an analysis of its ability to continue entertaining us?

Of course, the questions of whether a movie entertains versus whether it is intelligently rendered can have two significantly different answers (as The Matrix series helpfully demonstrates) so I could forgive Mr. Lane if he had merely pronounced the film to be a bad Trek, but, by scoffing at the very notion of the “continuing voyages,” Lane de-cloaks his pre-viewing agenda: his thumbs were down before the curtains were up.

Consider his contempt for prequels in general as he growls at Batman Begins, asking: “What’s wrong with ‘Batman Is‘?”

“In all narratives,” he says, “there is a beauty to the merely given, as the narrator does us the honor of trusting that we will take it for granted. Conversely, there is something offensive in the implication that we might resent that pact, and, like plaintive children, demand to have everything explained. Shakespeare could have kicked off with a flashback in which the infant Hamlet is seen wailing with indecision as to which of Gertrude’s breasts he should latch onto, but would it really have helped us to grasp the dithering prince?”

I find this critic-angst to be brilliant and funny—I’ll admit to being amused by the thought of Hamlet pondering, “To left, or not to left?”—and, yes, far too many screenwriters coddle and condescend to their viewers with justifications for behaviours we would have gone along with, anyway. The only (tiny) trouble I can see with the argument is that it’s fired at the wrong film. Star Trek is not a flashback built within a movie to help us understand; we already took Kirk’s status as Captain for granted—we were okay with Spock’s pointy ears, and nobody wondered how McCoy got through med-school; in fact, we were so comfortable with taking the universe as it was that we kept on flying with it through all those other series. Star Trek does not seek to answer a chorus of confused Trekkies who have always wondered about Scotty’s curious accent; instead, the film is a treat for Star Trekkers who are so enamoured by Roddenberry’s universe that a little hint into their heroes’ pasts makes their wee hearts grin.

More importantly, although it has all the titillations of a prequel, this Trek is not actually telling us what happened before the other episodes: instead it is the consequence of a post-Kirk Vulcan blunder that found its way back in time and killed a butterfly (Captain Kirk the 1st) just as Kirk Junior—our Kirk—was born. The result is a universe similar enough to allow our favourite Star Trek characters to still exist, but altered sufficiently to tweak their histories and personalities. The back-story here, then, is more than just Trekker-gratification: it also allows us to grasp the new rules in the adjusted-for-butterfly universe before we start re-Trekking our steps. Thus, the film is not a prequel, but a requel.

Lane regrets this “dose of parallel universe.”

“Come on, guys,” he says, rolling his eyes, “you’re already part of a make-believe world in which mankind can outfly the speed of light. Isn’t that parallel enough for you?”

This sounds like an impressive accusation; however, is it not the case that every science fiction movie (in fact, every movie, and, for that matter, every fiction ever invented) presents a parallel universe where the characters and sometimes the rules of the world are, to varying extents, different from our own? Lane seems to be suggesting that if we already have one fantastical element within a single film, we cannot have another. I’m not exactly sure why; a world that includes speed faster than light seems to me to be the most likely one to also include time travel.

Sure, time travel is irritating on film (How come someone from the future can still exist after he’s changed the past? and all that); nevertheless, can we not acknowledge that the Star Trek industry has boldly gone where no one has gone before? The authors of the film have, in essence, erased their universe and are set to begin a new stream of the same water. As a moderate fan, I am saddened to think of the many previously-witnessed voyages that will no longer happen in Kirk’s life, but, as a connoisseur of consistency, I am awestruck: Star Trek can persist, beginning again with the icons that brought it, without having to worry about matching up story lines. If they want Spock to be captain instead of Kirk, he can be; if they want the half-orphaned Kirk to be cockier than ever, the sky’s the limit!

I will leave the question of whether this do-over offering is the right course for Star Trek to more addicted fans than myself (and it seems that they have reported in with their support), but to ignore its creative moxie is another symptom of Lane’s unwillingness to consider this movie long enough to pay attention to what it is doing.

Recall his complaint that Chekov’s funny accent was merely a regurgitation of an old Dickensian joke. Maybe, but I think the Star Trek writers and fans were also laughing at the original Star Trek for having such a silly-voiced character. (Which, in fact, was likely a necessity of the Cold War era in which Chekov was created: he signified Roddenberry’s utopian view of future earth relations in which he predicted the Russians and Americans would have long patched up their frigid dispute. He even paralleled the anticipated truce with a pledge from the Star Trek future, itself, that his own rivals, the Federation and those over-gruff Klingons, would eventually become allies—a promise he fulfilled in Star Trek: The Next Generation. But, in order to ensure that Roddenberry’s “better dead than red” audience would go along with his super-truce, he designed his Russian symbol to be silly.)

Indeed, this requel teased several of our beloved: McCoy gave us one fantastic “Damn it, man! I’m a doctor not a physicist!” and Scotty, in his own unwieldy accent, complained that the ship didn’t have enough power to comply with Kirk’s demands. I am confident that the audience I attended with was laughing not because the lines themselves were so funny; instead we appreciated the unapologetic wink towards our corny original. I doubt Dickens’s Pickwick jokes were meant as self-satire.

But, if that doesn’t convince you that Lane’s review is a triumph of skill over substance, consider one final point: Lane’s review of Star Trek contains “humpback whales.” Enough said.


Thank you to both Tom Durrie, of the Tom & Seth Operatic Society, and our associate and opera scholar, Natalie Anderson, for aiding my understanding of opera sufficiently to write this blog entry; my conclusions, however, do not necessarily match either of theirs.

I’m not an opera connoisseur, but—with wide eyes and ears—I have been attending operas in Vancouver (and occasionally Seattle) for the past ten years under the expert instruction of my friend, and opera aficionado, Tom Durrie. I was excited, this past Saturday, to take in my debut viewing of The Magic Flute, by Wolfgang Amadeus Mozart. My pleased anticipation was based on two factors:

(1) Tom says that Mozart’s music is simply the greatest. Evidence for this claim was illuminated at our Tom & Seth Operatic Society preview party in which Tom played recordings from previous renderings of The Magic Flute; we were treated to songs so gentle that even opera-fearing people who view most arias as glass-breaking shrieks might not have been offended.

(2) Vancouver Opera had revived its 2007 production in which they set the story in a historical and supernatural First Nations landscape. Refitting operas for alternate settings is common, and The Magic Flute, Tom explained, is a perfect candidate for such reinterpretation, because it is a simple tale set in a forest with magical characters, and so lends itself to any culture that possesses supernatural myths.

To warm up for the event this past Saturday, our group was treated to a pre-show backstage tour with VO’s charismatic Development Manager for Grants & Proposals, Joseph Bardsley, followed by the customary (and always informative) pre-show talk by the VO’s Marketing Director, Doug Tuck. They explained that the production we were about to witness had been created in careful collaboration with experts in the First Nations community. The sets, costumes, and dancing were all developed through the advice of a special First Nations advisory council, while the script was altered to fit a First Nations perspective, and included thirty words from the Coast Salish language. Everything seemed to be in place for a magic ride into an unfamiliar world.

The production is visually stunning as the result of multi-layered projections that function as the set; this fluid, shimmering environment creates a visceral feeling of being in a realm that is both natural and supernatural. The show begins with a man wearing modern Western attire who awakens in a forest in British Columbia, unsure of how he’s gotten there, but aware that he’s in a mystical land unlike any he’s been to before. My fancy was tickled, as I imagined he was a sort of Alice in Wonderland, or Dorothy in Oz, or the Darling children in Neverland: surely this was the land of the First Nations before colonization, infused with magical creatures from indigenous legend.

And so began three very boring hours.

In their noble efforts to check off their cultural obligations, the VO seemed to have forgotten that their ultimate responsibility was to tell a story—to bring together characters in such a way that their various intentions conflicted and coincided to create a compelling drama. Instead, the show was a collage of obscure and disconnected moments, in which the characters were too simple to relate to.

(Tom had warned us of the sparse details within the original libretto by Emanuel Schikaneder, but he explained that Mozart’s music included vivid characterization, and so it was the job of the dramaturg to enliven and interpret the characters, and to fill in the blanks of the story as it rode along beside the richness of the score.)

Indeed, given that the VO had re-written much of the libretto of The Magic Flute for this production, they had plenty of opportunity to infuse the text with interesting First Nations characters: between arias, the production team could have provided whatever dialogue it saw fit to tell us about the universe and people they imagined. But, instead, most of the characters identified as First Nations are the same person: stoic, proud, and wise, with not a single nuance to separate them.

The hero, Tamino (our aforementioned Dorothy who is not in Kansas anymore) and heroine, Pamina, are similarly one-dimensional: he falls in love with her over a picture, and she, in turn, falls for him when she finds out that he has fallen for her image. It is a mystery why the writers at the VO did not fill in this shallow aspect of the original plot with greater nuance or depth befitting the universe they were honouring. Moreover, in spite of being the daughter of a blue butterfly creature (the Queen of the Night), Pamina is not blue, and unlike the other inhabitants of this strange new world who are either First Nations people or animal creatures, she wears modern Western attire like Tamino, even though she is supposedly indigenous to this magical land. She is both an exotic other and a westernized woman for Tamino’s convenience.

(Although, when the couple is finally united at the end of the story, they are suddenly, and inexplicably, dressed in First Nations costume as though the production had identified their hopes to join the culture. This undeveloped retroactive motivation operates in conjunction with the characters’ more apparent aspirations for enlightenment (found in the libretto). The production thus implies that the First Nations are the sole holders of such profound insight.)

The Queen of the Night tries to disrupt the union, but we’re not given a hint of her motivation. Again this is a weakness of the original text, but it is the duty of the operatic storyteller to provide at least an implicit explanation within his or her interpretation for why, in this particular world, the Queen of the Night is such an unfortunate mother-in-law. (Perhaps Tamino is of a culture she mistrusts? Anything would have been useful to give her odd behaviours a context worth contemplating.)

Meanwhile, Sarastro, here cast as a First Nations elder, sets challenges for the couple (such as requiring Tamino to spend a lengthy amount of time being silent around Pamina without giving her any hint as to why he’s ignoring her) in order for them to earn their connection and general enlightenment. Why Sarastro and other First Nations overseers feel the need to test the love of our leads in such a cruel fashion (to the point that the wounded Pamina almost kills herself—before being talked off the ledge by some First Nations youngsters) is not clear, but evidently they are the good guys. Again, Schikaneder may have been equally mysterious in his open-ended text, but this was a lost opportunity for Vancouver Opera to justify its production by bringing new meaning, within First Nations context, to the trials (perhaps through the illumination of a myth or rite of passage).

Similarly, the story’s official bad guy (Monostatos), who is the servant of Sarastro, and who in this production has rat features as well as peculiar 18th-century European attire, makes little sense: we are left to wonder how he became a servant to a First Nations elder. The colonial power structure is incoherent, and serves only to check off an obligation of ridiculing Western culture.

Such insistence on announcing to the audience that Vancouver Opera would like to officially distance itself from colonialism is perhaps the weakest part of the production. (I cannot imagine that anyone who believes colonialism was a good thing would be convinced in the opposite direction by such a blunt instrument, and those who regret colonialism do not require such an out-of-place and awkward lecture to remind them of their convictions.)

Instead, I envy my anticipation of the opera in which I expected to be taken to a pre-colonial First Nations supernatural world. (How often has that universe been explored in modern Western art?) Surely there are First Nations myths that allow for pre-existing antagonists in the forest. (Or does every villain have to be non-First Nations, just as every good character either has to be First Nations or become First Nations in the end?) Had Vancouver Opera agreed to draw inspiration solely from First Nations prehistory and myth, then maybe our minds might have felt some sadness, of our own volition, that such a culture had been destroyed.

It’s hard to blame Vancouver Opera for so blatantly moralizing in this production; it is not easy for a Western artistic company to tell a story featuring First Nations culture because the former is in constant danger, no matter how hard it tries to be sensitive and deferential, of being accused of cultural ignorance. Indeed, in the Georgia Straight’s assessment of the production, the reviewer complained that it contained insufficient references to the evils of European colonialism. (A more bluntly-chiseled castigation of Western culture would be difficult to fathom.) The reviewer also asserted that Pamina’s relationship with Tamino recalled the Eurocentric myth of Pocahontas and John Smith, even though the VO production doesn’t match that interpretation, having Pamina dress in Western garb and both Pamina and Tamino—through the superficial means of a costume change—become part of the First Nations community in the end.

Clearly, the VO had set itself an impossible task. No matter how much it consulted the First Nations community, and no matter how carefully it treated the ancient culture as wise and infallible, it was still criticized for being Eurocentric. Perhaps, then, the expected moralizing from the critical audience deterred the company from telling an interesting story.

I find this an unfortunate result because Vancouver Opera’s intention seemed to be to nourish a wounded culture, and to remind us (through beautiful scenery, costumes, dance, and music) of what has been lost, but the rendering is so jumbled and condescending that I, for one, lost interest halfway through.


I was surprised to see the “dramedy” Silver Linings Playbook nominated for an Academy Award this year, but then again the Oscar deciders often make choices strange to my palate. They have a habit, I think, of choosing message over substance.

Silver Linings Playbook features three major characters with mental illness. So, like the Oscar voters, I am happy to see this less-funded area of health get some quality time on screen. However, it seems to me that part of the job of an award show dedicated to rewarding great storytelling is checking in on both the accuracy of the character depictions (especially when it comes to a misunderstood illness) and the coherence of the plot.

The Playbook characters with mental illness seemed cartoonish to me because their symptoms were most pronounced whenever the plot needed comedic or dramatic relief. For instance, the lead male (played impressively by Bradley Cooper) had a broken social filter and so would often say whatever inappropriate things he was thinking, but this only came out when it benefited the script; when the plot required some discretion on his part, he somehow managed to subdue himself. Indeed, most manifestations of his illness were grand gestures for us to easily recognize (when he didn’t like a book, he smashed it through his second-floor window, as opposed to throwing it on the floor).

Moreover (and I’ll be vague on this one, as I don’t want to give away a major plot point for those who haven’t seen the movie yet), one of the mentally ill characters made a decision (unconnected, I thought, with his/her mental illness) which in most movies would have been considered ethically questionable, but no such scrutiny surfaced in the film. This struck me as condescending, as it seemed to suggest that mentally ill characters need not trouble themselves with pesky matters such as ethics: we’re just glad to see them doing well.

But maybe I’m missing the silver lining here: perhaps I should be happy with the rarest of all Academy Awards results: a semi-comedy being considered for their usually unamused top spot. And, given that we live in a time when raunchy comedy rules the screen, I’m content to see that a somewhat grounded comedy has been selected.

Subtlety, it seems, is a dirty word in popular movies today. Comedy writers take turns trying to out-shock each other. I look forward to the day they realize that the audience now expects the supposedly sweet elderly citizen to utter a raunchy phrase: we would actually be more surprised if the writers let their characters talk like people instead of pawns in their gross-out humour games.

Similarly, there is a trend in action movies (particularly action-dramas) to be cartoonishly gory. Murder isn’t enough: it has to be gruesome, and it has to be relentless. Consider Gangster Squad, whose violence is so aggressive and so soaked in blood that it became (to my weary eyes, at least) like a video game whose characters are toys.

There is a dramatic danger, I think, whenever gory violence (or raunch comedy) becomes an end in itself. Recall the great television drama, Law & Order, whose action moments were so rare and realistic that they meant something.


When I was growing up, Halloween seemed magical. (Not just because it was a time when ghosts and witches were imagined to be real, and not just because as kids we could knock on the doors of neighbours and strangers, who subsequently gave us candy that we were allowed to eat.) Every Halloween, during our trick-or-treating years, my mother was able to conjure costumes for my siblings and me out of thin fabric.

I remember (sometimes on Halloween itself) my mom coming home from work and asking us what we’d like to be, as though anything was possible. If we couldn’t think of something, she would suggest some options from her magic workshop, and then, once we had made our selections from the future, she would set about creating them. I think that may have been my favourite part—watching my mom create something out of nothing recognizable was both exciting and, in retrospect, inspiring.

On the Halloween when I was seven years old, the small town we were living in was feeling rather rainy. So, after work, my mom asked my dad to go to the store to buy a collection of as many coloured garbage bags as he could find, and then, as always, she turned to those of her children still of trick-or-treating age and asked what we’d like to be.

A few hours later, we travelled into the damp night wearing costumes that were as intricately detailed as always, but also shiny in the dark, and perfectly rainproof because they were made out of plastic bags. The next day, all of us elementary students were taken, in our costumes, on a parade through the town. It was still raining, and so while some of my classmates moaned about waterlogged limbs, I remember smiling around every sparkling puddle.

Perhaps in part due to my warm mood, I won the costume contest (I think it was for the whole school, but my memory might be exaggerating for effect), and I was given a decent prize for it, too. If I may boast for a moment, I was aware that it was unjust for me to win an award for my mother’s talent, and I told her, at the time, that I thought she should get the proceeds, but she insisted that I’d earned it by wearing the costume so well. I’m glad to say that I wasn’t convinced. (In retrospect, I now like to think I learned something that day about how the world sometimes rewards the wrong people.)

Growing up, my siblings and I knew that my mom could create anything because the evidence was always around us. Instead of buying a Barbie camper or Hot Wheels race track, my mom built them for us, and they were better than the ones on TV. I think as a result I see creativity not merely as an expression of one’s individuality, but more significantly, as a means by which to solve a problem.

It seems to me that some want to instill creativity in youngsters by telling them they can create anything and then praising whatever they produce. Perhaps this works for some, but it certainly wouldn’t have worked on me. I have never had a natural talent for putting things together, and I was smart enough as a kid to recognize that my four much-more-skilled siblings could produce results much more impressive than my own. But that doesn’t mean I’m not creative. When I see a problem now, I am able to imagine plenty of possible solutions (and then to choose from them the option that could actually fit my particular limitations).

For instance, when I was in university, I was invited to a costume party with the theme of “white trash.” I was offended by the idea, and yet I wanted to attend the gathering, so I found a white garbage bag, and with a few incisions, turned it into a shirt. It was the least impressive costume at the event, but it may have been the most creative. I’d learned from the best.


In the face of difficult questions, the most talented egos use impeccable sleights of language to rebrand their behaviours to seem heroic. This series is dedicated to those rhetorician-magicians.





IV: POET KNOWS BEST (you are here)




On CBC radio’s Q with Jian Ghomeshi, I find that the host’s brand of cheerful, introspective inquisition usually succeeds in bringing out the non-pretentious side of his guests; however, in a recent Q leading up to the London Olympics, Jian interviewed the billboard brandal, Scottish poet Robert Montgomery, who fought through the host’s friendliness and managed an impressive level of condescension.

Montgomery’s “brandalism” project—that of superimposing his poetry, along with other art, over billboards (including recent Olympic advertising)—is interesting; as he says, cities decorated on all sides by commercial imagery can be exhausting to the psyches of their inhabitants, and so many city dwellers may prefer a quiet poetry break. Nevertheless, I was intrigued to hear how the poet would tackle the fact that the spaces on which he posts his wares have already been paid for by law-abiding citizens. Montgomery’s personal preference for his ideas over corporate products sounds lovely in theory, but what gives him the right to overrule the message of the legal tenants of the space?

I mean the question sincerely. As anyone who’s ever taken a philosophy of law course knows, Martin Luther King Jr. argued—while he was in jail—that some laws are in such violation of human dignity that they should not be considered valid. That’s compelling to me, so I was ready to be persuaded that Montgomery’s brandalism is confronting an oppression that the corporations have no right to inflict upon us.

Yet, instead of making any attempt to establish the intrinsic immorality of the original billboards, Ghomeshi’s guest simply explained that most people seem to enjoy the respite from the noise of commercialism. Is that really all the argument that’s required to overrule the law? That people would prefer it? I’m sure most people would also rather go without parking tickets, so should we tear them up when we get them?

Presumably the proceeds from billboards go to the city (or at least the economy), which can then pay for infrastructure for the citizens. I’m happy to hear an argument that the billboards are nevertheless immoral and so must be fought, but Montgomery’s follow-up defence that he is providing his fellow humans with a kind of therapy is wholly insufficient, and incredibly paternalistic. Despite his poetic pedigree, I’m not convinced that he’s necessarily equipped to provide such collective psychological treatment.

All of this I would have forgiven were it not for his hubris-riddled anecdote in which he described being caught in the act of brandalism by a police officer, who, happily enough, said he enjoyed the poetry and told our hero to carry on.

“Not all police officers are stupid,” the poet concluded.

So, along with providing therapy, Montgomery’s poetry has the ability to test the intelligence of its readers? If you “get it,” you’re smart; if not, sorry, you’re not too bright.

(Moreover, whether or not the officer was smart, since when are individual members of the police supposed to ignore the law because they happen to like the sentiments expressed by the criminal?)

I am more than happy to be persuaded that brandalism is a worthwhile enterprise, but I think Q should consider bringing on a defender who can see far enough past their own ego to be capable of taking on the genuine question at stake here: when is it okay to forsake the law for what you perceive to be the greater good?





IV: POET KNOWS BEST (you were just here)