Know Your Medium, Part 2

Previously, I introduced the topic of linguistic relativity—how the choice of “language” affects what concepts are easy to think about.

Another wrinkle of linguistic relativity is that a language affects what you are obliged to think about. For example, when talking about an event in English, we need to consider when it happened (past/present/future tense). Other languages include what’s called evidentiality1: you need to consider how you know about the event; did you see it yourself, or did someone else tell you about it (first/second/etc. hand).

These considerations (what am I forced to convey? what is going to be difficult to convey?) are important when you are trying to tell a story, as the answers are different for a novel than they are for a screenplay.

For example, a common “bad writing” complaint is a book starting with the main character examining themselves in the mirror2 (thus providing a description). The reason this keeps cropping up is that—with only text—it’s not easy to show what a character looks like. Typically, one or two salient features will be mentioned about a character, and the rest will be left to the reader’s imagination3.

In contrast, a scriptwriter would have to tie themselves in some very uncomfortable knots in order not to present a character’s appearance to the audience. It happens the instant the actor appears on screen. What is difficult is revealing a character’s name. If they are being introduced, that’s easy enough, but there are bound to be characters the protagonist already knows (but the audience doesn’t). Naming is comparatively trivial in a novel.


There’s a deeper significance to this movie-appearance/book-name difference, though, which becomes apparent when it comes to a certain type of twist ending: a character (especially the protagonist) is not who they have seemed to be. Sometimes, this is accomplished by having the character masked, in the shadows4, or otherwise hidden until a dramatic reveal at the end. This can be very effective if done well, like in the original Metroid game where the main character, bounty hunter Samus Aran, is unexpectedly revealed to be female, smashing players’ preconceptions.

This sort of twist crops up more often in sci-fi/fantasy settings, where hand-waves like “life-like androids”, “clones”, “plastic surgery”, or good old-fashioned “magic” allow for a character to not be who they appear to be. But its success is not dependent on the justification (if the rest is done well, the audience are more forgiving). There are a couple of ways in which the writer can trip themselves up with this trope, which call for some detailed examples.

(Cue spoiler warnings for the films “The Tourist” and “Self/Less”.)

In The Tourist, Johnny Depp’s main character seems to be an everyman caught up in the hunt for a vanished criminal mastermind. Interpol want to catch the baddie. The other crooks want his loot stash. The femme fatale has made everyone think Johnny is the crook (post plastic surgery). After many hijinks, the cops shoot/arrest the other crooks, and Johnny is free to go. But wait! He knows how to get at the crook’s secret safe (he is the criminal mastermind after all), so he gets the money and the leading lady, and lives happily ever after.

Based on the presentation (i.e. the cinematic language), this is a happy ending. Emotionally, we go along with it, because the face we’ve been following/rooting for throughout has won. But when you pause a moment, you feel discomfited: the character you’re attached to is a cunning criminal, who changed his entire appearance to escape the police. It makes for an awkward ending.

In Self/Less, old, rich, and ailing Ben Kingsley undergoes a black-market medical procedure to transfer his consciousness to a younger, healthier body (Ryan Reynolds). We follow Ryan as his initial carefree hedonism turns to concern over the weird dreams/flashbacks he starts having (especially when he forgets to take his meds). Eventually, he discovers his new body is not “vat grown”, but originally belonged to someone else, who sold it to pay his family’s debts. Ultimately, Ryan brings down the immoral scientist doing the mind transfers, stops taking the medication (so “Ben Kingsley” fades away), and reunites with his family in traditional Hollywood-happy-ending fashion. But wait! Though we’re attached to his face, we know basically nothing about this Ryan Reynolds. Again, there’s something slightly awkward about the ending.

Both movies kind of got away with it (though neither were especially critically successful), but it wouldn’t have worked at all in a novel. There we’re attached to a name, not a face, and it would be more obvious that we’re actually dealing with a different person, but in a movie we’re not obliged to think about that.

The point is to know the medium you’re working in. What is easy? What is hard? What do you need to think about? And most importantly, what do you not need to think about but might trip you up later?


1 Several languages are mentioned in the Wikipedia page; the impression I got (which may be inaccurate, I’m no expert on languages) was that a lot of them were from Eastern Europe, the Middle East, or the Americas.

2 As with all writing “rules”, there are exceptions: Divergent gets away with it, as the scene also reveals details about the world, i.e. that these people restrict the use of mirrors.

3 An interesting example of this cropped up with the casting of Harry Potter and the Cursed Child. Noma Dumezweni was cast as Hermione, sparking much debate. JK Rowling pointed out that the character’s race was never specified. (Did the author envision Hermione like that to begin with? Your guess is as good as mine.)

4 Easy in a novel, requires lots of tricky lighting to make it work in a movie.

Know Your Medium, Part 1

The concept behind Linguistic Relativity1 has been around for quite a long time (predictably, Greek philosophers had musings on the topic). Summarised, it is the idea that the language we speak shapes the way we think.

Now that sounds fairly reasonable. But it has caused controversy when it has been presented as linguistic determinism; that your language restricts what you are able to think. In this form, it is argued that if a language has no word for something, then people who speak that language cannot conceive of that thing. English itself is a fantastic counter to this—for example, we had no word for Schadenfreude, so we nabbed3 it from German.

The evidence does support, however, that particular concepts become easier/harder to consider/discuss in different languages. And again, this is fairly intuitive—it’s harder to express yourself to others if you lack the vocabulary4. Where I find it particularly interesting, though, is the ways the concept applies to other forms of communication. For example, the same tune could be expressed differently for different instruments (guitar chord diagrams for example).

One of my jobs has been (essentially) teaching problem-solving, and an important tool in solving any problem is notation. If you’re faced with a problem like:

My grandson is about as many days old as my son is in weeks, and my grandson is as many months old as I am in years. My grandson, my son and I together are 120 years. Can you tell me my age in years?

You may find it much easier to work with (and ultimately solve) once you translate it (where g, s, and i are the ages of the grandson, the son, and “I” respectively)5:

g × 365 = s × 52
g × 12 = i
g + s + i = 120
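
If you like, you can even hand the translated version straight to a computer. Here’s a minimal Python sketch (my own addition, not part of the original post) that substitutes the first two equations into the third and solves for g:

# Ages in years: g = grandson, s = son, i = "I".
# From g*365 = s*52 we get s = g*365/52; from g*12 = i we get i = 12*g.
# Substituting both into g + s + i = 120 leaves a single equation in g.
DAYS_PER_YEAR = 365
WEEKS_PER_YEAR = 52
MONTHS_PER_YEAR = 12

g = 120 / (1 + DAYS_PER_YEAR / WEEKS_PER_YEAR + MONTHS_PER_YEAR)
s = g * DAYS_PER_YEAR / WEEKS_PER_YEAR
i = g * MONTHS_PER_YEAR

print(round(g), round(s), round(i))  # prints: 6 42 72

Which matches the (rounded) ages given in footnote 5.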

Where am I going with this? The point is that any form of communication involves a vocabulary (in the more general sense), which will be more accommodating to some ideas than others. I plan to delve into some more specific examples (comparing books and movies, as I am wont to do), but this has gotten long enough (and I’m getting muddled with my footnote numbering), so that will have to wait for next time. Ciao6.


1 You may have heard of it as the “Sapir-Whorf Hypothesis”, an honorific title at best, as the two (Edward Sapir and Benjamin Lee Whorf2) never directly collaborated.

2 Whorf also helped popularise the famous not-really-a-fact about Inuit having many different words for snow.

3 I’ve always liked James Nicoll’s quote: “We don’t just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and rifle their pockets for new vocabulary.”

4 Case in point, 2-year-olds. Eternally frustrated that Mum and Dad just don’t seem to get them. Some would argue this phase lasts about 18 years. Others would say it never ends.

5 If you’re interested, their ages (in years, rounding to the nearest year) are 6 (grandson), 42 (son), and 72 (“I”).

6 In English it means “goodbye”, but it was purloined from (Venetian) Italian where it could be used as either greeting or farewell. A more literal translation might be “at your service”. Just thought you might like to know that.

Twin Medics

I’m generally a fan of thought experiments (see the blog title, for example), whether about the nature of reality, ethics1, language and meaning, technology, or anything else. They may be called by various names: experiments, paradoxes2, dilemmas, problems…

The advantage of a thought experiment is that it allows one (or many) to consider the nuances and implications of a situation before getting into it. This is especially handy if the situation is one that requires quick decisions, or has a high cost. Plus, it’s just interesting to consider what might be, and what are the implications and ramifications of a decision.

I find some of these scenarios a little frustrating, however. There may be a good point behind them, but the way they are presented means the immediate solution is a matter of grammar or semantics. For example, the “Omnipotence Paradox”, usually expressed as (something like) “if God can do anything, can he create a stone too heavy for him to lift?”. Whether the answer is yes or no, it establishes something that God cannot do, thus God cannot be omnipotent. It’s really about the logical inconsistencies of our concept of omnipotence, and the limitations of our language in expressing certain concepts. Which is fine, those are worthy topics of discussion, but we shouldn’t claim it tells us anything useful about the nature/existence of God.

Another famous one that doesn’t really hold up is the “Epimenides’ Paradox”, named after a Cretan philosopher who claimed that all Cretans were liars. But he was a Cretan, so he must have been lying. So Cretans are not liars, so he was telling the truth, so … 😕

But that's a false dichotomy. The statement “All Cretans are liars” is not the same as the (more specific) statement “All Cretans always lie”. In the real world no-one lies all the time (despite recent evidence). Of more relevance is the (somewhat blander and more formal) “Liar Paradox”, encapsulated in “This sentence is false”. This has been the basis of much discussion of the problems of self-referential language.


Speaking of lying, though, I saw an article purporting to list the 5 lies you are allowed to tell in a relationship. The morality of lying has been a hobbyhorse of mine, so I was intrigued. But ultimately disappointed. Their list of acceptable topics to lie about was:

  1. Whether you liked the meal they cooked
  2. Whether the hotel room is okay
  3. Whether it’s fine for their family to visit
  4. Whether those clothes look good on them
  5. Whether they’re right (in an argument)

In general, this seems to be mistaking lying for diplomacy. In all these situations, lying about your feelings to spare theirs is a bad idea. Again, it’s presenting a false dichotomy: you have more options than lying through your teeth, or giving it to them with both barrels. Telling the truth can (and should) be done gently, and with respect for the person you’re talking to. It’s a lack of that respect that makes the truth blunt and rude.

A specific note on outfits: they advise praising an outfit that works and saying nothing about an outfit that doesn’t (i.e. lying by omission). Again, the truth would be better, but this is a scenario where you have to show you deserve the right to tell the truth. The stereotypical girlfriend’s “test” (“Does this make my bum look big?”) isn’t about the clothes. It’s not a yes/no question. You pass by showing that you want her to look good, and can say something’s wrong without hurting her feelings.

Ultimately, don’t you want those close to you to respect and value your feelings and opinions? How can they do that if you’re not being honest?


1 A topical example is the Trolley Problem—first popularised in the late 1960s—which directly relates to the decision-making of automated vehicles in potential crash situations (do you drive into the other car, or the pedestrian?).

2 Yes, the heading is a dreadful pun. No, I’m not sorry. 😛

Lost in the Crowd

I have a definite dose of the cynicisms at the moment.

The whole “post-truth”, “alternative facts” nonsense could very well have something to do with it. In some ways, I feel like this is a predictable result of swinging too far towards the post-modernist “truth is relative”, “my interpretation is [just as/more] important than your intentions” school of thought.

The trouble is that reality is nuanced. I’d like to be able to make a blanket statement that this perspective is wrong, but that falls into the same trap of over-simplifying reality to save having to engage in critical thought.

When it comes to the “meaning” behind a piece of art—be it a novel, film, painting, sculpture, or whatever else—or a statement that someone has made (say, a political speech for example), it is important to recognise that different people will have different reactions to it. Everyone brings their own opinions, history, understanding, and perspectives to bear when they take in something. That’s why things like innuendoes or inside jokes work; some people will interpret them differently than others. And (assuming people of generally sound mind), multiple interpretations are valid.

As an example, a few years ago there was some debate about censoring the “n-word” in The Adventures of Tom Sawyer—what was considered acceptable parlance has changed, and a modern reader likely brings additional baggage. I don’t have a strong inclination one way or the other (as a relative outsider), but I do think it’s an important discussion to have. Values dissonance can have quite an impact on how one views a story/character. A Shakespearean character who spouts “zounds!” and the like comes across to modern readers as quaint, when for the time that may have been considered offensive language.

Not everything is this open to interpretation, however. You may be able to say anything with statistics, but you cannot change the underlying data. Some things are true, some things are false, and some things are ineffable.

I don’t know any politicians personally, so it would be extremely arrogant of me to make statements about their beliefs or attitudes. Recent experience has shown me that in any large group of people, there can be vast differences in attitude towards an issue, even among those who take the same “side”.

That said, I find the recent actions of certain newly-minted state leaders to be very worrying. They may be done with good intentions towards improving the lives of their citizens, but they seem to be giving entirely the wrong impression in terms of being confrontational, alienating, and divisive; emboldening to bigots both domestic and foreign.

Scott Adams (of Dilbert fame) has been posting interesting explanations of the negotiation and persuasion tactics behind certain decisions, and recently pointed out that—by pushing lots of controversial things through in a short space of time—one can undermine strong protests about any of them (as the opponents have too many things to complain about). As with John Key and the flag debate, however, I’m left wondering what else has gone on that’s been overlooked in the rush.

We are in for interesting times. Kia kaha.

Real, True, or Plausible?

People sometimes make the distinction about whether aspects of fiction are “realistic” or not. Generally, I feel fiction doesn’t have to be (it is fiction after all), and that it’s more important that it be “true”.

What I mean is that some aspect of the scene has to be presented truthfully. The reader/audience’s reaction should be “that’s how that character would behave”, “people are like that”, or “that’s what would happen”. It’s about resonance, often on an emotional level. If you (the writer) have achieved that, then the audience will be following you, even if events are not realistic.

It does help, though, if events are also plausible. They don’t need to match how things behave in the “real world”, but they should fit with the way things work in the fictional world that is presented. If a fantasy novel establishes how magic works, then it’s cheating to have it suddenly do something different, and smacks of a writer who has painted themselves into a corner.

Alternatively, you can focus too much on making a scene work on an emotional level, so the audience/reader goes along with it, but later on thinks that something seemed not quite right1.

For an (extended) example, I was recently watching a scene described as a “spies goodbye”. A couple of agents had been captured, their covers were blown2, and their only option was to “retire” from the game and never make contact with their former allies/co-workers. The two are morosely drowning their sorrows in a dimly-lit bar. The waitress brings over a drink, saying it’s a gift from another table. They glance around, and spot one of their (former) colleagues at another table in the corner. This happens several more times, as they realise their whole team are lurking in various parts of the room3. One by one, they make (tearful) eye contact, raise their glasses, then quietly leave.

Emotionally, it hits the mark. It’s fiction, so it doesn’t matter that in the real world this would be a blatant violation of the “cannot make any contact” restriction, but on reflection, it still feels a bit implausible. My main issue is that it breaks one of the cardinal rules of subterfuge: have a reason for being there4.

Is this fixable? A similar effect could be achieved by having the team members nab a nearby table and loudly share a toast to absent friends—they’re all wearing black (or at least dark colours) anyway, so they’ll give other patrons the impression that they’ve come from a funeral. Far less likely to draw unwanted attention than several people buying drinks for and saluting an otherwise inconspicuous couple. You can still use largely the same camera angles, but without the sense that people are staring at those they supposedly don’t know.

The advantage of being a writer is that any painted corner is escapable. And, if you do it right, you can reinforce other aspects of character/world, without the audience ever realising you were in a pickle to start with.


1 TV Tropes refers to this sort of thing as “fridge” moments. As well as the “wait, how does that work?” they also note things that seem brilliant, or horrifying, when thought about later.

2 You know the drill—a mission goes wrong, agents have to improvise, “if you are captured, the Agency will disavow any knowledge of your actions (or even existence)”, etc.

3 And clearly want to get them really drunk.

4 It’s been frequently shown that people (only) remember things they focus on. Details that “fit” an expected pattern, that don’t stand out, will likely be forgotten.

Crime and Punishment

I was shown an interesting blog article talking about the Game of Thrones tv series, and the conflicting drives (in the audience) of empathy and vengeance. You can read it here, but in summary it was addressing the way characters do horrible things, so we want them to be punished, but then get penalised brutally, so we feel bad for them.

At this point, I should clarify that I’m not a fan of GoT—it’s too … intense … for my tastes1. Given the popularity of revenge-based stories throughout history (as in, they usually inspire catharsis, not ambivalence), I suspect the makers of the show are trying to portray the acts of retribution in such a way as to emphasise their brutality and engender empathy in the audience. It would certainly fit with the theme of “everyone is equally nasty (and those that aren’t tend to get killed off quickly)”.

It does raise an interesting thought, though. When we see another human suffering, we feel sympathy. If we see someone wronged, we feel anger: we want justice. But what do we mean by “justice”? Sure, revenge is viscerally satisfying, but only if we dissociate from the other party (usually either through seeing them as somehow inhuman—monstrously evil and unredeemable—or by otherwise distancing them—they are from a rival clan/group).

Many stories of vengeance also convey the idea of “‘an eye for an eye’ leaves the world blind”. Our desire for punishment can be defused by seeing the humanity of the perpetrator. Some political parties like to focus on “tougher sentences for crime” as though it would help, but evidence suggests it does not: likelihood of punishment (“Will I get caught?”) matters more than severity of punishment in deterring lawbreaking.

This is all focusing on the penalties of wrongdoing, however (whether via an individual avenger, or state sanctions). And while the presence of these can mitigate our sense of injustice, I do wonder if they are ambulance-at-the-bottom-of-the-cliff measures.

Perhaps the way to make the world a more just place would be to try and ensure there were no benefits to breaking the rules.

But we could go further. It’s also known that people are more likely to take risks to avoid a loss than to gain a bonus. So maybe the real problem (and the real injustice) is that following the rules doesn’t mean you’ll be successful.


1 I do know enough bits and pieces of history to recognise the reality of the political machinations; it’s been said2 that democracy doesn’t guarantee you the best ruler, but it does allow you to change them without bloodshed. That’s worth remembering. In the Ottoman Empire, for example, a new sultan would have his extended family killed off to prevent the possibility of civil war over the succession.

2 I seem to recall a specific quote along these lines, but I cannot remember the wording, or who said it. If anyone does know, please enlighten me!

Expectations Colour Reality

I tend to be a bit cynical about the self-help industry; it often seems geared around getting its clients to open their wallets and say “Help yourself”. Yet I cannot deny the positive impacts of motivational media. When you feel like your day has been nothing but wading through chest-high blancmange1, a cheery reminder that “You only fail when you stop trying!” can be just the tonic to help you reach dinner-time with your sanity, if not intact, at least not missing any pieces.


There’s a lot of it about.

And yet, at other times, the same statement can seem like the most tedious inanity that ever cloyed its way out of the primordial syrup. So what gives?

There’s a learning metaphor I like that suggests concepts are like Lego blocks, and we assimilate a new one better if there are sufficient others to connect it to2; a block on its lonesome is easily misplaced, but a firmly connected one is likely to stay where you put it. If we don’t have the appropriate framework, we won’t be able to connect with a new concept, so it will seem either impenetrable or silly3.

A similar metaphor can be applied to moods. If we’re in a particular mood (e.g. grouchy), our available connectors may be incompatible with the thing we’ve just encountered (e.g. a cutesy “it gets better!” quote), and so it will be easily brushed aside.

This pattern shows up all over the place. In our biases (any new information about someone or something has to connect to—and thus reinforce—our existing framework). In priming/anchoring (once we start thinking in a particular direction, it can be hard to change). Placebos work because we’re told they will heal us. Over-hyped experiences inevitably disappoint.

Changing our perspective will change the way we react to something, separate from the actual value of what we’re reacting to. Imagine you go to a restaurant and see a particular dish on the menu—the one you fondly remember your mother making when you were a child.

You eagerly order, only to find that they do it … differently. Not badly, just not like mother used to make. You leave the restaurant feeling unsatisfied with your meal (and maybe with the evening out in general). Whereas if you’d acknowledged beforehand that the dish was likely to be different, you would probably have been quite happy with it.

And this, I think, is what’s really behind the common motivational concept (which I’ve seen many variations of, attributed to all kinds of people): “If you can’t change your circumstances, change your reaction”. I found this idea irritating for a long time, because we can’t control (all of) our reactions; if we get a shock, for example, our body dumps adrenaline into our system before we’re even consciously aware of it. But we can control our expectations going into a situation, and that will impact how we react.

If we don’t expect a movie based on a favourite childhood book to be that great, we’ll still be disappointed when it’s turned into largely empty spectacle with an overdose of Legolas4, but we won’t be shocked and tempted to write angry letters to the director. Our expectations colour our reality. Which hopefully is more meaningful with the rest of the post to undergird it.


1 Please note, I’ve never actually tried this, it just seems like it would be difficult (it may actually be tremendous fun). And “blancmange” is a funny word. 😉

2 I might not connect my block in the same place as you—my pre-existing structures may be quite different. We may both be able to lock in the new idea, but because we connect it differently, we’ll have different associations with that idea. Hence one of the values of brainstorming, in that the same concept can send different people off in different directions.

3 When you’re trying to convey a concept to someone else (especially if it’s new to them), it’s easy to be so focused on the concept itself that you take for granted the framework around it. If you’re thoroughly familiar with a concept, a short statement can be deep and meaningful. If you’re not, the same statement can seem vague and airy-fairy.

4 I’m not angry, just disappointed given what might have been. And it makes for an amusing example.

Procrastination

I saw an amusing TED talk the other day explaining what goes on in the mind of a procrastinator. The only complaint I have with it is that it oversimplifies a little in assuming all procrastination is the “messing about unproductively leaving important task to the last minute followed by mad deadline panic” type.

I’m generally pretty good at not doing that, but I frequently suffer from the “finding other productive things to do to avoid dealing with particularly daunting/unpleasant task” type. And how does one overcome procrastination? Just read this handy-dandy self-help guide:

  1. Don’t waste energy trying to be someone else—be yourself!
  2. Only, be a more organised and productive yourself. Because winners get up at 5am to make to-do lists using quinoa and mason jars.

What brought this topic to mind? I’m procrastinating, natch1. I’ve been wanting to get some feedback on a project I’ve been tinkering with (especially as it could use a jump-start), but I’ve been reluctant to show it to anyone. It required a little introspection to realise that I was putting this off.

It’s kind of weird that despite being well aware that it’s at a first draft/prototype stage, knowing about several deficiencies, and wanting suggestions on what direction to proceed, the thought of revealing it has me curled in a corner, clutching it and wailing that “it’s not ready!”2, and making vague mutterings including frequent use of the word “precious”.

So, yeah. I’ll get over it. It just amused me once I realised what I was doing, and so I thought I’d share.


1 No, I have no idea how long it’s been since “natch” (short for “naturally”) was in the common vernacular, either. 😉

2 Or should that be “I’m not ready”?

Might Be?

It’s comforting to think that we live in a fairly egalitarian society, where we have advanced beyond “primitive” concepts like “Might Makes Right”.

But have we?

Allow me to present an example. The other day, I was waiting to cross the road at a pedestrian crossing. Weighing up whether to step out, I (wisely) chose to wait and see if the approaching car was planning to stop for me or not. As they breezed past, and I internally grumbled about right-of-way, the following occurred to me.

According to the road rules, the car should have given way. Were I to step forward with that expectation, they would certainly try to stop in time. If they hit me, they would likely bear the brunt of any legal censure, while I would be told to be more careful.

Assuming I was still around to be told.

Various pieces of legislation exist to empower the “little guy”—the one on the wrong side of any disagreement where “might* makes right” could apply. And this is fair enough: if someone is correct, they shouldn’t need to coerce others to agree; and if someone is wrong, they shouldn’t be able to win the argument through sheer force.

The trouble is that these rules and laws work more as a proclamation. Pragmatically, they have only a limited effect. Thanks to the rules, cars will occasionally stop to let me cross the road. But if they don’t, there’s generally no recourse. In theory I could note number-plates and pursue legal action, but it would likely be a long hard slog with little or no reward. So a driver is free from reprisal unless they actually run someone over. However, even in this case, there’s still an advantage to being the “mighty” one—they get fined, or maybe even sent to jail, some weeks or months after the event. The pedestrian gets injured immediately.

Might still makes right in some situations. But I don’t see any practical way around that, so having rules to say “this is the way things should be” is the next best thing.

And this musing distracted me from getting grumpy about inconsiderate drivers, so there’s that too. 🙂


* “Might” doesn’t necessarily mean physical strength. The concept could apply anywhere that one party is able to intimidate another party into submission, whether that is through strength, size, majority (outnumbering the other party), intelligence (ever see someone beaten down with complex rhetoric?), status (holding greater authority), etc.

The Marginal Myth

I’ve been reading quite a bit recently about how the world works and what one must do in order to succeed. It’s become a source of minor frustration that much of this advice (though likely effective) comes from those whom I would not wish to emulate.

Put simply, the modern world rewards effort, but also ruthlessness. As the saying goes, “Nice guys finish last”.

People are catching on to the fact that the world is not a meritocracy—your circumstances make a huge difference both to what opportunities you receive, and your ability to act on them—but what I’m calling the Marginal Myth seems to be either unrecognised, or actively ignored (as with many other uncomfortable truths).

The Marginal Myth comes from the common practice of examining the “margins” of a situation in order to streamline (e.g. being able to produce goods faster and/or more cheaply).

“But wait!” you say, “That’s not a myth; it works!” And you’d be right. It’s more insidious and subtle than that: the myth is that this approach is always worth taking.

To cite another saying: “When all you have is a hammer, everything looks like a thumb nail”. So, when times are tough, people get bogged down in trying to work out where they can “economise”; governments speak of “belt-tightening”; a lot of emphasis is placed on “the bottom line”. We trim and squeeze the margins further and further, our blinkered focus not seeing the other drawbacks that don’t directly affect the magic number.

Using the example of a soda-making factory: profit margins are being squeezed by their competitors, so they have to economise. Various tweaks are made to the bottling process, after which they find to their delight that they can save a few cents a bottle. A seemingly trivial amount, but given the volume they produce it has a significant effect (say, three cents a bottle across fifty million bottles a year is one and a half million dollars). Hands are shaken, new instructions given to workers on the floor, and the bigwigs head out for a celebratory round of golf. But…

  • Maybe there isn’t time to properly clean the machines between batches, leading to build-up of syrupy residue which attracts insects, leading to contamination of the product.
  • Maybe the cheaper supplier of ingredients is farming them unsustainably, creating environmental problems due to deforestation or use of pesticides.
  • Maybe making the plastic bottles slightly thinner leads to increased leaks, causing wasted and unsold product, and frustrating retailers.
  • Maybe no-one’s properly checking the bottles before they get boxed up anymore. Occasionally, one has the wrong label, or the label upside-down, or just skewed/misprinted. No disaster, but customers start having subconscious thoughts of declining quality and are more likely to try a different brand if it catches their eye.

And so on. Okay, I’m presenting worst-case scenarios here, but the thing about “belt-tightening” is that it almost invariably happens again. Whether because other economic pressures arise, or because some board member who doesn’t know the factory’s address, let alone having ever visited it, gets excited at the improvement and reckons a little more squeezing could add a couple of feet to his yacht.

Easy changes are made, and everyone’s okay with that. All seems well. Further changes are made. People on the ground are under more pressure than before, but once they get used to it, everything will settle down again, right? When the next change happens, they suddenly realise things didn’t settle down again. Now they aren’t getting a pay rise for this year. Still, not a problem, right*? Next time, more drastic cuts are required. People may accept reduced hours/pay because the alternative is redundancy. Either way, there are (on average) fewer staff on the factory floor. Compromises get made. Mistakes creep in.

We’re viewing this as a quantitative adjustment—changes made cause numbers to be different—but at some, not necessarily predictable, point in the process, a qualitative shift can happen. The proverbial straw that broke the camel’s back. And it’s all driven by competition: the incredible, ephemeral “market” that makes companies and governments march to its drum**. It’s a commons dilemma where too many are placing no value on the long-term good.

There’s a story about a village that decided to hold a great celebration. Everyone in town was asked to contribute a bottle of wine into the vast barrel placed in the square, and that night there would be revels aplenty. But the blacksmith thought to himself, “I don’t want to pay for a bottle of wine, but if I add a bottle of water, there’s so much wine, no-one will notice”, so he did. Many others had similar thoughts, so that evening, when the mayor—with great ceremony—poured the first flagon, only water came out. All went home, chastened, the celebration cancelled.

Maybe we need to hold our glasses up to the light.


* People are less likely to complain at missing out on a bonus (pay rise) than suffering a penalty (pay cut), not realising that it’s effectively a pay cut given the likely increase in the cost of living (due to inflation and suchlike).

** There’s a lot of other issues I have with being driven by “the market”, but that will have to wait for later posts.