Changing Perspective

This is kind of related to (and follows on from) my earlier posts about the maliciousness of technology, but looks at a specific example. I’ll try to present enough of an overview that it makes sense to those unfamiliar with the topic (which is probably most of you), but be aware that I’ll be glossing over a lot of the detail.

Many high-profile sports have made the most of television coverage to improve the decisions made by the referee/umpire (just being able to watch a replay can make a huge difference, especially if it can be slowed down). Cricket (yes, it’s my favourite sport. So there) recently introduced a few other tools to assist the umpires, collectively referred to as the Decision Review System, or DRS. Mostly, the system has been well received, as it has enabled both greater accuracy on tricky decisions (did the batter nick the ball? did the ball bounce before the fielder caught it? etc.) and the correction of blatantly wrong decisions by the presiding umpire*.

Not everyone is happy with it, however. The Indian cricket board are against the technology, citing concerns about its accuracy**. Unfortunately, such things are never going to be 100% accurate, but I do think the system could improve the way it presents its results.

The particular technology I want to focus on is called Hawk-Eye (it’s also used in several other sports) which uses multiple cameras to determine the ball’s position in space and track its movement. Besides being a general analysis tool, it’s mainly used to adjudicate LBW decisions.

A quick aside for those not familiar with cricket. If you already know what an LBW is, feel free to skip the next part:

The batter can be out if they miss the ball and it hits the wickets. LBW stands for Leg Before Wicket, and is designed to prevent the batter from just standing in front of the wickets to avoid getting out. It’s a complicated rule, but the basic idea is that if:

  1. you miss the ball (with the bat)
  2. the ball hits your legs (or any other part of your body)
  3. the ball would have hit the wickets (had you not been in the way)

then you are out.

Everyone up to speed? Okay. It’s usually fairly clear when the batter misses and the ball hits their legs; what Hawk-Eye is used for is predicting whether the ball would have hit the wickets. If you’ve watched a cricket match on TV in the last couple of years, you’ve probably seen the outcome of this: a nifty graphic showing the path of the ball and where the system predicts it would have gone (in a different colour—see an example on youtube).

Seems pretty useful, right? Except… the prediction, like all predictions, is fallible. You cannot know for certain where the ball would have gone (even taking into account wind, spin, variable bounce, etc.). This is a good illustration of my topic. Any piece of software has two major aspects: the underlying processes/calculations, and the user interface (i.e. how the user inputs information and receives results). In this case, the calculations are probably doing about as well as could be expected (the ball-tracking part is apparently accurate to within ½ cm), but the user interface could stand to be improved.

This is a common problem. A competent programming team is generally able to build (and properly test) a program that performs its calculations correctly the vast majority of the time. But a user interface doesn’t have the same clear “this input should produce this output” requirements. It should be “intuitive”, “user-friendly”, and other such subjective adjectives. This makes it a lot harder to know if it’s “correct” or not. Fortunately, there are a lot of useful guidelines to follow, and it’s a very good idea to get representative users in front of the system as much as possible to see if they can make sense of it or not. But it remains a design process, and as such is as much an art as a science.

So, what is the most easily remedied interface problem with these LBW predictions? The fact that the predicted path is displayed as exact, which conveys to the user that the ball would definitely have ended up there. The fallout is evident from listening to commentators and fans discussing LBW decisions: everyone treats the path as a singular thing. In fact, there’s a margin of error***. This could easily be shown (given that they already have these fancy 3D computer graphics) by representing the predicted path as a cone-like shape, indicating that the prediction is less accurate the further it has to extrapolate. Rather than giving a binary “hitting/missing” response, it could report a percentage certainty (e.g. we are 73% sure that the ball would have hit).
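To make the idea concrete, here’s a small sketch of how a percentage certainty could be computed. Everything in it is an assumption for illustration, not how Hawk-Eye actually works: I model the lateral prediction error as Gaussian, starting at roughly the tracking accuracy and growing linearly with the distance the path has to be extrapolated, and I use the width of the stumps (about 22.9 cm, so a half-width of about 11.4 cm) as the target.

```python
import math

def hit_probability(predicted_offset_cm, extrapolation_m,
                    base_error_cm=0.5, error_per_m_cm=1.0,
                    half_width_cm=11.4):
    """Toy estimate of the chance the ball hits the stumps.

    predicted_offset_cm: predicted lateral distance from the middle stump.
    extrapolation_m: how far the path had to be extrapolated.
    The Gaussian error model and its growth rate are illustrative
    assumptions, not Hawk-Eye's actual method.
    """
    sigma = base_error_cm + error_per_m_cm * extrapolation_m

    def cdf(x):
        # Cumulative distribution of a zero-mean Gaussian with std dev sigma.
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

    # P(the true position lands within the stumps' width)
    return (cdf(half_width_cm - predicted_offset_cm)
            - cdf(-half_width_cm - predicted_offset_cm))

# A ball predicted to pass 8 cm from centre, extrapolated over 2 m:
p = hit_probability(8.0, 2.0)
print(f"We are {p:.0%} sure the ball would have hit.")
```

Note how the certainty drops as the extrapolation distance grows—exactly the “widening cone” effect described above—which is the information the current binary display throws away.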

It may seem counter-intuitive, but this lessening of apparent certainty would make the tool more useful. The most important principle of user interface design is to consider what the user wants to do (not what is most convenient for the programmers). The user is the umpire, who wants to gather evidence to make a decision. If there’s clear evidence, they can feel confident in making their ruling. If the evidence is inconclusive, traditionally the benefit of the doubt goes to the batter. Either way, the tool is doing what it’s intended to do: making the umpire’s job easier and helping them make a better decision. It also makes it clearer to the viewers that the prediction is not a sure thing.

False confidence does not help. If I claim that I’ve measured my piece of string to be 12.4637 cm long, but my ruler only has centimetre markings, I’m not being precise or accurate; I’m fabricating. It would be more accurate to say the string is approximately 12.5 cm long.
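The same principle can be expressed as a couple of lines of code: only report a measurement to the precision the instrument supports. The function and its defaults below are mine, purely for illustration; it assumes you can judge by eye that the string ends about halfway between two centimetre marks, but no finer.

```python
def honest_reading(raw_cm, graduation_cm=1.0):
    """Round a raw measurement to the finest reading the ruler
    supports (taken here to be half the smallest graduation).
    Reporting more digits than this would be false precision.
    """
    step = graduation_cm / 2.0
    return round(raw_cm / step) * step

print(honest_reading(12.4637))  # 12.5
```

Anything beyond that first decimal place is noise, not measurement—the same sin as drawing the predicted ball path as a single pixel-perfect line.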


* There is an issue surrounding whether players respect the umpires (which is undermined by being able to essentially say “No, you got it wrong, look again”), but that’s another story.

** They may well have other issues, I don’t claim an encyclopaedic knowledge of the situation, but that is the reason I’ve seen given.

*** Which the system acknowledges in that—for example—the ball being shown to just clip the wickets is not enough to overrule a “not out” decision.

Are Lies Ever White?

For a long time I’ve been puzzled by the idea of lying. Various moral philosophies have fairly clear edicts on the matter (i.e. deceit = bad) and this seems to be the prevailing opinion. Sure, there are those who see absolutely no problem with lying, but they’re generally either fictional, psychopaths, or both.

But at the same time, this suggests that most people are hypocrites, in that they decry lying with one hand and indulge in it with the other, justifying things as “social niceties” or “little white lies” (textbook denial-by-diminishing). But I tend to agree: there are plenty of times when telling the truth would be impolite, unwise, or in some way less good than not doing so.

So what’s the deal? Are we all filthy liars, or is lying not as bad as it’s portrayed?

Actually, I think there’s something else involved (from my perspective at least). Part of my confusion has been due to my perception of what a “lie” is, which has been influenced more than average by “Knights and Knaves” logic puzzles (e.g. the two doors, two guards scene in the film Labyrinth).

The effect was that I had a very black-and-white view of lying: any false statement is a lie. More recently, harking back to my thoughts on morality, I’ve realised that this neglects the “liar”’s intentions. Do they realise what they’re saying is false? Or are they mistaken? Misinformed? Deluded? Why are they making a false statement?

Another distinction that should be made is between lying by commission (stating something false) and lying by omission (leaving out something true). It took me a while to come to grips with the idea that someone saying “How are you?” is just making conversation, and you don’t need to feel uncomfortable about saying that you’re fine even though you’re actually traumatised over yesterday’s episode of Days Of Our Lives. There’s nothing wrong with keeping private things private.

Another situation that seems, at face value, to be lying is the whole area of fiction: literature, theatre, comedy, etc. Knowingly asserting things that are not true, but without malice. Indeed, there is an implicit assumption that both parties know the story is false. Similarly in various types of games, deliberate deceit is a component (e.g. bluffing at poker, dummy passes in football), but it’s under specific conditions.

I don’t want to suggest by this that lying is a good thing; merely to clarify the definition. Much of the functioning of society depends on there being a level of trust between people. What got me (back) onto this topic was an episode of the TV show Perception (another in the “abrasive, mentally-ill, but brilliant layman* helps solve crimes” genre – this time with a schizophrenic neuroscience professor), which mentioned that we react to lies with the same part of our brains that processes pain: discovering you are being deceived literally feels uncomfortable. I also vaguely recollect reading (somewhere) that when we hear a statement, our brain initially processes it as though it were true (which I suspect is why rumours can hurt so much), and we have to actively refute it (literally have second thoughts about it).

To conclude with a metaphor (because I like metaphors**): tigers are dangerous, but we shouldn’t treat zebras the same way just because they’re stripey.


* Because they’re almost always men (which is a whole other kettle of worms).

** I’m rather fond of footnotes, too.