Fear and Gender

A note to begin with: this post discusses gender issues. I don’t claim to be able to speak for all men, let alone all women, and I’m well aware that people are many and varied (probably in ways I’m not even aware of). So can we agree at the start that we are dealing with the [mythical] “average man/woman”? Right then…

I’m not sure if I have much of a conclusion/point (beyond “peoples is messed up, yo”); this is more expressing some thoughts on a topic that bothers me.

I recently read an article by Caitlin Moran (in The Times Magazine—behind a paywall unfortunately) called “What men need to know about women”. The gist was that women are exhausted (from trying to live up to societal expectations—basically “Women can have it all! You don’t have it all? You slacker!”) and scared.

Scared because ~50% of the population are bigger, stronger, and more aggressive than them. Scared because if attacked there is little they can do to fight back. Scared because—quite frankly—the statistics around violence and sexual assault (male attacking female) are terrifying.

This struck a chord with me. Not because I’m a woman and have experienced this fear. Not because I’d never heard it before (there is a theory that you have to hear something several times before it really sinks in). It was probably the way it was expressed.

You see, I’m a small man. I’m roughly average height, but I have to wear heavy boots if the wind is blowing. I can definitely relate to the sense of being aware that most people in the room are bigger than me. I don’t feel entirely comfortable walking home alone late at night; not generally afraid, just extra alert and cautious. I did spend a while being afraid, following an unpleasant encounter with a boisterous drunk (though that was weirdly location-specific), so I can appreciate that regular verbal harassment and the like would quickly erode one’s sense of safety.

As well as size, though, I suspect the worry is related to the impulse (or lack thereof) to fight back, which seems more of a cultural construct. For boys, there’s a (usually unspoken) encouragement to “hit them back”. It seems to engender an odd perspective in that, apropos of nothing, you occasionally find a thought lurking in the back of your mind to the effect of “yeah, I could totally take them down”. Even though my instinctive reaction is to freeze up when threatened (the lesser-known third option of the fight-or-flight response), I still entertain fantasies of showing an assailant that I am not to be messed with (straightens monocle imperiously), lest I feel myself “less of a man”. Stupid, huh?

I did kung fu for a few years, and one of the significant factors in my stopping was a mental block about hurting others. I was fine with learning moves, practising falls, hitting a bag, etc. I still consider the board-break from one of my gradings a particular achievement. But I blanched at doing more contact sparring-type drills, and when it sank in that I was capable of seriously injuring someone by accident. Plus I got sick of the constant bumps and bruises, and the fact that I could be easily knocked around (scrawny, remember? Technique is of only limited benefit when your opponent is twice your size. This is why there are weight divisions in boxing/wrestling/etc.).

In contrast, girls are socialised to not cause trouble. I remember seeing it somewhere expressed as women shrinking and men growing (in terms of imposing themselves—or not, as the case may be—on the people/space around them). This doesn’t mean they are never aggressive, but it can often manifest verbally/emotionally rather than physically. Which, strangely enough, can be a most effective avenue for wounding men (again, remember this is about generalities and stereotypes).

I guess the only real option is to try our best to forget about these boxes we’re put in and just treat other people as people. “If it is possible, as far as it depends on you, live at peace with everyone.”


What’s wrong with turtles?

A while ago I was reading a blog post on Gamasutra about how to design a game so as to discourage players from “turtling”.

Just to make sure we’re all on the same page, “turtling” is a pejorative description of someone’s style of play. This type of player is focused on defence, bunkering down and rarely attacking.

What I found interesting was that, throughout the post and several of the comments that followed, I was nodding along with the author, thinking “yes, that seems sensible”. Then one comment stopped me in my tracks by asking—in effect—“why shouldn’t players turtle if they want to?”. I suddenly realised I was mindlessly following the prevailing attitude that says turtling is inherently bad: something the game designer ought to prevent.

There are several behaviours in the same (or similar) boat. Save scumming*. Pause scumming*. Cherry tapping*. Kiting*. Camping*. Some are more acceptable than others (depending on context**), but they are generally seen as being negative, “unsporting”, or “cheap”. This also seems to be susceptible to the actor-observer effect: we accept it when we do it, because of perfectly valid reasons. We condemn it when others do it because they’re just cheats.

Players Behaving Badly

So, are there ways you can design a game to prevent (or at least deter) such behaviours? Sure, but you have to be very careful that you don’t create a worse problem by doing so. To make sure a change actually affects the behaviour you want, though, it pays to understand why people act that way—not just why they say they did something, but the underlying psychological principles.

I believe all these sorts of behaviour share a common motive: people are generally risk-averse (preferring a “sure thing”) for gains, and risk-seeking (preferring to gamble) for losses. Most games are framed in terms of gains (increasing points, winning matches, etc.) rather than losses, which predisposes people towards what they perceive*** as being the best strategy. “Playing the percentages”. Not taking undue risks.

For example, imagine if in each level of a platformer (Super Mario Bros for example) there were five bonus stars you could collect. Completing the level gives you 50 points, and each star is worth 10 points. The stars are placed in locations that challenge the player—either requiring them to navigate through dangerous terrain, or defeat/escape powerful enemies. When you examine the playtest data, you find that, while some players try for every star****, most players don’t bother risking it.

So, let’s say you reframe things. The level is now worth 100 points, but you lose 10 points for every star you miss. And you find that, now that they’re thinking in terms of losses, players become more likely to risk trying for the stars, and overall more stars are collected. Success! Right? Except that players are also unhappier and more frustrated with the game; no-one likes being penalised. Probably not a good thing overall. You’ve reduced players turtling, and got them exploring more of your levels, but maybe they’re doing more save/pause scumming.
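The two framings award identical totals for any number of stars collected; only the presentation changes. A minimal sketch, using the hypothetical point values above:

```python
def gain_framed_score(stars_collected, level_points=50, star_points=10):
    """Original framing: points are earned for the level plus each star."""
    return level_points + star_points * stars_collected

def loss_framed_score(stars_collected, total_stars=5,
                      level_points=100, penalty=10):
    """Reframing: full points up front, minus a penalty per missed star."""
    return level_points - penalty * (total_stars - stars_collected)
```

Mathematically nothing changes—losing 10 per miss is the same as gaining 10 per star—which is exactly why any shift in player behaviour comes down to the framing of gains versus losses.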

Players Behaving… Badly?

Maybe we need to take a step back. Sure, there are situations in which you want to discourage [some of] these behaviours, but is it a big enough issue to expend much design effort on? To clarify my point, I want to think about why we get so annoyed at these behaviours.

This doesn’t only apply to video games; there are plenty of examples in the sporting world, too. Pick your favourite sport, and you can probably think of players or teams who are “turtlers”: cautious and attritional rather than daring. They may well be top players, with enviable records. How do fans, commentators, journalists refer to them? Dependable. Hard-working. Consistent. Making the most of their talent. But are they loved? Do fans drop everything and rush to their televisions when that player walks onto the field? Not so much. They may even be seen as selfish, and overly focused on their numbers. There are exceptions, but people seem more drawn to the audacious and flamboyant players/teams, who may lose more often, but gosh darn if it isn’t exciting either way.

And I think that’s the key word: exciting. Entertaining. Dramatic. High level sport is a physical contest, but in the modern world it’s increasingly perceived as a performance as well. Hence, of course you want your team to win, but you don’t want it to be boring. We’re distracted by our deeply-ingrained sense of stories. We’re disappointed if we don’t see aspects of the “Hero’s Journey” play out: our heroes must bravely venture out to face their foes. It’s equally easy for players to get caught up in this, and try to play in a way that doesn’t reflect their strengths or their character.

Most video games are not competitive sports. How about (within reason) we give players the space to enjoy the game however they want to play it, without judging them for not playing it “right”. Maybe, if the turtles don’t feel discriminated against, they’ll be more comfortable coming out of their shells.

* Rough definitions:

Save scumming
Repeatedly reloading save games until you achieve a desired result (even though you could have continued).
Pause scumming
Repeatedly pausing time-critical sections.
Cherry tapping
Using weak/joke weapons/attacks to defeat an opponent. Requires either excessive training (so your character is far stronger than necessary), or wearing the opponent down via “death of a thousand cuts”.
Kiting
Repeatedly attacking an opponent then running away, thus slowly defeating them. Can also refer to teasing out individual opponents from a group rather than facing them all at once.
Camping
Lying in wait at a particular location to either have a clear shot at opponents or grab resources when they arrive.

** Generally, they are considered more acceptable in player-vs-computer situations, and less acceptable in player-vs-player situations.

*** Not necessarily the actual best strategy; humans are bad at probability.

**** Known as “achievers” or “completionists”. See the Bartle Test for example.

What’s the point?

Players of video games—particularly role-playing games (RPGs)—will often lament the problem of grinding (I suspect named in reference to “the daily grind”, but it is also sometimes referred to as “treadmilling”). The commonly-accepted definition of grinding is having to complete tedious and/or repetitive tasks. It often arises in the context of “leveling up” a character (essentially training to improve abilities).

Various workarounds have been proposed and/or implemented (see some examples). Completely removing the potential for grind would mean completely changing the leveling systems (which are otherwise tried, true, and effective), which would have significant consequences, so the approach de rigueur is to include some sort of payoff; a gold star for having defeated 100 swamp rats. This is applying an extrinsic reward to either motivate the player to grind, or placate them after a period of grinding.

While some aspects of game design—like the diminishing returns of experience points/leveling, and the random reinforcement of loot drops—are heavily informed by psychological findings, similar findings about the poor motivational effects of extrinsic rewards seem to have been passed over. Of course, it may also be that figuring out how to tap into intrinsic motivators is not only difficult, but getting back into the “overhaul the whole system” approach, which isn’t what we want.

I find myself wondering, though, if this is a case where the old adage “you never understand the solution until you understand the problem” applies. We have a definition of what grinding is, but maybe we need to consider why grinding is off-putting to so many players. Think of an RPG—whether it’s Ultima, Diablo, World of Warcraft, or even Pokémon—the parts of the game that are perceived as “grinding” aren’t mechanically different from the rest; they’re the parts where your goals are different. You need to get stronger before you can overcome the next challenge. Your character still gains experience and levels while completing quests, but it’s a side-effect. “Grinding” is when leveling up becomes the main goal. And that’s just not very interesting*.

We can see something similar in the world of sports. The equivalent would be playing a match that has no impact on which team wins the trophy, so the only advantage to the players is the potential for improving their stats (though there’s still ticket sales, broadcast revenue, etc. to entice the higher-ups). For example, the fifth match of a best-of-five final when the score is 3-1; such a match is referred to as a “dead rubber”, and in some cases is abandoned.

Maybe this perspective can help. Grinding doesn’t seem like grinding if there’s another reason for doing it besides boosting stats**. Earning a gold star doesn’t help, unless it makes a difference to later gameplay. Perhaps other characters could start referring to your character as “Bane of Swamp Rats”. Perhaps swamp rats become more likely to flee rather than attack. But something beneficial—give them a reason, not an arbitrary number.
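As a sketch of that idea—the thresholds, title, and flee chances here are entirely invented for illustration—the kill count could feed back into the game world rather than just tick up a badge:

```python
def swamp_rat_reputation(kills):
    """Map a kill count to in-world consequences (hypothetical values).

    Instead of only awarding a gold star at 100 kills, the count
    changes how the world reacts: NPCs use a title, and swamp rats
    become more likely to flee than attack.
    """
    reputation = {"title": None, "rat_flee_chance": 0.0}
    if kills >= 100:
        reputation["title"] = "Bane of Swamp Rats"
        reputation["rat_flee_chance"] = 0.5
    elif kills >= 50:
        # A noticeable intermediate change, so progress feels
        # meaningful before the big milestone (breadcrumbing).
        reputation["rat_flee_chance"] = 0.2
    return reputation
```

The point isn’t the specific numbers, but that each step grants something the player can observe in play, not just an arbitrary counter.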

* For most players, anyway. For some, it’s the main attraction, and that’s fine, but I don’t believe that’s the case for the majority.

** Partly because I was feeling the lack of footnote, but also because this is a genuine side-issue: granularity. Sometimes the problem isn’t that there’s no other reason to kill all those swamp rats, but that you have to kill so many before it matters. It comes down to the same thing though: if you make the player’s actions feel meaningful, they’re less likely to get bored/frustrated with their progress. This is sometimes called “breadcrumbing”—leaving a trail of small markers/rewards to lead the player onward.

The Forgotten

It’s been a while since I’ve felt like writing anything (or at least, anything that isn’t a rant about plumbers). What I want to address today, though, is the disparity between what we view as significant or valuable, and what actually is.

I’ve encountered the same idea so many times, from people in so many different fields: “If I’ve done things right, no-one will notice. If I mess up, everyone will be glaring at me.” I’d even venture to suggest that the vast majority of jobs are like this.

Think of a rock star, strutting their stuff in a big stage show—lasers, pyrotechnics, the works. If everything goes well, the audience reaction will be “Gosh [band name] were well radical!”*. If the lighting display is out-of-sync, and the spotlight fails to follow the lead singer around the stage, everyone will be complaining about the technicians.

But just pause for a moment. Whether a concert goes well, or bombs, how many people are involved in making it work? Advertisers, ticket sellers/collectors, sound/lighting/sfx technicians, “roadies”, prima-donna wranglers, and probably heaps more, but all the acclaim goes to the handful of oddly-dressed bods on the stage.

This leads to two rather odd mental blocks relating to the actual cost and the perceived value of the performance.

Firstly, people complain about the ticket prices, insinuating that they would be cheaper if the guitarist was willing to only buy one new Lamborghini this year, apparently oblivious (unless they consciously stop and think about it) to all the behind-the-scenes folk who also deserve to get paid**. It’s not that people are unaware, but our brains will take the easy way out given half a chance (I recommend the book “Thinking Fast and Slow” for anyone curious about this phenomenon).

Secondly, regardless of how ticket prices get parcelled out, the few jobs that do receive attention also tend to receive significant remuneration. Think of the (exorbitant) pay-packets of famous athletes. They can (though not all do, to be fair, only those that reach a high enough level in a popular enough sport) earn hundreds of times what, say, a teacher does. But would anyone seriously argue that kicking a ball around on television is more valuable to society than teaching the next generation so that they can be content and productive themselves? Yet capitalism says otherwise, in one of its lies that western society has internalised: money represents value, ergo if you earn more money, you are more valuable.

What’s going on? Well, in typical fashion, we are measuring what is easy to measure and disregarding what isn’t. I’ve seen it pointed out that the reason a sportsman (and sadly, it is almost always a man***) can earn so much is that their performance works regardless of the audience—how many there are, whether they’re paying close attention or just watching for the atmosphere, etc. If televised, millions could be watching. For a teacher to do their job, they need to engage with each member of the class, which is just impractical once the class gets over a certain size****.

So, in a way, maybe this is a rant about plumbers. And everyone else doing those valuable-but-hidden jobs. Because I for one am very glad that you do what you do, and that I can take a shower without having to think about how the water gets there; this is a prompt to myself, as well as anyone else, that such things shouldn’t be forgotten.

* Maybe not in those terms. I may be showing my lack-of-hip.

** Please note that I am unaware of how much of the ticket price goes to the various parties. It may well disproportionately favour the performer(s), it may vary depending on the prestige of the act. But that’s a separate issue.

*** Again, separate issue. Important, yes, but this post is long enough already.

**** I make no claims as to what the feasible upper limit of a class size is—it probably depends on who both the teacher and the students are—but it’s certainly not in the hundreds, let alone the millions.

Changing Perspective

This is kind of related to/following on from my earlier posts about the maliciousness of technology, but looking into a specific example. I’ll try to present enough of an overview so it makes sense to those unfamiliar with the topic (which is probably most of you), but be aware that I’ll be glossing over a lot of the detail.

Many high-profile sports have made the most of television coverage to improve the decisions made by the referee/umpire (just being able to watch a replay can make a huge difference, especially if it can be slowed down). Cricket (yes, it’s my favourite sport. So there) recently introduced a few other tools to assist the umpires, collectively referred to as the Decision Review System, or DRS. Mostly, the system has been well received, as it has enabled both greater accuracy for tricky decisions (did the batter nick the ball? did the ball bounce before the fielder caught it? etc.), as well as being able to correct blatantly wrong decisions by the presiding umpire*.

Not everyone is happy with it, however. The Indian cricket board are against the technology, citing concerns about its accuracy**. Unfortunately, such things are never going to be 100% accurate, but I do think the system could improve on the way it presents its results.

The particular technology I want to focus on is called Hawk-Eye (it’s also used in several other sports) which uses multiple cameras to determine the ball’s position in space and track its movement. Besides being a general analysis tool, it’s mainly used to adjudicate LBW decisions.

A quick aside for those not familiar with cricket. If you already know what an LBW is, feel free to skip the next part:

The batter can be out if they miss the ball and it hits the wickets. LBW stands for Leg Before Wicket, and is designed to prevent the batter just standing in front of the wickets to prevent themselves getting out. It’s a complicated rule, but the basic idea is if:

  1. you miss the ball (with the bat)
  2. the ball hits your legs (or any other part of your body)
  3. the ball would have hit the wickets (had you not been in the way)

then you are out.

Everyone up to speed? Okay. It’s usually fairly clear when the batter misses and the ball hits their legs; what Hawk-Eye is used for is predicting whether the ball would have hit the wickets. If you’ve watched (on TV) a cricket match in the last couple of years, you’ve probably seen the outcome of this: a nifty graphic showing the path of the ball and where the system predicts it would have gone (in a different colour—see an example on youtube).

Seems pretty useful, right? Except… the prediction, like all predictions, is fallible. You cannot know for certain where the ball would have gone (even taking into account wind, spin, variable bounce, etc.). This is a good illustration of my topic: there are two major aspects of any piece of software: the underlying processes/calculations, and the user interface (i.e. how the user inputs information and receives results). In this case, the calculations are probably doing about as well as could be expected (the ball tracking part is apparently accurate to less than ½ cm), but the user interface could stand to be improved.

This is a common problem. A competent programming team is generally able to build (and properly test) a program that performs its calculations correctly the vast majority of the time. But a user interface doesn’t have the same clear “this input should produce this output” requirements. It should be “intuitive”, “user-friendly”, and other such subjective adjectives. This makes it a lot harder to know if it’s “correct” or not. Fortunately, there are a lot of useful guidelines to follow, and it’s a very good idea to get representative users in front of the system as much as possible to see if they can make sense of it or not. But it remains a design process, and as such is as much an art as a science.

So, what is the most easily-remedied interface problem with these LBW predictions? The fact that the predicted path is displayed as exact, which conveys to the user that the ball would definitely have ended up here. The fallout from this is evident from listening to commentators and fans discussing LBW decisions: everyone treats the path as a singular thing. In fact, there’s a margin of error***. This could easily be shown (given that they already have these fancy 3D computer graphics) by representing the predicted path as a cone-like shape, indicating that the prediction is less accurate the further it has to extrapolate. Rather than giving a binary “hitting/missing” response, it could report a percentage certainty (e.g. we are 73% sure that the ball would have hit).
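To illustrate, here’s a minimal sketch of how such a percentage might be derived. All the numbers (base error, error growth rate) are invented for illustration—Hawk-Eye’s actual error model isn’t public—but the shape of the idea is: model the predicted position at the stumps as a distribution whose spread grows with the extrapolated distance, then report how much of it falls within the stumps.

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution of a normal, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def hit_probability(predicted_offset_cm, extrapolated_m,
                    base_error_cm=0.5, growth_cm_per_m=1.0,
                    half_target_cm=11.4):
    """Probability that the ball would have hit the stumps.

    predicted_offset_cm: predicted lateral distance from the middle
    of the stumps at the point the ball would reach them.
    The error is modelled as growing linearly with the extrapolated
    distance (the "cone"); these parameter values are purely
    illustrative, not Hawk-Eye's real figures.
    """
    sigma = base_error_cm + growth_cm_per_m * extrapolated_m
    # P(-w <= offset <= w) for offset ~ Normal(predicted, sigma)
    w = half_target_cm
    return (normal_cdf(w, predicted_offset_cm, sigma)
            - normal_cdf(-w, predicted_offset_cm, sigma))
```

An umpire-facing display could then say “73% likely to hit” instead of drawing a single definite path; the further the ball has to be projected, the wider the cone and the lower the certainty.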

It may seem counter-intuitive, but this lessening of clarity would make it more useful. The most important principle of user interface design is to consider what the user wants to do (not what is most convenient for the programmers). The user is the umpire, who wants to gather evidence to make a decision. If there’s clear evidence, they can feel confident in making their ruling. If the evidence is inconclusive, traditionally the benefit of the doubt is given to the batter. Either way, the tool is doing what it’s intended to: making the umpire’s job easier and helping them make a better decision. It’s also making it clearer to the viewers that it’s not a sure thing.

False confidence does not help. If I claim that I’ve measured my piece of string to be 12.4637 cm long, but my ruler only has centimetre markings, I’m not being precise or accurate, I’m fabricating. It would be more accurate to say the string is approximately 12.5 cm long.

* There is an issue surrounding whether players respect the umpires (which is undermined by being able to essentially say “No, you got it wrong, look again”), but that’s another story.

** They may well have other issues, I don’t claim an encyclopaedic knowledge of the situation, but that is the reason I’ve seen given.

*** Which the system acknowledges in that—for example—the ball being shown to just clip the wickets is not enough to overrule a “not out” decision.

The Pain Barrier

I recently saw a news article about a basketball player who suffered a fractured wrist early on in a game. He continued to play. Apparently he also spent the day on an IV drip owing to illness, and it was uncertain whether he would play at all. This got me thinking about sports injuries, and the attitude towards them from players, coaches, and spectators.

In New Zealand, there are a lot of stories of this ilk (playing on with a serious injury), mainly about various All Blacks (the NZ men’s rugby team, for those not in the know). One notable example is the current captain, who—amongst other things—admits to concealing a broken foot in order to keep playing in the 2011 World Cup. Rather than being called out for this, it enhanced his reputation as a “real man”. Similar tales have assumed somewhat mythical status (google Wayne “Buck” Shelford or Colin Meads for other examples).

Which is probably a bad thing.

Why? Well, another way in which sports injuries have been in the news recently has been due to the treatment of concussions*. More care is being taken to ensure players don’t end up with permanent brain damage, which is a very good thing. It’s becoming expected that if you take a blow to the noggin, you get checked, and players aren’t allowed back until they’ve been medically cleared. All very sensible, and laudable.

If only the same attitude were applied to other serious issues. Sadly, there’s still too much of a “man up!” attitude in a lot of sports, and so players keep on, risking worsening their injury, or even doing themselves permanent harm, because they don’t want to be seen as weak or uncommitted.

In fairness to the players, injuries are often exaggerated (particularly by the media—the more drama the better), and I know from experience that breaking a bone may not hurt a lot at first (once you relax for a few minutes and the adrenalin wears off, though…). Plus, for minor injuries (e.g. scrapes and bruises) it is valuable to be able to ignore them and carry on. That’s why there need to be medical staff who are able to be objective**. Take the decision away from the player (they can keep their never-back-down image intact), and either patch them up, or pull them out as necessary.

Attempting to continue with a serious injury is heroic if the alternative is death. If the alternative is losing a game, it’s foolhardy.

(By the way, this post has focused on sportsmen. I’m sure the same issue affects sportswomen, but for men it’s exacerbated by the whole “real men don’t show weakness” guff.)

* In contact sports, like rugby or gridiron. It’s somewhat rarer in table-tennis.

** No human being can be completely objective, but the medics will hopefully be focused on the players’ health first and foremost.

1000 Words, part 2

Part 1.

So far in pondering the nuances and consequences of the “Equality vs. Justice” image I’ve been focused on the people and their ability to succeed. The other type of consideration is the nature of their goal(s).

What are they trying to see?

Despite the stylised, cartoony nature of the image, the background appears to be an actual photograph (albeit a very fuzzy one) of a baseball game. So, the goal of the three people relates to entertainment*. My intuition is that any measures to improve equality (of opportunity) should be prioritised on Maslow’s hierarchy of needs—in other words, essentials before luxuries.

Why is there a fence in their way?

The obstacle in this case is an artificial one; someone has intentionally aimed to restrict their access to their goal (the baseball game). The implication is that these people are outside the stadium, but still want to see the game. The fence has been constructed for financial benefit (you have to buy a ticket to get in). Some people might construe this scenario as a form of piracy (enjoying content without paying for it). Regardless of the specific legal ramifications, I would argue that there’s very little (if any) moral transgression here, for the following reasons:

  • They’re not making money from it
  • Getting tickets may have been impossible, either because they couldn’t afford them, or the game may have sold out
  • There’s still an incentive to buy tickets—more comfortable, closer to the action, better view, etc.

The stadium owners may actually like having (small numbers of) fans able to watch like this, as it keeps them enthused about the “product” (baseball), and encourages thoughts like “one day, I’ll be able to get a front-row seat!”. It costs the owners very little, and helps out the disadvantaged.

What if the goal was negative?

For the sake of argument, maybe instead of fans watching a game, this is spies from a rival team trying to see a closed practice session. This flips the desired outcome on its head—now the issue is do you take away their boxes to prevent them seeing? This will work for the medium-height person, but the tallest and shortest people will be penalised ineffectively and unnecessarily (respectively). Much better to make the fence taller.

(Analogously, locking your door is a sensible move to reduce the risk of being burgled. It won’t stop a determined thief, but will deter an opportunist. And there are those who wouldn’t think of burglary regardless).

Alas, the real world is seldom simple, and raising the fence would negatively affect the poorer fans, which goes against the very idea of improving equity. Few things are clear-cut, black-and-white, with obvious solutions. More often, we need to take the time to weigh up pros and cons, risks and rewards.

(And in case my opinion’s not obvious: better to lock your door when you go out, but don’t worry too much about people watching sport over the fence.)

* One could contrive other, more vital, reasons for them to want to watch the game, but they would be contrived. 😉

Most Valuable Players

Okay, so I will get back to the 1000 words idea (at some point—I’m somewhat of a dilettante), but this is what has grabbed my interest at the moment.

The sport I’m most interested in is cricket. If you have no interest in the sport, feel free to ignore this post. 🙂

It’s another intellectual pursuit, I guess, as I was never a particularly good player, but I’ve long felt that the statistics and measurements used to define players leave a lot to be desired. The difference is particularly stark in comparison to its cousin, baseball (compare the Wikipedia pages for baseball stats and cricket stats, even just by length). For example, there are no records related to fielding beyond “number of catches taken”, which is only part of the story. Where were they fielding? How many catches did they not take? How many matches have they played?*

I’m encouraged that some new measures are emerging, like “control percentage” (e.g. how often a batsman played the ball cleanly), but I also feel that some of the existing measures could use some adjustment. For example, a player’s batting average (total runs / number of times dismissed) can be warped by large numbers of “not-outs”; in extreme cases this can lead to the farcical situation of a player’s average score being higher than their high score. Additionally, not all runs are created equal—a score of 42 in a tight, low-scoring game may be more valuable to the team than a score of 67 in a run-fest that ends in a draw.
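The not-out distortion is easy to demonstrate. A minimal sketch of the standard average calculation, with a contrived career showing an average above the high score:

```python
def batting_average(innings):
    """innings: list of (runs, dismissed) pairs.

    Average = total runs / times dismissed. Not-out innings add
    runs without adding a dismissal, which is what inflates it.
    """
    runs = sum(r for r, _ in innings)
    dismissals = sum(1 for _, out in innings if out)
    if dismissals == 0:
        return None  # undefined until the player is first dismissed
    return runs / dismissals

# Two unbeaten 20s and one dismissal for 5:
career = [(20, False), (20, False), (5, True)]
# average = 45.0, yet the high score is only 20
```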

In order to better analyse, one must first have data, and fortunately there are a lot of publicly-available scorecards that could be used to delve into various parameters (though again, ignoring some useful measures, but it’s a start). For now, though, I’ve examined “Player of the Match” awards. The existing data only gives a ranking of how many times a player has won the award (see here for example). As such, the “ranking” tells you more about longevity** than value, so rather than a raw count, I looked at matches per award for each player***.

While any informed fan (myself included) wouldn’t quibble with the calibre of the names atop the previously-linked list, my calculations yielded a different ordering (though, unsurprisingly, a lot of the same names). Here’s the top 20****:

  1. Vernon Philander (SA) 5.2
    (5 awards in 26 matches)
  2. Wasim Akram (Pak) 6.12
    (17 awards in 104 matches)
  3. Daryl Tuffey (NZ) 6.5
    (4 awards in 26 matches)
  4. Mitchell Johnson (Aus) 6.56
    (9 awards in 59 matches)
  5. Muttiah Muralitharan (SL) 7.0
    (19 awards in 133 matches)
  6. (Sir) Curtly Ambrose (WI) 7.0
    (14 awards in 98 matches)
  7. Jacques Kallis (SA) 7.22
    (23 awards in 166 matches)
  8. Irfan Pathan (India) 7.25
    (4 awards in 29 matches)
  9. Joe Root (Eng) 7.33
    (3 awards in 22 matches)
  10. Kumar Sangakkara (SL) 8.0
    (16 awards in 128 matches)
  11. Imran Khan (Pak) 8.0
    (11 awards in 88 matches)
  12. Stuart Clark (Aus) 8.0
    (3 awards in 24 matches)
  13. Malcolm Marshall (WI) 8.1
    (10 awards in 81 matches)
  14. Rangana Herath (SL) 8.14
    (7 awards in 57 matches)
  15. Dale Steyn (SA) 8.33
    (9 awards in 75 matches)
  16. Aravinda de Silva (SL) 8.45
    (11 awards in 93 matches)
  17. (Sir) Ian Botham (Eng) 8.5
    (12 awards in 102 matches)
  18. Shakib Al Hasan (Ban) 8.5
    (4 awards in 34 matches)
  19. Shane Warne (Aus) 8.53
    (17 awards in 145 matches)
  20. Dean Jones (Aus) 8.67
    (6 awards in 52 matches)

It’s not ground-breaking or anything, but I feel it’s an interesting start.
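The calculation itself is straightforward. A sketch, using a few entries from the table above (the award and match counts are as listed; the 22-match cutoff is the outlier filter mentioned in the footnotes):

```python
def rank_by_matches_per_award(players, min_matches=22):
    """players: list of (name, awards, matches) tuples.

    Lower matches-per-award means more frequent match-winning
    performances. Players below min_matches are excluded,
    mirroring the outlier filter described above.
    """
    qualified = [(name, matches / awards)
                 for name, awards, matches in players
                 if matches >= min_matches and awards > 0]
    return sorted(qualified, key=lambda p: p[1])

# A few entries from the table above:
sample = [
    ("Wasim Akram", 17, 104),
    ("Daryl Tuffey", 4, 26),
    ("Vernon Philander", 5, 26),
    ("Joe Root", 3, 22),
]
```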

* Compare two players who have taken 10 catches. Player A has played 7 matches, and dropped 1 catch. Player B has played 40 matches and dropped 12 catches. Which seems to be the better fielder?

** Not to mention that the award was not given out for every match; any ranking of these awards is going to favour players in the modern era.

*** If all players were equally valuable, you would expect (on average) any given player would win an award every 22 matches (two teams of 11 in each match). Having a score lower than this indicates a more valuable player.

**** To exclude outliers I removed players who have played fewer than 22 matches.