Knapsack problems

Some computer games draw a lot of ire because of what is sometimes called “Inventory Tetris”1. This is where the items the player is carrying are represented on a (finite) grid-like structure, and it usually crops up in action/RPG games. It can quickly become fiddly and annoying for the player, who has to decide what to keep (given there isn’t enough space for all the valuable doodads they may come across).


By SharkD – Own work, GPL, via Wikipedia

It gets especially complicated when items are not a uniform size, going from “I can carry this many items” to “I can carry these items provided I can rearrange them to fit”. Which just evokes all the joys of packing2.

Given that “Ugh!” is many people’s initial reaction to this sort of puzzle, why do so many games include it? Well, some people’s reaction to any puzzle is “Ugh!”; we need a more compelling reason for avoiding it. Solving a packing puzzle can be very satisfying—just ask anyone moving house/apartment/etc. who has (finally) managed to sort out their furniture in their new room(s). Any puzzle can be a worthwhile challenge to include in a game, provided it harmonises with other aspects of the game.

Limited inventory also adds realism3 to a game—a character is not able to carry a small village in their pockets. But realism is not the be-all-and-end-all when it comes to video games. All games abstract away details from the real world to try to capture the core of an experience (does anyone ever run out of petrol in a racing game?). As such, the relevant question is not “do I/my audience like this kind of puzzle?”, but “does this fit with the game’s core experience?”.

Let me give a couple of examples of where the “inventory tetris” mechanic fits, and where it doesn’t.

Sir, You Are Being Hunted


From Rock, Paper, Shotgun

This is a game about sneaking around to collect parts for the MacGuffin that will allow you to escape. Armed robots attempt to stop you. Along the way, you scavenge necessary supplies, like food and weapons (and stuffed badgers).

The whole experience is about coping with limited resources, and a restricted inventory forces you to prioritise. If you choose to leave your majestic stuffed badger behind, you could potentially come back for it later, but just getting to where you left it can be difficult (i.e. having to fight/sneak your way past the robots).

Adventure Games

With the traditional “adventure game” genre (think “Colossal Cave”, “The Secret of Monkey Island”, “King’s Quest”, “Myst” etc.), the player’s inventory is essentially unlimited. This may be because there are only a small number of collectible items in the game anyway, but of more relevance is that retracing your steps (in this type of game) is not interesting or challenging. If a player is at the front door and needs to open a parcel, a knife being in their inventory is essentially an abstraction for remembering seeing a knife in the kitchen and going to get it.

These games often induce a sort of kleptomania; experienced players will grab anything that isn’t nailed down because it’s bound to come in handy later, and it saves them backtracking.

Occasionally, a particular puzzle will require putting limits on what the player is able to carry, but these should be treated as exceptions, and not change the normal inventory mechanic. For example, in the text-adventure Hitchhiker’s Guide to the Galaxy, solving a certain problem involves the player traversing a narrow space which gives an excuse for them to only take one item with them. In Broken Age, carrying a particular item (noted in-game as being exceptionally heavy) means the player cannot cross a particular surface without falling through.

So, as with all game mechanics, inventory tetris has a place, but can be very annoying if it is used somewhere it doesn’t fit.


1 Or, more prosaically, an Inventory Management Puzzle, but that just doesn’t have the same pizzazz.

2 You may recognise this as an example of the knapsack problem, one of many NP-complete problems for which we have no known efficient solution. I may burble more on this distinction in a later musing, if anyone is interested.
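
For the curious, here’s a minimal sketch of the classic dynamic-programming approach to the 0/1 knapsack problem (in Python, with invented weights and values). It runs in time proportional to the number of items times the capacity, which is fine for toy numbers but offers no escape from the general problem being NP-complete:

    def knapsack(capacity, items):
        """0/1 knapsack: items is a list of (weight, value) pairs.
        Returns the best total value achievable without exceeding capacity."""
        best = [0] * (capacity + 1)
        for weight, value in items:
            # Iterate capacities downwards so each item is used at most once.
            for c in range(capacity, weight - 1, -1):
                best[c] = max(best[c], best[c - weight] + value)
        return best[capacity]

    # e.g. a 10-slot backpack and some doodads of varying bulk and value
    loot = [(3, 60), (4, 70), (5, 90), (2, 30)]
    print(knapsack(10, loot))  # -> 180 (the 3-, 5- and 2-weight items fill the pack)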

3 I use the term very loosely. 🙂


Better Game Stories In One Step

…it’s just not a very simple one.


A note to start with: This is focused on games where the story is an important component. Not all games are like this. Assume that we’re talking about action/adventure/RPG/etc. games with a significant narrative.


Few would dispute that a compelling story involves the following four elements:

  1. An interesting* protagonist
  2. …who wants something
  3. …but has to overcome obstacles to get it
  4. …and either succeeds or fails**

“Traditional” storytelling media (e.g. books, films) are pretty good at ticking these boxes (literally—for example, there’s a how-to book for movie scripts).

Following the same advice and patterns has worked … okay … for video games, but runs into the usual problem with an interactive medium. The player is the protagonist. This means you have a conflict between giving the player freedom to do what they want to do, and ensuring that the protagonist does what is needed for the next part of the story.

Different games manage this with varying degrees of success, and various techniques have been used (e.g. “gating” parts of the game to make sure the player experiences things in the right order). But players of some games have reacted loudly against being “railroaded”: feeling disconnected from the game, that their actions don’t matter, that the controls may as well be “Press X to see the next scene”.

Yet it should be easy, shouldn’t it? Games are all about the player/protagonist trying to overcome obstacles to achieve a goal. And games are pretty good at making the protagonist interesting—either through being a blank slate that the player can project themselves onto, or making appropriate use of pre-existing literary/filmic character design techniques.

Whether you refer to it as “ludo-narrative dissonance”, “lack of engagement”, “railroading”, or whatever else, I suspect the same underlying issue lies with the story. The problem is that the player and the protagonist have different goals. As such, story progress (which serves the protagonist’s goal) leaves the player uninterested at best. If it gets in the way of the player achieving their own goal, they may come to see the narrative as just another obstacle.

An example of this is in open-world games where the player wants to muck about and explore, and becomes frustrated at the game trying to get them back to the main quest. Another example is a cut scene that presents a character the protagonist needs to rescue. The player is essentially told “this is your best friend”, but they’re thinking “no, Sam is my best friend****, this is just some random NPC that I’m going to be forced to rescue. Aw man, I hope this isn’t going to be one of those escort missions…”.

To fix this, we just need to make sure the player’s goal matches (or at least is compatible with) the protagonist’s. “Oh, is that all?” you might be thinking. The difficulty is how. To support my attempt at a general answer, I submit the following example.

Think of the opening scene of “Raiders of the Lost Ark” (What do you mean, you haven’t seen it?!?). Imagine playing through something like that in a game. You have to navigate various traps to obtain the magic +3 Sword of Wompage—a significant improvement over your -1 Blunt Twig of Equivocation. You then get a brief chance to use the Sword of Wompage before, just as you’ve escaped the collapsing dungeon by the skin of your teeth, the villainous Baron Smarmy Twirlmoustache shows up and takes your new toy away. I would suggest that at this point, the goals of you (the player) and the protagonist are in perfect alignment.

So what are some general principles we can draw from this?

  • Players won’t care about something just because they’re told to
  • They will care about something that affects gameplay
  • Cut scenes are better for introducing obstacles than goals
  • Baron Twirlmoustache is kind-of a jerk

Game developers already consider the various types of player motivation they want to tap into when designing gameplay (see the Bartle taxonomy, for a formal example); the next step is considering how to align the story with it as well.


* Note: “interesting”, not “likeable”. The main character doesn’t necessarily have to be someone the audience wants to be, or would like to meet, but the audience does have to be curious about what will happen to the character*** next.

** This doesn’t necessarily align with whether the story has a “happy ending”. Sometimes the best outcome for the protagonist is not getting the thing but realising they don’t actually want/need it.

*** One of the benefits of an ensemble cast is that different audience members may be intrigued by different characters, thus keeping a wider audience tuning in than if the focus was mainly on a single protagonist.

**** Few know that Frodo was an avid gamer. There had to be something to while away those quiet, lonely nights in Bag End.

What’s wrong with turtles?

A while ago I was reading a blog post on Gamasutra about how to design a game so as to discourage players from “turtling”.

Just to make sure we’re all on the same page, “turtling” is a pejorative description of someone’s style of play. This type of player is focused on defence, bunkering down and rarely attacking.

What I found interesting was that, throughout the post and several of the comments that followed, I was nodding along with the author, thinking “yes, that seems sensible”. Then one comment stopped me in my tracks by asking—in effect—”why shouldn’t players turtle if they want to?”; I suddenly realised I was mindlessly following the prevailing attitude that says turtling is inherently bad: something the game designer ought to prevent.

There are several behaviours in the same (or similar) boat. Save scumming*. Pause scumming*. Cherry tapping*. Kiting*. Camping*. Some are more acceptable than others (depending on context**), but they are generally seen as being negative, “unsporting”, or “cheap”. This also seems to be susceptible to the actor-observer effect: we accept it when we do it, because of perfectly valid reasons. We condemn it when others do it because they’re just cheats.

Players Behaving Badly

So, are there ways you can design a game to prevent (or at least deter) such behaviours? Sure, but you have to be very careful that you don’t create a worse problem by doing so. To make sure a change actually affects the behaviour you want, it pays to understand why people act that way (not just why they say they did something, but the underlying psychological principles).

I believe all these sorts of behaviour share a common motive: people are generally risk-averse (preferring a “sure thing”) for gains, and risk-seeking (preferring to gamble) for losses. Most games are framed in terms of gains (increasing points, winning matches, etc.) rather than losses, which predisposes people towards what they perceive*** as being the best strategy. “Playing the percentages”. Not taking undue risks.

For example, imagine if in each level of a platformer (Super Mario Bros for example) there were five bonus stars you could collect. Completing the level gives you 50 points, and each star is worth 10 points. The stars are placed in locations that challenge the player—either requiring them to navigate through dangerous terrain, or defeat/escape powerful enemies. When you examine the playtest data, you find that, while some players try for every star****, most players don’t bother risking it.

So, let’s say you reframe things. The level is now worth 100 points, but you lose 10 points for every star you miss. And you find that, now that they’re thinking in terms of losses, players become more likely to risk trying for the stars, and overall more stars are collected. Success! Right? Except that players are also unhappier and more frustrated with the game; no-one likes being penalised. Probably not a good thing overall. You’ve reduced turtling and got players exploring more of your levels, but maybe they’re now doing more save/pause scumming.
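
To be clear about the reframing (these numbers are mine, not from any real game), the two schemes award exactly the same points for the same performance; only the description changes, which is the whole trick:

    def gain_framed_score(stars_collected):
        """50 points for finishing the level, plus 10 per bonus star."""
        return 50 + 10 * stars_collected

    def loss_framed_score(stars_collected, total_stars=5):
        """The level is 'worth' 100, minus 10 for every star you missed."""
        return 100 - 10 * (total_stars - stars_collected)

    # The payouts are identical for every outcome; only the framing differs.
    for stars in range(6):
        assert gain_framed_score(stars) == loss_framed_score(stars)
        print(stars, "stars ->", gain_framed_score(stars), "points either way")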

Players Behaving… Badly?

Maybe we need to take a step back. Sure, there are situations in which you want to discourage [some of] these behaviours, but is it a big enough issue to expend much design effort on? To clarify my point, I want to think about why we get so annoyed at these behaviours.

This doesn’t only apply to video games; there are plenty of examples in the sporting world, too. Pick your favourite sport, and you can probably think of players or teams who are “turtlers”: cautious and attritional rather than daring. They may well be top players, with enviable records. How do fans, commentators, journalists refer to them? Dependable. Hard-working. Consistent. Making the most of their talent. But are they loved? Do fans drop everything and rush to their televisions when that player walks onto the field? Not so much. They may even be seen as selfish, and overly focused on their numbers. There are exceptions, but people seem more drawn to the audacious and flamboyant players/teams, who may lose more often, but gosh darn if it isn’t exciting either way.

And I think that’s the key word: exciting. Entertaining. Dramatic. High level sport is a physical contest, but in the modern world it’s increasingly perceived as a performance as well. Hence, of course you want your team to win, but you don’t want it to be boring. We’re distracted by our deeply-ingrained sense of stories. We’re disappointed if we don’t see aspects of the “Hero’s Journey” play out: our heroes must bravely venture out to face their foes. It’s equally easy for players to get caught up in this, and try to play in a way that doesn’t reflect their strengths or their character.

Most video games are not competitive sports. How about (within reason) we give players the space to enjoy the game however they want to play it, without judging them for not playing it “right”? Maybe, if the turtles don’t feel discriminated against, they’ll be more comfortable coming out of their shells.


* Rough definitions:

Save scumming
Repeatedly reloading save games until you achieve a desired result (even though you could have continued).
Pause scumming
Repeatedly pausing time-critical sections.
Cherry tapping
Using weak/joke weapons/attacks to defeat an opponent. Requires either excessive training (so your character is far stronger than necessary), or wearing the opponent down via “death of a thousand cuts”.
Kiting
Repeatedly attacking an opponent then running away, thus slowly defeating them. Can also refer to teasing out individual opponents from a group rather than facing them all at once.
Camping
Lying in wait at a particular location to either have a clear shot at opponents or grab resources when they arrive.

** Generally, they are considered more acceptable in player-vs-computer situations, and less acceptable in player-vs-player situations.

*** Not necessarily the actual best strategy; humans are bad at probability.

**** Known as “achievers” or “completionists”. See the Bartle Test for example.

What’s the point?

Players of video games—particularly role-playing games (RPGs)—will often lament the problem of grinding (I suspect named in reference to “the daily grind”, but it is also sometimes referred to as “treadmilling”). The commonly-accepted definition of grinding is having to complete tedious and/or repetitive tasks. It often arises in the context of “leveling up” a character (essentially training to improve abilities).

Various workarounds have been proposed and/or implemented (see some examples). Completely removing the potential for grind would mean completely changing the leveling systems (which are otherwise tried, true, and effective), which would have significant consequences, so the approach de rigueur is to include some sort of payoff; a gold star for having defeated 100 swamp rats. This is applying an extrinsic reward to either motivate the player to grind, or placate them after a period of grinding.

While some aspects of game design—like the diminishing returns of experience points/leveling, and the random reinforcement of loot drops—are heavily informed by psychological findings, similar findings about the poor motivational effects of extrinsic rewards seem to have been passed over. Of course, it may also be that figuring out how to tap into intrinsic motivators is not only difficult, but getting back into the “overhaul the whole system” approach, which isn’t what we want.
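
As a concrete (and entirely made-up) illustration of that first point, a typical leveling curve makes each successive level cost more experience, so the same swamp rat is worth a smaller and smaller slice of your progress:

    def total_xp_to_reach(level, base=100, exponent=1.5):
        """Total XP needed to reach a level. The curve is illustrative,
        not taken from any particular game."""
        return int(base * level ** exponent)

    rat_xp = 25  # XP per swamp rat (also made up)
    for level in range(1, 6):
        step = total_xp_to_reach(level + 1) - total_xp_to_reach(level)
        print(f"level {level} -> {level + 1}: {step} XP, about {step // rat_xp} swamp rats")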

I find myself wondering, though, if this is a case where the old adage “you never understand the solution until you understand the problem” applies. We have a definition of what grinding is, but maybe we need to consider why grinding is off-putting to so many players. Think of an RPG—whether it’s Ultima, Diablo, World of Warcraft, or even Pokémon—the parts of the game that are perceived as “grinding” aren’t mechanically different to the rest; they’re the parts where your goals are different. You need to get stronger before you can overcome the next challenge. Your character still gains experience and levels while completing quests, but there it’s a side-effect. “Grinding” is when leveling up becomes the main goal. And that’s just not very interesting*.

We can see something similar in the world of sports. The equivalent would be playing a match that has no impact on which team wins the trophy, so the only advantage to the players is the potential for improving their stats (though there’s still ticket sales, broadcast revenue, etc. to entice the higher-ups). For example, the fifth match of a five-match series when the score is already 3-1; such a match is referred to as a “dead rubber”, and in some cases is abandoned altogether.

Maybe this perspective can help. Grinding doesn’t seem like grinding if there’s another reason for doing it besides boosting stats**. Earning a gold star doesn’t help, unless it makes a difference to later gameplay. Perhaps other characters could start referring to your character as “Bane of Swamp Rats”. Perhaps swamp rats become more likely to flee rather than attack. But something beneficial—give them a reason, not an arbitrary number.
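
The “more likely to flee” idea, for instance, could be as simple as feeding the kill count into the encounter logic. A toy sketch (all the numbers here are invented):

    import random

    def swamp_rat_flees(rats_slain, base_chance=0.05, per_kill=0.002, cap=0.60):
        """The more swamp rats the player has slain, the more likely the
        next one is to flee on sight. Purely illustrative numbers."""
        chance = min(cap, base_chance + per_kill * rats_slain)
        return random.random() < chance

    # A "Bane of Swamp Rats" with 100 kills sees rats flee about a quarter of the time.
    flees = sum(swamp_rat_flees(100) for _ in range(10_000))
    print(f"{flees / 10_000:.0%} of encounters ended with the rat fleeing")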


* For most players, anyway. For some, it’s the main attraction, and that’s fine, but I don’t believe that’s the case for the majority.

** Partly because I was feeling the lack of a footnote, but also because this is a genuine side-issue: granularity. Sometimes the problem isn’t that there’s no other reason to kill all those swamp rats, but that you have to kill so many before it matters. It comes down to the same thing, though: if you make the player’s actions feel meaningful, they’re less likely to get bored or frustrated with their progress. This is sometimes called “breadcrumbing”—leaving a trail of small markers/rewards to lead the player onward.

Reputable sauce

Reputation, whether good or bad, is a nebulous thing—hard to pin down, let alone control. People have it (everyone loves Benedict Cumberbatch*). Corporations and institutions have it (Microsoft are staid, bland, and corporate. Apple are hipsters). Words and concepts have it (the word “feminism” seems to make some people break out in hives).

But in general, a reputation cannot be expressed in so few words, because for all that we can talk about someone’s reputation, it’s not really a property of that person at all: it’s an amalgamation of the individual attitudes of everyone else towards them**. This is shown by—for example—what happens when someone ruins their reputation. The damage happens not immediately upon their doing whatever detestable thing caused it, but as people discover and spread the word.

It’s a bit easier to understand and work with an individual person’s attitude to someone or something. A customer who dislikes the shop that sold them a faulty toaster may change their mind if the staff are helpful in getting them a new one. And there’s a lot of truth in the cliché that “first impressions count”, because they define the starting point which later impressions adjust.

The internet has had two significant impacts that make reputation rather more fragile: there is unprecedented ability to “spread the word”; and—as the classic Peter Steiner cartoon says—”nobody knows you’re a dog”***. This means an imposter can do something heinous and the real person wakes up to sudden opprobrium from all corners with no idea why.

I guess it’s another reason to think twice about anything you read. Especially online. Including this ;).


* Examples of reputations expressed in this paragraph are used playfully and should not necessarily be taken as the opinion of the author. Though Cumberbatch is pretty cool.

** Clearly not all contributions are equal; generally strangers have less impact than close friends, but as I said, how it works is hard to define.

*** My (faulty) memory initially thought this was originated by Gary Larson (of “The Far Side”), but he would probably have gone with a cow.

Changing Perspective

This is kind of related to/following on from my earlier posts about the maliciousness of technology, but looking into a specific example. I’ll try to present enough of an overview so it makes sense to those unfamiliar with the topic (which is probably most of you), but be aware that I’ll be glossing over a lot of the detail.

Many high-profile sports have made the most of television coverage to improve the decisions made by the referee/umpire (just being able to watch a replay can make a huge difference, especially if it can be slowed down). Cricket (yes, it’s my favourite sport. So there) recently introduced a few other tools to assist the umpires, collectively referred to as the Decision Review System, or DRS. Mostly, the system has been well received, as it has enabled both greater accuracy for tricky decisions (did the batter nick the ball? did the ball bounce before the fielder caught it? etc.), as well as being able to correct blatantly wrong decisions by the presiding umpire*.

Not everyone is happy with it, however. The Indian cricket board are against the technology, citing concerns about its accuracy**. Unfortunately, such things are never going to be 100% accurate, but I do think the system could improve on the way it presents its results.

The particular technology I want to focus on is called Hawk-Eye (it’s also used in several other sports) which uses multiple cameras to determine the ball’s position in space and track its movement. Besides being a general analysis tool, it’s mainly used to adjudicate LBW decisions.

A quick aside for those not familiar with cricket. If you already know what an LBW is, feel free to skip the next part:

The batter can be out if they miss the ball and it hits the wickets. LBW stands for Leg Before Wicket, and is designed to prevent the batter just standing in front of the wickets to prevent themselves getting out. It’s a complicated rule, but the basic idea is if:

  1. you miss the ball (with the bat)
  2. the ball hits your legs (or any other part of your body)
  3. the ball would have hit the wickets (had you not been in the way)

then you are out.

Everyone up to speed? Okay. It’s usually fairly clear when the batter misses and the ball hits their legs; what Hawk-Eye is used for is predicting whether the ball would have hit the wickets. If you’ve watched a cricket match on TV in the last couple of years, you’ve probably seen the outcome of this: a nifty graphic showing the path of the ball and where the system predicts it would have gone (in a different colour—see an example on YouTube).

Seems pretty useful, right? Except… the prediction, like all predictions, is fallible. You cannot know for certain where the ball would have gone (even taking into account wind, spin, variable bounce, etc.). This is a good illustration of my topic. Any piece of software has two major aspects: the underlying processes/calculations, and the user interface (i.e. how the user inputs information and receives results). In this case, the calculations are probably doing about as well as could be expected (the ball-tracking part is apparently accurate to less than ½ cm), but the user interface could stand to be improved.

This is a common problem. A competent programming team is generally able to build (and properly test) a program that performs its calculations correctly the vast majority of the time. But a user interface doesn’t have the same clear “this input should produce this output” requirements. It should be “intuitive”, “user-friendly”, and other such subjective adjectives. This makes it a lot harder to know if it’s “correct” or not. Fortunately, there are a lot of useful guidelines to follow, and it’s a very good idea to get representative users in front of the system as much as possible to see if they can make sense of it or not. But it remains a design process, and as such is as much an art as a science.

So, what is the most easily-remedied interface problem with these LBW predictions? The fact that the predicted path is displayed as exact, which conveys to the user that the ball would definitely have ended up there. The fallout from this is evident from listening to commentators and fans discussing LBW decisions: everyone treats the path as a singular thing. In fact, there’s a margin of error***. This could easily be shown (given that they already have these fancy 3D computer graphics) by representing the predicted path as a cone-like shape, indicating that the prediction is less accurate the further it has to extrapolate. Rather than giving a binary “hitting/missing” response, it could report a percentage certainty (e.g. we are 73% sure that the ball would have hit).
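
To sketch what such a percentage might look like (this is my own toy model, not how Hawk-Eye actually works), assume the lateral error grows with the distance the path has to be extrapolated, and report the probability that the true path falls within the width of the stumps, rather than a flat yes/no:

    import math

    def hit_probability(lateral_offset_cm, extrapolation_m,
                        base_error_cm=0.5, growth_cm_per_m=1.0,
                        half_stump_width_cm=11.4):
        """Toy estimate of how certain we are the ball would have hit.
        Assumes a Gaussian lateral error whose spread grows linearly with
        extrapolation distance; the error model and numbers are illustrative.
        The three stumps span roughly 23 cm, hence ~11.4 cm either side of centre."""
        sigma = base_error_cm + growth_cm_per_m * extrapolation_m

        def cdf(x):
            # Standard normal cumulative distribution function.
            return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

        lo = (-half_stump_width_cm - lateral_offset_cm) / sigma
        hi = (half_stump_width_cm - lateral_offset_cm) / sigma
        return cdf(hi) - cdf(lo)

    # e.g. predicted to pass 8 cm from the centre of the stumps,
    # extrapolated over 2 m of flight beyond the last tracked point
    print(f"{hit_probability(8.0, 2.0):.0%} sure the ball would have hit")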

It may seem counter-intuitive, but this lessening of clarity would make it more useful. The most important principle of user interface design is to consider what the user wants to do (not what is most convenient for the programmers). The user is the umpire, who wants to gather evidence to make a decision. If there’s clear evidence, they can feel confident in making their ruling. If the evidence is inconclusive, traditionally the benefit of the doubt is given to the batter. Either way, the tool is doing what it’s intended to: making the umpire’s job easier and helping them make a better decision. It’s also making it clearer to the viewers that it’s not a sure thing.

False confidence does not help. If I claim that I’ve measured my piece of string to be 12.4637 cm long, but my ruler only has centimetre markings, I’m not being precise or accurate; I’m fabricating. It would be more accurate to say the string is approximately 12.5 cm long.


* There is an issue surrounding whether players respect the umpires (which is undermined by being able to essentially say “No, you got it wrong, look again”), but that’s another story.

** They may well have other issues, I don’t claim an encyclopaedic knowledge of the situation, but that is the reason I’ve seen given.

*** Which the system acknowledges in that—for example—the ball being shown to just clip the wickets is not enough to overrule a “not out” decision.

Malicious technology (Part 2)

Okay, so last time I suggested that even expert users have problems, because computers aren’t always predictable.

So, if we accept that computers will throw us for a loop occasionally, the issue becomes how we react to this. Blaming ourselves, falling prostrate before the almighty PC (or Mac, if you’re that way inclined 🙂 ) and begging that we be permitted to complete our terribly-important email is not going to cut it.

What is the alternative? Well, some take completely the opposite tack and assume the computer is entirely at fault, as it’s not doing exactly what they want it to, but I suspect that’s more related to ego; such people probably aren’t reading this, or aren’t going to take any notice of it. No, I suggest you imagine you are interacting not with a machine, but with a person.

This may not be all that difficult. People anthropomorphise objects all the time. Naming their car. Holding conversations with their pets. But that’s not what I’m talking about—you’re already doing that if you’re assuming the computer is smarter than you. Instead, recognise that the computer is just a machine following instructions, so when you’re using Microsoft Word (just as an example), you are navigating a bureaucratic maze defined by the person that created the program*.

Now, it’s quite reasonable to assume that this generic programmer you’re effectively interacting with is smarter than you. They certainly know more about computers. But only because they’re the combination of several brains. So how does this help? Because while you may not know much about programming a computer, you do know a lot about what it is you’re trying to do.

You see, every program is essentially a tool (or rather, a collection of tools, but let’s keep it simple). A tool is designed to facilitate a particular task. If the person performing that task finds the tool odd or frustrating, it’s probably because the way they think of the task is different to the way the tool designer thought of it.

For example, I have a ratchet screwdriver. It has a switch that enables it to be set to work clockwise (for tightening screws), counter-clockwise (for loosening screws), or both (acting like a standard screwdriver). One could conceivably add a fourth setting allowing the head to freely twist in both directions, but this isn’t of any use in practice. This is a silly example, but it’s the sort of thing that can happen with programs; a feature or setting is added (usually with the best of intentions) because it is easy to do, not because it is relevant or useful to the task.

If you don’t believe me, think of Clippy. And, when something goes wrong with your computer, ask yourself “does the programmer really understand what I’m trying to do?”.


* In general, it’s very unlikely a computer program was made entirely by one person. However, numerous decisions that went into its construction were made by people, and as you don’t know anything about them, you may as well assume a unified** Mr/Mrs Microsoft Word Programmer.

** If the team have been working together and communicating effectively, the program ought to feel consistent. If not, your generic programmer may come across as a bit… cuckoo.

Malicious technology (Part 1)

No, this is not related to the Heartbleed bug. There is already plenty of valuable information out there about what it is and what you should do about it, without me sticking my oar in.


I’ve noticed a certain unconscious assumption some people have (myself included) when dealing with computers*, that primarily affects how we react to something going wrong. I feel it would remove certain roadblocks if this were addressed and refuted.

It’s the belief that the computer is smarter than we are.

Now, it sounds silly to say it out loud—computers are just machines, of course they’re not smarter than people. And I don’t think it’s a position that people hold intellectually, but rather emotionally. What’s the difference? Just ask someone on a strict diet what they feel about butterscotch pudding (or whatever sugary/salty/fatty banned food it is they’re craving). Their brain says “no, it’s bad for me”, but their heart says “yummy!”.

So what effect does this have with regards to computers? When something goes wrong—a program crashes; your data mysteriously disappears; something changes and you have no idea how to get it back; you find yourself in a maze of twisty little dialog boxes, all alike; etc.—your reaction is “what have I done?!”. Tech support often reinforces this**, both overtly (“What did you do?”) and implicitly (think about the underlying tone of a lot of the available help, especially online).

There are two components to the assumption.

If I was more savvy, this wouldn’t have happened

It’s easy to see why this belief persists. It’s certainly true in a wide variety of other situations. However, even expert computer users still run into problems. Partly this is because almost no-one is an expert in all aspects of computers (electrical physics, electronics, CPU and memory architecture, BIOS, kernel, network, software, etc.). The difference tends to be that more experienced users are better able to recover from a problem (either through knowing where to look, or being able to understand the jargon). Plus, there’s the second component…

The computer doesn’t make mistakes

Again, an understandable belief. Almost every mechanism we encounter will perform the same way every time (provided it’s not faulty). Computers don’t necessarily. Or rather, they do, but, as in chaos theory, tiny differences in conditions (timing, background processes, a changed setting) can produce wildly different results. In practice, this means a correctly-behaving computer can be unpredictable, which makes it hard to get over the assumption that it did something unexpected because of something you did (wrong).

There’s more to say on the topic, but this is getting long enough, so I’ll save it for next time.


* And indeed, many other types of technology. I think the key factor is whether or not the user understands it—it’s not generally a problem with a toaster, for example. Unless you’ve got a whizz-bang, multi-function, golden-brown-sensing, iPhone-compatible toaster. And if you’re rich enough to afford one, you probably don’t make your own toast, anyway.

** To be fair, there are people who genuinely shouldn’t be put in front of a computer, and who cause no end of stress and frustration to the people who have to work with/clean up after them. The vast majority of computer users, however, are reasonably capable.