WhAI?

Artificial Intelligence has become something of a hot topic of late, with many different people contributing to the discussion. Some, alas, mainly reveal that they don’t really understand the topic, but that’s probably true of almost any subject (and something I’ve almost certainly been guilty of myself).

Philosophy Tube recently posted an interesting look at the ethics of AI; not getting bogged down in trolley problems and the like (i.e. how the AI should behave), but talking about some of the less-considered ways it can be harmful (e.g. the environmental cost of materials, exploitation of workers, etc.).

Their other main focus was AI perpetuating existing inequalities (like algorithms developing a racist bias because their training data is biased), making the important point that any sort of AI that attempts to divide people into categories is implicitly assuming those categories are detectable. In some cases, that assumption is valid: AI can be very good at detecting disease in x-rays/scans. But as they point out, gender is more complicated, and cannot accurately[1] be determined just by looking at someone[2].

While I agree with the point they are making (that such algorithms are discriminatory to LGBTQ+ people), the example they give doesn’t illustrate it very well. They describe CCTV cameras using facial recognition to locate a criminal, and the system dividing people along gender lines, as the criminal is male. In this context, it’s being used purely for computational efficiency[3]. Each of the people in the crowd is already on camera, and — unless they visually match the suspect — the process has no impact on them. It would be an issue in another context (like the TSA scenario they also talk about) where the categorisation affects how the person is subsequently treated.

Ultimately it all relates back to their point about the unthinking treatment of information as generic “data”. A decision that is accurate in a general (statistical) sense can be wrong in a human sense. To paraphrase an IBM presentation from the 1970s: a computer cannot be held accountable, so should never make the final decision. AI is a tool, not a solution.


[1] Due to the variety in people’s features, physiques, clothing, presentation, etc., the different categories will have significant overlap.

[2] I saw a news article a few years ago talking about the use of AI in medicine. An interviewed expert was raving about how doctors couldn’t tell from a retina scan whether the patient was male or female, but AI could with near-perfect accuracy. Now, that’s a snazzy result that probably helped justify a research paper, but I fail to see the value to the patient. The AI should be focusing on detecting early signs of glaucoma, or something else that can’t be determined by just asking the patient.

[3] I have my doubts that it would be used this way, as — given the complications noted earlier — processing a face to assign gender is probably not any quicker than processing a face to compare to a target.
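The cost argument in footnote 3 can be sketched in code. This is a toy illustration (all names and the “network” are hypothetical stand-ins, not any real system): in a typical pipeline, both gender classification and suspect matching start from the same face embedding, and producing that embedding is the expensive step. Once you have it, classifying gender and comparing to the target each cost roughly one dot product, so a gender pre-filter saves essentially nothing.

```python
import numpy as np

# Hypothetical stand-in for a deep network. In a real system this step
# dominates the cost of BOTH pipelines below.
def embed(face_pixels):
    proj = np.ones((4, 128)) / 4  # placeholder "network" weights
    return face_pixels @ proj

def matches_target(embedding, target_embedding, threshold=0.9):
    # Direct comparison: one cosine similarity per face.
    sim = embedding @ target_embedding / (
        np.linalg.norm(embedding) * np.linalg.norm(target_embedding))
    return sim >= threshold

def predicted_gender(embedding, classifier_weights):
    # Gender pre-filter: also about one dot product per face,
    # so no cheaper than comparing to the target directly.
    return "male" if embedding @ classifier_weights > 0 else "female"
```

Both functions consume the same embedding, which is the point: filtering the crowd by predicted gender before matching doesn’t skip the costly part of the computation.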
