Programmatic Empathy
The other day I was trying to help someone out on the Measure Slack (a great place to ask and answer analytics questions!) and a robot told me off for using the word 'pop'.
As you're a person (unless you're a web crawler, in which case: hi. how's it going? if you feel like telling your boss to bump me up the rankings a little I'd be much obliged), you can probably use your ability to interpret language to see that I used 'pop' as a colloquialism for 'put' rather than to mean 'father'. I imagine there are text analysis models that would allow a bot to make that distinction, but the nature of these bots (instantly responding to every message containing certain strings) means they probably couldn't leverage them without surmounting some technical and infrastructural hurdles, and hence they're currently consigned to a state of not quite getting it.
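To make the problem concrete, here's a minimal sketch of how I assume these bots work: a watched-word list and a substring check, with no notion of how a word is actually being used. The word list, wording and function names here are mine, not the vendor's.

```python
# Naive keyword-nudge bot: flag any message containing a watched string.
# Illustrative only - the terms and advice text are made up.
FLAGGED_TERMS = {
    "pop": "Consider a gender-neutral word for a parent.",
    "guys": "Consider a gender-neutral collective noun, e.g. 'folks'.",
}


def nudge(message: str) -> list[str]:
    """Return a canned nudge for every flagged term found in the message."""
    lowered = message.lower()
    return [advice for term, advice in FLAGGED_TERMS.items() if term in lowered]


# Both of these trigger the identical response, because a substring check
# can't tell 'pop' meaning 'put' from 'pop' meaning 'father':
print(nudge("Could you pop the tracking snippet in the header?"))
print(nudge("My pop is in hospital, so I'll be offline for a while."))
```

Both messages get exactly the same telling-off, which is the 'not quite getting it' problem in a nutshell.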
I found myself reminded of being in my old job, messing around with the Slack autoresponder to make it do silly in-jokey things for my friends. At one point I was considering doing one that would respond with Borat custom emojis and the words "VERY NICE" whenever anyone used the words "my wife" (I'm a funny guy). I reconsidered because, after thinking about it for a few seconds, I realised that the amusement would stop pretty quickly the first time someone said "I have to take my wife to hospital urgently, and I won't be online for a while" or something like that. I reckon I'd not be best pleased to get a response like the one above if I'd sent a message like that about my father. You end up in a situation where a tool designed to (I guess) nudge people into greater consideration can, because of the bot's inability to understand context, itself be incredibly inconsiderate. On the other hand, it led to one of the best tweets ever:
Even if I had been talking about my father, though, I'm not really sure who on my team would feel excluded by my use of the word 'pop', or why. The "guys I'm stuck in the wework lift" tweet is actually a useful comparison: that's a case where I do see the argument, and while I don't beat myself up if I forget, I generally try my best to use gender-neutral collective nouns for mixed groups. With this, though, I'm not sure who's going to feel excluded by a reference to a gendered rather than a non-gendered parent. Does it do the same thing for "dad" or "father" too? I genuinely don't get it.
More broadly, I think that attempting to engender empathy programmatically like this may come from a good place, but is generally wrong-headed in a few fundamental ways. One is technical, as outlined above: a lack of contextual awareness means there are a lot of ways it can go wrong, potentially very insensitively. Another is that requests for people to change are just better communicated by other people, ideally people you've already got a relationship with. The 'guys' thing was actually introduced to me by a female friend years ago, and it stuck more because of our existing personal connection. More fundamentally, though, I have a problem with the idea that this kind of linguistic approach is the best way to achieve change.
I often think of something I read once: there was a great deal of debate around the founding of the United States as to what the title for their leader should be. They didn't want to make George Washington a king, and all sorts of lofty titles were being bandied about. They settled on 'president', a pretty low-key title that was used for someone who ran meetings (think the foreperson of a jury). This was objected to at the time on the basis that when the new country was negotiating treaties with other countries and their emperors and queens and suchlike, no-one would take a title like "President of the United States" seriously.
It's an idea that really stuck with me: by and large, reality (and particularly power) determines meaning. People take the President of the United States seriously not because of the word used to refer to the office but because of the power of the office. The word "President" has since taken on the colloquial meaning of "person who's in charge". The meaning flowed the other way.
If you start with language as your vector for inclusivity, it's only going to be taken on board, if at all, by people who give a toss about inclusivity in the first place. (Ideally you'd only be hiring those people anyway, but I guess that's difficult to screen for, and big companies are big.) The people who are materially disadvantaging women and minorities in the workplace are doing it by not offering jobs and promotions to people who aren't white, or by paying women less than men. If they're told not to say 'guys', they're going to nod and ignore it, scoff and ignore it, or, most likely, agree and then carry on exactly as before, with added cover. Talk is cheap.
If you're worried about how inclusive your organisation is towards various groups (this applies far less to volunteer-run Slacks, but the bot is clearly built for businesses), give those groups more power and fix the material problems they face, rather than focusing on moderating the fringes of people's language use. Obviously it's not a complete either-or, but the existence of this kind of thing suggests to me that effort expended on the latter is largely ineffective, and ends up being an excuse to do less of the (more expensive) former.
A fun addendum: I just went looking for the website of the bot in question, to see whether the makers had any justification for why they think 'pop' isn't inclusive language. They didn't, but they did have their pricing plans:
$200 a month for a bot I could knock together in half a day that tells you not to say "pop". I take back all that stuff about it coming from a good place: this is an absolute racket, and to whoever's running this, I hope you're rinsing these big companies' HR departments for all they're worth.
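If you're wondering what half a day buys you, here's a rough sketch of the whole thing using Slack's Bolt library for Python in Socket Mode. I'm guessing at how the real product works; the word list, wording and token names below are mine, not theirs.

```python
import os
import re

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

# Hypothetical watched words - the real product's list is anyone's guess.
NUDGES = {
    "pop": "How about a gender-neutral word for a parent?",
    "guys": "How about a gender-neutral collective noun, like 'folks'?",
}

for term, advice in NUDGES.items():
    # app.message() registers a listener that fires on any message whose
    # text matches the pattern - no context, just a regex search.
    @app.message(re.compile(rf"\b{term}\b", re.IGNORECASE))
    def nudge(message, say, advice=advice):
        # Reply in a thread on the offending message with the canned advice.
        say(text=advice, thread_ts=message["ts"])


if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```

Wire that up to a pricing page and you're most of the way to $200 a month.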