"AI," my rear end; they're chatbots and related utilities. Very sophisticated, neural-network-driven chatbots, but they don't quite pass the Turing test, not yet. Thery may be able to sort your socks, but they stumble trying to sort your photographs And there's been an off-and-on flap, because when you release a chatbot into the wild -- or into the neticulture of the U.S. and European-dominated Internet -- it tends to come back sounding and behaving more than a little racist. It comes back able to tell white people from one another with ease, but struggles with women and non-white people, and if you're in both categories, well, good luck.
Chatbots (etc.) don't have a conscience; they don't know right from wrong and they can't read subtext or social cues. They don't know the difference between fourteen-year-old punks talking big and a symposium (ancient Greek for "drinking party," BTW) of Methodist pastors. Most normal people are aware of this, and aware that social media is dominated by the biggest mouths with the greatest amount of free time. Chatbots are not. They also don't know demographics; the general run of software engineers is younger, whiter and more male than the general population, and it's very easy for them to write software and assemble data sets that reflect the lab rather than the country or the world, and never notice.
Of course this has become politicized -- hasn't everything? The New York Times ran an article about it on 15 March (you're on your own with their paywall, kids, just know there is a way) and, predictably, the more conspiracy-theory-inclined pro-Trump crowd* has decided that if The Times is agin it, they will be fer it -- and therefore, according to them, if the AI says an all-Black population is problematic, it must be true.
There are a number of defects in this, starting with the assumption that the NYT has less chance of being right than a blind sow does of finding an ear of corn. (I figure they're at least as lucky as an average female hog.) Then there's "The computer must be right," a flashback to attitudes of the late 1950s or early '60s, when the machines were tantamount to magic for many people. They weren't then and they aren't now; at best, they're only as clever as the people who write the code that runs on them, and no less subject to bias. But best of all, let's talk about The Bahamas: one of the richest countries in the Western hemisphere, right up there with the U.S. and Canada; they've got 95% literacy...and the population is 90% Black. Nope, sorry; it only takes a single counterexample to demolish the assumption. The AI's wrong -- and so are you, if you agreed with it.
We can build fair-minded AI. Even a random sample of Silicon Valley software engineers, an overwhelmingly young, Caucasian, nerdy and male group, can do so -- if they're aware of the need for it and make the effort. If they don't write lazy code. If they pick their input data to model the general population, and not themselves and their co-workers.
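To make that last point concrete, here's a minimal sketch of the kind of sanity check a training set ought to get before anybody hits "go": compare who's actually in your sample against who's in the population you claim to be modeling, and yell when a group is badly under-represented. This is Python, the group labels and reference numbers are made up for illustration, and it's not from any particular toolkit -- it's just the idea, spelled out.

from collections import Counter

# Hypothetical reference shares for the population you intend to serve
# (e.g. census figures) -- these numbers are illustrative only.
REFERENCE = {"white": 0.60, "black": 0.13, "asian": 0.06, "hispanic": 0.19, "other": 0.02}

def representation_report(sample_labels, reference=REFERENCE, tolerance=0.5):
    """Compare group shares in a training sample to reference shares.

    Flags any group whose share of the sample falls below `tolerance`
    times its share of the reference population.
    """
    counts = Counter(sample_labels)
    total = sum(counts.values())
    flagged = []
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total if total else 0.0
        print(f"{group:10s} sample: {observed:6.1%}   reference: {expected:6.1%}")
        if observed < tolerance * expected:
            flagged.append((group, observed, expected))
    return flagged

# Example: a "lab-shaped" sample that over-represents one group.
sample = ["white"] * 850 + ["asian"] * 100 + ["black"] * 30 + ["hispanic"] * 20
for group, observed, expected in representation_report(sample):
    print(f"WARNING: '{group}' is under-represented ({observed:.1%} vs. {expected:.1%} expected)")

Trivial? Sure. But it's exactly the kind of check a lazy pipeline skips, and it costs a lot less to run it now than to find out after the thing ships.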
It's not a culture war. It's good program-writing.
Hello, world.
___________________________
* Which is to say, "Q-adjacent," analogous to the "...but they sure had kewl tanks!" fanboys of WW II Germany or Stalin's USSR. I'm not impressed.