There’s a lot to chew on in this short article (ht @ajsadauskas):
https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination
“An AI resume screener…trained on CVs of employees already at the firm” gave candidates extra marks if they listed male-associated sports, and docked them for listing female-associated ones.
Bias like this is enraging, but completely unsurprising to anybody who knows half a thing about how machine learning works. Which apparently doesn’t include a lot of execs and HR folks.
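Here’s a minimal sketch of why this outcome is baked in (hypothetical data and feature names, scikit-learn; not the actual screener from the article): train a classifier on “who we already hired,” and it learns whatever proxies correlate with those past decisions, including the sports listed on a CV.

```python
# Hypothetical illustration: a screener trained on historical hiring decisions
# inherits the bias that produced those decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Made-up CV features: experience, a relevant skill, and two sports.
years = rng.integers(0, 15, n)
skill = rng.integers(0, 2, n)
rugby = rng.integers(0, 2, n)      # stand-in for a male-associated sport
netball = rng.integers(0, 2, n)    # stand-in for a female-associated sport

# Historical "hired" labels: merit matters, but past hiring also favored
# candidates who resembled the existing workforce (the human bias).
merit = 0.3 * years + 2.0 * skill
bias = 1.5 * rugby - 1.5 * netball
hired = (merit + bias + rng.normal(0, 1, n)) > 4.0

X = np.column_stack([years, skill, rugby, netball])
model = LogisticRegression().fit(X, hired)

# The model dutifully reproduces the bias: positive weight on rugby,
# negative weight on netball, even though neither predicts performance.
for name, w in zip(["years", "skill", "rugby", "netball"], model.coef_[0]):
    print(f"{name:8s} {w:+.2f}")
```

Nothing here is malicious code; it’s just optimization faithfully copying the patterns in its training data.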
1/
The student had a dilemma: she had to present her research, but the results sucked! The project failed! She was embarrassed! Should she try to fix it at the last minute?? Rush a totally different project?!?
I nipped that in the bud. “You have a •great• presentation here.” Failure is fascinating. Bad results are fascinating. And people •need• to understand how these AI / ML systems break.
4/
AI’s Shiny New Thing promise of “your expensive employees are suddenly replaceable” is just too much of a candy / crack cocaine / FOMO lure for business leaders desperate to cut costs. Good sense cannot survive the onslaught.
Lots of businesses right now are digging themselves into holes that they’re going to spend years climbing out of.
9/
This is how competitive systems learn: the language of death. In this case corporate death.
Politics careens from one failure to the next. Movement death, often with something learned, but the lesson only lasts a while.
Biological evolution: same thing. Species death.
Medicine too, though we work very hard to deny it. Death.
Technology is wrong more often than right. Progress still happens because the burst bubbles guide us, violently.