I may be biased – Rana was my fellow PhD student (and she mentions me in this book) – but I must say it’s a good read, although be warned: Rana’s story does have its dark times. Don’t expect much technical content from this book: phrases like ‘dynamic Bayesian networks’ (in Rana’s algorithm for interpreting facial expressions) are about as much detail as you’ll get, although there’s more of that in her thesis, ‘Mind-reading machines: automated inference of complex mental states’, which is available online. ‘Girl Decoded’ is Rana’s memoir, showing how she became interested in the face’s role in communication and wanted to unlock the medical potential of automated emotion detectors, especially for training people with autism (she collaborated with Simon Baron-Cohen, who had popularised the observation that autism is a spectrum rather than a single condition). Although her team later had to put this idea ‘on hold’ to focus on getting funding from market-research departments measuring test-audience reactions to advertising, they have opened their SDK for medical and safety applications – autism training, suicide-prevention support, measuring the outcome of reconstructive plastic surgery, early diagnosis of Parkinson’s disease (coded by a 15-year-old schoolgirl), healthcare-support robots, tools for self-training in speaking skills, and systems that remind inattentive drivers to stay safe. On the other hand, they refuse to let their SDK be used for surveillance purposes, turning down lucrative contracts even when the company was in trouble. The narrative is compelling and could be an inspiration to anyone wanting to get into coding, especially from a minority background.
One thing this book touches on only briefly is accuracy. No AI classifier will be right 100% of the time, and its limitations must be kept in mind to avoid ‘automation bias’ – the feeling that ‘it must be true because a computer said it’. In this field, 90% accuracy is considered good, but that still means being wrong in one case out of every ten. What surprised me in Rana’s research is that humans can be even worse: in one situation involving subtle differences, they answered wrongly as much as 30% of the time – a figure (which didn’t make it into the book) that I like to quote whenever somebody thinks a colleague hates them because they can ‘see it in their eyes’: there’s a 30% chance you’re misinterpreting. If face-reading is to be used in job interviews, Rana would rather it be done by her algorithm than by a human, and perhaps the most valuable piece of information in the book is the hint about what these algorithms will likely be set to look for.
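To put those error rates in perspective, here is a quick back-of-the-envelope sketch. The 90% and 30% figures come from the discussion above; the case count of 1000 is purely illustrative:

```python
# Illustrative only: compare a '90% accurate' classifier with a
# human who misreads subtle expressions 30% of the time.
# The error rates are from the review; the case count is arbitrary.

cases = 1000

classifier_errors = round(cases * (1 - 0.90))  # wrong in 1 case out of 10
human_errors = round(cases * 0.30)             # wrong in 3 cases out of 10

print(f"Classifier: {classifier_errors} misreadings per {cases} faces")
print(f"Human:      {human_errors} misreadings per {cases} faces")
```

Even the ‘good’ 90% classifier misreads 100 faces in every 1000 – enough that neither verdict, machine or human, should be trusted uncritically.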
While this is not the usual kind of technical book reviewed by ACCU, I think the ACCU readership would enjoy it, and in some cases may be able to pass it on to inspire someone else to take up our craft.