Techdirt reports on an artificial intelligence security system that flagged an innocent parent as a registered sex offender, despite numerous mismatches in the data.
Schools and other bureaucracies are already run by people with little to no common sense, an infantile belief that security can be provided by something that requires no thought or effort from them, and irrational fears that lead them to run around solving problems that don't actually exist so they can act busy and look like they take security seriously. Bruce Schneier calls this security theater. It would be security comedy, except innocent people get hurt. It's not going to end well. The system in question is called Raptor.
Raptor says its system is reliable, stating that it returned only one false positive in that county last year. (And now the number has doubled!) That's heartening, but the count will only grow as deployment expands. Raptor's self-assessment may be accurate, but statements about the certainty of its search results are hardly useful.
The company’s sales pitch likely includes its low false positive rate, which, in turn, leads school personnel to believe the system rather than the person standing in front of them — one who bears no resemblance (physical or otherwise) to the registry search results. Mitchell still isn’t allowed into the building without a security escort and is hoping that presenting school admins with his spotless criminal background check will finally jostle their apparently unshakeable belief in Raptor’s search results.
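The reason a "low false positive rate" pitch is so misleading is the base rate: almost nobody walking into a school is actually on the registry, so even a tiny error rate means most matches are wrong. A minimal sketch of the arithmetic, using entirely made-up numbers (none of these figures come from Raptor):

```python
# Hypothetical rates for illustration only -- not Raptor's actual numbers.
true_positive_rate = 0.99    # system correctly flags a registered offender 99% of the time
false_positive_rate = 0.001  # system wrongly flags an innocent visitor 0.1% of the time

# Assume 1 in 10,000 school visitors is actually on the registry (also hypothetical).
prevalence = 1 / 10_000

# Bayes' rule: probability a flagged person is actually on the registry.
p_flag = true_positive_rate * prevalence + false_positive_rate * (1 - prevalence)
p_offender_given_flag = (true_positive_rate * prevalence) / p_flag

print(f"{p_offender_given_flag:.1%}")  # roughly 9% -- most flagged people are innocent
```

Under these assumptions, over 90% of matches are innocent people, even though the system is "right 99% of the time." The sales pitch quotes the error rate per scan; the parent standing at the door experiences the error rate per match.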
Imagine. You’re standing at your child’s school. The security system says you’re a criminal. Your picture doesn’t match. Your birth date doesn’t match. Your physical description doesn’t match. The idiot running the system says you’re a criminal because the system says so.
This is how it’s going to go.