Illinois Scraps Child Abuse Prediction Software for Not 'Predicting Much'
This article is from gizmodo.com. The original URL is: https://gizmodo.com/illinois-scraps-child-abuse-prediction-software-for-not-1821080730
Illinois is ending an algorithmic child abuse prevention program after the data-mining software failed to flag at-risk kids who died, while swamping caseworkers with alerts that thousands of other children were at imminent risk of death.
As the Chicago Tribune reports, the Rapid Safety Feedback program regularly overestimated the likelihood of abuse in many cases while failing to predict actual deaths. The state partnered with the Florida-based non-profit Eckerd Connects to rank children’s risk of death within two years on a scale of 1 to 100 after an abuse allegation. The newspaper found that 4,100 children were assigned a “90 percent” probability of death or serious injury, while 369 were found to have a “100 percent” chance.
Not among them: Itachi Boyle or Semaj Crosby, both under two years old, and both dead following multiple abuse allegations. The software did not flag their cases.
“Predictive analytics [wasn’t] predicting any of the bad cases,” Illinois Department of Children and Family Services director Beverly Walker told the Tribune. “We are not doing the predictive analytics because it didn’t seem to be predicting much.”
Tribune reporters found multiple data entry errors in both the Boyle and Crosby cases. Additionally, neither of their files included information about the welfare of their siblings or about pending investigations into the adults in their homes.
Eckerd’s software predicts harm by analyzing closed child abuse cases and extracting data points that correlate with abuse and serious injury. These can include parents’ criminal or drug history and whether the child lives in a single-parent or two-parent home. As is routinely the case, the algorithm itself is proprietary, meaning neither parents nor caseworkers know with any specificity what math powers it, which data points affect a child’s risk score, or how those points are weighted in the overall prediction.
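Because the real model is proprietary, nobody outside Eckerd knows its actual form. A minimal sketch of what a system like the one described above could look like is below; the feature names, weights, and the logistic scoring function are all illustrative assumptions, not Eckerd's actual model.

```python
import math

# Hypothetical binary features extracted from closed case files.
# These names and weights are invented for illustration only.
WEIGHTS = {
    "parent_criminal_history": 1.2,
    "parent_drug_history": 1.5,
    "single_parent_home": 0.6,
    "prior_abuse_allegation": 2.0,
}
BIAS = -4.0  # assumed baseline log-odds

def risk_score(case: dict) -> int:
    """Map a case's features to a 1-100 risk score via a logistic function."""
    logit = BIAS + sum(w for feature, w in WEIGHTS.items() if case.get(feature))
    probability = 1 / (1 + math.exp(-logit))
    # Rescale to the 1-100 scale the article describes.
    return max(1, min(100, round(probability * 100)))

# A case with two flagged features scores higher than one with none.
print(risk_score({"parent_drug_history": True, "prior_abuse_allegation": True}))
```

Even in this toy version, the opacity problem is visible: the score a caseworker sees reveals nothing about which features drove it or how they were weighted, and the weights themselves would be fit to closed cases that embed past caseworkers' judgments.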
While algorithms have a veneer of objectivity, researchers have pinpointed how human decision-making can reinforce biases. In this case, software predicts potential abuse based on closed child abuse cases. But landmark research in Pediatrics has found a complicated relationship between race and child abuse. When investigating the disproportionately high number of child abuse cases against black families, researcher Brett Drake found no evidence of racial biases from caseworkers—neglect is legitimately more common in black households.
“The problem is not that [Child Protective Services] workers are racists,” Drake told Reuters Health in 2011. “The problem is that huge numbers of black people are living under devastating circumstances. Mitigating poverty, and the effects of poverty, would be the most powerful way to reduce child maltreatment.”
There’s no easy way to disentangle the harmful ways poverty manifests—as single parenthood, food and housing insecurity, or unemployment—from malicious neglect. It’s a complex, subjective process, and thus the closed cases powering the algorithm reflect the decisions and biases of their caseworkers, for better or worse.
Algorithmic prediction can be a useful way to process cases at scale—Eckerd scored thousands of cases, many more than any single person could—but even at its best it is still subject to faulty reasoning, overblown diagnoses, and, most heartbreakingly, overlooking crucial clues.