Child Protective Services to Use AI, Big Data To Predict Child Abuse

Pre-crime is taken to a new extreme to smoke out child abuse situations, but to get there, millions of citizens will suffer privacy abuse. Many people flagged as false positives will also be wrongfully abused. Is this trading one kind of abuse for another? Indeed, Technocrats have created an ethical firestorm. ⁃ TN Editor

In an age of austerity, and a climate of fear about child abuse, perhaps it is unsurprising that social workers have turned to new technology for help.

Local authorities – which face spiralling demand and an £800m funding shortfall – are beginning to ask whether big data could help to identify vulnerable children.

Could a computer program flag a problem family, identify a potential victim and prevent another Baby P or Victoria Climbié?

Years ago, such questions would have been the stuff of science fiction; now they are the stuff of science fact.

Bristol is one place experimenting with these new capabilities, and grappling with the moral and ethical questions that come with them.

Gary Davies, who oversees the council’s predictive system, can see the advantages.

He argues that it does not mean taking humans out of the decision-making process; rather it means using data to stop humans making mistakes.

“It’s not saying you will be sexually exploited, or you will go missing,” says Davies. “It’s demonstrating the behavioural characteristics of being exploited or going missing. It’s flagging up that risk and vulnerability.”

Such techniques have worked in other areas for years. Machine learning systems built to mine massive amounts of personal data have long been used to predict customer behaviour in the private sector.

Computer programs assess how likely we are to default on a loan, or how much risk we pose to an insurance provider.

Designers of a predictive model have to identify an “outcome variable”, which indicates the presence of the factor they are trying to predict.

For child safeguarding, that might be a child entering the care system.

They then attempt to identify characteristics commonly found in children who enter the care system. Once these have been identified, the model can be run against large datasets to find other individuals who share the same characteristics.
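To make the workflow concrete, the sketch below walks through the same steps in Python: define an outcome variable (a child entering care), train a classifier on historical cases, then score a wider dataset and flag high-risk records for human review. It is purely illustrative; the data is synthetic, the feature names are hypothetical stand-ins for the indicators the article mentions, and it does not represent the actual models used by Thurrock, Hackney or Xantura.

```python
# Illustrative sketch only: synthetic data and hypothetical feature names,
# not any council's or vendor's actual model or data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical indicator features of the kind the article describes
# (history of domestic abuse, youth offending, truancy).
df = pd.DataFrame({
    "domestic_abuse_history": rng.integers(0, 2, n),
    "youth_offending": rng.integers(0, 2, n),
    "truancy_rate": rng.random(n),
})

# Outcome variable: whether the child entered the care system
# (generated at random here purely so the example runs end to end).
df["entered_care"] = rng.integers(0, 2, n)

X = df.drop(columns="entered_care")
y = df["entered_care"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple classifier on historical cases...
model = LogisticRegression().fit(X_train, y_train)

# ...then score the wider population and flag individuals whose
# characteristics resemble those of past cases, for review by a social worker.
risk_scores = model.predict_proba(X_test)[:, 1]
flagged = X_test[risk_scores > 0.7]
print(f"{len(flagged)} cases flagged for human review")
```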

The Guardian obtained details of all predictive indicators considered for inclusion in Thurrock council’s child safeguarding system. They include history of domestic abuse, youth offending and truancy.

More surprising indicators such as rent arrears and health data were initially considered but excluded from the final model. In the case of both Thurrock, a council in Essex, and the London borough of Hackney, families can be flagged to social workers as potential candidates for the Troubled Families programme. Through this scheme councils receive grants from central government for helping households with long-term difficulties such as unemployment.

Such systems inevitably raise privacy concerns. Wajid Shafiq, the chief executive of Xantura, the company providing predictive analytics work to both Thurrock and Hackney, insists that there is a balance to be struck between privacy rights and the use of technology to deliver a public good.

Read full story here…
