Should Police Use Computers To Predict Crimes And Criminals?

Legal action brought against law enforcement agencies over Technocrat-minded pre-crime programs is gaining traction around the nation. The idea that man can accurately predict the future is nonsense, but the allure is too strong for police departments to say no. ⁃ TN Editor

Years of secrecy by America’s police departments about their use of computer programs predicting where crimes will occur, and who will commit them, are under fire in legal cases nationwide.

The largest departments — New York, Chicago and Los Angeles — are all being sued for not releasing information about their “predictive policing” programs, which use algorithms to crunch data and create lists of people and neighborhoods for officers to target. Some smaller departments also have been brought to court and before public records agencies.

A top concern, advocates say, is that the computer programs perpetuate the problem of minorities being arrested at higher rates than whites. If arrest and crime location data that show such biases are fed into the algorithms, they argue, police will continue targeting minorities and minority neighborhoods at higher rates.
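The feedback loop critics describe can be made concrete with a toy simulation. The sketch below is a hypothetical model in the spirit of academic analyses of such loops, not any department’s actual system: two neighborhoods have the same underlying crime rate, but the arrest history starts out skewed, and because arrests can only be recorded where officers patrol, the initial imbalance never washes out.

```python
import random

# Toy model of the alleged feedback loop (hypothetical, not any
# department's actual system): two neighborhoods with IDENTICAL true
# crime rates, but patrols allocated in proportion to past arrests.
random.seed(42)

TRUE_CRIME_RATE = 0.3          # same everywhere, by construction
arrests = {"A": 10, "B": 5}    # biased starting data: A over-recorded

for _ in range(10_000):
    total = arrests["A"] + arrests["B"]
    # Send today's patrol where the recorded arrests are.
    patrolled = "A" if random.random() < arrests["A"] / total else "B"
    # Crime occurs at the same rate in both places, but it is only
    # recorded where officers happen to be looking.
    if random.random() < TRUE_CRIME_RATE:
        arrests[patrolled] += 1

print(arrests)  # the initial skew in recorded arrests never washes out
```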

Several groups and organizations have taken police agencies to court in an effort to find out what data is being fed into the programs, how the algorithms work and exactly what the end results are, including which people and areas are on the lists and how police are using the data.

“Everybody is trying to find out how it works, if it’s fair,” said Jay Stanley, a senior policy analyst for the American Civil Liberties Union. “This is all pretty new. This is all experimental. And there are reasons to think this is discriminatory in many ways.”

The programs are developed by private companies such as Palantir and PredPol and can tell police where and when crimes are likely to occur by analyzing years of crime location data. Other, more criticized programs produce lists of likely criminals and victims based on people’s criminal history, age, gang affiliation and other factors.
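In outline, place-based prediction reduces to ranking locations by modeled risk. The following is a deliberately naive sketch, nothing like the vendors’ proprietary models, that simply buckets historical incidents into grid cells and hours and flags the busiest buckets; the incident data shown is invented for illustration.

```python
from collections import Counter
from datetime import datetime

# Deliberately naive sketch of place-based prediction (illustration
# only, not any vendor's model): bucket past incidents by grid cell
# and hour of day, then flag the busiest buckets for the next shift.

# Invented example data: (latitude, longitude, timestamp) per incident.
incidents = [
    (34.052, -118.243, datetime(2018, 6, 1, 23, 15)),
    (34.051, -118.244, datetime(2018, 6, 2, 22, 40)),
    (34.080, -118.200, datetime(2018, 6, 3, 9, 5)),
]

CELL = 0.005  # grid resolution in degrees (roughly 500 m)

def bucket(lat, lon, ts):
    """Map an incident to a (grid-x, grid-y, hour-of-day) bucket."""
    return (round(lat / CELL), round(lon / CELL), ts.hour)

counts = Counter(bucket(*row) for row in incidents)

# The "hot spots" are simply the most-counted buckets. Note the bias
# critics raise: this ranks *recorded* crime, not actual crime.
for cell, n in counts.most_common(3):
    print(cell, n)
```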

Some cities are spending hundreds of thousands of dollars, even millions, on predictive policing programs, with many of the costs paid for by state and federal law enforcement grants. Several dozen U.S. police departments use some form of predictive policing, and more than a hundred others are considering or planning to start such programs, according to counts and estimates by different groups.

Police officials say they can’t release some information about their predictive programs because of citizen privacy and safety concerns and because some data is proprietary. The programs are helping to reduce crime and better deploy officers in a time of declining budgets and staffing, they argue.

Some studies have arrived at conflicting conclusions about whether predictive policing is effective or biased, but there has not been definitive research yet, experts say.

Critics say they’ve already seen what they believe is evidence of biases in predictive policing, including increased arrests in neighborhoods heavily populated by blacks and Latinos and people on computer-generated lists being repeatedly harassed by police.

Mariella Saba believes predictive policing labeled her Los Angeles neighborhood, Rose Hill, as a crime hot spot because she has seen heavy law enforcement activity. Friends and neighbors, many of them Latino, have been stopped by police multiple times, she said.

One friend, Pedro Echeverria, was shot three times by a police officer last year but survived. Prosecutors ruled the shooting justified, saying Echeverria had a gun and fought with officers. Police said they decided to stop him as he was walking on a street because he was in Rose Hill, a “known hangout” for gang members, according to a prosecutor’s report.

“It’s traumatic. It creates trauma,” Saba, 30, said of the increased police activity. “I know better to never normalize this or see this as normal. I’m about to burst.”

Saba said she can’t be certain whether Rose Hill is the subject of predictive policing because police won’t release that information. A group she co-founded, the Stop LAPD Spying Coalition, sued the police department in February seeking data about its program.

The LAPD has released some data to the group but hasn’t handed over other information, including copies of “chronic offender bulletins” that list people of interest to police. The lawsuit remains pending.

Read full story here…

See also, Minority Report PreCrime Test Claims Success; Seeks Expansion Worldwide




Anonymous Users Can Be Identified On Twitter With 96.7% Accuracy

The argument that metadata leaves identities concealed is a myth of epic proportions, and always has been. Attempting to be anonymous is futile. In Twitter’s case, the company is using metadata analysis to determine which accounts are fake and which are real; fake accounts are being deleted en masse. ⁃ TN Editor

Even if you think you’re browsing Twitter “anonymously,” machine learning algorithms can still pinpoint you in a crowd of 10,000 other users using metadata associated with your posts, according to a new study.

“Metadata” refers to data about other data. In the context of a Twitter post, this includes the date and time of the post, the number of characters in it, the device it was posted from, its grammatical style, the location it was posted from, and a host of other markers. The average tweet contains about 144 pieces of metadata.
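To make that concrete, here is a sketch of what pulling metadata features from a tweet might look like. The field names follow Twitter’s classic REST API payload (created_at, source, coordinates and the counters on the embedded user object); the selection is an illustrative subset, not the study’s exact 144-field inventory.

```python
def metadata_features(tweet: dict) -> dict:
    """Extract a small, illustrative subset of tweet metadata.

    Field names follow Twitter's classic REST API payload; this is a
    sketch of the kind of feature vector involved, not the study's
    exact 144-field inventory.
    """
    user = tweet.get("user", {})
    return {
        "created_at": tweet.get("created_at"),       # date and time
        "source": tweet.get("source"),               # posting client/device
        "length": len(tweet.get("text", "")),        # characters in post
        "coordinates": tweet.get("coordinates"),     # location, if shared
        "followers_count": user.get("followers_count"),
        "friends_count": user.get("friends_count"),
        "statuses_count": user.get("statuses_count"),
        "favourites_count": user.get("favourites_count"),
        "account_created": user.get("created_at"),   # account age proxy
    }
```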

Using machine learning, researchers at University College London and the Turing Institute have developed a method of identifying individual users with 96.7% accuracy using metadata alone. Even if your handle is “LibPwner2016,” the metadata can still reveal who you are. And most of that metadata is accessible through Twitter’s API.
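Framed as code, identification is a multi-class classification problem: each account is a class, and each tweet’s metadata vector is a training example. The sketch below uses synthetic features and a scaled-down population, and picks a random forest as one plausible classifier; it is a stand-in for the idea, not the researchers’ pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real metadata: each "user" has a characteristic
# distribution of numeric features, and each tweet is a noisy sample of
# it. Scaled down from the study's 10,000 users for a quick demo.
rng = np.random.default_rng(0)
n_users, tweets_per_user, n_features = 100, 50, 20
centers = rng.normal(size=(n_users, n_features))
X = np.repeat(centers, tweets_per_user, axis=0) + rng.normal(
    scale=0.5, size=(n_users * tweets_per_user, n_features)
)
y = np.repeat(np.arange(n_users), tweets_per_user)  # account labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Each account is a class; a prediction names the likeliest author of
# an "anonymous" tweet from its metadata alone.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("identification accuracy:", clf.score(X_test, y_test))
```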

The experiment was run on Twitter, but the researchers say that the same methods can be used to test privacy on other platforms.

“The methods described in this work can be applied to a vast class of platforms and systems that generate metadata with similar characteristics”, conclude the researchers.

This is bad news for Facebook, which has spent much of this year dealing with national scrutiny after repeated scandals involving the loss of sensitive user data to third parties.

Read full story here…




Witness Protection Scheme For Whistle-Blowers Exposing Big Tech ‘Wrongdoing’

As the old adage goes, “Necessity is the mother of invention”, and in this case, the need for Big Tech whistle-blowers to ‘come out’ has spawned an organization that will help them with legal fees and other support. It will be interesting to see if any insiders actually turn whistle-blower. ⁃ TN Editor

Whistleblowers are being offered a “witness protection scheme” to expose “wrongdoing” in the technology industry. An American not-for-profit organisation founded by a French entrepreneur and philanthropist has said it will provide individuals working within “big data” with financial and legal support if they are able to provide information that shows how the public is being “harm[ed], exploited or misled”.

The Signals Network, which was set up last year, is working with a consortium of journalists around the world and aims to provide assistance to potential whistle-blowers to ensure that powerful corporations can be investigated.

Newspapers and websites in America and Europe, including The Telegraph, have issued a “call for information” to people working in “big data” who are able to show how the public are being misled or that the information they have provided is being misused.

Other organisations involved in the project include Mediapart, which was set up by the former editor of Le Monde; Die Zeit in Germany; The Intercept; and WikiTribune.

The reporters will work together to examine information that is provided and a committee will decide whether potential sources have provided sufficiently strong information to warrant support from the organisation.

In recent years, concerns have arisen about the role of technology companies and how “big data” may be being misused by firms.

Most prominently, it emerged this year that a Cambridge University academic had harvested data from millions of Facebook users through a personality quiz app. The academic then allegedly passed the data to a company called Cambridge Analytica, in violation of Facebook’s rules and without Facebook knowing.

It also emerged that Cambridge Analytica harvested data on 50 million Americans without their permission and failed to ensure the data was deleted – it was allegedly used to develop an algorithm used in the US presidential election to target voters for the Trump campaign.

The controversy led to more than $36 billion (£26 billion) being wiped off the value of Facebook, as investors reacted to the revelations. The firm has denied that the data available to Cambridge Analytica constituted a data breach, and has denied any wrongdoing.

Earlier this month, experts said that social media and online gaming firms should have a “duty of care” to protect children from mental ill health, abuse and addictive behaviour, amid concerns that social media firms are cynically targeting children using addictive “hooks”.

Read full story here…