UK Pedestrian Fined $115 For Avoiding Facial Recognition Camera

Britain has privacy laws similar to those in the U.S., but that didn’t restrain police from stopping and fining a resident for covering his face to avoid being photographed by an AI camera on a public street. Every pedestrian was being photographed and compared to a master database of wanted persons. ⁃ TN Editor

Police fined a pedestrian £90 for disorderly behaviour after he tried to cover his face when he saw a controversial facial recognition camera on a street in London.

Officers set up the camera on a van in Romford, East London, which then cross-checked photos of faces of passers-by against a database of wanted criminals.

But one man was unimpressed about being filmed and covered his face with his hat and jacket, before being stopped by officers who took his picture anyway.

After being pulled aside, the man told police: ‘If I want to cover me face, I’ll cover me face. Don’t push me over when I’m walking down the street.’

It comes just weeks after it was claimed the new technology incorrectly identified members of the public in 96 per cent of matches made between 2016 and 2018.

The cameras have been rolled out in a trial in parts of Britain, with the Met making its first arrest last December when shoppers in London’s West End were scanned.

But their use has sparked a privacy debate, with civil liberties group Big Brother Watch branding the move a ‘breach of fundamental rights to privacy and freedom of assembly’. Police argue they are necessary to crack down on spiralling crime.

Officers previously insisted people could decline to be scanned, before later clarifying that anyone trying to avoid scanners may be stopped and searched.

It was first deployed by South Wales Police ahead of the Champions League final in Cardiff in 2017, but wrongly matched more than 2,000 people to possible criminals.

Police and security services worldwide are keen to use facial recognition technology to bolster their efforts to fight crime and identify suspects.

But they have been hampered by the unreliability of the software, with some trials failing to correctly identify a single person.

The technology made incorrect matches in every case during two deployments at Westfield shopping centre in Stratford last year, according to Big Brother Watch. It was also reportedly wrong in 96 per cent of matches across eight uses by the Met from 2016 to 2018.
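The "96 per cent" figure above is a false-match rate: the share of system alerts that did not turn out to be the person flagged. A minimal sketch of that arithmetic, using purely illustrative numbers (not the Met's raw data):

```python
def false_match_rate(alerts: int, confirmed: int) -> float:
    """Share of system alerts that were NOT genuine identifications."""
    if alerts == 0:
        raise ValueError("no alerts recorded")
    return (alerts - confirmed) / alerts

# Hypothetical trial: 104 alerts raised, only 4 identities confirmed.
rate = false_match_rate(alerts=104, confirmed=4)
print(f"{rate:.0%}")  # → 96%
```

Note that this rate says nothing about the people the system never flagged; a deployment can have a sky-high false-match rate while scanning a crowd in which almost no one is on the watchlist.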

In Romford, the man was fined £90 at the scene by officers, who also arrested three other people during the day thanks to the technology, according to BBC Click.

After being stopped he asked an officer: ‘How would you like it if you walked down the street and someone grabbed your shoulder? You wouldn’t like it, would you?’

The officer told him: ‘Calm yourself down or you’re going in handcuffs. It’s up to you. Wind your neck in.’ But the man replied: ‘You wind your neck in.’

After being fined, the man told a reporter: ‘The chap told me down the road – he said they’ve got facial recognition. So I walked past like that (covering my face).

‘It’s a cold day as well. As soon as I’ve done that, the police officer’s asked me to come to him. So I’ve got me back up. I said to him ‘f*** off’, basically.

‘I said ‘I don’t want me face shown on anything. If I want to cover me face, I’ll cover me face, it’s not for them to tell me not to cover me face.

‘I’ve now got a £90 fine, here you go, look at that. Thanks lads, £90. Well done.’

Silkie Carlo, the director of civil liberties group Big Brother Watch, was at the scene holding a placard saying ‘stop facial recognition’ – before she asked an officer about the man they had taken aside: ‘What’s your suspicion?’

The officer replied: ‘The fact that he’s walked past clearly masking his face from recognition and covered his face. It gives us grounds to stop him and verify.’

Ivan Balhatchet, the Metropolitan Police’s covert and intelligence lead, said: ‘We ought to explore all technology to see how it can make people safer, how it can make policing more effective.

Read full story here…




US Police Capture 117 Million In Facial Recognition Systems

A massive nationwide study in 2016 reveals that roughly half of American adults are in a facial recognition database, and the number is growing rapidly. Law enforcement use is mostly unregulated and agencies are free to drift toward a police-state reality. ⁃ TN Editor

There is a knock on your door. It’s the police. There was a robbery in your neighborhood. They have a suspect in custody and an eyewitness. But they need your help: Will you come down to the station to stand in the line-up?

Most people would probably answer “no.” This summer, the Government Accountability Office revealed that close to 64 million Americans do not have a say in the matter: 16 states let the FBI use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm.

But the FBI is only part of the story. Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems. And we don’t know how any of these systems—local, state, or federal—affect racial and ethnic minorities.

This report closes these gaps. The result of a year-long investigation and over 100 records requests to police departments around the country, it is the most comprehensive survey to date of law enforcement face recognition and the risks that it poses to privacy, civil liberties, and civil rights. Combining FBI data with new information we obtained about state and local systems, we find that law enforcement face recognition affects over 117 million American adults. It is also unregulated. A few agencies have instituted meaningful protections to prevent the misuse of the technology. In many more cases, it is out of control.

The benefits of face recognition are real. It has been used to catch violent criminals and fugitives. The law enforcement officers who use the technology are men and women of good faith. They do not want to invade our privacy or create a police state. They are simply using every tool available to protect the people that they are sworn to serve. Police use of face recognition is inevitable. This report does not aim to stop it.

Rather, this report offers a framework to reason through the very real risks that face recognition creates. It urges Congress and state legislatures to address these risks through commonsense regulation comparable to the Wiretap Act. These reforms must be accompanied by key actions by law enforcement, the National Institute of Standards and Technology (NIST), face recognition companies, and community leaders.

Key Findings

Our general findings are set forth below. Specific findings for 25 local and state law enforcement agencies can be found in our Face Recognition Scorecard, which evaluates these agencies’ impact on privacy, civil liberties, civil rights, transparency and accountability. The records underlying all of our conclusions are available online.

Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos. Roughly one in two American adults has their photos searched this way.

A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible. While some agencies, like the San Diego Association of Governments, limit themselves to more targeted use of the technology, others are embracing high and very high risk deployments.

Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic.

Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras, bought technology that can do so, or expressed a written interest in buying it. Nearly all major face recognition companies offer real-time software.

No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences. The Maricopa County Sheriff’s Office enrolled all of Honduras’ driver’s licenses and mug shots into its database. The Pinellas County Sheriff’s Office system runs 8,000 monthly searches on the faces of seven million Florida drivers—without requiring that officers have even a reasonable suspicion before running a search. The county public defender reports that the Sheriff’s Office has never disclosed the use of face recognition in Brady evidence.

There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech.

Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing. One major face recognition company, FaceFirst, publicly advertises a 95% accuracy rate but disclaims liability for failing to meet that threshold in contracts with the San Diego Association of Governments. Unfortunately, independent accuracy tests are voluntary and infrequent.

Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time. We found only eight face recognition systems where specialized personnel reviewed and narrowed down potential matches. The training regime for examiners remains a work in progress.

Police face recognition will disproportionately affect African Americans. Many police departments do not realize that. In a Frequently Asked Questions document, the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either.

Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy.

Maryland’s system, which includes the license photos of over two million residents, was launched in 2011. It has never been audited. The Pinellas County Sheriff’s Office system is almost 15 years old and may be the most frequently used system in the country. When asked if his office audits searches for misuse, Sheriff Bob Gualtieri replied, “No, not really.” Despite assurances to Congress, the FBI has not audited use of its face recognition system, either. Only nine of 52 agencies (17%) indicated that they log and audit their officers’ face recognition searches for improper use. Of those, only one agency, the Michigan State Police, provided documentation showing that their audit regime was actually functional.

Read full story here…




Experts: Latin America Using Chinese Tech To ‘Exert Social Control’

China expands its Technocracy as it simultaneously undermines free governments. Aggressive Chinese suppliers with their sophisticated social-control tech products have now penetrated Argentina, Bolivia, Ecuador, Panama and Venezuela. ⁃ TN Editor

The ongoing proliferation of Chinese surveillance and information technologies in Latin America can be used to exert social control, erode democratic governance, and challenge U.S. and regional strategic interests, expert witnesses told a House panel on Thursday.

Margaret Myers, the director of the Asia and Latin America program at the Inter-American Dialogue, told the House Foreign Affairs Subcommittee on the Western Hemisphere via written testimony that the governments of Argentina, Bolivia, Ecuador, Panama, and Venezuela have all implemented “Chinese-made intelligent monitoring” technologies.

Myers described the move as “exceedingly troubling,” adding:

These systems are described by Chinese suppliers as promoting of citizen safety and security, but if used to exert social control (as they are in China or currently in Venezuela through the ZTE-backed “fatherland card”), can have critical implications for privacy and democratic governance.

Most notably in Venezuela, Reuters recently reported that China’s ZTE technology had enabled socialist dictator Nicolás Maduro to use the so-called “fatherland card” to collect personal data and track the behavior of citizens.


The socialist policies of Chinese-backed Maduro and his predecessor plunged Venezuela into a humanitarian, security, and political abyss. The United States and about 50 other countries have come out in support of interim President Juan Guaido.

In his written testimony, Christopher Walker, the vice president for studies and analysis at the National Endowment for Democracy (NED), noted that despite the risks, the use of Chinese technology is likely to continue growing in Latin America.

He testified:

For many countries in Latin America, as in other developing economies around the world, the opportunity to import advanced technologies can be highly attractive. We can anticipate that governments across the region will continue to pursue such opportunities and welcome investments from China in this sphere. However, the wider societies of countries throughout the region must approach such technology-related deals with open eyes and with the information necessary to make fully informed decisions.

Brian Fonseca, the director of the Jack D. Gordon Institute for Public Policy at Florida International University, noted in his prepared remarks that the proliferation of Chinese surveillance and IT technologies is challenging the interests of the United States and the Western Hemisphere as a whole.

Read full story here…





The Future Of Surveillance Is About Behaviors, Not Faces

With a lack of regulation and legislation, ubiquitous surveillance has moved well beyond simple biometric identification and is now focusing on behaviors, including pre-crime analysis. Facial expressions, eye movements, gait, respiration, etc., are fed to AI algorithms to sense mood, personality and emotions. ⁃ TN Editor

In 1787, English philosopher Jeremy Bentham came up with an idea for a prison that would run at a fraction of the cost of other contemporary jails, with virtually no internal crime. His theoretical prison, the panopticon, was curved, the cells facing inward toward a center point where a guard tower would stand. The windows in the guard tower were to be darkened on one side. This way, a single guard would be able to observe the behavior of all the prisoners. But more importantly, the prisoners would never know whether the guard had his or her gaze trained on them. The end result: every individual within the prison internalizes a sense of being watched all the time and behaves accordingly.

This idea of the panopticon has become a stand-in for the threat of ubiquitous surveillance, due mostly to Bentham’s choice of setting — a prison. But Bentham aimed not to frighten people, but to furnish a way to manage a scarce resource: the attention of law enforcement.

A new trend in video surveillance technology is turning Bentham’s panopticon into reality, but not in the way he imagined. Instead of a prison, the new panopticon would focus the attention of law enforcement on a person when her behavior becomes relevant to the guard tower. Imagine it were possible to recognize not the faces of people who had already committed crimes, but the behaviors indicating a crime that was about to occur.

Multiple vendors and startups attending ISC West, a recent security technology conference in Las Vegas, sought to serve a growing market for surveillance equipment and software that can find concealed guns, read license plates and other indicators of identity, and even decode human behavior.

A company called ZeroEyes out of Philadelphia markets a system to police departments that can detect when a person is entering a given facility carrying a gun. It integrates with any number of closed-circuit surveillance systems. But machine learning algorithms don’t just come out of a box knowing how to recognize a firearm any more than a drug dog arrives from the breeder knowing the difference between marijuana and oregano. To teach the algorithm, a team from the company shows up on location and proceeds to stage mock attacks. Slowly, the algorithm begins to learn what a gun looks like in that specific setting, depending on light, angles, and other conditions. They’re currently working with New York City Schools and have a contract with U.S. Customs and Border Protection, but are not yet deployed to the border, said Kenny Gregory, a software engineer at the company.
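The staged mock attacks ZeroEyes describes are, in spirit, supervised learning on site-specific labeled examples. The sketch below is a deliberately tiny stand-in: a logistic-regression classifier trained on two made-up frame features (nothing like the company's real model or feature set), just to show how labeled staged frames push a detector's decision boundary:

```python
import math

def train_site_detector(samples, epochs=200, lr=0.5):
    """Toy logistic regression: learn 'gun present' vs 'no gun'
    from hand-labeled frames staged at one specific site."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, label in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = pred - label                  # gradient of log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def score(w, b, x):
    """Model's probability that a frame contains a firearm."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features per frame (e.g. silhouette elongation,
# hand-to-hip proximity) — pure illustration, labels 1 = gun staged.
staged = [([0.9, 0.8], 1), ([0.8, 0.7], 1), ([0.2, 0.1], 0), ([0.1, 0.3], 0)]
w, b = train_site_detector(staged)
print(round(score(w, b, [0.85, 0.75]), 2))  # high score → alert
print(round(score(w, b, [0.15, 0.20]), 2))  # low score → no alert
```

The real systems use deep convolutional networks on raw video, but the workflow is the same: the quality of the detector is bounded by the staged examples it was shown at that site.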

Automated firearm detection is one solution to a growing problem that has no clear policy cure: curbing mass shootings and gun violence. While some polls show that 70 percent of Americans support stricter gun laws, that number is far lower, about 31 percent, among conservatives. And so the political debate, while fiery, has stalled. Gun-detection algorithms that alert security personnel when an armed person arrives might reduce the number of victims — though likely not as much as if there were no armed shooter in the first place.

It’s a predictive indicator of potential violence, rather than a lagging indicator, such as facial recognition, and carries less political baggage. More and more, cities and police departments are experimenting with facial recognition to detect the presence of suspects in real time. They’re meeting with stiff resistance from privacy advocates in San Francisco, where some lawmakers are looking to block deployment, and elsewhere.

Read full story here…



Warrantless: US Searches Of Phones, Laptops At Airports Rising

Where specific regulations are unclear or missing, TSA/DHS screeners will overstep larger considerations like the Fourth Amendment, which limits warrantless search and seizure without probable cause. Just because you are in an airport doesn’t mean the Constitution is totally suspended. ⁃ TN Editor

U.S. government searches of travelers’ cellphones and laptops at airports and border crossings nearly quadrupled since 2015 and were being done for reasons beyond customs and immigration enforcement, according to papers filed Tuesday in a federal lawsuit that claims scouring the electronic devices without a warrant is unconstitutional.

The government has vigorously defended the searches, which rose to 33,295 in fiscal 2018, as a critical tool to protect America. But the newly filed documents claim the scope of the warrantless searches has expanded to assist in enforcement of tax, bankruptcy, environmental and consumer protection laws, gather intelligence and advance criminal investigations.

Agents with U.S. Customs and Border Protection and U.S. Immigration and Customs Enforcement consider requests from other government agencies in determining whether to search travelers’ electronic devices, the court papers said. They added that agents are searching the electronic devices of not only targeted individuals but their associates, friends and relatives.

The new information about the searches was included in a motion the Electronic Frontier Foundation and the American Civil Liberties Union filed Tuesday in U.S. District Court in Massachusetts.

“The evidence we have presented the court shows that the scope of ICE and CBP border searches is unconstitutionally broad,” said Adam Schwartz, senior staff attorney for the EFF, based in San Francisco.

“ICE and CBP policies and practices allow unfettered, warrantless searches of travelers’ digital devices and empower officers to dodge the Fourth Amendment when rifling through highly personal information contained on laptops and phones,” he said.

The Department of Homeland Security did not respond to a request for comment. Both ICE and CBP said they did not comment on pending litigation.

When the suit was filed against the government in 2017, DHS officials said U.S. citizens and everyone else are subject to examination and search by customs officials, unless exempted by diplomatic status. The department has contended that no court has concluded that border searches of electronic devices require a warrant. Searches, some random, have uncovered evidence of human trafficking, terrorism, child pornography, visa fraud, export control breaches and intellectual property rights violations, according to the department.

The original case was filed on behalf of 10 American citizens and a lawful permanent resident from seven states — including two journalists, a NASA engineer and a former Air Force captain — who alleged the searches violated their constitutional rights. They asked the court to rule that the government must have a warrant based on probable cause before searching electronic devices at U.S. ports of entry.

The plaintiffs also are demanding the government expunge from investigatory databases information obtained in past searches. ICE and CBP share information taken from travelers’ electronic devices with other agencies, and there are no controls to prevent them from impermissibly retaining it, they argue.

A year ago, U.S. District Judge Denise Casper in Boston rejected the government’s request to dismiss the lawsuit, allowing the case to move forward. The ACLU and the foundation began gathering documents and deposition testimony. Based on the new information, they filed a motion Tuesday asking the judge to rule in their favor without a trial. “Travelers’ devices contain an extraordinary amount of highly personal information that the government can easily search, retain, and share,” it argues.

“This new evidence reveals that the government agencies are using the pretext of the border to make an end run around the First and Fourth Amendments,” said ACLU attorney Esha Bhandari. “The border is not a lawless place. ICE and CBP are not exempt from the Constitution and the information on our electronic devices is not devoid of Fourth Amendment protections. We are asking the court to stop these unlawful searches and require the government to get a warrant.”

Read full story here…




Amazon’s Facial-Recognition Tech ‘Supercharges’ Local Police

Amazon says its Rekognition AI system is “indistinguishable from magic”, but it now offers it for nominal fees to law enforcement at all levels, from local city police and sheriff’s offices to the FBI. There are currently no national laws regulating the use of facial recognition. ⁃ TN Editor

When workers at an Ace Hardware here reported that a woman had walked out of the store with an $11.99 tank of welding gas that she hadn’t paid for in her tote bag, an elaborate high-tech crime-fighting operation sprang into action.

A Washington County sheriff’s detective, working with the agency’s Special Investigations Unit, ran the store’s surveillance footage through an internal facial-recognition program built by Amazon, revealing a possible match.

That woman’s license plate was flagged and, three months later, a narcotics officer in an unmarked SUV saw it and radioed other patrol deputies to stop her. A deputy clapped a pair of handcuffs around her wrists, an arrest report states. She said she’d needed the gas to fix her car.

Deputies in this corner of western Oregon outside ultraliberal Portland used to track down criminals the old-fashioned way, faxing caught-on-camera images of a suspect around the office in hope that someone might recognize the face.

Then, in late 2017, the Washington County Sheriff’s Office became the first law enforcement agency in the country known to use Amazon’s artificial-intelligence tool Rekognition, transforming this thicket of forests and suburbs into a public testing ground for a new wave of experimental police surveillance techniques.

Almost overnight, deputies saw their investigative powers supercharged, allowing them to scan for matches of a suspect’s face across more than 300,000 mug shots taken at the county jail since 2001. A grainy picture of someone’s face – captured by a security camera, a social media account or a deputy’s smartphone – can quickly become a link to their identity, including their name, family and address. More than 1,000 facial-recognition searches were logged last year, said deputies, who sometimes used the results to find a suspect’s Facebook page or visit their home.
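Under the hood, a search like this typically converts each face photo into a numeric embedding and ranks the enrolled mug shots by similarity to the probe image. A minimal sketch of that 1-to-N gallery search, using toy three-dimensional vectors in place of real face embeddings (the booking IDs and dimensions are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search_gallery(probe, gallery, top_k=5):
    """Rank enrolled mug-shot embeddings by similarity to the probe."""
    scored = [(name, cosine(probe, emb)) for name, emb in gallery.items()]
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:top_k]

# Toy 3-dimensional "embeddings" standing in for real face vectors.
gallery = {
    "booking_0001": [0.9, 0.1, 0.0],
    "booking_0002": [0.1, 0.9, 0.2],
    "booking_0003": [0.8, 0.2, 0.1],
}
probe = [0.85, 0.15, 0.05]
for booking_id, similarity in search_gallery(probe, gallery):
    print(booking_id, round(similarity, 3))
```

At the county's scale this runs against hundreds of thousands of vectors rather than three, but the principle is the same: the output is a ranked list of look-alikes, not a definitive identification.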

But Washington County also became ground zero for a high-stakes battle over the unregulated growth of policing by algorithm. Defense attorneys, artificial-intelligence researchers and civil rights experts argue that the technology could lead to the wrongful arrest of innocent people who bear only a resemblance to a video image. Rekognition’s accuracy is also hotly disputed, and some experts worry that a case of mistaken identity by armed deputies could have dangerous implications, threatening privacy and people’s lives.

Some police agencies have in recent years run facial-recognition searches against state or FBI databases using systems built by contractors such as Cognitec, IDEMIA and NEC. But the rollout by Amazon has marked perhaps the biggest step in making the controversial face-scanning technology mainstream. Rekognition is easy to activate, requires no major technical infrastructure and is offered to virtually anyone at bargain-basement prices. Washington County spent about $700 to upload its first big haul of photos, and now, for all its searches, it pays about $7 a month.

It’s impossible to tell, though, just how accurate or effective the technology has been during its first 18 months of real-world tests. Deputies don’t have to note in arrest reports when a facial-recognition search was used, and the exact number of times it has resulted in an arrest is unclear. Sheriff’s officials said the software has led to dozens of arrests for theft, violence or other crimes, but a public-records request turned up nine case reports in which facial recognition was mentioned.

“Just like any of our investigative techniques, we don’t tell people how we catch them,” said Robert Rookhuyzen, a detective on the agency’s major crimes team who said he has run “several dozen” searches and found it helpful about 75% of the time. “We want them to keep guessing.”

Sheriff’s officials say face scans don’t always mark the end of the investigation: Deputies must still establish probable cause or find evidence before charging a suspect with a crime. But the Sheriff’s Office sets its own rules for facial-recognition use and allows deputies to use the tool to identify bodies, unconscious suspects and people who refused to give their name.

The search tool’s imperfect results raise the risk of an innocent person being flagged and arrested, especially in cases of the scanned images being blurred, low-quality or partially concealed. Deputies are also allowed to run artist sketches through the search, an unusual use that AI experts said could more often lead to a false match.

Amazon’s guidelines for law enforcement say officials should use Rekognition’s results only when the system is 99% confident in a match. But deputies here are not shown that search-confidence measurement when they use the tool. Instead, they are given five possible matches for every search, even if the system’s certainty in a match is far lower.
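The gap described here is between gating results on a confidence floor and always surfacing the five best-scoring candidates regardless of score. A small sketch of the two policies, with invented similarity scores (the 0.99 floor mirrors the guidance quoted above; everything else is hypothetical):

```python
def top_k_candidates(scores, k=5):
    """What deputies reportedly see: the k best scores, with no floor."""
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

def thresholded_matches(scores, floor=0.99):
    """What the vendor guidance implies: only candidates at/above the floor."""
    return [s for s in scores if s[1] >= floor]

# Hypothetical candidate scores from a single search.
scores = [("A", 0.62), ("B", 0.58), ("C", 0.995),
          ("D", 0.41), ("E", 0.37), ("F", 0.30)]

print(top_k_candidates(scores))     # five names back, most far below any sane floor
print(thresholded_matches(scores))  # only ("C", 0.995) survives
```

The practical consequence: a top-k interface always returns "matches" to review, even when no candidate clears the confidence level the vendor itself recommends acting on.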

After fielding questions from The Washington Post, Amazon added language to those guidelines, stating that officers should manually review all matches before detaining a suspect and that the search “shouldn’t be used as the sole determinant for taking action.”

The relationship between Amazon and Oregon’s third-largest law enforcement agency is mutually beneficial: The Sheriff’s Office is helping to refine the system, which Amazon hopes to sell across the country. But Amazon’s push into law-enforcement sales has alarmed some legal advocates who say the system poses too many risks to civil liberties. (Amazon founder and CEO Jeff Bezos owns The Post.)

“The government is incredibly powerful, and they bring a lot to bear against an individual citizen in a case,” said Mary Bruington, the director of the Washington County Public Defender’s Office, which represents defendants who can’t afford an attorney. “You couple that with Amazon? That’s a powerful partnership.”

Matt Wood, the general manager of artificial intelligence for the company’s cloud-computing division, Amazon Web Services, said in a statement that Rekognition is just “another input among many other leads for a 100 percent human-driven investigation.”

Still, the company faces criticism on many fronts: Top AI researchers, members of Congress and civil rights groups – as well as some of Amazon’s own investors and employees – have urged the company to stop providing the technology to law enforcement, pointing to studies that have found that the system is less accurate with dark-skinned faces. Amazon has disputed that research.

Some of Amazon’s rivals have spurned similar contracts. Microsoft President Brad Smith said in April that the company had recently declined to provide its facial-recognition software to a California law enforcement agency that wanted to run a face scan anytime its officers pulled someone over, but that it had approved a deal putting the technology in a U.S. prison. Microsoft declined to provide details.

Amazon investors will vote in May on a proposal, backed by a group of activist shareholders, that would prevent the company from selling Rekognition to government agencies unless the company’s board determines that it doesn’t pose a risk to human rights.

The Sheriff’s Office allowed Post journalists to spend two days in March in its squad cars, detective’s offices and county jail, observing how deputies have folded the technology into their daily caseload. Most of those interviewed said the software had saved them time, boosted their arrest numbers and helped them process the growing glut of visual evidence. To date, no legal challenge has been made to an arrest on the grounds that the photo match was mistaken, both deputies and public defenders said.

But lawyers in Oregon cautioned against treating the technology as what many see it to be: an inevitable step forward for the future of policing. They frame the system not as a technical milestone but a moral one: Is it OK to nab more bad guys if more good guys might get arrested, too?

“People love to always say, ‘Hey, if it’s catching bad people, great, who cares,’ ” said Joshua Crowther, a chief deputy defender in Oregon, “until they’re on the other end.”

‘Indistinguishable from magic’

When Amazon revealed Rekognition in 2016, the company called it a breakthrough for a potent style of deep-learning artificial intelligence that showed results “indistinguishable from magic.” In a blog post illustrated with a photo of an executive’s dog, the company offered some general ideas for how people could begin using it, including for security checkpoints or billboards wired to gather data from a viewer’s face.

The unveiling caught the eye of Chris Adzima, a former eBay programmer who had been hired at the Washington County Sheriff’s Office to work on an iPhone app that deputies use to track inmates’ behavior. His agency had hundreds of thousands of facial photos already online and no real way to analyze them. Using Amazon’s AI, he got a system up and running in less than three weeks.

“They didn’t really have a firm idea of any type of use cases in the real world, but they knew that they had a powerful tool that they created,” said Adzima, a senior information systems analyst who works in a small cubicle at the sheriff’s headquarters. “So, you know, I just started using it.”

Deputies immediately began folding facial searches into their daily beat policing, and Adzima built a bare-bones internal website that let them search from their patrol cars. He dropped the search-confidence percentages and designed the system to return five results, every time: When the system returned zero results, he said, deputies wondered whether they’d messed something up. To spice it up, he also added an unnecessary purple “scanning” animation whenever a deputy uploaded a photo – a touch he said was inspired by cop shows like “CSI.”
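The gap between Amazon's published guidance and the interface deputies actually see can be sketched in a few lines. The following is an illustrative Python sketch using hypothetical similarity scores, not the agency's actual code; the real system queries Rekognition (for example, via its SearchFacesByImage API), which returns a similarity score for each candidate that the internal site does not display.

```python
# Contrast two result policies for a face search:
#  - the "always top five" behavior described in the article, and
#  - Amazon's guideline of acting only on matches at >= 99% confidence.
# Similarity scores here are hypothetical percentages.

AMAZON_RECOMMENDED_THRESHOLD = 99.0  # from Amazon's law-enforcement guidance

def top_five(matches):
    """Return the five best candidates regardless of confidence,
    mirroring the internal site described in the article."""
    ranked = sorted(matches, key=lambda m: m["similarity"], reverse=True)
    return ranked[:5]

def high_confidence_only(matches, threshold=AMAZON_RECOMMENDED_THRESHOLD):
    """Return only candidates meeting the recommended threshold."""
    return [m for m in matches if m["similarity"] >= threshold]

# Hypothetical search results for one uploaded photo.
candidates = [
    {"face_id": "a", "similarity": 99.4},
    {"face_id": "b", "similarity": 91.2},
    {"face_id": "c", "similarity": 88.7},
    {"face_id": "d", "similarity": 72.5},
    {"face_id": "e", "similarity": 70.1},
    {"face_id": "f", "similarity": 65.0},
]

print(len(top_five(candidates)))              # five faces shown to a deputy
print(len(high_confidence_only(candidates)))  # only one clears 99%
```

Under the first policy a deputy always sees five names, even when every score is far below the threshold Amazon itself recommends; under the second, most searches would return nothing at all.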

Read full story here…




To Hide Surveillance Tech, Feds Drop Child Porn Cases

It is deeply disturbing that courts are dropping criminal cases in order to shield the government’s clandestine surveillance software that was used to bring charges in the first place. The Feds have created public-private partnerships in order to create a wall of secrecy to hide behind. ⁃ TN Editor

The Department of Justice has been dismissing child pornography cases in order to not reveal information about the software programs used as the basis for the charges.

An array of cases suggests serious problems with the tech tools used by federal authorities. But the private entities who developed these tools won’t submit them for independent inspection or hand over even basic information about how they work, their error rates, or other critical details. As a result, potentially innocent people are being smeared as pedophiles and prosecuted as child porn collectors, while potentially guilty people are going free so these companies can protect “trade secrets.”

The situation illustrates some of the many problems that can arise from public-private partnerships in catching criminals and the secretive digital surveillance software they entail (software that’s being employed for far more than catching child predators).

With the child pornography cases, “the defendants are hardly the most sympathetic,” notes Tim Cushing at Techdirt. Yet that’s all the more reason why the government’s antics here are disturbing. Either the feds initially brought bad cases against people whom they just didn’t think would fight back, or they’re willing to let bad behavior go rather than face some public scrutiny.

An extensive investigation by ProPublica “found more than a dozen cases since 2011 that were dismissed either because of challenges to the software’s findings, or the refusal by the government or the maker to share the computer programs with defense attorneys, or both,” writes Jack Gillum. Many more cases raised issues with the software as a defense.

“Defense attorneys have long complained that the government’s secrecy claims may hamstring suspects seeking to prove that the software wrongly identified them,” notes Gillum. “But the growing success of their counterattack is also raising concerns that, by questioning the software used by investigators, some who trade in child pornography can avoid punishment.”

Courts have sought to overcome concerns that scrutiny would diminish the effectiveness of the software for law enforcement or infringe on intellectual property rights by ordering only secret and monitored third-party review processes. But federal prosecutors have rejected even these compromises, drawing worry that it’s not legitimate concerns driving their secrecy but a lack of confidence in the software’s efficacy or some other more nefarious reason.

Human Rights Watch (HRW) has raised questions about how much data (not just on defendants but on all Americans) these programs have been accessing and storing.

In February, HRW sent a letter to Justice Department officials expressing concerns about one such program, called the Child Protection System (CPS). TLO, the company behind the CPS system, has intervened in court cases to prevent disclosure of more information about the program or independent testing of it.

“Since the system is designed to flag people as suspected of having committed crimes, both its error rates and its potential to exceed constitutional bounds have implications for rights,” HRW states. Yet “it is unclear what information the Justice Department has about CPS’ potential for error (and on what basis).”

Read full story here…




Police State: Trashing The Fourth Amendment

The next time you get stopped by a patrol car, a police blood draw van might roll up behind and demand to take your blood, and you might be strip-searched. All personal data harvested from the incident becomes the property of the Federal government, whether you did anything wrong or not.

The Fourth Amendment states, “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” ⁃ TN Editor

“The Fourth Amendment was designed to stand between us and arbitrary governmental authority. For all practical purposes, that shield has been shattered, leaving our liberty and personal integrity subject to the whim of every cop on the beat, trooper on the highway and jail official.”—Herman Schwartz, The Nation

You think you’ve got rights? Think again.

All of those freedoms we cherish—the ones enshrined in the Constitution, the ones that affirm our right to free speech and assembly, due process, privacy, bodily integrity, the right to not have police seize our property without a warrant, or search and detain us without probable cause—amount to nothing when the government and its agents are allowed to disregard those prohibitions on government overreach at will.

This is the grim reality of life in the American police state.

Our so-called rights have been reduced to technicalities in the face of the government’s ongoing power grabs.

Consider a case before the U.S. Supreme Court (Mitchell v. Wisconsin) in which Wisconsin police officers read an unconscious man his rights and then proceeded to forcibly and warrantlessly draw his blood while he was still unconscious in order to determine if he could be charged with a DUI.

To sanction this forced blood draw, the cops and the courts have hitched their wagon to state “implied consent” laws (all of the states have them), which suggest that merely driving on a state-owned road implies that a person has consented to police sobriety tests, breathalyzers and blood draws.

More than half of the states (29 states) allow police to do warrantless, forced blood draws on unconscious individuals whom they suspect of driving while intoxicated.

Seven state appeals courts have declared that these warrantless blood draws, when carried out on unconscious suspects, are unconstitutional. Courts in seven other states have found that implied consent laws run afoul of the Fourth Amendment. And yet seven other states (including Wisconsin) have ruled that implied consent laws provide police with a free pass when it comes to the Fourth Amendment and forced blood draws.

With this much division among the state courts, a lot is riding on which way the U.S. Supreme Court rules in Mitchell and whether it allows state legislatures to use implied consent laws as a means of allowing police to bypass the Fourth Amendment’s warrant requirement in relation to forced blood draws and unconscious suspects.

Mind you, this is the third time in as many years that the Supreme Court has taken up the issue of warrantless blood draws.

In 2016, the Court ruled 7-1 in Birchfield v. North Dakota that states may not prosecute suspected drunken drivers for refusing warrantless blood draws when they are arrested. However, the Court also tossed the cops a bone by giving them a green light to require a warrantless breath test incident to arrest. Writing for the majority, Justice Samuel Alito rightly recognized the danger of allowing the government to warrantlessly take possession of—and preserve indefinitely—one’s biological and genetic material.

In 2013, a divided Supreme Court held in Missouri v. McNeely that people suspected of drunken driving can’t automatically be subjected to blood tests without a warrant and without their consent.

The differences between McNeely, Birchfield and Mitchell are nuanced, but it is in these nuances that the struggle to preserve the Fourth Amendment can best be seen.

The Fourth Amendment has been on life support for a long time.

Our freedoms—especially the Fourth Amendment—continue to be strangulated by a prevailing view among government bureaucrats that they have the right to search, seize, strip, scan, spy on, probe, pat down, taser, and arrest any individual at any time and for the slightest provocation.

Forced cavity searches, forced colonoscopies, forced blood draws, forced breath-alcohol tests, forced DNA extractions, forced eye scans, forced inclusion in biometric databases: these are just a few ways in which Americans are being forced to accept that we have no control over our bodies, our lives and our property, especially when it comes to interactions with the government.

Worse, on a daily basis, Americans are being made to relinquish the most intimate details of who we are—our biological makeup, our genetic blueprints, and our biometrics (facial characteristics and structure, fingerprints, iris scans, etc.)—in order to clear the nearly insurmountable hurdle that increasingly defines life in the United States: we are now guilty until proven innocent.

Such is life in America today that individuals are being threatened with arrest and carted off to jail for the least hint of noncompliance, homes are being raided by police under the slightest pretext, property is being seized on the slightest hint of suspicious activity, and roadside police stops have devolved into government-sanctioned exercises in humiliation and degradation with a complete disregard for privacy and human dignity.

Remember what happened to Utah nurse Alex Wubbels after a police detective demanded to take blood from a badly injured, unconscious patient without a warrant?

Wubbels refused to go along with the cop’s order, citing hospital policy that requires police to either have a warrant or permission from the patient in order to draw blood.

The detective had neither.

Irate, the detective threatened to have Wubbels arrested if she didn’t comply. Backed up by her supervisors, Wubbels respectfully stood her ground only to be roughly grabbed, shoved out of the hospital, handcuffed and forced into an unmarked car while hospital police looked on and failed to intervene (take a look at the police body camera footage, which went viral, and see for yourself).

Michael Chorosky didn’t have an advocate like Wubbels to stand guard over his Fourth Amendment rights. Chorosky was surrounded by police, strapped to a gurney and then had his blood forcibly drawn after refusing to submit to a breathalyzer test. “What country is this? What country is this?” cried Chorosky during the forced blood draw.

What country is this indeed?

Unfortunately, forced blood draws are just the tip of the iceberg when it comes to the indignities and abuses being heaped on Americans in the so-called name of “national security.”

For example, 21-year-old Charnesia Corley was pulled over by Texas police, allegedly for “rolling” through a stop sign. Claiming they smelled marijuana, police handcuffed Corley, placed her in the back of the police cruiser, and then searched her car for almost an hour. No drugs were found in the car.

As the Houston Chronicle reported:

Returning to his car where Corley was held, the deputy again said he smelled marijuana and called in a female deputy to conduct a cavity search. When the female deputy arrived, she told Corley to pull her pants down, but Corley protested because she was cuffed and had no underwear on. The deputy ordered Corley to bend over, pulled down her pants and began to search her. Then…Corley stood up and protested, so the deputy threw her to the ground and restrained her while another female was called in to assist. When backup arrived, each deputy held one of Corley’s legs apart to conduct the probe.

The cavity search lasted 11 minutes. This practice is referred to as “rape by cop.”

Corley was eventually charged with resisting arrest and with possession of 0.2 grams of marijuana. Those charges were subsequently dropped.

David Eckert was forced to undergo an anal cavity search, three enemas, and a colonoscopy after allegedly failing to yield to a stop sign at a Wal-Mart parking lot. Cops justified the searches on the grounds that they suspected Eckert was carrying drugs because his “posture [was] erect” and “he kept his legs together.” No drugs were found.

During a routine traffic stop, Leila Tarantino was subjected to two roadside strip searches in plain view of passing traffic, while her two children—ages 1 and 4—waited inside her car. During the second strip search, presumably in an effort to ferret out drugs, a female officer “forcibly removed” a tampon from Tarantino. No contraband or anything illegal was found.

Thirty-eight-year-old Angel Dobbs and her 24-year-old niece, Ashley, were pulled over by a Texas state trooper on July 13, 2012, allegedly for flicking cigarette butts out of the car window. Insisting that he smelled marijuana, the trooper proceeded to interrogate them and search the car. Despite the fact that both women denied smoking or possessing any marijuana, the police officer then called in a female trooper, who carried out a roadside cavity search, sticking her fingers into the older woman’s anus and vagina, then performing the same procedure on the younger woman, wearing the same pair of gloves. No marijuana was found.

Sixty-nine-year-old Gerald Dickson was handcuffed and taken into custody (although not arrested or charged with any crime) after giving a ride to a neighbor’s son, whom police suspected of being a drug dealer. Despite Dickson’s insistence that the bulge under his shirt was the result of a botched hernia surgery, police ordered Dickson to “strip off his clothes, bend over and expose all of his private parts. No drugs or contraband were found.”

Meanwhile, four Milwaukee police officers were charged with carrying out rectal searches of suspects on the street and in police district stations over the course of several years. One of the officers was accused of conducting searches of men’s anal and scrotal areas, often inserting his fingers into their rectums and leaving some of his victims with bleeding rectums.

It’s gotten so bad that you don’t even have to be suspected of possessing drugs to be subjected to a strip search.

A North Carolina public school allegedly strip-searched a 10-year-old boy in search of a $20 bill lost by another student, despite the fact that the boy, J.C., twice told school officials he did not have the missing money. The assistant principal reportedly ordered the fifth grader to disrobe down to his underwear and subjected him to an aggressive strip-search that included rimming the edge of his underwear. The missing money was later found in the school cafeteria.

Suspecting that Georgia Tech alum Mary Clayton might have been attempting to smuggle a Chick-fil-A sandwich into the football stadium, a Georgia Tech police officer allegedly subjected the season ticket-holder to a strip search that included a close examination of her underwear and bra. No contraband chicken was found.

What these incidents show is that while forced searches may span a broad spectrum of methods and scenarios, the common denominator remains the same: a complete disregard for the dignity and rights of the citizenry.

In fact, in the wake of the U.S. Supreme Court’s ruling in Florence v. Board of Chosen Freeholders of the County of Burlington, any person who is arrested and processed at a jail house, regardless of the severity of his or her offense (i.e., they can be guilty of nothing more than a minor traffic offense), can be subjected to a strip search by police or jail officials without reasonable suspicion that the arrestee is carrying a weapon or contraband.

Examples of minor infractions which have resulted in strip searches include: individuals arrested for driving with a noisy muffler, driving with an inoperable headlight, failing to use a turn signal, riding a bicycle without an audible bell, making an improper left turn, engaging in an antiwar demonstration (the individual searched was a nun, a Sister of Divine Providence for 50 years).

Police have also carried out strip searches for passing a bad check, dog leash violations, filing a false police report, failing to produce a driver’s license after making an illegal left turn, having outstanding parking tickets, and public intoxication. A failure to pay child support can also result in a strip search.

As technology advances, these searches are becoming more invasive on a cellular level, as well.

For instance, close to 600 motorists leaving Penn State University one Friday night were stopped by police and, without their knowledge or consent, subjected to a breathalyzer test using flashlights that can detect the presence of alcohol on a person’s breath.

These passive alcohol sensors are being hailed as a new weapon in the fight against DUIs. (Those who refuse to knowingly submit to a breathalyzer test are being subjected to forced blood draws. Thirty states presently allow police to do forced blood draws on drivers as part of a nationwide “No Refusal” initiative funded by the federal government.

Not even court rulings declaring such practices to be unconstitutional in the absence of a warrant have slowed down the process. Now police simply keep a magistrate on call to rubber stamp the procedure over the phone.)

The National Highway Traffic Safety Administration, the same government agency that funds the “No Refusal” DUI checkpoints and forcible blood draws, is also funding nationwide roadblocks aimed at getting drivers to “voluntarily” provide police with DNA derived from saliva and blood samples, reportedly to study inebriation patterns.

In at least 28 states, there’s nothing voluntary about having one’s DNA collected by police in instances where you’ve been arrested, whether or not you’re actually convicted of a crime.

All of this DNA data is being fed to the federal government.

Read full story here…

Police are now taking roadside blood samples to catch impaired drivers

It was about 6:30 on a Friday night in January when Phoenix Police Det. Kemp Layden pulled over a white Jeep Cherokee that was speeding and weaving in and out of its lane.

The 47-year-old driver spoke slowly, his eyes were red and watery, and his pupils were dilated. The inside of the Jeep reeked of marijuana, and the driver failed a field sobriety test, which includes walking heel-to-toe and standing on one leg.

He told the officer he had smoked marijuana a few hours earlier and taken a prescription sedative the night before, police say. The man passed a portable breath test — he wasn’t drunk. But Layden suspected he was impaired by drugs, which the test can’t detect.

A DUI police van equipped with a special chair and table for blood testing pulled up. The man refused to submit to a blood draw. So Layden grabbed his laptop and filled out an electronic warrant, or e-warrant, which was transmitted directly to a judge.

Within 10 minutes, Layden had a search warrant. Another officer drew the man’s blood. A lab report later confirmed he had active THC and a sedative in his blood.

Police photographed and fingerprinted the driver and issued him a citation for DUI. It took 79 minutes from the time he was stopped until he was picked up by an Uber.

Drugged driving is a growing concern as more states legalize marijuana and the opioid epidemic rages on. To fight it, more communities are training police officers to draw drivers’ blood at police stations or in vans, as in Arizona. And on-call judges are approving warrants electronically, often in a matter of minutes at any time of day or night.

Together, the blood tests and e-warrants “could be a game-changer in law enforcement,” said Buffalo Grove, Illinois, Police Chief Steven Casstevens, the incoming president of the International Association of Chiefs of Police.

While it’s easy for police to screen drivers for alcohol impairment using breath-testing devices to get a blood alcohol concentration level, there’s no such machine to screen for drug impairment.

That’s why blood tests are so important, traffic safety experts say. And alcohol and drugs such as heroin and the psychoactive compound in marijuana are metabolized quickly in the body, so the more time that elapses, the lower the concentration.

Having an officer draw the suspect’s blood soon after he is stopped gives a truer picture of his impairment because he doesn’t have to be taken to a health center for a blood draw after he is arrested, they say. Police departments also save money because they don’t need to pay phlebotomists and hospitals for blood draws.

And having a system in which a judge can sign off quickly on an electronic warrant for a blood test streamlines the process.

Whether or not a state has legalized marijuana for medical or recreational use, you can’t get behind the wheel while you’re impaired. Police make that determination based on your driving pattern, physical appearance, interaction with the officer and roadside sobriety tests. The blood test identifies which substances, if any, are causing that impairment.

A 2016 U.S. Supreme Court ruling found that police don’t need a warrant if a driver suspected of impairment refuses to take a breath test, but they do for a blood test, which pierces the skin. But critics say blood draws outside of a traditional medical setting are unhygienic and that e-warrants could infringe on an individual’s rights.

“There’s an absolute potential for a dilution of a citizen’s constitutional protections against unreasonable search and seizure when it’s done that way,” said Donald Ramsell, a Wheaton, Illinois, DUI attorney and Illinois Association of Criminal Defense Lawyers board member. “A judge can just wake up in his bedroom and hit ‘accept’ [on his device] and go back to sleep.”

Read full story here…




Public-Private Partnership: Police Spy On Suspicious People At ‘Potential Crime Scenes’

This spy scheme was implemented as a Public-Private Partnership between the city police department and Paladin Drones, which claims to “deploy drones to 911 calls for instant situational awareness.” This technology will fit into GEOINT and Smart City collaborations for total awareness. ⁃ TN Editor

For years, law enforcement has been claiming that drones will only be used for natural disasters, crime scene investigations, car accidents and rescue operations.

That is the bill of goods being sold to the public, but it is all a lie.

A perfect example of how law enforcement promises the public one thing and after time passes, uses it for something else is taking place in Texas at the Memorial Villages Police Department (MVPD).

Two years ago, Click2Houston reported how the MVPD claimed that it would only use drones for “better emergency response during disasters.” The department also put forward police officer and UAV pilot Larry Boggus to solidify the claim that drones would only be used for natural disasters, saying “drones are a huge asset for us because very quickly we were able to see the amount of houses that were damaged” during a 2018 storm.

I love it when police departments provide comic relief to prove my point.

It only took one year for the MVPD to prove that their “Boggus” claim that drones would only be used during emergencies was a lie. (Pun intended.)

Last week, Click2Houston revealed that the MVPD is using drones to respond to home alarms and to identify suspicious people.

“Mark Kobelan, the mayor of Piney Point Village, recently had to call police for a possible suspicious person. Within seconds, a drone was overhead.”

When police use drones to respond to home alarms, don’t think for one second that they will only use them to fly over that particular home. When police respond to a possible break-in using a patrol car, they will typically drive around the neighborhood looking for suspicious people and possible signs of forced entry.

So what do you think police will use drones for?

All across the country there are numerous examples of police using drones to spy on crowds but a Harvard Law School article titled “Drones as Crime-Fighting Tools in 2020: Legal and Normative Considerations” warned that the Boston Police Department’s plan to use “observation warrants” to justify spying on the public is a harbinger of what is to come.

“As drones become ubiquitous, people’s expectations about potential drone surveillance will shift, and their Fourth Amendment protections from drones might, too: the contours of what is reasonable under the Fourth Amendment may adjust as people’s expectations of privacy evolve.”

As the video above explains, sending police drones to investigate “potential crimes” means that everyone is a potential suspect. Police will use drones to spy on people’s homes, backyards and streets, looking for suspicious people, essentially turning our Bill of Rights into a paper tiger.

A perfect example of our Bill of Rights being turned into a paper tiger is police Stingrays. Law enforcement has been using Stingrays to intercept millions of innocent phone conversations and texts without a warrant.

Which is exactly how law enforcement plans to use drones.

Police Stingrays and drones are designed for mass surveillance and do not care if they spy on innocent people.

As Andrew Ferguson, a David A. Clarke School of Law professor, warned: “Once you have flying cameras available, they will likely be used beyond the pilot project. They will fly more and capture more data. This is a perfect example of how big data surveillance will change the power balance between citizens and police and erode community trust.”

Read full story here…





EU Moving Toward Universal Biometric National ID Cards

The trend toward universal biometric identification is racing forward. Europe will soon add its population to the estimated 3.6 billion people globally who will carry a biometric national ID card by 2021. In addition, the UN has mandated that all humanity be identified by 2030. This is an essential element to implement global Technocracy. ⁃ TN Editor

The European Commission has proposed a host of new measures aimed at “denying terrorists the means to act” which include the mandatory inclusion of two biometrics – fingerprints and a facial image – in all ID cards and residence documents for Union citizens and their family members issued by EU Member States.

According to the Commission’s proposal: “Up to 370 of the 440 million citizens in 26 Member States (DK [Denmark] and UK do not issue ID cards) could hold national ID cards,” although “identity card ownership is common and compulsory in 15 Member States” and there are five other Member States in which citizens “are obliged to hold a non-specific document for identification purposes. In practice this very frequently is an identity card.”

The measure essentially aims at fingerprinting the majority of EU citizens – which will complement the fingerprinting of non-EU citizens as required by the Visa Information System (VIS), for those who require a visa to enter the bloc, and as foreseen by the Entry/Exit System, which will hold the fingerprints of almost all non-EU nationals exempt from visa requirements.

A document released alongside the proposals states (emphasis added):

It is estimated that 80 million Europeans currently have non-machine readable ID cards without biometric identifiers. As many of the EU’s security measures rely on secure travel and identity documents – such as the systematic checks carried out at the external borders on all citizens using the Schengen Information System – this creates a security gap, because of the increased risk of falsification and identity fraud. It also leads to practical difficulties for citizens when travelling or moving to another Member State.

The Commission is therefore proposing measures to strengthen the security features of ID cards and residence documents of EU citizens and their non-EU family members. More secure documents will enhance EU external border management, increase the protection against falsification and document fraud and make it more difficult to misuse or copy such documents. This will benefit the security of all citizens, public authorities and businesses.

Proposals for new rules on national ID cards have been put forth alongside proposed measures to ease cross-border access to financial information for law enforcement authorities; to make the acquisition of explosives precursors more difficult; and for stricter controls on the import and export of firearms.

The Commission also published yesterday a proposal for new rules allowing easier cross-border access to “e-evidence” for police and judicial authorities, with the measures described by the civil society organisation EDRi as “maximising risks for fundamental rights violations.”

Alongside the new proposals came the latest progress report on the Security Union: Fourteenth progress report towards an effective and genuine Security Union (COM(2018) 211 final, pdf).


Read full story here…