Boise City Hall Using Facial Recognition To Blacklist And Ban Residents

Boise City Hall will use cameras and AI to identify anyone who has been banned from entry. This most certainly violates the First Amendment right “to petition the Government for a redress of grievances.” ⁃ TN Editor

Local governments and private corporations are using facial recognition to secretly blacklist people, despite having no valid reason to do so.

A recent article in the Idaho Statesman, along with the video above, proves that local governments are using facial recognition to ban residents from accessing city-owned buildings.

The Boise city council claimed that they need facial recognition cameras to protect public employees from disgruntled or unwelcome residents.

Wow, there must be a lot of disgruntled or unhappy residents for the city council to justify putting facial recognition cameras in Boise City Hall and City Hall West, right?

Would you believe both city halls have banned just one resident, and only because he opposed using city funds to pay for an NBA developmental league team?

His ban apparently did not last very long, because Mike Journee, a spokesman for Mayor David Bieter, said, “no one is currently banned from City Hall.”

So if no one is currently banned from City Hall, why install facial recognition cameras? Could the government use facial recognition to quell dissent? How many people will refuse to complain or speak out because law enforcement could use it to identify them?

The Statesman revealed that “the technology would be used to identify people who are banned from City Hall for any reason.”

Journee also admitted that there is no “specific instance” that could justify installing facial recognition cameras on city property. So if there is no “specific instance” or reason to use facial recognition to identify unwelcome people, why use it?

To create secret blacklists and quell dissent of course.

The government already uses facial recognition to create a secret “No-Fly List” (blacklist) of citizens, and now watchlists and blacklists are creeping into public transit. Private corporations, bars and nightclubs are also using facial recognition to identify and blacklist customers and sports fans.

Everyone should take notice when the mass-media admits what I have been saying for years: facial recognition is being used to blacklist people for any reason.

It is time to face facts: the Feds and law enforcement are secretly using facial recognition to track and identify anyone of interest.

Read full story here…

San Diego City Council Rules Guns Must Be Locked Up, Stowed Away

San Diego leaders override the U.S. Constitution and the Second Amendment by passing an ordinance requiring that all guns in every home be securely locked up and stowed. Other cities with similar laws include Seattle and New York. ⁃ TN Editor

The San Diego City Council Monday tentatively approved an ordinance that would require gun owners to store guns in a locked container or disable them with a trigger lock when not in use or being worn on their person.

City Attorney Mara Elliott proposed the ordinance last month with the intention of reducing accidental shootings, children’s access to guns and suicides. According to Elliott, 46% of gun owners in the U.S. who have children do not secure their guns and 73% of youngsters aged 9 and under know where their parents keep their guns.

Since 2002, the state has mandated that all guns sold in California have an accompanying trigger lock approved by the state Department of Justice’s Bureau of Firearms. Elliott said the ordinance is a “common-sense approach” to building on current state requirements.

The proposal’s supporters and opponents gave public comment on the proposal for nearly two hours before the council’s 6-2 vote. The measure’s supporters included gun control advocacy groups like San Diegans for Gun Violence Prevention and Never Again California as well as Assemblyman Todd Gloria, D-San Diego.

Wendy Wheatcroft, founder of San Diegans for Gun Violence Prevention, framed the ordinance as a way to keep military veterans, first responders and law enforcement officers from committing suicide in addition to keeping guns away from children. As of June 19, 97 police officers and 46 firefighters had killed themselves in the U.S. since the beginning of the year, according to Wheatcroft.

“We need to attack this from all angles, not just locking up firearms but also making sure they have adequate mental health support,” Wheatcroft said. “But we also need to be reducing the means and the access to a firearm because in a moment of crisis, even having that gun locked up can be the difference between life and death.”

The proposal’s opponents said it infringes on their Second Amendment rights, particularly for gun owners who do not have children living with them. Under current state law, gun owners are required to keep firearms in a secure container or disabled with a device like a trigger lock only if they live with a person who cannot legally have a weapon under state or federal law.

Opponents also argued the proposal is unenforceable and that locking a gun in a safe would make it difficult to access and use in a moment of self-defense. Wendy Hauffen, executive administrator for the San Diego County Gun Owners political action committee, suggested breaking the law should be an infraction rather than a felony or misdemeanor.

“This will allow the storage conversation to be had, which is stated as one of the purposes for passing this regulation, while not filling jails full of violators,” Hauffen said. “Criminalizing normal activity has the potential to ruin a lot of innocent lives and this is especially a concern in a city like San Diego, where we have so many residents such as active-duty military, who come from so many other states where this kind of law doesn’t and would never exist.”

Read full story here…


Ripped: Pre-Crime AI Bigger Scam Than Psychic Detectives

Technocrats believed their own twisted logic and it came around to bite them. For all the hype, promises and millions spent, predictive policing is finally being recognized as a total scam. ⁃ TN Editor

Law enforcement agencies around the world have recently begun extricating themselves from expensive, ineffective predictive policing systems. The machine learning equivalent of psychic detectives, it turns out, simply doesn’t work.

AI can’t predict crime

In Palo Alto, California, a three-year-long predictive policing program is finally being shuttered. Police spokesperson Janine De la Vega told the LA Times: “We didn’t get any value out of it. It didn’t help us solve crime.” In nearby Mountain View, a police department spokesperson likewise said, “we tested the software and eventually subscribed to the service for a few years, but ultimately the results were mixed and we discontinued the service.”

Predictive policing is a black box AI technology purported to take years of historical policing data (maps, arrest records, etc.) and convert it into actionable insights which predict “hot spots” for future criminal activity. The big idea here is that the AI tells law enforcement leaders where and when to deploy officers in order to prevent crimes from happening.

Another way of putting it: an AI determines that locations where crimes have already happened are good places for cops to hang out in order to deter more crimes from happening.
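That reduction can be made concrete with a toy sketch (hypothetical code, not any vendor's actual algorithm): rank map grid cells purely by historical incident counts and call the top cells "hot spots."

```python
from collections import Counter

def hot_spots(past_incidents, top_n=3):
    """Rank grid cells by historical incident count.

    A deliberately naive sketch of what the article argues these
    systems reduce to: 'predicting' crime where crime already was.
    past_incidents is a list of (x, y) grid-cell coordinates.
    """
    counts = Counter(past_incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

# Three incidents at (1, 2), two at (3, 4), one at (5, 6):
history = [(1, 2), (1, 2), (3, 4), (1, 2), (3, 4), (5, 6)]
print(hot_spots(history))  # → [(1, 2), (3, 4), (5, 6)]
```

Real products like PredPol dress this up with more elaborate models, but the departments quoted here found the output added nothing beyond what officers already knew.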

We could point out that police departments should already be deploying officers to high-crime areas as a method of proactive policing in between reacting to calls, but many agencies are getting hip to that on their own.

In Rio Rancho, New Mexico, for example, according to the LA Times, police realized the system was ineffective and redundant. Captain Andrew Rodriguez said:

It never panned out. It didn’t really make much sense to us. It wasn’t telling us anything we didn’t know.

AI flavored snake oil

Psychic detectives don’t have real psychic abilities. They’re a scam. Whether intentional or not, the perpetrators of these ridiculous claims waste taxpayer dollars, police resources, and valuable time that could be spent on actual investigations. They were all the rage as recently as the early 2000s. Most police departments, thankfully, now recognize that humans absolutely don’t have psychic powers.

But, even as frauds, human psychics are still better than AI-powered predictive policing systems. It would almost certainly be more cost-effective and equally as accurate if police were to outsource crime prediction to psychics for a reasonable salary rather than continue paying companies like PredPol for their predictive policing products, installations, updates, and subscriptions (yes, subscriptions).

This is because AI can’t predict crime; it just analyses risk. Furthermore, it’s a confirmation-bias scam. If, for example, the AI says a particular area is at the highest possible risk for crime, and officers deploy there but find no crime, the AI is working because the police presence deterred crime. And if the officers do spot crime? Of course the algorithm is working, because it knew there’d be crime. It can’t lose!
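The unfalsifiable, heads-I-win logic described above can be written down in a few lines (a rhetorical sketch, not real software): whatever the patrol observes, the outcome is scored as a success for the system.

```python
def score_prediction(crime_observed: bool) -> str:
    """Sketch of the confirmation-bias loop: both possible outcomes
    are interpreted as evidence that the predictive system works."""
    if crime_observed:
        return "working: the model predicted crime, and crime occurred"
    return "working: police presence deterred the predicted crime"

# Either way the verdict starts with "working" — the claim can't lose.
print(score_prediction(True))
print(score_prediction(False))
```

A claim that is confirmed by every possible observation is, by definition, untestable, which is the article's core objection.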

Black box AI can’t explain why it makes a given prediction, and its ability to predict crime cannot be measured in any meaningful way.

Read full story here…


Face-Reading AI Tells Police When You Are Lying

Law enforcement has little restraint in using technology that is known to have serious accuracy problems, as is the case with AI performing pre-crime predictive analysis, lie detection and facial recognition. ⁃ TN Editor

American psychologist Paul Ekman’s research on facial expressions spawned a whole new career of human lie detectors more than four decades ago. Artificial intelligence could soon take their jobs.

While the U.S. has pioneered the use of automated technologies to reveal the hidden emotions and reactions of suspects, the technique is still nascent and a whole flock of entrepreneurial ventures are working to make it more efficient and less prone to false signals.

Facesoft, a U.K. start-up, says it has built a database of 300 million images of faces, some of which have been created by an AI system modeled on the human brain, The Times reported. The system built by the company can identify emotions like anger, fear and surprise based on micro-expressions which are often invisible to the casual observer.

“If someone smiles insincerely, their mouth may smile, but the smile doesn’t reach their eyes — micro-expressions are more subtle than that and quicker,” co-founder and Chief Executive Officer Allan Ponniah, who’s also a plastic and reconstructive surgeon in London, told the newspaper.

Facesoft has approached police in Mumbai about using the system for monitoring crowds to detect the evolving mob dynamics, Ponniah said. It has also touted its product to police forces in the U.K.

The use of AI algorithms among police has stirred controversy recently. A research group whose members include Facebook Inc., Microsoft Corp., Alphabet Inc., Amazon.com Inc. and Apple Inc. published a report in April stating that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.

Read full story here…


Experts: European Nations Should Ban Social Credit Scoring

Technocrats in the EU are bent on total surveillance and controlling people, but a high-level, independent advisory group has warned against using AI for mass surveillance and social credit scoring. ⁃ TN Editor

An independent expert group tasked with advising the European Commission to inform its regulatory response to artificial intelligence — to underpin EU lawmakers’ stated aim of ensuring AI developments are “human centric” — has published its policy and investment recommendations.

This follows earlier ethics guidelines for “trustworthy AI”, put out by the High Level Expert Group (HLEG) for AI back in April, when the Commission also called for participants to test the draft rules.

The AI HLEG’s full policy recommendations comprise a highly detailed 50-page document — which can be downloaded from this web page. The group, which was set up in June 2018, is made up of a mix of industry AI experts, civic society representatives, political advisers and policy wonks, academics and legal experts.

The document includes warnings on the use of AI for mass surveillance and scoring of EU citizens, such as China’s social credit system, with the group calling for an outright ban on “AI-enabled mass scale scoring of individuals”. It also urges governments to commit to not engage in blanket surveillance of populations for national security purposes. (So perhaps it’s just as well the UK has voted to leave the EU, given the swingeing state surveillance powers it passed into law at the end of 2016.)

“While there may be a strong temptation for governments to ‘secure society’ by building a pervasive surveillance system based on AI systems, this would be extremely dangerous if pushed to extreme levels,” the HLEG writes. “Governments should commit not to engage in mass surveillance of individuals and to deploy and procure only Trustworthy AI systems, designed to be respectful of the law and fundamental rights, aligned with ethical principles and socio-technically robust.”

The group also calls for commercial surveillance of individuals and societies to be “countered” — suggesting the EU’s response to the potency and potential for misuse of AI technologies should include ensuring that online people-tracking is “strictly in line with fundamental rights such as privacy”, including (the group specifies) when it concerns ‘free’ services (albeit with a slight caveat on the need to consider how business models are impacted).

Last week the UK’s data protection watchdog fired an even more specific shot across the bows of the online behavioral ad industry — warning that adtech’s mass-scale processing of web users’ personal data for targeting ads does not comply with EU privacy standards. The industry was told its rights-infringing practices must change, even if the Information Commissioner’s Office isn’t about to bring down the hammer just yet. But the reform warning was clear.

As EU policymakers work on fashioning a rights-respecting regulatory framework for AI, seeking to steer the next decade or more of cutting-edge tech developments in the region, the wider attention and scrutiny this will draw to digital practices and business models looks set to drive a cleanup of problematic practices that, until now, have been able to proliferate under little or no regulation.

The HLEG also calls for support for developing mechanisms for the protection of personal data, and for individuals to “control and be empowered by their data” — which they argue would address “some aspects of the requirements of trustworthy AI”.

“Tools should be developed to provide a technological implementation of the GDPR and develop privacy preserving/privacy by design technical methods to explain criteria, causality in personal data processing of AI systems (such as federated machine learning),” they write.

“Support technological development of anonymisation and encryption techniques and develop standards for secure data exchange based on personal data control. Promote the education of the general public in personal data management, including individuals’ awareness of and empowerment in AI personal data-based decision-making processes. Create technology solutions to provide individuals with information and control over how their data is being used, for example for research, on consent management and transparency across European borders, as well as any improvements and outcomes that have come from this, and develop standards for secure data exchange based on personal data control.”

Read full story here…

Bills To Require Breathalyzers In Cars, Mandatory Mental Health Assessments

Progressive liberal Democrat Rep. Kathleen Rice (D-NY) introduced two bills that have serious Constitutional implications. Her bills would push law enforcement toward a police-state mentality that will be impossible to undo. ⁃ TN Editor

If you thought the TAPS Act was bad, wait until you read this.

Our politicians are hard at work introducing bills that promise to turn America into a mirror image of China.

Rep. Kathleen Rice’s H.R. 3374 bill, otherwise known as the End Drunk Driving Act, would put breathalyzers and ignition interlock devices in every new car.

According to the LI Herald, Rice wants to force auto manufacturers to put breathalyzers and ignition interlock devices in new cars by 2029.

“The End Drunk Driving Act would mandate that, within 10 years, all cars sold nationwide come equipped with technology that would detect a driver’s blood alcohol content and prevent the vehicle from moving if it is above the legal limit.”

With close to 280 million cars in the U.S. and over 226 million driver’s licenses, nearly every American would be affected by this and other bills.

If Rice’s bill is passed it would effectively turn 226 million people into suspects.

SEC. 3. Advanced Technology To End Drunk Driving:

(a) In General.–Not later than 10 years after the date of enactment of this Act, the Secretary shall issue a final rule prescribing or amending a Federal motor vehicle safety standard that would prevent operation of a motor vehicle when the operator is under the influence of alcohol. The standard–

(1) shall prevent the operation of the motor vehicle if the operator’s blood alcohol content is above the legal limit; and

(2) shall require such technology to be–

(A) reliable and accurate;

(B) set at the legal limit;

(C) unobtrusive and fast;

(D) tamper-proof/circumvention-resistant;

(E) functional in extreme environments (both hot and cold); and

(F) easily maintained.

If Rice’s bill becomes law, then motorists in Utah who have consumed one alcoholic beverage would not be allowed to drive.

As I reported last year, at least six states are considering criminalizing consuming one alcoholic beverage. Earlier this year, Michigan expressed its desire to do the same, bringing the total to seven states.

Rice’s plans for law enforcement and accused drunk drivers will leave your head spinning in disbelief.

Rice’s H.R. 7099 bill, otherwise known as the Prevent Impaired Driving Child Endangerment Act would force anyone convicted of a DWI with a child in the car, to undergo a mental health assessment.

The LI Herald states that every state would be required to have law enforcement give convicted drivers mental health assessments or risk losing federal funding.

“Offenders would also be required to undergo an alcohol, substance abuse and mental health assessment and receive treatment if necessary. The case would be added to state child abuse registries and states that fail to comply could have certain federal funding withheld by the U.S. Secretary of Transportation.”

Rice’s claim that only motorists who have been convicted of DWI will be given mental health assessments is a lie. Why? Because it is already happening.

For the past three years, police officers in Texas have been giving everyone they arrest a mental health assessment.

“Once at jail, anyone who is arrested will be screened for mental illness. The jail will send those results to judges to consider when setting bond. The county will also start using a risk assessment tool to arrive at an estimated level of danger and flight risk posed by each defendant.”

H.R. 7099 would turn America’s police into psychologists with unimaginable powers.

“(D) A law that—

“(i) requires the individual, if convicted of the offense, to undergo an alcohol abuse, substance abuse, or mental health assessment; and

“(ii) if the assessment indicates a need for treatment, authorizes the appropriate court (or monitoring agency) to require the individual to undergo treatment as part of the individual’s sentence in connection with the conviction or as a condition for reissuance of a State driver’s license to the individual.

And if that wasn’t bad enough, Rice’s bill would turn drunk drivers into instant child abusers.

“(E) A law that requires, if the individual is convicted of the offense and is the parent, guardian, or custodian of the child passenger or is otherwise legally responsible for the child passenger, the law enforcement agency that charges the individual with the offense or the appropriate court to file with the appropriate State register of child abuse a report on the individual concerning the offense.

Turning people into instant child abusers for having one alcoholic beverage or being judged mentally unfit by law enforcement is appalling.

The TAPS Act, the End Drunk Driving Act and the Prevent Impaired Driving Child Endangerment Act are all indicators that America is headed down a dark and dangerous road.

Read full story here…


Police: Give Up Your Phone Password Or Go To Jail

If you are ever stopped and ordered to turn over your phone’s password, do not comply. Tell the officer that he must first get a legitimate court-issued warrant, and then you will comply. Always be polite but firm, and be ready to pay the price of non-compliance.

Fourth Amendment: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” ⁃ TN Editor

William Montanez is used to getting stopped by the police in Tampa, Florida, for small-time traffic and marijuana violations; it’s happened more than a dozen times. When they pulled him over last June, he didn’t try to hide his pot, telling officers, “Yeah, I smoke it, there’s a joint in the center console, you gonna arrest me for that?”

They did arrest him, not only for the marijuana but also for two small bottles they believed contained THC oil — a felony — and for having a firearm while committing that felony (they found a handgun in the glove box).

Then things got testy.

As they confiscated his two iPhones, a text message popped up on the locked screen of one of them: “OMG, did they find it?”

The officers demanded his passcodes, warning him they’d get warrants to search the cellphones. Montanez suspected that police were trying to fish for evidence of illegal activity. He also didn’t want them seeing more personal things, including intimate pictures of his girlfriend.

So he refused, and was locked up on the drug and firearms charges.

Five days later, after Montanez was bailed out of jail, a deputy from the Hillsborough County Sheriff’s Office tracked him down, handed him the warrants and demanded the phone passcodes. Again, Montanez refused. Prosecutors went to a judge, who ordered him locked up again for contempt of court.

“I felt like they were violating me. They can’t do that,” Montanez, 25, recalled recently. “F— y’all. I ain’t done nothing wrong. They wanted to get in the phone for what?”

He paid a steep price, spending 44 days behind bars before the THC and gun charges were dropped, the contempt order got tossed and he pleaded guilty to a misdemeanor pot charge. And yet he regrets nothing, because he now sees his defiance as taking a stand against the abuse of his rights.

“The world should know that what they’re doing out here is crazy,” Montanez said. The police never got into his phones.

While few would choose jail, Montanez’s decision reflects a growing resistance to law enforcement’s power to peer into Americans’ digital lives. The main portals into that activity are cellphones, which are protected from prying eyes by encryption, with passcodes the only way in.

As police now routinely seek access to people’s cellphones, privacy advocates see a dangerous erosion of Americans’ rights, with courts scrambling to keep up.

“It’s becoming harder to escape the reach of police using technology that didn’t exist before,” said Riana Pfefferkorn, the associate director of surveillance and cybersecurity at the Center for Internet and Society at Stanford Law School. “And now we are in the position of trying to walk that back and stem the tide.”

Read full story here…

First Public Schools To Engage Facial Recognition

A Technocrat’s response to recent school shootings? Use facial recognition on everyone on the school property. In the case of New York schools, there has been a temporary delay in full implementation of the Aegis technology. ⁃ TN Editor

Next week, a school district in western New York will become the first in the United States to pilot a facial recognition system on its students and faculty. On Monday, June 3, the Lockport City School District will light up its Aegis system as part of a pilot project that will make it broadly operational by Sept. 1, 2019. The district has eight schools.

Superintendent Michelle Bradley announced the move on Tuesday, as first reported by The Lockport Union-Sun and Journal. Bradley described the test as an “initial implementation phase” meant to troubleshoot the system, train district officials on its use, and discuss proper procedures with local law enforcement in the event of an alert triggered by the facial recognition tech.

The Lockport pilot comes amid increased scrutiny of facial recognition’s efficacy across the US, including growing civil rights concerns and worries that the tech may serve to further entrench societal biases. Earlier this month, San Francisco banned police from using facial recognition, and similar bills in the US hope to do the same. Amazon has endured persistent pressure — including from its own shareholders — for its aggressive salesmanship of its facial Rekognition system to law enforcement agencies. Rep. Alexandria Ocasio-Cortez expressed concern that facial recognition could be used as a form of social control in a congressional hearing on the technology last week.

At the same time, reports and studies of facial recognition’s inaccuracies and mistakes — especially on women and people of color — continue to emerge.

Other schools have considered implementing facial recognition systems, but Lockport will be the first public school district to begin using the tech, the American Civil Liberties Union told BuzzFeed News.

BuzzFeed News has reached out to the Lockport City School District and the maker of the Aegis system for comment.

Lockport resident Jim Shultz, a vocal critic of the school district’s plan to use facial recognition, described the upcoming pilot as “a dicey move”: “I think the district is desperate not to begin another school year with their expensive system just sitting there,” he told BuzzFeed News.

In March 2018, Lockport announced its plans to install a facial recognition security system, which it funded through the New York Smart Schools Bond Act — an act meant to help state schools augment their instructional tech. But instead of buying laptops and iPads, Lockport submitted a proposal for a high-tech security system, and allocated much of the $4.2 million it was given toward adding dozens of surveillance cameras in the school and installing the facial recognition system Aegis, which is provided by Canada-based SN Technologies. To date, Lockport has spent $1.4 million to get the system up and running.

“Aegis is an early warning system that informs staff of threats including guns or individuals who have been identified as not allowed in our buildings,” stated an FAQ distributed to the school’s parents and obtained by BuzzFeed News. “Aegis has the ability [to screen] every door and throughout buildings to identify people or guns. Early detection of a threat to our schools allows for a quicker and more effective response.”

Read full story here…


On Thursday, the New York State Education Department told BuzzFeed News it has asked Lockport to delay its use of facial recognition technology on students.

In an emailed statement, a spokesperson said, “The Department is currently reviewing the Lockport CSD’s privacy assessment to ensure that student data will be protected with the addition of the new technology. The Department has not come to the conclusion that the District has demonstrated the necessary framework is in place to protect the privacy of data subjects and properly secure the data. As such, it is the Department’s continued recommendation that the District delay its use of facial recognition technology.”

Read full update…


Police Adopting TSA-Style Scanners For Public Use

As this technology falls into the hands of local police agencies, police will be able to scan civilians without their permission or awareness to produce “actionable items” for intervention. This would be an egregious violation of the Fourth Amendment’s prohibition of unreasonable searches and seizures. ⁃ TN Editor

TSA-style body scanners are coming to public spaces, and that should scare the hell out of everyone.

If you thought the NYPD’s Z-Backscatter vans and police mini-Z’s were intrusive, you have not seen anything yet.

Soon, nowhere will be safe from Big Brother’s prying eyes, as police prepare to use HEXWAVE to spy on people in public spaces.

Last week the Salt Lake Tribune revealed that the Utah Attorney General and law enforcement are partnering with Liberty Defense, a 3D image scanning company that makes its money from scanning the public in real-time. (3D means capturing rich information (size, shape, depth) about the detection space. It can detect any material that has a physical form.)

Let’s start with their name — calling yourself Liberty Defense is an affront to liberty-minded Americans who do not want to be secretly spied on by Big Brother. Their tag line “Protecting Communities And Preserving Peace of Mind” is the exact opposite of what this device does.

Any device that is used to spy on the public is just that: a surveillance device. It is not a Defense of our Liberty.

As Fox Now 13 reported, police will use Liberty Defense’s HEXWAVE to spy on people at mass gatherings like concerts, malls and stadiums.

“HEXWAVE could be deployed at mass gatherings like concerts, malls, stadiums, public transit stops and government buildings,” Bill Riker, Liberty Defense’s CEO, said.

Over the past two years, I have warned people that TSA-style body scanners were turning public transit into mirror images of our airports by watchlisting and flagging suspicious people. But I could never have imagined that law enforcement would be putting them in malls and places of worship.

If you do not believe Fox News, then perhaps you will believe Liberty Defense, which openly admits that they want governments and businesses to put their 3D scanners in every public venue.

“Their challenge: efficiently securing high traffic areas with multiple entry points, such as hotels, schools, airports, public transit systems, entertainment venues and outdoor pedestrian locations in a secure, non-intrusive manner.”

If you are still not sure about law enforcement’s plans to scan the public, then perhaps you will take the Utah AG’s office word for it.

According to the AG’s “Memorandum of Understanding,” police plan to use HEXWAVE to scan the public for two years in locations including, but not limited to:

1.  Sporting & Concert Arenas, Stadiums and Olympic Venues; 
2.  Primary, Secondary and Higher Education Facilities; 
3.  Places of Worship, Facilities and Property Owned by or Affiliated with Faith Entities; 
4.  Government Offices, Buildings and Facilities; 
5.  Amusement Parks; and 
6.  Entertainment Events, Conventions, Shows & Festivals

Police will also use HEXWAVE to spy on the public during “non-business hours to get system exposure to the full range of potential operating conditions to include environmental, frequency/volume of use or other operating conditions to which HEXWAVE would be subjected.”

What does that mean? It means that law enforcement will be measuring public resistance to being scanned 24/7.

Liberty Defense CEO Bill Riker worked for the Department of Defense and General Dynamics, which speaks volumes about the company’s desire to put 3D scanners everywhere.

It is unclear if Liberty Defense is a Homeland Security/DoD front, but one thing is certain: their desire to turn public venues into extensions of the police state could not be any clearer.

The spread of surveillance devices helps private corporations and law enforcement track and identify everyone; it does absolutely nothing to stop terrorism.

We must stop the spread of TSA-style body scanners before they are put in public transportation, convenience stores, public parks, etc.

Read full story here…

UK Pedestrian Fined $115 For Avoiding Facial Recognition Camera

Britain has privacy laws similar to those in the U.S., but that didn’t restrain police from stopping and fining a resident who tried to cover his face to avoid being photographed by an AI camera on a public street. Every pedestrian was being photographed and compared against a master database of wanted persons. ⁃ TN Editor

Police fined a pedestrian £90 for disorderly behaviour after he tried to cover his face when he saw a controversial facial recognition camera on a street in London.

Officers set up the camera on a van in Romford, East London, which then cross-checked photos of faces of passers-by against a database of wanted criminals.

But one man was unimpressed about being filmed and covered his face with his hat and jacket, before being stopped by officers who took his picture anyway.

After being pulled aside, the man told police: ‘If I want to cover me face, I’ll cover me face. Don’t push me over when I’m walking down the street.’

It comes just weeks after it was claimed the new technology incorrectly identified members of the public in 96 per cent of matches made between 2016 and 2018.

The cameras have been rolled out in a trial in parts of Britain, with the Met making its first arrest last December when shoppers in London’s West End were scanned.

But their use has sparked a privacy debate, with civil liberties group Big Brother Watch branding the move a ‘breach of fundamental rights to privacy and freedom of assembly’. Police argue they are necessary to crack down on spiralling crime.

Officers previously insisted people could decline to be scanned, before later clarifying that anyone trying to avoid scanners may be stopped and searched.

It was first deployed by South Wales Police ahead of the Champions League final in Cardiff in 2017, but wrongly matched more than 2,000 people to possible criminals.

Police and security services worldwide are keen to use facial recognition technology to bolster their efforts to fight crime and identify suspects.

But they have been hampered by the unreliability of the software, with some trials failing to correctly identify a single person.

The technology made incorrect matches in every case during two deployments at Westfield shopping centre in Stratford last year, according to Big Brother Watch. It also reportedly made incorrect matches in 96 per cent of cases across eight uses by the Met from 2016 to 2018.

In Romford, the man was fined £90 at the scene by officers, who also arrested three other people during the day thanks to the technology, according to BBC Click.

After being stopped he asked an officer: ‘How would you like it if you walked down the street and someone grabbed your shoulder? You wouldn’t like it, would you?’

The officer told him: ‘Calm yourself down or you’re going in handcuffs. It’s up to you. Wind your neck in.’ But the man replied: ‘You wind your neck in.’

After being fined, the man told a reporter: ‘The chap told me down the road – he said they’ve got facial recognition. So I walked past like that (covering my face).

‘It’s a cold day as well. As soon as I’ve done that, the police officer’s asked me to come to him. So I’ve got me back up. I said to him ‘f*** off’, basically.

‘I said ‘I don’t want me face shown on anything. If I want to cover me face, I’ll cover me face, it’s not for them to tell me not to cover me face.

‘I’ve got a now £90 fine, here you go, look at that. Thanks lads, £90. Well done.’

Silkie Carlo, the director of civil liberties group Big Brother Watch, was at the scene holding a placard saying ‘stop facial recognition’ – before she asked an officer about the man they had taken aside: ‘What’s your suspicion?’

The officer replied: ‘The fact that he’s walked past clearly masking his face from recognition and covered his face. It gives us grounds to stop him and verify.’

Ivan Balhatchet, the Metropolitan Police’s covert and intelligence lead, said: ‘We ought to explore all technology to see how it can make people safer, how it can make policing more effective.

Read full story here…