
Boise City Hall Using Facial Recognition To Blacklist And Ban Residents

Boise City Hall will use cameras and AI to identify anyone who has been banned from entry. This most certainly violates the First Amendment right “to petition the Government for a redress of grievances.” ⁃ TN Editor

Local governments and private corporations are using facial recognition to secretly blacklist people, despite having no valid reason to do so.

A recent article in the Idaho Statesman, along with the video above, shows that local governments are using facial recognition to ban residents from accessing city-owned buildings.

The Boise city council claimed that they need facial recognition cameras to protect public employees from disgruntled or unwelcome residents.

Wow, there must be a lot of disgruntled or unhappy residents for the city council to justify putting facial recognition cameras in Boise City Hall and City Hall West, right?

Would you believe that both city halls have banned just one resident: a man who opposed using city funds to pay for an NBA developmental league team?

His ban apparently did not last very long, because Mike Journee, a spokesman for Mayor David Bieter, said, “no one is currently banned from City Hall.”

So if no one is currently banned from City Hall, why install facial recognition cameras? Could the government use facial recognition to quell dissent? How many people will refuse to complain or speak out because law enforcement could use it to identify them?

The Statesman revealed that “the technology would be used to identify people who are banned from City Hall for any reason.”

Journee also admitted that there is no “specific instance” that could justify installing facial recognition cameras on city property. So if there is no “specific instance” or reason to use facial recognition to identify unwelcome people, why use it?

To create secret blacklists and quell dissent of course.

The government already uses facial recognition to create a secret “No-Fly List” (blacklist) of citizens, and now watchlists and blacklists are creeping into public transit. Private corporations, bars and nightclubs are also using facial recognition to identify and blacklist customers and sports fans.

Everyone should take notice when the mass-media admits what I have been saying for years: facial recognition is being used to blacklist people for any reason.

It is time to face facts: the Feds and law enforcement are secretly using facial recognition to track and identify anyone of interest.

Read full story here…




Mission Impossible: Getting Rid Of Chinese Security Cameras

Even the most sensitive Government offices are blanketed with security cameras made in China, and the task of removing them is next to impossible. China’s Technocracy is comparable to the Borg race in the Star Trek series: “We will assimilate!” ⁃ TN Editor

U.S. federal agencies have five weeks to rip out Chinese-made surveillance cameras in order to comply with a ban imposed by Congress last year in an effort to thwart the threat of spying from Beijing.

But thousands of the devices are still in place, and chances are most won’t be removed before the Aug. 13 deadline. A complex web of supply chain logistics and licensing agreements makes it almost impossible to know whether a security camera is actually made in China or contains components that would violate U.S. rules.

The National Defense Authorization Act, or NDAA, which outlines the budget and spending for the Defense Department each year, included an amendment for fiscal 2019 that would ensure federal agencies do not purchase Chinese-made surveillance cameras. The amendment singles out Zhejiang Dahua Technology Co. and Hangzhou Hikvision Digital Technology Co., both of which have raised security concerns with the U.S. government and surveillance industry.

Hikvision is 42% controlled by the Chinese government. Dahua, in 2017, was found by cybersecurity company ReFirm Labs to have cameras with covert back doors that allowed unauthorized people to tap into them and send information to China. Dahua said at the time that it fixed the issue and published a public notice about the vulnerability. The U.S. government is considering imposing further restrictions by banning both companies from purchasing American technology, people familiar with the matter said in May. 

“Video surveillance and security equipment sold by Chinese companies exposes the U.S. government to significant vulnerabilities,” said Representative Vicky Hartzler, a Republican from Missouri, who helped draft the amendment. Removing the cameras will “ensure that China cannot create a video surveillance network within federal agencies,” she said at the time.

Dahua declined to comment on the ban. In a company statement, Hikvision said it complies with all applicable laws and regulations and has made efforts to ensure its products are secure. A company spokesman added that the Chinese government is not involved in the day-to-day operations of Hikvision. “The company is independent in business, management, assets, organization and finance from its controlling shareholders,” the spokesman said.

Despite the looming deadline to satisfy the NDAA, at least 1,700 Hikvision and Dahua cameras are still operating in places where they’ve been banned, according to San Jose, California-based Forescout Technologies, which has been hired by some federal agencies to determine what systems are running on their networks. The actual number is likely much higher, said Katherine Gronberg, vice president of government affairs at Forescout, because only a small percentage of government offices actually know what cameras they’re operating. The agencies that use software to track devices connected to their networks should be able to comply with the law and remove the cameras in time, Gronberg said. “The real issue is for organizations that don’t have the tools in place to detect the banned devices,” she added. 
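For agencies that do have an up-to-date asset inventory, the check itself is simple. The sketch below is a minimal, hypothetical illustration of that kind of filter (the CSV layout, column names and the `devices.csv` file are assumptions, not Forescout’s actual tooling): it flags any device whose reported manufacturer matches one of the vendors named in the NDAA amendment.

```python
import csv

# Vendors named in the FY2019 NDAA amendment (matched case-insensitively).
BANNED_VENDORS = ("hikvision", "dahua")

def flag_banned_devices(inventory_path):
    """Return rows from a device-inventory CSV whose vendor field matches
    a banned manufacturer. Assumes columns: ip, vendor, model."""
    flagged = []
    with open(inventory_path, newline="") as f:
        for row in csv.DictReader(f):
            vendor = row.get("vendor", "").lower()
            if any(name in vendor for name in BANNED_VENDORS):
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for device in flag_banned_devices("devices.csv"):
        print(f"Banned vendor found: {device['ip']} ({device['vendor']} {device['model']})")
```

Of course, a vendor-name match only catches cameras that identify themselves honestly; relabeled hardware and Chinese components inside U.S.-branded devices, discussed below, slip straight past it.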

Several years ago the Department of Homeland Security tried to force all federal agencies to secure their networks by tracking every connected device. As of December, only 35% of required agencies had fully complied with this mandate, according to a 2018 report by the Government Accountability Office. As a result, most U.S. federal agencies still don’t know how many or what type of devices are connected to their networks and are now left trying to identify the cameras manually, one by one.

Those charged with complying with the ban have discovered it’s much more complicated than just switching off all Hikvision or Dahua-labeled cameras. Not only can Chinese cameras come with U.S. labels, but many of the devices, including those made by Hikvision, are likely to contain parts from Huawei Technologies Co., the target of a broad government crackdown, whose chips power about 60% of surveillance cameras.

“There are all kinds of shadowy licensing agreements that prevent us from knowing the true scope of China’s foothold in this market,” said Peter Kusnic, a technology writer at business research firm The Freedonia Group. “I’m not sure it will even be possible to ever fully identify all of these cameras, let alone remove them. The sheer number is insurmountable.” 

Read full story here…





Public Concern Grows Over Sidewalk Labs’ Smart City Surveillance

Smart City designer Sidewalk Labs is owned by Alphabet and is a sister company to Google. The more they try to defend themselves over data collection, the more skepticism arises in the public eye. ⁃ TN Editor

Alphabet’s plans to develop a Toronto neighborhood could set a dangerous precedent for the future of data-driven cities, according to data governance experts.

Last month, Sidewalk Labs, the urban innovation arm of Google’s parent company Alphabet, released a 1,524-page report detailing plans for developing a portion of Toronto’s waterfront.

The report, weighing more than 14 pounds, exhaustively detailed the perks of Alphabet’s vision, including streets without traffic congestion and air pollution, as well as inventive ways of dealing with harsh weather.

But when it came to discussing the handling of people’s data, Alphabet offered only a handful of pages with few new details. Sidewalk Labs describes the creation of an independent agency to manage data collection agreements with companies and make sure the collection is beneficial for the community.

Pedestrians walking in the neighborhood shortly after it launches will likely be tracked as they walk down streets, enter certain stores and spend time in parks.

But it’s not just about the data that will be collected about any given visitor on day one. It’s the risks we don’t even know about yet, the ones that may accrue over time as data collection broadens and gets more powerful. Innovations such as self-driving cars and drones will create new ways to collect data. Businesses, including Sidewalk Labs and others, will want even more data, and it’s difficult to predict what all of the new, data-collecting innovations will be.

Recent scandals, from the Equifax hack to Facebook’s Cambridge Analytica debacle, have highlighted the importance of protecting data. Sidewalk Labs plans to build a neighborhood “from the Internet up,” adding sensors that will turn streets and sidewalks into a digital space, increasing the opportunity for privacy issues, discriminatory algorithms and data breaches. Sidewalk Labs describes data being collected everywhere from building lobbies and retail stores to ride-hail vehicles, parks and markets, but no way to opt out entirely.

Sidewalk Labs outlines a list of initial data plans, including transportation data from cars, bikes and pedestrians to optimize parking prices, traffic lights and vehicle speeds. It also plans to collect data on foot traffic in retail stores, weather, air quality, waste classification, and other uses it expects can benefit the neighborhood. Sidewalk Labs would dot the neighborhood with a standardized physical mount, what it calls an “urban USB port,” to make it easy and affordable for other companies to deploy additional sensors such as cameras for digital innovations. The independent agency governing data would approve companies to do so.
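To make the “optimize parking prices” example concrete: that use case typically means demand-responsive pricing driven by curb occupancy sensors. The snippet below is a purely illustrative sketch under assumed numbers (the target occupancy, price step and bounds are invented; Sidewalk Labs has not published an algorithm):

```python
def adjust_parking_price(current_price, occupancy, target=0.85,
                         step=0.25, floor=1.0, ceiling=8.0):
    """Nudge an hourly parking price toward a target occupancy rate,
    as demand-responsive pricing schemes generally do."""
    if occupancy > target:
        current_price += step      # too full: raise the price
    elif occupancy < target - 0.10:
        current_price -= step      # underused: lower the price
    return min(max(current_price, floor), ceiling)

# Example: curb sensors report a block 95% full at $3.00 per hour.
print(adjust_parking_price(3.00, 0.95))  # -> 3.25
```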

Critics say this data could create hardships for marginalized groups if used in certain ways, such as law enforcement more easily tracking and deporting undocumented immigrants.

They also warn there’s a wide gap in technological expertise and financial resources between Alphabet (GOOG) and Toronto. The city risks agreeing to a neighborhood that favors Alphabet’s business interests and will feel more like a surveillance state than a smart city.

Alphabet needs to win the approval of the city council and Waterfront Toronto, the government agency overseeing the development. To do so, Sidewalk Labs committed more than $50 million to creating its massive plan over the last 18 months. Toronto expects to spend up to $800,000 evaluating the plan, including only $50,000 on data governance.

“Civil society is outgunned,” Alex Ryan, vice president of the MaRS Solutions Lab, a Toronto technology incubator, told CNN Business. “There is this asymmetry in being able to approach this topic, and as a result, we’re ending up with a very unbalanced conversation.”

Sidewalk Labs said in a statement to CNN Business that the project is not about data, and that the governing body handling data must be independent and set its own policies and rules.

Read full story here…





FBI, ICE Turned DMV Photos Into Surveillance Behemoth

DHS implemented a national network of Fusion Centers between 2003 and 2007 that opened government access to hundreds of millions of driver photos without consent or legal authorization. ⁃ TN Editor
 

Agents with the Federal Bureau of Investigation and Immigration and Customs Enforcement have turned state driver’s license databases into a facial-recognition gold mine, scanning through hundreds of millions of Americans’ photos without their knowledge or consent, newly released documents show.

Thousands of facial-recognition requests, internal documents, and e-mails over the past five years, obtained through public-records requests by Georgetown University researchers and provided to The Washington Post, reveal that federal investigators have turned state Department of Motor Vehicles databases into the bedrock of an unprecedented surveillance infrastructure.

Police have long had access to fingerprints, DNA, and other ‘‘biometric data’’ taken from criminal suspects. But the DMV records contain the photos of the majority of a state’s residents, most of whom have never been charged with a crime.

Neither Congress nor state legislatures have authorized the development of such a system, and growing numbers of Democratic and Republican lawmakers are criticizing the technology as a dangerous, pervasive, and error-prone surveillance tool.

‘‘Law enforcement’s access of state databases,’’ particularly DMV databases, is ‘‘often done in the shadows with no consent,’’ House Committee on Oversight and Reform chairman Elijah Cummings, Democrat of Maryland, said in a statement.

Representative Jim Jordan of Ohio, the Oversight Committee’s ranking Republican, seemed particularly incensed during a hearing into the technology last month at the use of driver’s license photos in federal facial-recognition searches without the approval of state legislators or individual license holders.

‘‘They’ve just given access to that to the FBI,’’ he said. ‘‘No individual signed off on that when they renewed their driver’s license, got their driver’s licenses. They didn’t sign any waiver saying, ‘Oh, it’s OK to turn my information, my photo, over to the FBI.’ No elected officials voted for that to happen.’’

Despite those doubts, federal investigators have turned facial recognition into a routine investigative tool. Since 2011, the FBI has logged more than 390,000 facial-recognition searches of federal and local databases, including state DMV databases, the Government Accountability Office said last month, and the records show that federal investigators have forged daily working relationships with DMV officials. In Utah, FBI and ICE agents logged more than 1,000 facial-recognition searches between 2015 and 2017, the records show. Names and other details are hidden, though dozens of searches are marked as having returned a ‘‘possible match.’’

San Francisco and Somerville have banned their police and public agencies from using facial-recognition software, citing concerns about governmental overreach and a breach of public trust, and the subject is being hotly debated in Washington. On Wednesday, officials with the Transportation Security Administration, Customs and Border Protection, and the Secret Service are expected to testify at a hearing of the House Committee on Homeland Security about their agencies’ use of the technology.

The records show that the technology already is tightly woven into the fabric of modern law enforcement. They detailed the regular use of facial recognition to track down suspects in low-level crimes, including cashing a stolen check and petty theft. And searches are often executed with nothing more formal than an e-mail from a federal agent to a local contact, the records show.

‘‘It’s really a surveillance-first, ask-permission-later system,’’ said Jake Laperruque, a senior counsel at the watchdog group Project on Government Oversight. ‘‘People think this is something coming way off in the future, but these [facial-recognition] searches are happening very frequently today. The FBI alone does 4,000 searches every month, and a lot of them go through state DMVs.’’

Read full story here…





Somerville, MA: Second City Bans Facial Recognition Technology

TN predicts that the city rush is on to ban facial recognition tech. In this case, Somerville got this right: “the public use of face surveillance can chill the exercise of constitutionally protected free speech.” ⁃ TN Editor

Is it a movement? Or just a couple of outliers that will forever remain on the periphery of the surveillance state? It’s too early to say, but at least we can now say San Francisco isn’t an anomaly.

Somerville, Massachusetts just became the second U.S. city to ban the use of facial recognition in public space.

The “Face Surveillance Full Ban Ordinance,” which passed through Somerville’s City Council on Thursday night, forbids any “department, agency, bureau, and/or subordinate division of the City of Somerville” from using facial recognition software in public spaces. The ordinance passed Somerville’s Legislative Matters Committee earlier this week.

Last month, San Francisco became the first city in the nation to ban the use of facial recognition tech by city government agencies. While it can’t keep the federales from rolling in and deploying the software against city residents, it does prevent local law enforcement from deciding this is the tech toy it can’t live without.

The ordinance passed in Somerville is pretty much the same thing. No local use, but federal-level use is OK. To be fair, the city can’t regulate the activities of the federal government. It could have forbidden local agencies from working with federal agencies using facial recognition tech, but it didn’t go quite that far.

This is a solid move, one that certainly looks smarter than allowing local cops to load up on tech that’s been roasted by Congress and (still!) sports a pretty gaudy failure rate.

If other cities are interested in joining the very short list of jurisdictions banning facial recognition, activists have created a few road maps for governments to use. At the moment, the greatest chance for success appears to be at the hyper-local level. The ACLU says it all comes down to cities making the most of their limited power.

Kade Crockford, director of the technology for Liberty Program at the ACLU of Massachusetts, said in a phone call that at the state level, the ACLU is advocating for a moratorium or pause of facial recognition technology, while at the local level, the ACLU is advocating for bans.

“At the municipal level, it’s different,” Crockford said. “State governments have the capacity to regulate, whereas local governments really don’t. They don’t have the ability, for example, to create new institutions that could oversee, with sufficient care and attention, the implementation of an oversight or accountability system to guard against civil rights and civil liberties abuses.”

Generating momentum at the state level may be difficult until more cities are on board. If bans like these become more common, state legislators may respond favorably to wind direction changes and finally push back a bit against entrenched interests with an inordinate amount of power, like police unions and incumbent politicians with an authoritarian bent.

Read full story here…

Ordinance: Banning the usage of facial technology surveillance in Somerville

(link)

WHEREAS, the broad application of face surveillance in public spaces is the functional equivalent of requiring every person to carry and display a personal photo identification card at all times.

WHEREAS, face surveillance technology has been shown to be far less accurate in identifying the faces of women, young people, and people of color, and that such inaccuracies place certain persons at an elevated risk of harmful “false positive” identifications.

WHEREAS, many of the databases to which face surveillance technology is applied are plagued by racial and other biases, which generate copycat biases in face surveillance data.

WHEREAS, the public use of face surveillance can chill the exercise of constitutionally protected free speech.

WHEREAS, the benefits of using face surveillance, which are few and speculative, are greatly outweighed by its harms, which are substantial.

THEREFORE, BE IT RESOLVED, THAT THE SOMERVILLE CITY COUNCIL ADOPTS THE FOLLOWING:

Section 1. Definitions.

(A) “Face surveillance” shall mean an automated or semi-automated process that assists in identifying an individual or capturing information about an individual, based on the physical characteristics of an individual’s face.

(B) “Face surveillance system” shall mean any computer software or application that performs face surveillance.

(C) “Somerville” shall mean any department, agency, bureau, and/or subordinate division of the City of Somerville.

(D) “Somerville official” shall mean any person or entity acting on behalf of Somerville, including any officer, employee, agent, contractor, subcontractor, or vendor.

SECTION 2. Ban on Government Use of Face Surveillance.

(A) It shall be unlawful for Somerville or any Somerville official to obtain, retain, access, or use:

(1) Any face surveillance system; or

(2) Any information obtained from a face surveillance system.

SECTION 3. Enforcement.

(A) Suppression. No data collected or derived from any use of face surveillance in violation of this Ordinance, and no evidence derived therefrom, may be received in evidence in any trial, hearing, or other proceeding in or before any court, grand jury, department, officer, agency, regulatory body, legislative committee, or other authority subject to the jurisdiction of the City of Somerville. Face surveillance data collected or derived in violation of this Ordinance shall be considered unlawfully obtained, and shall be deleted upon discovery.

(B) Cause of Action. Any violation of this Ordinance constitutes an injury and any person may institute proceedings for injunctive relief, declaratory relief, or writ of mandate in any court of competent jurisdiction to enforce this Ordinance. An action instituted under this paragraph shall be brought against the respective City department, and the City and, if necessary to effectuate compliance with this Ordinance, any other governmental agency with possession, custody, or control of data subject to this Ordinance.

(C) Statutory Damages. Any person who has been subjected to face recognition in violation of this Ordinance, or about whom information has been obtained, retained, accessed, or used in violation of this Ordinance, may institute proceedings in any court of competent jurisdiction against the City and shall be entitled to recover actual damages, but not less than liquidated damages of $1,000 or $100 for each violation, whichever is greater.

(D) Fees. A court shall award costs and reasonable attorneys’ fees to a plaintiff who is the prevailing party in an action brought under Section 3(B) or (C).

(E) Training. Violations of this Ordinance by a City employee shall result in consequences that may include retraining, suspension, or termination, subject to due process requirements.

###




Unique Heartbeat Signature Detected From 250 Yards Away

Heartbeats are entirely unique to each person and can now be ‘read’ remotely by infrared laser up to 220 yards away. The biggest problem? Building a database large enough to cross-reference. ⁃ TN Editor

U.S. special forces are taking a more ‘intimate’ approach to remotely identifying targets, using lasers to sense their heartbeat.

According to MIT Technology Review, the Pentagon has developed a prototype of the technology, code-named ‘Jetson,’ that uses infrared lasers to read a person’s cardiac signature.

Though far less obvious than fingerprints or faces, people’s heartbeats have a distinct profile, making them among the most useful biometrics for uniquely identifying a person.

What separates the signature from others like it, however, is the fact that unlike a face, which may bear many similar features to another, heartbeats are entirely distinct.

As noted by MIT, companies like Nymi are already using cardiac signatures — taken via a wrist-mounted pulse sensor — to identify people for security purposes.

Another advantage that has made this type of detection particularly desirable for the U.S. military is the use of lasers, which allows for a relatively long range.

Current prototypes work from about 200 meters (219 yards), and with further modifications that range could be extended.

‘I don’t want to say you could do it from space,’ Steward Remaly, of the Pentagon’s Combating Terrorism Technical Support Office, told MIT. ‘But longer ranges should be possible.’

Heartbeat patterns, which are gleaned by detecting the changes in infrared light caused by a person’s blood flow, are not only highly accurate (about 95 to 98 percent) but also versatile.

Unlike facial recognition, which may require a clear view of someone’s face or be obfuscated by facial hair or other conditions, laser-detected heartbeats can be captured through normal clothing and at a number of angles.

The method does require an invisible laser to be pointed at a subject for about 30 seconds to get a sufficient read, meaning the technology can only be viably used on someone who is standing still.
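As with other biometrics, identification ultimately comes down to comparing a captured signal against enrolled templates. The sketch below is a conceptual illustration only, not Jetson’s actual method (the waveforms, sampling and threshold are all invented): it scores a measured cardiac waveform against stored templates using normalized correlation.

```python
import numpy as np

def similarity(sig_a, sig_b):
    """Normalized correlation between two equal-length cardiac waveforms."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-9)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-9)
    return float(np.dot(a, b) / len(a))

def identify(measured, enrolled, threshold=0.9):
    """Return the best-matching enrolled identity, or None if no template
    clears the similarity threshold."""
    scores = {name: similarity(measured, tmpl) for name, tmpl in enrolled.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Hypothetical example: one enrolled template plus a noisy new capture.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 8 * np.pi, 400))   # stand-in cardiac cycles
enrolled = {"subject_A": template}
measured = template + rng.normal(0, 0.1, 400)        # capture with sensor noise
print(identify(measured, enrolled))                   # -> subject_A
```

This also illustrates the database problem the editor notes above: matching only works against people whose signatures have already been enrolled.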

Read full story here…





Experts: European Nations Should Ban Social Credit Scoring

Technocrats in the EU are bent on total surveillance and controlling people, but a high-level, independent advisory group has warned against using AI for mass surveillance and social credit scoring. ⁃ TN Editor

An independent expert group tasked with advising the European Commission to inform its regulatory response to artificial intelligence — to underpin EU lawmakers’ stated aim of ensuring AI developments are “human centric” — has published its policy and investment recommendations.

This follows earlier ethics guidelines for “trustworthy AI”, put out by the High Level Expert Group (HLEG) for AI back in April, when the Commission also called for participants to test the draft rules.

The AI HLEG’s full policy recommendations comprise a highly detailed 50-page document — which can be downloaded from this web page. The group, which was set up in June 2018, is made up of a mix of industry AI experts, civic society representatives, political advisers and policy wonks, academics and legal experts.

The document includes warnings on the use of AI for mass surveillance and scoring of EU citizens, such as China’s social credit system, with the group calling for an outright ban on “AI-enabled mass scale scoring of individuals”. It also urges governments to commit to not engage in blanket surveillance of populations for national security purposes. (So perhaps it’s just as well the UK has voted to leave the EU, given the swingeing state surveillance powers it passed into law at the end of 2016.)

“While there may be a strong temptation for governments to ‘secure society’ by building a pervasive surveillance system based on AI systems, this would be extremely dangerous if pushed to extreme levels,” the HLEG writes. “Governments should commit not to engage in mass surveillance of individuals and to deploy and procure only Trustworthy AI systems, designed to be respectful of the law and fundamental rights, aligned with ethical principles and socio-technically robust.”

The group also calls for commercial surveillance of individuals and societies to be “countered” — suggesting the EU’s response to the potency and potential for misuse of AI technologies should include ensuring that online people-tracking is “strictly in line with fundamental rights such as privacy”, including (the group specifies) when it concerns ‘free’ services (albeit with a slight caveat on the need to consider how business models are impacted).

Last week the UK’s data protection watchdog fired an even more specific shot across the bows of the online behavioral ad industry — warning that adtech’s mass-scale processing of web users’ personal data for targeting ads does not comply with EU privacy standards. The industry was told its rights-infringing practices must change, even if the Information Commissioner’s Office isn’t about to bring down the hammer just yet. But the reform warning was clear.

As EU policymakers work on fashioning a rights-respecting regulatory framework for AI, seeking to steer the next decade or more of cutting-edge tech development in the region, the wider attention and scrutiny that effort will draw to digital practices and business models looks set to drive a clean-up of problematic digital practices that have been able to proliferate under little or no regulation until now.

The HLEG also calls for support for developing mechanisms for the protection of personal data, and for individuals to “control and be empowered by their data” — which they argue would address “some aspects of the requirements of trustworthy AI”.

“Tools should be developed to provide a technological implementation of the GDPR and develop privacy preserving/privacy by design technical methods to explain criteria, causality in personal data processing of AI systems (such as federated machine learning),” they write.

“Support technological development of anonymisation and encryption techniques and develop standards for secure data exchange based on personal data control. Promote the education of the general public in personal data management, including individuals’ awareness of and empowerment in AI personal data-based decision-making processes. Create technology solutions to provide individuals with information and control over how their data is being used, for example for research, on consent management and transparency across European borders, as well as any improvements and outcomes that have come from this, and develop standards for secure data exchange based on personal data control.”
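Federated machine learning, which the group cites as a privacy-preserving technique, keeps raw personal data on each participant’s device and shares only model updates with a central server. The sketch below is a bare-bones illustration of federated averaging on invented data (real deployments add secure aggregation, differential privacy and far richer models):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's gradient steps on its own data (a linear model).
    Only the resulting weights leave the device, never X or y."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """Average the clients' locally trained weights (FedAvg)."""
    return np.mean([local_update(global_w, X, y) for X, y in clients], axis=0)

# Hypothetical setup: three clients, each holding private samples of y = 2x.
rng = np.random.default_rng(1)
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 1))
    clients.append((X, 2 * X[:, 0] + rng.normal(0, 0.05, 20)))

w = np.zeros(1)
for _ in range(20):
    w = federated_average(w, clients)
print(w)  # converges near [2.0] without pooling anyone's raw data
```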

Read full story here…




NSA Busted Again For Collecting U.S. Phone Records

The NSA is run by Technocrats for the benefit of Technocrats to build a total surveillance society. The agency, which reports to the Director of National Intelligence, has been rogue for many years, unwilling to be controlled by courts or Congress. ⁃ TN Editor

The National Security Agency collected records about U.S. calls and text messages that it wasn’t authorized to obtain last year, in a second such incident, renewing privacy concerns surrounding the agency’s maligned phone-surveillance program, according to government documents and people familiar with the matter.

The previously undisclosed error, which took place last October, occurred several months after the NSA said it had purged hundreds of millions of metadata records it had amassed since 2015 due to a separate overcollection episode. Metadata include the numbers and time stamps of a call or text message but not the contents of the conversation.

The American Civil Liberties Union obtained the documents, which were reviewed by The Wall Street Journal, as part of a Freedom of Information Act lawsuit involving the surveillance program. They are heavily redacted internal NSA memos that discuss oversight of intelligence-collection activities.

“These documents only confirm that this surveillance program is beyond redemption and should be shut down for good,” Patrick Toomey, an ACLU staff attorney, said in a statement. “The NSA’s collection of Americans’ call records is too sweeping, the compliance problems too many, and evidence of the program’s value all but nonexistent. There is no justification for leaving this surveillance power in the NSA’s hands.”

It wasn’t clear from the documents how many records the NSA improperly collected in October. The NSA’s media relations chief, Greg Julian, declined to comment specifically on the episode, but referred to the previously acknowledged incident of overcollection, disclosed last summer, in which telecommunications firms supplied information the NSA hadn’t been authorized to obtain.

“While NSA lawfully sought data pertaining to a foreign power engaged in international terrorism, the provider produced inaccurate data and data beyond which NSA sought,” Mr. Julian said.

The documents obtained by the ACLU suggest a similar situation, in which a telecommunications firm, whose name is redacted, furnished call-data records the NSA hadn’t requested and that weren’t approved by orders of the secretive U.S. Foreign Intelligence Surveillance Court. The company told the NSA it began delivering those records on Oct. 3, 2018, and continued until Oct. 12, when the agency asked it to investigate the “anomaly.”

The ACLU said the documents also suggest an individual may have been targeted for surveillance as a result of the first overcollection episode, which led to the deletion of the program’s entire database in June 2018. The documents reveal that violation involved “targeting requests” that were approved by the surveillance court.

The revelation of another compliance issue is the latest hurdle for the once-secret surveillance program that began under the George W. Bush administration following the Sept. 11, 2001, attacks. As initially designed, the program sought to collect the metadata of all domestic calls in the U.S. to hunt for links among potential associates of terrorism suspects.

Edward Snowden, a former intelligence contractor, leaked the existence of the program—along with a trove of documents exposing other surveillance operations carried out by the NSA—to journalists six years ago. The disclosures ignited an international uproar over the scope of the U.S.’s electronic-spying capabilities.

Read full story here…





Walmart Using AI-Powered Cameras In 1,000 Stores To Track Shoppers

Walmart has hired a social engineering company, Everseen, to control and change the behavior of its shoppers through the use of AI-based surveillance systems. By surveilling everyone, Walmart hopes to catch a few cheaters.

According to Everseen’s website, its “process mining” technology “amplifies awareness of scenes unfolding daily in retail, pinpointing the ‘moments that matter’, in order to nudge a behavior one seeks to change, and/or transform the underlying process.” ⁃ TN Editor

 

Walmart is using computer vision technology to monitor checkouts and deter potential theft in more than 1,000 stores, the company confirmed to Business Insider.

The surveillance program, which Walmart refers to internally as Missed Scan Detection, uses cameras to help identify checkout scanning errors and failures.

The cameras track and analyze activities at both self-checkout registers and those manned by Walmart cashiers. When a potential issue arises, such as an item moving past a checkout scanner without getting scanned, the technology notifies checkout attendants so they can intervene.
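Conceptually, a missed-scan check is a reconciliation between two event streams: items the cameras see crossing the scan zone and items the register actually logs. The sketch below is a simplified, hypothetical illustration of that matching step (Everseen’s real pipeline works on video, not pre-extracted events): it flags any detected item that has no corresponding scan within a short time window.

```python
# Hypothetical event streams: (timestamp_in_seconds, item_id)
detected_items = [(10.2, "cereal"), (14.8, "steak"), (19.5, "batteries")]
scan_log       = [(10.9, "cereal"), (20.1, "batteries")]

def missed_scans(detected, scans, window=3.0):
    """Flag detected items with no matching scan within `window` seconds."""
    unmatched = []
    remaining = list(scans)
    for t_seen, item in detected:
        match = next((s for s in remaining
                      if s[1] == item and abs(s[0] - t_seen) <= window), None)
        if match:
            remaining.remove(match)   # each scan can satisfy only one detection
        else:
            unmatched.append((t_seen, item))
    return unmatched

print(missed_scans(detected_items, scan_log))  # -> [(14.8, 'steak')]
```

A flagged event like the unscanned steak above is what would trigger the alert to a checkout attendant.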

The program is designed to reduce shrinkage, which is the term retailers use to define losses due to theft, scanning errors, fraud, and other causes.

US retailers lost an estimated 1.33% of revenues to shrinkage in 2017, totalling an estimated $47 billion, according to the National Retail Federation. If Walmart’s shrink rates match the industry average, the company’s US business would have lost more than $4 billion last year to theft and other related losses.
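That $4 billion figure is simple arithmetic against Walmart’s U.S. revenue; the quick check below uses an approximate revenue number solely to reproduce the estimate:

```python
# Rough reproduction of the article's estimate (revenue figure is approximate).
walmart_us_revenue = 318e9      # roughly $318 billion in annual U.S. sales
industry_shrink_rate = 0.0133   # 1.33% of revenue (NRF, 2017)

shrink = walmart_us_revenue * industry_shrink_rate
print(f"Estimated annual shrink: ${shrink / 1e9:.1f} billion")  # -> $4.2 billion
```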

“Walmart is making a true investment to ensure the safety of our customers and associates,” Walmart spokeswoman LeMia Jenkins said. “Over the last three years, the company has invested over half a billion dollars in an effort to prevent, reduce and deter crime in our stores and parking lots. We are continuously investing in people, programs and technology to keep our stores and communities safe.”

Walmart began rolling out Missed Scan Detection technology to stores two years ago, and it appears to be working successfully so far. Shrink rates have declined at stores where it’s deployed, Jenkins said.

Ireland-based Everseen is one of several companies supplying Walmart with the technology for the program.

“Everseen overcomes human limitations. By using state of the art artificial intelligence, computer vision systems, and big data, we can detect abnormal activity and other threats,” an Everseen video advertises. “Our digital eye has perfect vision and it never needs a coffee break or a day off.”

Read full story here…




Amazon Cloud Will Host DHS Biometric Data On Hundreds Of Millions Of People

Home grown Surveillance State: “The department seeks a new platform to identify people using fingerprints, irises and faces, and eventually DNA, palm prints, scars and tattoos.” ⁃ TN Editor

The Homeland Security Department is looking to upgrade the software it uses to analyze biometric data on hundreds of millions of people around the globe, and it plans to store that information in Amazon’s cloud.

The agency’s Office of Biometric Identification Management will replace its legacy biometric analysis platform, called the Automated Biometric Identification System, or IDENT, with a new, more robust system hosted by Amazon Web Services, according to a request for information released Monday.

IDENT essentially serves as an enterprisewide clearinghouse for troves of biometric and biographic data collected by the Transportation Security Administration, Customs and Border Protection, Secret Service and other Homeland Security components. The system links fingerprint, iris and face data to biographic information, allowing officials to quickly identify suspected terrorists, immigration violators, criminals and anyone else included in their databases.

In total, IDENT contains information on more than 250 million people, a Homeland Security spokesperson told Nextgov.

According to the solicitation, Homeland Security is in the process of replacing IDENT with the Homeland Advanced Recognition Technology System, or HART. The new system will include the same biometric recognition features as its predecessor, and potentially additional tools that could identify individuals based on DNA, palm prints, scars, physical markings and tattoos.
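At a schematic level, a system like this is a join between a biographic record and per-modality biometric templates. The toy structure below is purely illustrative (the field names and template references are assumptions, not DHS’s actual HART schema):

```python
from dataclasses import dataclass, field

@dataclass
class IdentityRecord:
    """Toy record linking biographic data to stored biometric template IDs."""
    record_id: str
    name: str
    nationality: str
    # modality name -> reference to a stored template, not the raw biometric
    templates: dict = field(default_factory=dict)

record = IdentityRecord("A123", "Jane Doe", "XX")
record.templates["fingerprint"] = "fp-tmpl-0001"
record.templates["iris"] = "iris-tmpl-0042"
record.templates["face"] = "face-tmpl-0099"

# A matching service would compare a probe template against these references
# and return the linked biographic record on a hit.
print(record)
```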

Whereas IDENT stores records in government-run data centers, the Homeland Security solicitation states “HART will reside in the Amazon Web Services (AWS) FedRAMP certified GovCloud.” Further, “biometric matching capabilities for fingerprint, iris, and facial matching will be integrated with HART in the Amazon Web Services GovCloud.” Amazon Web Services will also store HART’s biometric image data.

Amazon Web Services’ GovCloud US-East and US-West regions are data centers specifically built by the company to house some of the government’s most restricted information. AWS is no stranger to hosting sensitive government data, having already claimed the CIA, Defense Department, NASA and other federal agencies as customers, in part because of perceived security improvements over government legacy systems.

When reached for comment, an AWS spokesperson referred inquiries to DHS.

In 2018, Northrop Grumman won a $95 million contract to develop the first two stages of the HART system, and its contract is set to expire in 2021. The department plans to use responses to the latest solicitation to inform its strategy for further developing the platform, the DHS spokesperson said.

Specifically, officials are asking vendors for ideas on how to build those multiple identification functions into the new system, while leaving room to add any new recognition “modalities” as they arise. Officials also want input on developing a handful of general reporting, analytics and search tools, as well as desktop and mobile web portals where Homeland Security employees can access the system.

Read full story here…