U.S. Surveillance Tech Is Propping Up Authoritarian Regimes

This article is correct to state: “But if human rights concerns aren’t enough to move U.S. policymakers, there’s another reason to act: Exporting surveillance equipment enables digital authoritarianism and hurts U.S. national interests.” However, TN readers know that Technocrats don’t care about U.S. national interests! ⁃ TN Editor

NSO Group, an Israeli cyberintelligence firm, makes spyware that it sells to a variety of government clients around the world. It has denied that those surveillance products were involved in the torture and murder of Washington Post journalist Jamal Khashoggi, although it has neither confirmed nor denied selling its products to the Saudi government — elements of which, the CIA has concluded, ordered the killing.

That may raise eyebrows, but this intermingling of privately sold technology and authoritarian regimes is hardly an outlier. Throughout the world, despots are also probably monitoring Internet traffic, communications and behavior — in many cases using surveillance technology supplied by U.S. and other Western companies.

Take, for instance, recent reporting: The U.S. firm Gatekeeper Intelligent Security sold facial-recognition technology to the Saudi government. The system identifies the faces of drivers and passengers in cars, even with blacked-out or tinted windows. The technology has also been sold to regimes in the United Arab Emirates, and “when combined with facial recognition and number-plate readers,” Forbes wrote, “it’s designed to help authorities track individuals of interest.” This is only the latest in reports about Western firms selling surveillance technology to authoritarian regimes.

From facial recognition software to GPS trackers to computer hacking tools to systems that monitor and redirect flows of Internet traffic, contemporary surveillance technologies enable “high levels of social control at a reasonable cost,” as Nicholas Wright puts it in Foreign Affairs. But these technologies don’t just aid and enable what Wright and other policy analysts have called “digital authoritarianism.” They also promote a sovereign and controlled model of the Internet, one characterized by frequent censorship, pervasive surveillance and tight control by the state. The United States could be a world leader in preventing the spread of this Internet model, but to do so, we must reevaluate the role U.S. companies play in contributing to it.

One way to address the spread of these tools head on is the use of export controls. Such policies have been in the news more than usual recently, not least because the Trump administration has pushed to tighten regulations on American export of emerging technologies such as the chips used in supercomputers that develop artificial intelligence. The administration’s proposed controls would place new limits on what kinds of technology can be sold and to whom. But when it comes to preventing export of surveillance technology to human rights abusers, the United States lags behind, particularly when it comes to Internet-based surveillance equipment.

Initial movement to prevent the spread of this type of surveillance equipment came in 2013 through the Wassenaar Arrangement, a 41-member multilateral arms-control agreement in which the United States participates. The primary goal of the Wassenaar Arrangement was and still is to limit the sale and trafficking of dual-use technologies — those that could have both a civilian and a military use. For instance, network penetration software — digital tools used to break into a wireless or physical network — is used by security researchers to probe for vulnerabilities just as it’s used by governments and militaries to intercept enemy communications. The Wassenaar Arrangement is not a treaty and therefore lacks binding power, but member states agree to establish and enforce export controls on items on the arrangement’s control list, which is updated every December.

One of the December 2013 additions to the control list was “IP network communications surveillance systems.” These are systems that classify, collect and can inspect all the digital traffic flowing through a network — what a hacker might use to intercept your email login at a coffee shop, or what a government might use to track the online activities of activists, at scale. Governments entered into negotiations on the Wassenaar Arrangement with a clearly defined human rights goal in mind: preventing despots and bad actors from obtaining technology that they could use to commit abuses domestically. Most Wassenaar participants, including every country in the European Union, have restricted the distribution of this technology. The United States, on the other hand, has not.

The sale of technologies such as spyware and facial-recognition systems to human rights abusers — which Wassenaar ventured to stop — enables insidious social control and encroachment on basic civil liberties. But if human rights concerns aren’t enough to move U.S. policymakers, there’s another reason to act: Exporting surveillance equipment enables digital authoritarianism and hurts U.S. national interests.

Read full story here…

Judge: Cops Can’t Force You To Unlock iPhone With Finger or Face

Although this ruling comes from a lower court, it sets a precedent for privacy that is urgently overdue now that fingerprint and facial scanning are common unlock methods on smartphones. Police will now have to secure warrants before searching a suspect’s phone. ⁃ TN Editor

A California judge has ruled that American cops can’t force people to unlock a mobile phone with their face or finger. The ruling goes further to protect people’s private lives from government searches than any before and is being hailed as a potentially landmark decision.

Previously, U.S. judges had ruled that police were allowed to force-unlock devices like Apple’s iPhone with biometrics, such as fingerprints, faces or irises. That was despite the fact that feds weren’t permitted to force a suspect to divulge a passcode. But according to a ruling uncovered by Forbes, all logins are equal.

The order came from the U.S. District Court for the Northern District of California in the denial of a search warrant for an unspecified property in Oakland. The warrant was filed as part of an investigation into a Facebook extortion crime, in which a victim was asked to pay up or have an “embarrassing” video of them publicly released. The cops had some suspects in mind and wanted to raid their property. In doing so, the feds also wanted to open up any phone on the premises via facial recognition, a fingerprint or an iris.

While the judge agreed that investigators had shown probable cause to search the property, they didn’t have the right to open all devices inside by forcing unlocks with biometric features.

On the one hand, magistrate judge Kandis Westmore ruled the request was “overbroad” as it was “neither limited to a particular person nor a particular device.”

But in a more significant part of the ruling, Judge Westmore declared that the government did not have the right, even with a warrant, to force suspects to incriminate themselves by unlocking their devices with their biological features. Previously, courts had decided biometric features, unlike passcodes, were not “testimonial.” That was because a suspect would have to willingly and verbally give up a passcode, which is not the case with biometrics. A password was therefore deemed testimony, but body parts were not, and so not granted Fifth Amendment protections against self-incrimination.

That created a paradox: How could a passcode be treated differently to a finger or face, when any of the three could be used to unlock a device and expose a user’s private life?

And that’s just what Westmore focused on in her ruling. Declaring that “technology is outpacing the law,” the judge wrote that fingerprints and face scans were not the same as “physical evidence” when considered in a context where those body features would be used to unlock a phone.

“If a person cannot be compelled to provide a passcode because it is a testimonial communication, a person cannot be compelled to provide one’s finger, thumb, iris, face, or other biometric feature to unlock that same device,” the judge wrote.

“The undersigned finds that a biometric feature is analogous to the nonverbal, physiological responses elicited during a polygraph test, which are used to determine guilt or innocence, and are considered testimonial.”

There were other ways the government could get access to relevant data in the Facebook extortion case “that do not trample on the Fifth Amendment,” Westmore added. They could, for instance, ask Facebook to provide Messenger communications, she suggested. Facebook has been willing to hand over such messages in a significant number of previous cases Forbes has reviewed.

Read full story here…

Police Now Have AI Tool To Catch Liars

Local police are supposed to be under the control of local citizens. In the past 20 years, police departments have been federalized and militarized throughout the world. Society is looking more like Brave New World every day. ⁃ TN Editor

British scientists have developed a new computer programme that can spot if someone has lied to police about being robbed.

The groundbreaking software analyses the wording of a victim’s statement in order to identify telltale signs of fake reports.

Spanish police, who have been using the tool, found it was successful in more than 80 per cent of cases, helping them to identify 64 false reports in just one week.

Developed by experts at Cardiff University, VeriPol uses a combination of automatic text analysis and artificial intelligence to recognise when somebody has been lying or exaggerating to the police.

Thousands of false reports are submitted to the police each year with many perpetrators hoping to receive inflated insurance payouts or claims for crimes that never happened in the first place.

But according to the researchers there are certain tell-tale signs contained in written statements which can be quickly identified by the new software.

Using algorithms, the machine is able to carefully analyse various features in the text, such as adjectives, acronyms, verbs, nouns, punctuation marks and numbers.

Experts claim a false statement is more likely to contain certain traits and giveaway signs that can be spotted using artificial intelligence.

It is thought false statements are more likely to be shorter than genuine ones and focus on the details of the stolen property rather than the incident itself.

They are also thought to be lacking in certain details about the perpetrator and the attack itself.
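The surface-level text analysis described above can be sketched in a few lines of code. The features and weights below are invented purely for illustration; VeriPol’s actual model was trained on real labelled police reports, and its internals are not reproduced here.

```python
import re

def extract_features(statement: str) -> dict:
    """Count simple surface features of a written statement --
    the sort of signals (word counts, numbers, punctuation)
    the article says the software examines."""
    words = re.findall(r"[A-Za-z']+", statement)
    return {
        "n_words": len(words),
        "n_digits": len(re.findall(r"\d", statement)),
        "n_punct": len(re.findall(r"[.,;:!?]", statement)),
        "n_short_words": sum(1 for w in words if len(w) <= 3),
    }

def suspicion_score(features: dict) -> float:
    """Toy linear score with made-up weights: very short statements
    that dwell on itemized property (lots of numbers) score higher.
    A real system learns such weights from labelled reports."""
    score = 0.0
    if features["n_words"] < 40:          # suspiciously short
        score += 0.5
    score += 0.05 * features["n_digits"]  # prices, model numbers
    return min(score, 1.0)

report = "My phone, an XPhone 11, worth 900 euros, was stolen."
feats = extract_features(report)
print(feats["n_words"], round(suspicion_score(feats), 2))  # 8 0.75
```

A short, property-focused claim like the one above scores high on this toy scale, mirroring the traits the researchers describe; a genuine classifier would combine many more features and a threshold learned from data.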

The team behind the technology believe it could help police forces save valuable time and effort in investigating fake crimes and will also help deter people from filing false reports in the first place.

Read full story here…

FBI Seeks ‘Rapid DNA’ Database To Check Perps

The barriers to ubiquitous DNA scanning and identification are being rapidly broken down as law enforcement adopts new devices being sold without regulations attached. Who will have access to the results and what will they do with them? ⁃ TN Editor

Though DNA has revolutionized modern crime fighting, the clues it may hold aren’t revealed quickly. Samples of saliva, or skin, or semen are sent to a crime lab by car (or mail), and then chemists get to work. Detectives are accustomed to waiting days or weeks, or more, for the results. Some labs are so backed up, they only take the most serious crimes. Some samples never get tested.

But a portable machine about the size of a large desktop printer is changing that. A “Rapid DNA” machine can take a swab of DNA, analyze it and produce a profile of 20 specific loci on the DNA strand in less than two hours. Some local police departments and prosecutors have been using Rapid DNA machines for about five years to solve crimes.
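Conceptually, a 20-locus profile is just a small table of allele pairs, and a match check is a comparison over that table. The sketch below is a simplified illustration, not the FBI’s actual CODIS search logic, which also handles partial, degraded and mixed profiles; the locus names are real STR loci, but the numbers are invented.

```python
# A profile maps each STR locus name to the pair of allele
# repeat counts observed at that locus.
Profile = dict[str, tuple[int, int]]

def profiles_match(a: Profile, b: Profile, min_shared: int = 20) -> bool:
    """Match when the two profiles agree at every locus typed in
    both, and at least `min_shared` loci were compared (20 for a
    full modern panel)."""
    shared = set(a) & set(b)
    if len(shared) < min_shared:
        return False
    return all(sorted(a[locus]) == sorted(b[locus]) for locus in shared)

# Two-locus toy example; allele order within a locus is irrelevant.
suspect = {"D8S1179": (12, 13), "TH01": (6, 9)}
scene   = {"D8S1179": (13, 12), "TH01": (6, 9)}
print(profiles_match(suspect, scene, min_shared=2))  # True
```

The speed gain of a Rapid DNA machine is in the chemistry and electrophoresis, not this comparison step, which is computationally trivial once the profile exists.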

In Orange County, California, recently, police investigating a stabbing found a trail of blood from the suspect. The Rapid DNA machine was able to produce a profile that matched someone already in the Orange County database, but who was “not on the radar” of investigators, Assistant District Attorney Jennifer Contini said. He was arrested. “The speed with which you can give law enforcement these clues is critical,” Contini said. “When you are out on these suspects fast, they confess. We’ve had tremendous success.”

And last month, one of the two manufacturers of Rapid DNA machines, ANDE of Waltham, Massachusetts, shipped six of the machines to California for use in trying to identify victims of the massive wildfires there, using DNA from family members to create a temporary searchable database.

However, the machines currently aren’t connected to CODIS, the FBI’s combined national DNA database. So the FBI is launching a Rapid DNA initiative to place the machines in police and sheriffs’ booking stations around the country, hoping to enable law enforcement to check arrestees against the CODIS database and, when a match is made to DNA from an unsolved crime, capture the person before they’re released.

In testifying to Congress about the Rapid DNA network in 2015, then-FBI Director James Comey said the technology “would help us change the world in a very, very exciting way.” Comey said it would allow “booking stations around the country, if someone’s arrested, to know instantly — or near instantly — whether that person is the rapist who’s been on the loose in a particular community before they’re released on bail and get away or to clear somebody, to show that they’re not the person.”

Thirty states and the federal government currently allow DNA to be taken at the time of arrest. Sixteen states allow it to be analyzed immediately, and in the other 14 states, DNA may be taken at arrest but not analyzed until after arraignment on charges. The FBI expects that a Rapid DNA network will not only enable more identifications of crime suspects, but also drastically reduce the time investigators spend waiting for DNA results, and lessen the burden on crime labs.

Congress approved legislation authorizing the Rapid DNA network last year, and the FBI plans to slowly roll it out beginning next year. “Our goal in 2019,” said Thomas Callaghan, chief biometric scientist for the FBI Laboratory, “is to be able to have a pilot project done where we actually develop a DNA profile in a booking station, with no human review, and have it electronically enrolled and searched in the national database. We have to ensure that the quality that’s done in a lab can be done in a booking station.” Booking stations are often jails, where fingerprints and mugshots are usually taken.

The FBI program will not allow the submission of unknown crime scene samples from the Rapid DNA machines to the CODIS database. “The machines were initially developed,” Callaghan said, “for large amounts of DNA from a single person, soon after it’s collected.” Crime scene DNA could have a mixture of DNA from different sources, or be contaminated by its surroundings, and the machines have not proved “robust enough to handle crime scene samples,” Callaghan said, so the Rapid DNA submissions to CODIS may only come from known individuals.

Read full story here…

Technocracy In China Completely Intolerant Of Christian Churches

Scientism believes that truth is only discovered through scientific inquiry, not by faith or religious books. Thus, there is zero tolerance for practicing Christianity and Christians are branded as being anti-government. ⁃ TN Editor

About 100 worshipers at an unofficial church in southwestern China were snatched from their homes or from the streets in coordinated raids which began on Sunday evening.

Australia Sets Legal Precedent For Cyber Snooping

Governments are cracking down on cyber-security and privacy by demanding access to all electronic devices at will. There is no clear definition of who is a “suspected” terrorist or criminal. ⁃ TN Editor

Australia Thursday passed controversial laws allowing spies and police to snoop on the encrypted communications of suspected terrorists and criminals, as experts warned the “unprecedented powers” had far-reaching implications for global cybersecurity.

There has been extensive debate about the laws and their reach beyond Australia’s shores in what is seen as the latest salvo between global governments and tech firms over national security and privacy.

Under the legislation, Canberra can compel local and international providers — including overseas communication giants such as Facebook and WhatsApp — to remove electronic protections, conceal covert operations by government agencies, and help with access to devices or services.

Australian authorities can also require that those demands be kept secret.

The conservative government had pushed for the bill to be passed before parliament rises for the year this week, saying the new powers were needed to thwart terror attacks during the festive period.

A last-minute deal was struck with the opposition Labor Party over its demands for more oversight and safeguards when the laws are used, with a review of the legislation to take place in 18 months.

The government also agreed to consider further amendments to the bill early next year.

National cyber security adviser Alastair MacGibbon said police have been “going blind or going deaf because of encryption” used by suspects.

Brushing off warnings from tech giants that the laws would undermine internet security, MacGibbon said they would be similar to traditional telecommunications intercepts, just updated to take in modern technologies.

Global communications firms, including Google and Twitter, have repeatedly said the legislation would force them to create vulnerabilities in their products, such as by decrypting messages on apps, which could then be exploited by bad actors.

A central protection in the laws to block authorities from forcing companies to build a “systemic weakness” into their product remains poorly defined, critics say.

The Law Council of Australia, the peak body for the legal profession, said it had “serious concerns” about the changes.

“We now have a situation where unprecedented powers to access encrypted communications are now law, even though parliament knows serious problems exist,” it said in a statement.

Experts such as the UN special rapporteur on the right to privacy Joseph Cannataci have described the bill as “poorly conceived” and “equally as likely to endanger security as not”.

Read full story here…

Meet IARPA: Intelligence Advanced Research Projects Activity

The Pentagon has its Technocrat scientist operation in DARPA, the Defense Advanced Research Projects Agency. The intelligence community has its own version in IARPA, the Intelligence Advanced Research Projects Activity.

Imagine having a beefy budget, no project restraints, and the freedom to invent anything your mind can brainstorm into existence. The only requirement is that it has to somehow support the various Intel agencies that sit underneath the Office of the Director of National Intelligence (ODNI); IARPA reports directly to the Director of National Intelligence.

 IARPA’s stated mission is to “Lead and support Intelligence Community integration; delivering insights, driving capabilities, and investing in the future.” Sounds simple enough. Then it gets interesting:

  • We innovate, imagine, seek out, and pursue new possibilities
  • We speak truth to power
  • We consider all aspects of risk
  • We anticipate, embrace, and drive change

IARPA claims that they do not implement the things they invent, which effectively removes them from responsibility for their actions and inventions. It’s a Technocrat’s dream come true.

The ODNI was created by President George Bush in February 2005 in order to consolidate all 17 Intel agencies underneath a central authority and manager. This necessitated a complete restructuring and repurposing of America’s Intelligence apparatus, a feat that had never been done before.  

The person President Bush chose for this reorganization was John Negroponte, a member of the Trilateral Commission. During his two-year tenure, Negroponte created IARPA to be a technological brainstorming arm to invent surveillance and analysis systems for his department.

In January 2009, another Trilateral Commission member, Adm. Dennis Blair, became DNI under Obama.

What type of projects is IARPA working on?

  • Speech recognition (i.e., universal translators, transcriptions)
  • Quantum machine learning
  • Geospatial Intel
  • Artificial Intelligence
  • Event forecasting (i.e., pre-crime)
  • Facial recognition, biometrics
  • Mapping the Brain using AI

For those who think that the National Security Agency and CIA come up with crazy Intel technologies, now you know where most of it comes from. And, don’t forget that the master fingerprint of the whole enterprise belongs to the Trilateral Commission. 

Western Auto Giants Accused Of Spying On Citizens For Chinese Government

Ford, Tesla, Volkswagen, BMW and 200 more carmakers are feeding location data to the Chinese government. That this data would not be used to augment citizen oppression is inconceivable. The fact is, electric autos already transmit location data to the automaker; they are merely turning it over to control-freak Technocrats. Americans should take a very hard look at Tesla and Ford, remembering that Elon Musk’s grandfather was the leader of Technocracy, Inc. in Canada during the 1930s and 1940s. Ford has always leaned toward Technocracy, playing to a key theme in Huxley’s Brave New World, where time was measured as AF, or ‘Anno Ford’. Apparently neither Ford nor Tesla cares that they are helping enslave 20% of the human race in China; if that is true, would they do it to the other 80%? The answer is self-evident! ⁃ TN Editor

Global car giants are feeding real-time information from electric vehicles to China’s ‘big brother’ government, it’s been reported.

The revelations come as President Xi Jinping is accused of stepping up the use of tech to track the movements of his citizens.

More than 200 car makers – including Tesla, Volkswagen, BMW, Ford, Nissan, and Mitsubishi – have been passing on the info.

The Associated Press has revealed they have been sending at least 61 data points to government-backed monitoring platforms.

However, the auto manufacturers say they are merely complying with local laws, which only apply to alternative energy vehicles.

And Chinese officials say the data is only used for analytics to improve public safety, infrastructure planning and prevent fraud.

But critics say the information collected exceeds those goals and could even be used for surveillance purposes.

Under Xi, China has been accused of using tech to “police” any perceived threats to the ruling Communist Party.

Ding Xiaohua, of the Shanghai Electric Vehicle Public Data Collecting, Monitoring and Research Centre, said it is not facilitating state surveillance.

However, data can be shared with police, prosecutors or courts, if a formal request is made.

The centre is registered as a non-profit but is tightly aligned with and funded by the government.

“To speak bluntly, the government doesn’t need to surveil through a platform like ours,” Ding said.

Many vehicles in the US, Europe and Japan already transmit position information back to automakers. They then feed it to tracking apps, maps that pinpoint nearby amenities and the like, but the data stops there.

Read full story here…

Beijing Sets Plan To Monitor Behavior Of Every Resident By 2020

More journalists and news sites are sounding the warning over China’s Orwellian social engineering dictatorship to micromanage every citizen in their country – all 1.4 billion of them! ⁃ TN Editor

Beijing announced an “action plan” this week for monitoring residents’ behavior, adding that the city expects to have its social credit system fully implemented by the end of 2020.

Beijing plans to reward and punish its residents based on data that will be collected from various departments monitoring citizens’ social behavior, according to a detailed “action plan” posted on Monday to the city’s municipal website.

By the beginning of 2020, the announcement declares, China’s capital city will have all residents officially locked into the permanent surveillance program, part of a broader effort to have every Chinese citizen rated on a “social credit system” decreeing what public services a person can use based on their obedience to laws and loyalty to the communist regime.

The government will use the data collected to assess citizens’ behavior to decide if an individual is law-abiding and “trustworthy” to the Communist Party.

Residents who behave properly in the eyes of the Chinese government will receive high credit scores, while residents who misbehave will receive low scores, causing them to lead more difficult lives.

“Efforts will be made to build a market supervision mechanism with corporate credit as the core,” states Beijing’s municipal website, adding that it will explore the implementation of what it calls “the personal integrity project,” which will utilize residents’ credit scores for “market access, public services, tourism,” and “fields such as entrepreneurship and job hunting.”

Higher scores can also open the “green channel,” which will expedite residents’ applications for higher quality “education and medical resources.”

“Those who violate the law and lose trust will pay a heavy price,” adds the government website, stating that it will “improve the blacklist system” and that residents will find themselves “limited everywhere, and difficult to move” if they are deemed untrustworthy by the Communists in control.
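The mechanics being described (high scores unlock the “green channel,” low scores land on a blacklist) amount to a simple tiering function over a score. The cutoff values below are invented for illustration only; the actual rules and thresholds of Beijing’s system have not been published in detail.

```python
def access_tier(score: int) -> str:
    """Toy threshold-based tiering; the cutoffs are hypothetical,
    not the real system's rules."""
    if score >= 700:
        return "green channel"   # expedited education, medical services
    if score < 400:
        return "blacklisted"     # "limited everywhere, difficult to move"
    return "standard"

print(access_tier(720), access_tier(350))  # green channel blacklisted
```

However simple the thresholding, the hard part (and the part critics object to) is the mass data collection across departments that feeds the score in the first place.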

Last month, the Chinese government removed 10,000 social media accounts belonging to its citizens.

Read full story here…

Police Using Mysterious New Tool To Unlock Cellphones

Technocrats have no moral or ethical boundaries when inventing any device to collect more data. There is no probability that they will stop even if there is a specific law in place forbidding the data collection from cellphones. ⁃ TN Editor

Law enforcement agencies are using a mysterious new tool to unlock the cellphones of criminal suspects and access their text messages, emails and voice messages. Some agencies around the country, concerned about security, are not even acknowledging use of the devices. Adding to the secrecy surrounding the new tool, one of the companies that makes and sells the devices to police and government agencies does not display their products on the company website.

Still, the I-Team was able to get access to a device used by the Montgomery County Sheriff’s Office. Isaiah Kellar, a detective in the Special Investigations Unit, was willing to show us how it works.

“Pretty much anything you do on your cellphone, we’re able to look at through our computer software,” Kellar said.

The device is about the size of a small tablet computer with USB connections to plug in a cellphone. Kellar said bypassing the passcode may take hours or days, allowing the phone data to be downloaded.
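The “hours or days” figure is consistent with simple brute-force arithmetic. The attempt rate below is hypothetical; real devices impose escalating delays and lockouts that unlocking tools work around in ways the vendors do not disclose.

```python
def brute_force_hours(digits: int, attempts_per_second: float) -> float:
    """Worst-case hours to try every numeric passcode of the given
    length at the given attempt rate (illustrative arithmetic only)."""
    return (10 ** digits) / attempts_per_second / 3600.0

# A 6-digit passcode at a hypothetical 10 attempts per second:
print(round(brute_force_hours(6, 10), 1), "hours")  # roughly a day
```

At slower (throttled) rates the same arithmetic stretches into days, which matches the timescales the detective describes.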

Dave Langos, Director of the sheriff’s Criminal Intelligence Unit, said the unlocking of the phone can be critical to a criminal investigation, revealing where that phone has been and potentially placing a suspect at the scene of a crime.

“With the proper search warrant and so forth for the phone, the information you retrieve from those is very valuable in solving cases, and that has been proven over and over again,” Langos said.

Two major companies produce devices that can hack passcodes and phone security systems. The Montgomery County Sheriff’s Office uses one from the company Cellebrite. Another firm, Grayshift, has a model called GrayKey.

While the devices are helpful in gathering vital information in police investigations, privacy advocates are concerned about the possibility of abuse by government agencies, employers, or anyone else with access to a hacking device.

Dale Carson, a Florida-based attorney and former FBI agent said Congress should step in and regulate the use of the devices to guard against abuse.

Read full story here…