Police Across The US Are Training Crime-Predicting AIs On Falsified Data

The entire criminal justice system across America is being corrupted by the blatant misuse of AI technology. Police are not acting out of ignorance when they seek the results they want instead of the objective facts of a matter. It is comparable to the falsified science of the global warming community. ⁃ TN Editor

In May of 2010, prompted by a series of high-profile scandals, the mayor of New Orleans asked the US Department of Justice to investigate the city police department (NOPD). Ten months later, the DOJ offered its blistering analysis: during the period of its review from 2005 onwards, the NOPD had repeatedly violated constitutional and federal law.

It used excessive force, and disproportionately against black residents; targeted racial minorities, non-native English speakers, and LGBTQ individuals; and failed to address violence against women. The problems, said assistant attorney general Thomas Perez at the time, were “serious, wide-ranging, systemic and deeply rooted within the culture of the department.”

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.

Predictive policing algorithms are becoming common practice in cities across the US. Though lack of transparency makes exact statistics hard to pin down, PredPol, a leading vendor, boasts that it helps “protect” 1 in 33 Americans. The software is often touted as a way to help thinly stretched police departments make more efficient, data-driven decisions.

But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions they studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study. “If the data itself is incorrect, it will cause more police resources to be focused on the same over-surveilled and often racially targeted communities. So what you’ve done is actually a type of tech-washing where people who use these systems assume that they are somehow more neutral or objective, but in actual fact they have ingrained a form of unconstitutionality or illegality.”

The researchers examined 13 jurisdictions, focusing on those that have used predictive policing systems and been subject to a government-commissioned investigation. The latter requirement ensured that the policing practices had legally verifiable documentation. In nine of the jurisdictions, they found strong evidence that the systems had been trained on “dirty data.”

The problem wasn’t just data skewed by disproportionate targeting of minorities, as in New Orleans. In some cases, police departments had a culture of purposely manipulating or falsifying data under intense political pressure to bring down official crime rates. In New York, for example, in order to artificially deflate crime statistics, precinct commanders regularly asked victims at crime scenes not to file complaints. Some police officers even planted drugs on innocent people to meet their quotas for arrests. In modern-day predictive policing systems, which rely on machine learning to forecast crime, those corrupted data points become legitimate predictors.
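To see how such a feedback loop forms, consider a toy simulation in Python (hypothetical numbers only, not any vendor’s actual algorithm): two areas with identical true crime rates, one of them historically over-policed.

```python
# A toy model (hypothetical numbers, not PredPol's or Palantir's algorithm)
# of the feedback loop: if recorded arrests reflect where police patrol
# rather than where crime occurs, a model trained on those records keeps
# sending patrols back to the same neighborhoods.
true_crime = {"A": 10, "B": 10}   # both areas have identical actual crime
arrests = {"A": 30, "B": 10}      # but area A was historically over-policed

for year in range(5):
    total = sum(arrests.values())
    # "predictive" step: allocate patrols in proportion to past arrests
    patrols = {area: count / total for area, count in arrests.items()}
    for area in arrests:
        # arrests recorded ~ actual crime x patrol presence:
        # you only find what you look for, where you look for it
        arrests[area] += round(true_crime[area] * patrols[area] * 10)
    share_a = arrests["A"] / sum(arrests.values())
    print(f"year {year}: {arrests}, area A's share = {share_a:.0%}")

# Area A keeps 75% of the arrests indefinitely even though true crime is
# 50/50: the model reproduces the historical bias instead of measuring crime.
```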

Read full story here…




Facebook’s Data Scandals Multiply Like Rabbits

Now Facebook has been caught in yet another outrageous data scandal: paying unsuspecting teenagers to install a background app that sucks all of the data out of their phones – messages, emails, browser histories, phone calls, etc. This may be in violation of both Apple and Android app store regulations. ⁃ TN Editor

Desperate for data on its competitors, Facebook has been secretly paying people to install a “Facebook Research” VPN that lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and pays teenagers and adults to download the Research app and give it root access to network traffic, in what may be a violation of Apple policy, so the social network can decrypt and analyze their phone activity, a TechCrunch investigation confirms.

Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits, and it has no plans to stop.

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.

We asked Guardian Mobile Firewall’s security expert Will Strafach to dig into the Facebook Research app, and he told us that “If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.” It’s unclear exactly what data Facebook is concerned with, but it gets nearly limitless access to a user’s device once they install the app.

The strategy shows how far Facebook is willing to go and how much it’s willing to pay to protect its dominance — even at the risk of breaking the rules of Apple’s iOS platform on which it depends. Apple could seek to block Facebook from continuing to distribute its Research app, or even revoke its permission to offer employee-only apps, and the situation could further chill relations between the tech giants. Apple’s Tim Cook has repeatedly criticized Facebook’s data collection practices. Facebook disobeying iOS policies to slurp up more information could become a new talking point. TechCrunch has spoken to Apple and it’s aware of the issue, but the company did not provide a statement before press time.

“The fairly technical sounding ‘install our Root Certificate’ step is appalling,” Strafach tells us. “This hands Facebook continuous access to the most sensitive data about you, and most users are going to be unable to reasonably consent to this regardless of any agreement they sign, because there is no good way to articulate just how much power is handed to Facebook when you do this.”
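To see why, here is a minimal sketch, using the open-source Python `cryptography` package and hypothetical inputs, of what anyone holding an installed root certificate’s private key can do: mint a valid-looking certificate for any website on the fly, so the device trusts the interception proxy as if it were the real site.

```python
# Minimal sketch (assumes the "cryptography" package; the root key and
# certificate objects are hypothetical inputs) of why a user-installed root
# certificate defeats TLS: a proxy holding the root's private key can issue
# a certificate for any hostname, and every TLS client that trusts the root
# will accept it, letting the proxy decrypt the session.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def mint_cert_for(hostname: str, root_cert: x509.Certificate, root_key):
    """Issue a short-lived leaf certificate for `hostname`, signed by the
    root the user was asked to install."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(root_cert.subject)          # chains up to the trusted root
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                       critical=False)
        .sign(root_key, hashes.SHA256())
    )
    return cert, key
```

This is the same mechanism legitimate debugging proxies use; the point is that once the root is trusted, nothing in the TLS handshake distinguishes the proxy from the genuine site.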

Facebook’s surveillance app

Facebook first got into the data-sniffing business when it acquired Onavo for around $120 million in 2013. The VPN app helped users track and minimize their mobile data plan usage, but also gave Facebook deep analytics about what other apps they were using. Internal documents acquired by Charlie Warzel and Ryan Mac of BuzzFeed News reveal that Facebook was able to leverage Onavo to learn that WhatsApp was sending more than twice as many messages per day as Facebook Messenger. Onavo allowed Facebook to spot WhatsApp’s meteoric rise and justify paying $19 billion to buy the chat startup in 2014. WhatsApp has since tripled its user base, demonstrating the power of Onavo’s foresight.

Read full story here…




Russia’s ‘Hi-Tech Robot’ Was An Actor In A Suit

Some magic acts fail miserably, as in the case of this fake robot from Russia. More disturbing is that there was a magic act in the first place, designed by Technocrats to deliberately deceive the population. ⁃ TN Editor

A supposed “hi-tech robot” displayed at a youth robotics forum on Russian state television turned out to be a man in a suit, reports confirmed Wednesday.

Footage broadcast live on Russia-24 showed television anchors praising the ersatz android during coverage of a technology forum held in Yaroslavl, a city about 150 miles north-east of Moscow, boasting that “Robot Boris has already learned to dance and he’s not that bad.”

“It’s entirely possible one of these [students] could dedicate himself to robotics,” one anchor said. “Especially as at the forum they have the opportunity to look at the most modern robots.”

“I know mathematics well but I also want to learn to draw,” Boris responded, before dancing to the Little Big song Skibidi.

Other footage from the event showed Boris taking part in banter with people on stage and performing various dances. His level of sophistication was used to encourage children to develop an interest in robotics and as proof of the country’s technological breakthroughs.

However, many viewers were left unconvinced by the robot, with the Russian website TJournal posing a series of questions about the robot’s performance, including the location of his external sensors and why he made so many “unnecessary movements” while dancing. Later on, some photographs posted on social media even showed the robot’s very visible neckline.

A photograph published by MBKh Media, an anti-Kremlin agency founded by the Vladimir Putin opponent Mikhail Khodorkovsky, appeared to show the man wearing the robot suit ahead of the forum.

As the questions mounted, it emerged that Boris was a £3,000 costume made by the company Show Robots, designed to give the “near total illusion that before you stands a real robot” and equipped with a microphone and a tablet display.

The event’s organizers did not claim that the android was anything other than a man in a suit, according to local reports, so it is not clear why the state television station chose to do so. On Wednesday morning, the television report was removed from Russia-24’s YouTube channel but was uploaded again by early afternoon.

Read full story here…




Tim Ball: Al Gore Just Can’t Escape His Loopy Scientific Ignorance

For the sake of refuting ignorance and purposeful deception, only the facts will suffice. Dr. Tim Ball schools the world on Al Gore’s frenetic claims about loopy and wavy jet streams: alas, it is Gore himself who is loopy. ⁃ TN Editor

Nobody does a better job of proving Mark Twain’s maxim than Al Gore. 

“It’s better to keep your mouth shut and appear stupid than open it and remove all doubt.”

His latest example was reported in an article here at Technocracy News titled,

Al Gore Declares ‘We Have A Global Emergency’ Over Jet Stream ‘Getting Loopier And Wavier’

The only thing getting “Loopier and Wavier” is Gore’s pronouncement. Every time he makes one, he proves how little he knows about climatology and the way the Earth’s atmosphere and weather systems work. The mechanisms he doesn’t know about have been understood for his entire 70-year lifetime.

Just a few minutes with any first-year climatology text would explain it. It would show that he is talking about Rossby Waves in the Circumpolar Vortex (Jet Stream) and that they are normal events. Carl-Gustaf Rossby identified them in the late 1940s. This came after pilots trying to bomb Japan during WWII discovered the Jet Stream: they didn’t think they could get from the Pacific Islands to Tokyo and back. Luckily, they flew one of the first pressurized bombers, the B-29 Superfortress, an aircraft that could reach the level of these high-altitude winds. They flew low one way and high the other to complete a mission that did limited damage but was of enormous consequence: it shook the Japanese people’s belief in their invulnerability and gave a huge boost to Allied morale.

The following is a brief explanation of the atmospheric mechanisms Gore claims are new and unnatural. The atmosphere is divided into two by a boundary that marks the point at which the amount of energy coming in from the Sun equals the amount of energy going back to space (Figure 1). This and the other diagrams were in my university textbook, first published in 1989. This boundary is known as the point of Zero Energy Balance (ZEB). Notice that the pattern differs between the Northern and Southern Hemispheres, primarily because the North is 39% land and 61% ocean, while the South is 19% land and 81% ocean.

The ZEB moves with the seasons, so the boundary moves toward or away from the Equator as the dome of cold air expands and contracts (Figure 2).

Figure 1

Figure 3 shows that, in very simple terms, the Earth’s atmosphere divides into two areas: a dome of cold air over each pole and warm air in between. The boundary between the two is called the Polar Front.

Figure 2

The temperature difference across the Polar Front is the most dramatic of any latitude and is called the Zonal Index. That temperature difference creates the greatest pressure difference between two locations, and that in turn creates the greatest wind speeds. Figure 3 puts these factors together and shows the relationship between the domes of air and the Jet Stream. Originally, and still correctly, the strong wind circling the planet is the Circumpolar Vortex. The portion with the highest wind speeds was called the Jet Stream, but over time that became the name for the entire process.

Figure 3
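The chain from temperature contrast to pressure gradient to wind speed can be written compactly. As a standard textbook relation (stated here for context, not taken from the figures), the geostrophic wind balances the pressure-gradient force against the Coriolis force:

\[
u_g = -\frac{1}{f\rho}\,\frac{\partial p}{\partial y}, \qquad f = 2\Omega\sin\phi
\]

The sharper the north–south pressure gradient \(\partial p/\partial y\) across the Polar Front, the faster the westerly wind \(u_g\) above it; the Jet Stream sits where that gradient is steepest.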

When water or air flows across a uniform medium, it develops a waveform; the medium can be soil, water, or air. When the river of fast-moving air flows around the globe through the atmosphere, these Rossby Waves develop. The historical record shows that two basic patterns dominate.

The first is called Zonal Flow (Figure 4a) and the second Meridional Flow.

The weather patterns associated with each Flow are distinctive. With Zonal Flow, the winds in the middle latitudes (30° to 65°) are from the northwest in winter and the southwest in summer, and temperature and precipitation variations are relatively low. The Wave moves along the Front from west to east, causing the general weather pattern at a station to change every 4 to 6 weeks.

With Meridional Flow, the increase in amplitude allows warm air to push further north and cold air further south. The winds become northerly and southerly, and temperature and precipitation patterns become more extreme. The Waves continue to migrate from west to east, but if the amplitude becomes great enough, the system stalls, creating a block and persistent weather patterns such as drought or heavy rain. Often the weather forecast will mention an “Omega Block,” describing the Greek-letter (Ω) shape of the Polar Front on the weather map (Figure 5).

Figure 5
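Why a large-amplitude wave stalls can be seen in the standard barotropic formula for Rossby wave propagation (again a textbook result, added here for context):

\[
c = U - \frac{\beta}{k^2}, \qquad \beta = \frac{\partial f}{\partial y}
\]

Here \(U\) is the mean westerly flow and \(k\) the wavenumber. Long waves (small \(k\)) drift slowly or even retrogress against the flow; when \(U \approx \beta/k^2\) the wave becomes stationary, and the Omega-shaped block can sit over one region for weeks.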

After 1998, as global temperatures stopped rising, the Wave pattern shifted from Zonal to Meridional. It was accompanied by the kind of increased variability of temperatures and weather that we had seen before. The historical record shows similar periods of varying length related to global cooling.

The pattern of Rossby Waves varies between Zonal and Meridional with one or the other dominating, sometimes for long periods such as during the Medieval Warm Period and the Little Ice Age. Two centuries were particularly distinctive: the 14th and the 17th. Barbara Tuchman documented the pattern of the 14th century and its impact on the human condition in her book “A Distant Mirror,” with the telling subtitle “The Calamitous 14th Century.” Tuchman took as her hero a man born in 1300 who died in 1399, so that his lifespan covered the century.

The century was mostly Meridional, with cool, wet summers and warmer, drier winters, so that seasonal differentiation became blurred. These conditions were ideal for the survival of pathogens, causing crop failures and major plagues. A malnourished people succumbed to them by the millions. The pattern repeated in the 17th century, with plagues and diseases decimating the population. Church records suggest that two-thirds of the Finnish population died in the winter of 1688. Samuel Pepys recorded the consternation in his diaries and noted that the King ordered a holiday on condition that citizens go to church and pray for cold weather to make things healthier again. It didn’t work: the plague struck London in 1665. Fortuitously, another disaster, the Great Fire of London in 1666, helped. It removed the narrow, fetid streets that were ideal for breeding and spreading the plague.

Praying won’t help Al Gore’s cause either. He is trying to resurrect the climate deception in conjunction with the phony IPCC Report. He is also compelled by the fact that the public shows no interest in or concern about his alarmist use of weather and climate for a political agenda; hence his ill-informed “loopier and wavier” comment. You might remember another exploiter of loopy and wavy weather for a political agenda: John Holdren, as Obama’s science advisor, pronounced from the White House the occurrence of a Polar Vortex as if it were something new. He was talking about one Wave of advancing cold air.

The good news for Gore is that his carbon footprint indicates his foot is too big to put in his political mouth.




Dr. Tim Ball On Climate: Lies Wrapped In Deception Smothered With Delusion

Technocrats have darkened hearts just like everyone else, but they soon discovered how to use the mantra of ‘science’ to trick and deceive. In a fashion similar to the 1970s Chiffon margarine ad, “It’s not nice to fool Mother Nature,” Technocrats follow with “97% of scientists agree…”. ⁃ TN Editor

The Washington swamp displayed all its corruption skills – lies, deceptions, misrepresentations, and the deliberate creation of deceit – during the Kavanaugh hearings. The willingness of politicians on the left to destroy everything America stands for was frightening. We watched Senator Blumenthal, who lied about serving in Vietnam when he never left the United States, remind Judge Kavanaugh of the legal maxim “Falsus in Uno, Falsus in Omnibus”: false in one thing, false in everything.

The only difference between these and previous similar tactics was boldness – the left was forced to show their hand more than normal. There are few silver linings to this cloud because if it succeeds, it is the end of America. Everything the left did and said undermines core values of a civilized society, correctly and uniquely identified as American exceptionalism.

One sliver of silver lining is the level of corruption exposed to achieve a political agenda. Now it is easier for people to grasp the extent of corruption behind the greatest deception in history: human-caused global warming (AGW). It is easier now to get them to understand that the left will do anything to achieve their goal. The significant differences between AGW and the Kavanaugh debacle were time and extent. The AGW deception has evolved slowly and insidiously since the late 1960s. It began as the objective of David Rockefeller’s Club of Rome (COR) to control energy and thereby political power. It is just as corrupting and devastating an attack on American exceptionalism, but worse because it is global. The COR say they are

“a group of world citizens, sharing a common concern for the future of humanity.”

Compare this claim with H. L. Mencken’s observation that,

“The urge to save humanity is almost always a false front for the urge to rule.”

America was seen as the greatest threat to their objective, so it became a major target, but it was still only a part of the global control.

COR member Maurice Strong took the urge to rule to the UN where he put it into action. After spending five days with Strong at the UN, Elaine Dewar summarized his goal in her book Cloak of Green.

“Strong was using the U.N. as a platform to sell a global environment crisis and the Global Governance Agenda.”

He did this by creating the political monster known as Agenda 21 and creating the science to support the politics through the Intergovernmental Panel on Climate Change (IPCC). Like all deceptions, there are lies within lies and deceptions within deceptions. Even the selection of terminology and words was deliberately planned to deceive. For example, the Earth’s atmosphere does not work like a greenhouse. The analogy was only valuable because it automatically triggers the concept of heat for the public. The deceivers knew this type of misrepresentation worked because the same people created the term “holes in the ozone.” They knew there were no holes, but the term implied a leak, a break in the atmosphere, with all the Chicken Little, sky-is-falling fears that engenders.

The next example was the word skeptic, which, as Michael Shermer explained:

“Scientists are skeptics. It’s unfortunate that the word ‘skeptic’ has taken on other connotations in the culture involving nihilism and cynicism. Really, in its pure and original meaning, it’s just thoughtful inquiry.”

After 1998, the evidence no longer fit the AGW theory, so by 2004 they changed it from the global warming theory to the climate change theory. They also changed the slur from skeptics to deniers, with its Holocaust connotations. They ignored the fact that these scientists do nothing but educate people about the amount and extent of natural climate change.

The most effective deception was the claim that 97% of scientists agree. It is as false as the rest of the story and was also deliberately created. It was a major part of the confusion created and exploited through differences in the meaning of words between different segments of society. It is why Voltaire said,

“If you wish to converse with me, define your terms.”

That sounds arrogant and condescending, but it is essential for any chance of accurate understanding.

RealClimate was the website created to manipulate the global warming story. Most of the people involved in its creation were members of the Climatic Research Unit (CRU) and the IPCC. The need for a propaganda vehicle was revealed in November 2009 when thousands of emails were leaked (Climategate), exposing their tactics and activities. A book by Mosher and Fuller listed some of them:

  • Actively worked to evade Freedom of Information requests, deleting emails, documents, and even climate data.
  • Tried to corrupt the peer-review principles that are the mainstay of modern science, reviewing each other’s work, sabotaging efforts of opponents trying to publish their own work, and threatening editors of journals who didn’t bow to their demands.
  • Changed the shape of their own data in materials shown to politicians charged with changing the shape of our world.

RealClimate explained on 22 December 2004 why they started to use the word consensus. It illustrates how political it was and how they knew it didn’t apply to science, but the goal was deception.

We’ve used the term “consensus” here a bit recently without ever really defining what we mean by it. In normal practice, there is no great need to define it – no science depends on it. But it’s useful to record the core that most scientists agree on, for public presentation. The consensus that exists is that of the IPCC reports, in particular the working group I report (there are three WG’s. By “IPCC”, people tend to mean WG I).

In short: we agree, therefore there is a consensus.

The academic source of the 97% claim is John Cook et al., 2013, under the title “Quantifying the consensus on anthropogenic global warming in the scientific literature.” Lord Monckton dissected the claim in his comment titled “0.3% consensus, not 97.1%.” He explains how the authors divided 11,944 abstracts of articles into three categories using their own definitions. Monckton wrote:

The authors’ own data file categorized 64 abstracts, or only 0.5% of the sample, as endorsing the consensus hypothesis as thus defined. Inspection shows only 41 of the 64, or 0.3% of the entire sample, actually endorsed their hypothesis.
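The arithmetic in that quotation checks out against the paper’s sample of 11,944 abstracts:

\[
\frac{64}{11{,}944} \approx 0.54\%, \qquad \frac{41}{11{,}944} \approx 0.34\%,
\]

which Monckton rounds to the 0.5% and 0.3% he reports, against the advertised 97.1%.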

The penultimate comment comes from Harvard graduate, medical doctor, and world-famous science fiction writer, Michael Crichton.

“I want to pause here and talk about this notion of consensus, and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had.”

The ultimate comment comes from Albert Einstein.

“No amount of experimentation can ever prove me right: a single experiment can prove me wrong.”

The left was fully revealed in the Kavanaugh debacle. It was so extreme that it made exposure of their methods and tactics clear to people who found them hard to believe. Now, it is easier for them to grasp the AGW deception.




Corrupt Scientists Driven To Resign, but Not Climate Scientists?

Top researchers and scientists are being forced to resign over egregious ethical violations and conflicts of interest, but it appears that climate scientists are still getting a free pass. Climate science has been rife with purposely falsified data and the same kind of ethical breaches as found in this story.  ⁃ TN Editor

Three prominent US scientists have been pushed to resign over the past 10 days after damning revelations about their methods, a sign of greater vigilance and decreasing tolerance for misconduct within the research community.

The most spectacular fall concerned Jose Baselga, chief medical officer at Memorial Sloan Kettering Cancer Center in New York. He authored hundreds of articles on cancer research.

Investigative journalism group ProPublica and The New York Times revealed on September 8 that Baselga failed to disclose in dozens of research articles that he had received millions of dollars from pharmaceutical and medical companies.

Such declarations are generally required by scientific journals.

Links between a doctor leading a clinical trial and manufacturers of drugs or medical equipment used in the study can influence the methodology and ultimately the results.

But journals don’t themselves verify the thoroughness of an author’s declarations.

Caught up in the scandal, Baselga resigned on September 13.

Next came the case of Brian Wansink, director of the Food and Brand Lab at the prestigious Cornell University.

He made his name thanks to studies that garnered plenty of media attention, including on pizza, and the appetites of children.

His troubles began last year when scientific sleuths discovered anomalies and surprisingly positive results in dozens of his articles.

In February, BuzzFeed published messages in which Wansink encouraged a researcher to extract from her data results more likely to go “viral.”

After a yearlong inquiry, Cornell announced on Thursday that Wansink committed “academic misconduct in his research and scholarship,” describing a litany of problems with his results and methods.

He is set to resign at the end of the academic year, but from now on will no longer teach there.

Wansink denied all fraud, but 13 of his articles have already been withdrawn by journals.

In the final case, Gilbert Welch, a professor of public health at Dartmouth College, resigned last week.

The university accused him of plagiarism in an article published in The New England Journal of Medicine, the most respected American medical journal.

“The good news is that we are finally starting to see a lot of these cases become public,” said Ivan Oransky, co-founder of the site Retraction Watch, a project of the Center for Scientific Integrity that keeps tabs on retractions of research articles in thousands of journals.

Oransky told AFP that what has emerged so far is only the tip of the iceberg.

The problem, he said, is that scientists, and supporters of science, have often been unwilling to raise such controversies “because they’re afraid that talking about them will decrease trust in science and that it will aid and abet anti-science forces.”

But silence only encourages bad behavior, he argued. According to Oransky, more transparency will in fact only help the public to better comprehend the scientific process.

“At the end of the day, we need to think about science as a human enterprise, we need to remember that it’s done by humans,” he said. “Let’s remember that humans make mistakes, they cut corners, sometimes worse.”

Read full story here…




Hackers Now Deploying AI To Break The Best Computer Defenses

Technocracy promises Scientific Dictatorship in the end. If IBM now admits that their engineers can do it, you can be certain that U.S. Intelligence agencies have been doing it for some time. Targeted hacking is potentially the most malicious and destructive type of surveillance in history. In the hands of rogue intel, all details are laid bare before them. ⁃ TN Editor

The nightmare scenario for computer security – artificial intelligence programs that can learn how to evade even the best defenses – may already have arrived.

That warning from security researchers is driven home by a team from IBM Corp. who have used the artificial intelligence technique known as machine learning to build hacking programs that could slip past top-tier defensive measures. The group will unveil details of its experiment at the Black Hat security conference in Las Vegas on Wednesday.

State-of-the-art defenses generally rely on examining what the attack software is doing, rather than the more commonplace technique of analyzing software code for danger signs. But the new genre of AI-driven programs can be trained to stay dormant until they reach a very specific target, making them exceptionally hard to stop.

No one has yet boasted of catching any malicious software that clearly relied on machine learning or other variants of artificial intelligence, but that may just be because the attack programs are too good to be caught.

Researchers say that, at best, it’s only a matter of time. Free artificial intelligence building blocks for training programs are readily available from Alphabet Inc’s Google and others, and the ideas work all too well in practice.

“I absolutely do believe we’re going there,” said Jon DiMaggio, a senior threat analyst at cybersecurity firm Symantec Corp. “It’s going to make it a lot harder to detect.”

The most advanced nation-state hackers have already shown that they can build attack programs that activate only when they have reached a target. The best-known example is Stuxnet, which was deployed by U.S. and Israeli intelligence agencies against a uranium enrichment facility in Iran.

The IBM effort, named DeepLocker, showed that a similar level of precision can be available to those with far fewer resources than a national government.

In a demonstration using publicly available photos of a sample target, the team used a hacked version of videoconferencing software that swung into action only when it detected the face of a target.
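One publicly discussed way to build such a trigger, sketched below as an illustration only (not IBM’s actual code, and with a hypothetical face-embedding input), is “target keying”: deriving a decryption key from attributes observed at runtime, so neither the target nor the locked behavior is recoverable by static inspection.

```python
# A simplified sketch of a "target-keyed" trigger (illustrative only; not
# IBM's DeepLocker implementation). The target's attributes are never stored
# in the clear: they are hashed into a key, so inspecting the program reveals
# neither who the target is nor what the locked behavior would be.
import hashlib

def derive_key(embedding: list[float], precision: int = 2) -> bytes:
    """Quantize a (hypothetical) face-embedding vector and hash it into a
    32-byte key. Quantizing makes the derivation tolerant of sensor noise."""
    quantized = ",".join(f"{x:.{precision}f}" for x in embedding)
    return hashlib.sha256(quantized.encode()).digest()

def try_unlock(observed_embedding: list[float], expected_digest: bytes):
    """Return the key only if the observed attributes match the target's.
    Only a hash of the derived key is stored, so the expected embedding
    never appears anywhere in the program."""
    key = derive_key(observed_embedding)
    if hashlib.sha256(key).digest() == expected_digest:
        return key   # correct target seen: locked payload could be decrypted
    return None      # wrong target: nothing to analyze, stay dormant
```

This is why defenses that examine what software is doing see nothing suspicious until the precise target appears.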

“We have a lot of reason to believe this is the next big thing,” said lead IBM researcher Marc Ph. Stoecklin. “This may have happened already, and we will see it two or three years from now.”

Read full story here…




Monsanto Technocrats Bullied Scientists To Hide Glyphosate Cancer Risks

Monsanto knows how to bully anyone who gets in their way, whether farmers, middlemen or even other scientists. They have used the same methodology of corrupting science and data as seen in the global warming agenda. ⁃ TN Editor

A lawyer on Monday argued that Roundup creator Monsanto hid the cancer-causing effects of their weedkiller and bullied scientists into making claims it was safe. In a landmark lawsuit against the global chemical corporation, the lawyer didn’t hold back in his accusations against Monsanto.

“Monsanto has specifically gone out of its way to bully … and to fight independent researchers,” said the attorney Brent Wisner, who presented internal Monsanto emails that he said showed how the agrochemical company rejected critical research and expert warnings over the years while pursuing and helping to write favorable analyses of their products. “They fought science,” added Wisner, who is representing Dewayne Johnson. Johnson alleges Monsanto is to blame for the cancer that has been aggressively spreading throughout his entire body.

According to The Guardian, Johnson (also known as Lee) is a father of three and a former school groundskeeper, who doctors say may have just mere months to live. He is the first person to take Monsanto to trial over allegations that the chemical sold under the Roundup brand is linked to cancer, although thousands have made similar legal claims across the United States. This lawsuit focuses on the chemical glyphosate, the world’s most widely used herbicide, which Monsanto began marketing as Roundup in 1974. The company began by presenting it as a “technological breakthrough” that could kill almost every weed without harming humans or the environment.

Monsanto lawyer George Lombardi alleged that the body of research over the past decades was on the company’s side. “The scientific evidence is overwhelming that glyphosate-based products do not cause cancer and did not cause Mr. Johnson’s cancer,” Lombardi claimed in his opening statements.

Unfortunately for Lombardi, many studies suggest his statements are fallacious. There is a mountain of scientific data working against Monsanto, including a 2015 declaration by the World Health Organization’s International Agency for Research on Cancer (IARC), which classified glyphosate as “probably carcinogenic to humans.” Carcinogenic means “cancer-causing.”

Johnson worked as a groundskeeper for the school district in Benicia, just north of San Francisco, California. He was responsible for applying Roundup, Monsanto’s glyphosate weedkiller, to the grounds. According to The Guardian, lawyers for Johnson showed the jury photos of lesions and rashes on Johnson’s skin after he was regularly exposed to the chemical. Johnson was eventually diagnosed with non-Hodgkin lymphoma (NHL) in 2014, at the age of 42. “The simple fact is he’s going to die. It’s just a matter of time,” Wisner said in court. “Between now and then, it’s just nothing but pain.”

A strategic corporate document also revealed Monsanto’s public relations plan to “orchestrate outcry” in advance of the IARC glyphosate classification, Wisner told the jury.

Wisner further cited Monsanto emails from decades prior, in which the company was working with a genotoxicity expert who reviewed a series of 1990s studies. He raised concerns about Roundup impacts on humans and suggested further areas of research. After the expert’s analyses, Monsanto representatives began considering finding a different expert and also started working on a press statement saying the product carried no risk, according to Johnson’s lawyer.

Wisner also read documents that he said showed how Monsanto strategized plans to “ghostwrite” favorable research. –The Guardian

The lawyer for Monsanto disputed the claims, saying Wisner was “cherry-picking” studies in favor of his client. Regardless of the outcome, however, Wisner said, “so much of what Monsanto has worked to keep secret is coming out.” Hopefully, the public will soon know just how dangerous glyphosate can be, so that people can be effectively warned before using it.

Read full story here…




Scientists: Humans Cause 1000x Extinction Rate Of Animals, Plants

Activist scientists who make sweeping accusations against humans for being the cause of mass extinctions have no proof to back up their assertions, but rather speculate from behind their PhD degrees and call it ‘science’. Real science is observable and repeatable; speculation is not. ⁃ TN Editor

When Sudan the white rhino was put down by his carers earlier this year, it sealed the fate of one of the savannah’s most iconic subspecies.

Despite decades of effort from conservationists, including a fake Tinder profile for the animal dubbed ‘the most eligible bachelor in the world’, Sudan proved an unwilling mate and died – the last male of his kind.

His daughter and granddaughter remain – but, barring some miraculously successful IVF, it is only a matter of time.

The northern white rhino will surely be mourned, as would other stalwarts of picture books, documentaries and soft toy collections.

But what about species of which we are less fond – or perhaps even entirely unaware?

Would we grieve for obscure frogs, bothersome beetles or unsightly fungi?

Extinction is, after all, inevitable in the natural world – some have even called it the ‘engine of evolution’. So should extinction matter to us?

First of all, there are strong practical arguments against biodiversity loss.

Variation, from individual genes to species, gives ecosystems resilience in the face of change.

Ecosystems, in turn, hold the planet steady and provide services essential to human welfare.

Forests and wetlands prevent pollutants entering our water supplies, mangroves provide coastal defence by reducing storm surges, and green spaces in urban areas lower city-dwellers’ rates of mental illness.

A continued loss of biodiversity will disrupt these services even further.

Seen in this light, the environmental damage caused by resource extraction and the vast changes that humans have wrought on the landscape seem extremely high risk.

The world has never before experienced these disturbances all at the same time, and it is quite a gamble to assume that we can so damage our planet while at the same time maintaining the seven billion humans that live on it.

Although the unregulated plundering of the Earth’s natural resources should certainly worry those brave enough to examine the evidence, it is worth specifying that extinction is an issue in its own right.

Some environmental damage can be reversed, some failing ecosystems can be revived. Extinction is irrevocably final.

Uneven losses

Studies of threatened species indicate that, by looking at their characteristics, we can predict how likely a species is to become extinct.

Animals with larger bodies, for example, are more extinction-prone than those of smaller stature – and the same holds true for species at the top of the food chain.

For plants, growing epiphytically (on another plant but not as a parasite) leaves them at greater risk, as does being late blooming.

This means that extinction does not occur randomly across an ecosystem, but disproportionately affects similar species that perform similar functions.

Given that ecosystems rely on particular groups of organisms for particular roles, such as pollination or seed dispersal, the loss of one such group could cause considerable disruption.

Imagine a disease that only killed medical professionals – it would be far more devastating for society than one which killed similar numbers of people at random.

This non-random pattern extends to the evolutionary ‘tree-of-life’.

Some closely related groups of species are restricted to the same threatened locations (such as lemurs in Madagascar) or share vulnerable characteristics (such as carnivores), meaning that the evolutionary tree could lose entire branches rather than an even scattering of leaves.

Read full story here…




Beware: Researcher Says Most Scientific Studies Are Wrong

The glaring fallacy in Technocracy is that the data are bent to justify its theories. This crept in when early Technocrats realized that their ‘Science of Social Engineering’ could not be supported with legitimate data. Today’s news plays out like the childhood game ‘Simon Says,’ but with ‘Science Says’ substituted. ⁃ TN Editor

A few years ago, two researchers took the 50 most-used ingredients in a cookbook and studied how many had been linked with a cancer risk or benefit, based on a variety of studies published in scientific journals.

The result? Forty out of 50, including salt, flour, parsley and sugar. “Is everything we eat associated with cancer?” the researchers wondered in a 2013 article based on their findings.

Their investigation touched on a known but persistent problem in the research world: too few studies have large enough samples to support generalized conclusions.

But pressure on researchers, competition between journals, and the media’s insatiable appetite for new studies announcing revolutionary breakthroughs have meant that such articles continue to be published.

“The majority of papers that get published, even in serious journals, are pretty sloppy,” said John Ioannidis, professor of medicine at Stanford University, who specializes in the study of scientific studies.

This sworn enemy of bad research published a widely cited article in 2005 entitled: “Why Most Published Research Findings Are False.”

Since then, he says, only limited progress has been made.

Some journals now insist that authors pre-register their research protocol and supply their raw data, which makes it harder for researchers to manipulate findings in order to reach a certain conclusion. It also allows others to verify or replicate their studies.

This matters because when studies are replicated, they rarely come up with the same results. Only a third of the 100 studies published in three top psychology journals could be successfully replicated in a large 2015 test.

Medicine, epidemiology, population science and nutritional studies fare no better, Ioannidis said, when attempts are made to replicate them.

“Across biomedical science and beyond, scientists do not get trained sufficiently on statistics and on methodology,” Ioannidis said.

Too many studies are based solely on a few individuals, making it difficult to draw wider conclusions because the samplings have so little hope of being representative.

“Diet is one of the most horrible areas of biomedical investigation,” professor Ioannidis added — and not just due to conflicts of interest with various food industries.

“Measuring diet is extremely difficult,” he stressed. How can we precisely quantify what people eat?

In this field, researchers often go in wild search of correlations within huge databases, without so much as a starting hypothesis.
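A small simulation (hypothetical data; assumes the numpy and scipy packages) shows why hypothesis-free correlation mining reliably “finds” something: testing many unrelated variables at the conventional p < 0.05 threshold guarantees roughly 5% false positives.

```python
# Simulate hypothesis-free mining: 200 random "ingredients" tested against
# a health outcome that is pure noise. At p < 0.05 we expect ~5% (about 10)
# spurious "significant" links, any of which could become a headline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_foods = 50, 200           # small sample, many candidate exposures
outcome = rng.normal(size=n_people)   # health outcome: pure noise

false_hits = sum(
    stats.pearsonr(rng.normal(size=n_people), outcome)[1] < 0.05
    for _ in range(n_foods)
)
print(f"{false_hits} of {n_foods} unrelated foods look 'significant'")
```

With a larger sample the spurious correlations do not disappear; only pre-registered hypotheses and replication filter them out.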

Even when the methodology is good, with the gold standard being a study where participants are chosen at random, the execution can fall short.

A famous 2013 study on the benefits of the Mediterranean diet against heart disease had to be retracted in June by the most prestigious of medical journals, the New England Journal of Medicine, because not all participants were randomly recruited; the results have been revised downwards.

So what should we take away from the flood of studies published every day?

Ioannidis recommends asking the following questions: is this something that has been seen just once, or in multiple studies? Is it a small or a large study? Is this a randomized experiment? Who funded it? Are the researchers transparent?

These precautions are fundamental in medicine, where bad studies have contributed to the adoption of treatments that are at best ineffective, and at worst harmful.

In their book “Ending Medical Reversal,” Vinayak Prasad and Adam Cifu offer terrifying examples of practices adopted on the basis of studies that went on to be invalidated, such as opening a brain artery with stents to reduce the risk of a new stroke.

It was only after 10 years that a robust, randomized study showed that the practice actually increased the risk of stroke.

The solution lies in the collective tightening of standards by all players in the research world: not just journals but also universities and public funding agencies. But these institutions all operate in competitive environments.

Read full story here…