Facial Recognition Algorithm Caused Wrongful Arrest

This story reveals why Amazon, IBM and Microsoft have pulled out of the facial recognition business in order to deflect certain criticism over racial bias. In this instance, the algorithm nailed the wrong black man for a crime he did not commit. ⁃ TN Editor

On a Thursday afternoon in January, Robert Julian-Borchak Williams was in his office at an automotive supply company when he got a call from the Detroit Police Department telling him to come to the station to be arrested. He thought at first that it was a prank.

An hour later, when he pulled into his driveway in a quiet subdivision in Farmington Hills, Mich., a police car pulled up behind, blocking him in. Two officers got out and handcuffed Mr. Williams on his front lawn, in front of his wife and two young daughters, who were distraught. The police wouldn’t say why he was being arrested, only showing him a piece of paper with his photo and the words “felony warrant” and “larceny.”

His wife, Melissa, asked where he was being taken. “Google it,” she recalls an officer replying.

The police drove Mr. Williams to a detention center. He had his mug shot, fingerprints and DNA taken, and was held overnight. Around noon on Friday, two detectives took him to an interrogation room and placed three pieces of paper on the table, face down.

“When’s the last time you went to a Shinola store?” one of the detectives asked, in Mr. Williams’s recollection. Shinola is an upscale boutique that sells watches, bicycles and leather goods in the trendy Midtown neighborhood of Detroit. Mr. Williams said he and his wife had checked it out when the store first opened in 2014.

The detective turned over the first piece of paper. It was a still image from a surveillance video, showing a heavyset man, dressed in black and wearing a red St. Louis Cardinals cap, standing in front of a watch display. Five timepieces, worth $3,800, were shoplifted.

“Is this you?” asked the detective.

The second piece of paper was a close-up. The photo was blurry, but it was clearly not Mr. Williams. He picked up the image and held it next to his face.

“No, this is not me,” Mr. Williams said. “You think all Black men look alike?”

Mr. Williams knew that he had not committed the crime in question. What he could not have known, as he sat in the interrogation room, is that his case may be the first known account of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm, according to experts on technology and the law.

A faulty system

A nationwide debate is raging about racism in law enforcement. Across the country, millions are protesting not just the actions of individual officers, but bias in the systems used to surveil communities and identify people for prosecution.

Facial recognition systems have been used by police forces for more than two decades. Recent studies by M.I.T. and the National Institute of Standards and Technology, or NIST, have found that while the technology works relatively well on white men, the results are less accurate for other demographics, in part because of a lack of diversity in the images used to develop the underlying databases.

Last year, during a public hearing about the use of facial recognition in Detroit, an assistant police chief was among those who raised concerns. “On the question of false positives — that is absolutely factual, and it’s well-documented,” James White said. “So that concerns me as an African-American male.”

This month, Amazon, Microsoft and IBM announced they would stop or pause their facial recognition offerings for law enforcement. The gestures were largely symbolic, given that the companies are not big players in the industry. The technology police departments use is supplied by companies that aren’t household names, such as Vigilant Solutions, Cognitec, NEC, Rank One Computing and Clearview AI.

Read full story here…

Neil Ferguson’s Computer Model Is Ripped To Shreds

Professor Neil Ferguson of Imperial College in London started the Great Panic of 2020 with a thoroughly flawed computer model that was thoroughly unfit for scientific use. Was he instead virtue-signalling to his radical left-wing married lover? ⁃ TN Editor

A LOT of attention has been given to Professor Neil Ferguson’s dubious track record on epidemics and his equally dubious judgment in meeting his lover on at least two occasions during the lockdown introduced on his own advice.

On closer examination, there are two especially significant elements crying out for independent investigation: the quality and reliability of Ferguson’s computer model, and the political affiliations of his lover.

Firstly, the computer model. The source code behind the Ferguson model has finally been made available to the public via the GitHub website. Mark E Jeftovic, on his Axis of Easy website, says: ‘A code review has been undertaken by an anonymous ex-Google software engineer here, who tells us the GitHub repository code has been heavily massaged by Microsoft engineers, and others, in an effort to whip the code into shape to safely expose it to the public. Alas, they seem to have failed and numerous flaws and bugs from the original software persist in the released version. Requests for the unedited version of the original code behind the model have gone unanswered.’

Jeftovic believes the most worrisome outcome of the model review is that the code produces ‘non-deterministic outputs’. This means that owing to bugs, the code can produce very different results given identical inputs, making the code unsuitable for scientific purposes. Jeftovic says the documentation provided wants the reader to accept that given a ‘starting seed’, the model will always produce the same results. ‘Investigation reveals the truth: the code produces critically different results, even for identical starting seeds and parameters.’
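The determinism property Jeftovic describes is easy to state in code. The sketch below is a hypothetical stand-in, not the Imperial model: `run_sim` is an invented toy simulation, and `is_deterministic` is the check the code review says the real model fails, namely that identical starting seeds and parameters must always yield identical outputs.

```python
import random

def run_sim(seed, steps=1000):
    # Toy stand-in for an epidemic simulation (hypothetical, not Ferguson's code).
    rng = random.Random(seed)          # all randomness flows from one seeded RNG
    infected = 1.0
    for _ in range(steps):
        infected *= 1.0 + rng.uniform(-0.01, 0.03)
    return infected

def is_deterministic(model, seed, runs=3):
    # The property under dispute: same seed and parameters -> same output.
    results = [model(seed) for _ in range(runs)]
    return all(r == results[0] for r in results)

print(is_deterministic(run_sim, seed=42))  # True for a properly seeded serial model
```

A model that draws randomness from unseeded sources, or that combines floating-point contributions from threads in a varying order, would fail this check even with a fixed starting seed; that is the failure mode the review describes.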

He says that a team even found that the output varied depending on which type of computer it was run on.

Jeftovic cannot understand why the Imperial team failed to realise that their software was so flawed. He quotes the usual computing adage ‘Garbage In/Garbage Out’ which the untrained reader may think is what is being asserted in the code review. In fact, he says, ‘it’s not. What’s being asserted is that output is garbage, regardless of the input. In this case, the output we’re experiencing as a result is a worldwide lockdown and shutdown of the global economy, and we don’t really know if this was necessary or not, because we have no actual data (aside from Sweden) and severely flawed models.’

Another expert, Martin Armstrong (who has a controversial record), reviews the Ferguson model code and comes to very similar conclusions. He says that it ‘is such a joke it is either an outright fraud, or it is the most inept piece of programming I have ever seen in my life . . . This is the most unprofessional operation perhaps in computer science. The entire team should be disbanded and an independent team put in place to review the work of Neil Ferguson . . . The only reasonable conclusion I can reach is that this has been deliberately used to justify bogus forecasts intent for political activism . . . There seems to have been no independent review of Ferguson’s work, which is unimaginable!’

Which leads us neatly to the second facet of the affair – the actual ‘affair’ and the politically radical lover. Professor Ferguson, 51, is said to be estranged from his wife Kim, with whom he has a 17-year-old son. He is reported to have used the match-finding website OkCupid a year ago to meet Antonia Staats, 38, currently married and living with her husband and two children. Ms Staats is a Left-wing campaigner who works for the US-based online network Avaaz, an organisation that promotes global activism on, among other things, climate change. The Guardian has called Avaaz the globe’s largest and most powerful online activist network, and it has a worldwide following of around 10 million people. It is loosely connected with Bill Gates, through the World Economic Forum, which also lists Al Gore and Christine Lagarde on its board. Staats works as a senior campaigner on climate change for the group, and is said to be sympathetic towards the aims of Extinction Rebellion. Indirectly, on the surface at least, this ties Ferguson to climate change, a cause that the lockdown has served very well by managing to shut down the world economy.

Read full story here…

Facial Recognition AI Predicts Criminals Based On Face?

The racist pseudoscience of phrenology was debunked in the early 1900s but Technocrat software developers have given it new life with AI-based facial recognition algorithms, saying they can spot a likely criminal with 80% accuracy and without racial bias. ⁃ TN Editor

A team from Harrisburg University in Pennsylvania has developed automated computer facial recognition software that they claim can predict with 80 percent accuracy and “no racial bias” whether a person is likely going to be a criminal, purely by looking at a picture of them. “By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications,” they said, declaring that they were looking for “strategic partners” to work with to implement their product.

In a worrying use of words, the team, in their own press release, moves from referring to those the software recognizes as being “likely criminals” to “criminals” in the space of just one sentence, suggesting they are confident in the discredited racist pseudoscience of phrenology they appear to have updated for the 21st century.

Public reaction to the project was less than enthusiastic, judging by comments left on Facebook, which included “Societies have been trying to push the idea of ‘born criminals’ for centuries,” “and this isn’t profiling because……?” and “20 percent getting tailed by police constantly because they have the ‘crime face.’” Indeed, the response was so negative that the university pulled the press release from the internet. However, it is still visible using the Internet Wayback Machine.

While the research team claims to be removing bias and racism from decision making, leaving it up to a faceless algorithm, those who write the code, and those who get to decide who constitutes a criminal in the first place, certainly do have their own biases. Why are the homeless or people of color who “loiter” on sidewalks criminalized, but senators and congresspersons who vote for wars and regime change operations not? And who is more likely to be arrested? Wall Street executives doing cocaine in their offices or working-class people smoking marijuana or crack? The higher the level of a person in society, the more serious and harmful their crimes become, but the likelihood of an arrest and a custodial sentence decreases. Black people are more likely to be arrested for the same crime as white people and are sentenced to longer stays in prison, too. Furthermore, facial recognition software is notorious for being unable to tell people of color apart, raising further concerns.

Read full story here…

Harvard: Using AI For Personalized Predictive Quarantine

If your predictive AI doesn’t work with crime prevention, why not try it out on predictive quarantines instead? Harvard says all it needs is more data, where government “can certainly ramp up national health data gathering by creating or rolling out more comprehensive electronic medical records.” ⁃ TN Editor

Over the past few months the world has experienced a series of Covid-19 outbreaks that have generally followed the same pathway: an initial phase with few infections and limited response, followed by a take-off of the famous epidemic curve accompanied by a country-wide lockdown to flatten the curve.  Then, once the curve peaks, governments have to address what President Trump has called “the biggest decision” of his life: when and how to manage de-confinement.

Throughout the pandemic, great emphasis has been placed on the sharing (or lack of it) of critical information across countries — in particular from China — about the spread of the disease.  By contrast, relatively little has been said about how Covid-19 could have been better managed by leveraging the advanced data technologies that have transformed businesses over the past 20 years. In this article we discuss one way that governments could  leverage those technologies in managing a future pandemic — and perhaps even the closing phases of the current one.

The Power of Personalized Prediction

An alternative approach for policy makers to consider adding to their mix for battling Covid-19 is based on the technology of personalized prediction, which has transformed many industries over the last 20 years. Using machine learning and artificial intelligence (AI) technology, data-driven firms (from “Big Tech” to financial services, travel, insurance, retail, and media) make personalized recommendations for what to buy, and practice personalized pricing, risk, credit, and the like using the data that they have amassed about their customers.

In a recent HBR article, for example, Ming Zeng, Alibaba’s former chief strategy officer, described how Ant Financial, his company’s small business lending operation, can assess loan applicants in real time by analyzing their transaction and communications data on Alibaba’s e-commerce platforms. Meanwhile, companies like Netflix evaluate consumers’ past choices and characteristics to make predictions about what they’ll watch next.

The same approach could work for pandemics — and even the future of Covid-19. Using multiple sources of data, machine-learning models would be trained to measure an individual’s clinical risk of suffering severe outcomes (if infected with Covid): what is the probability they will need intensive care, for which there are limited resources? How likely is it that they will die? The data could include individuals’ basic medical histories (for Covid-19, the severity of the symptoms seems to increase with age and with the presence of co-morbidities such as diabetes or hypertension) as well as other data, such as household composition. For example, a young, healthy individual (who might otherwise be classified as “low risk”) could be classified as “high risk” if he or she lives with old or infirm people who would likely need intensive care should they get infected.

These clinical risk predictions could then be used to customize policies and resource allocation at the individual/household level, appropriately accounting for standard medical liabilities and risks. It could, for instance, enable us to target social distancing and protection for those with high clinical risk scores, while allowing those with low scores to live more or less normally. The criteria for assigning individuals to high or low risk groups would, of course, need to be determined, also considering available resources, medical liability risks, and other risk trade-offs, but the data science approaches for this are standard and used in numerous applications.
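As a minimal sketch of the household-aware classification just described (the `Person` fields, scoring weights, and 0.6 threshold are all invented for illustration; this is not a validated clinical model):

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    comorbidities: int        # count of conditions such as diabetes or hypertension
    lives_with_high_risk: bool

def clinical_risk(p: Person) -> str:
    # Toy score: severity rises with age and with co-morbidities, per the article.
    score = p.age / 100 + 0.2 * p.comorbidities
    return "high" if score > 0.6 else "low"

def policy_risk(p: Person) -> str:
    # The article's refinement: a clinically low-risk person sharing a
    # household with high-risk members is treated as high risk for policy.
    if clinical_risk(p) == "low" and p.lives_with_high_risk:
        return "high"
    return clinical_risk(p)

young_carer = Person(age=25, comorbidities=0, lives_with_high_risk=True)
print(policy_risk(young_carer))  # prints "high" despite low clinical risk
```

In a real system the hand-written score would be replaced by a model trained on medical histories and outcomes, but the two-layer structure (clinical risk, then household-adjusted policy risk) is the point of the example.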

A personalized approach has multiple benefits. It may help build herd immunity with lower mortality — and fast. It would also allow better — and fairer — resource allocation, for example of scarce medical equipment (such as test kits, protective masks, and hospital beds) or other resources.

De-confinement strategies at later stages of a pandemic — a next key step for Covid-19 in most countries — can benefit in a similar way. Deciding which people to start the de-confinement process with is, by nature, a classification problem similar to the classification problems familiar to most data-driven firms. Some governments are already approaching de-confinement by using age as a proxy for risk, a relatively crude classification that potentially misses other high-risk individuals (such as the above example of healthy young people living with the elderly).

Performing classification based on data and AI prediction models could lead to de-confinement decisions that are safe at the community level and far less costly for the individual and the economy. We know that a key feature of Covid-19 is that it has an exceptionally high transmission rate but a relatively low rate of severe symptoms and mortality. Data indicate that possibly more than 90% of infected people are either asymptomatic or experience mild symptoms when infected.

In theory, with a reliable prediction of who these 90% are, we could de-confine all these individuals. Even if they were to infect each other, they would not have severe symptoms and would not overwhelm the medical system or die. These 90% low clinical risk de-confined people would also help the rapid build-up of high herd immunity, at which point the remaining 10% could also be de-confined.

Read full story here…

FOIA Docs: Feds Excited To Create Mass Surveillance Network

Technocrats operating within the U.S. government are stampeding to impose total surveillance networks in America, similar to those seen in China but with one twist: The race to dominance requires us to leapfrog over China’s AI and do it even better.

Eric Schmidt is the Chairman of the National Security Commission on Artificial Intelligence. Schmidt is former Chairman of Google and Alphabet, and is a member of the elitist Trilateral Commission. ⁃ TN Editor

A FOIA request by the Electronic Privacy Information Center revealed how excited the National Security Commission on Artificial Intelligence (NSCAI) is about using CCTV cameras to create a national surveillance network.

An NSCAI presentation titled “Chinese Tech Landscape Overview” discusses China’s facial recognition CCTV camera network in glowing terms.

“When we talk about data resources, really the largest data source is the government.”

The presentation discusses how the Chinese government profits from encouraging companies to use facial recognition on visitors and employees.

“Now that these companies are operating at scale they are building a host of other services (e.g. facial recognition for office buildings, augmented reality)”

In America things are not all that different.

In the United States, the Feds encourage private companies like Clearview AI, Amazon Ring and Flock Safety to use facial recognition and automatic license plate readers to identify everyone.

Under the section “State Datasets: Surveillance = Smart Cities” the presentation extols China’s smart city surveillance, saying, “it turns out that having streets carpeted with cameras is good infrastructure for smart cities as well.”

Americans do not need more government surveillance and we certainly do not need our smart cities carpeted with government surveillance devices.

The NSCAI says, “mass surveillance is a killer application for deep learning.”

As our government applies AI deep learning to things like CCTV cameras, cellphone locations, and license plate readers, a person’s entire life can be predicted.

AIs will use deep learning to accurately guess where you work, eat, shop, sleep, worship and vacation. Basically, mass surveillance is a killer application for knowing all there is to know about everyone.

Last week MLive revealed that a startup AI company co-founded by the University of Michigan is helping governments use CCTV cameras to monitor people for social distancing, according to a professor of electrical and computer engineering at the University of Michigan (UM):

“Two weeks ago, Corso said he and his team began tracking physical distancing at locations like Times Square in New York, Miami Beach, Abbey Road in London and the Ruthven Museums Building at UM.”

Police in New York City use CCTV cameras to fine people up to $1,000 for not social distancing, while police in Florida set up checkpoints on highways and police in the United Kingdom use CCTV cameras to enforce stay-at-home orders.

Voxel51 uses their “physical distancing index” to track social distancing in major cities around the globe.

“Voxel51 is tracking the impact of the coronavirus global pandemic on social behavior, using a metric we developed called the Voxel51 Physical Distancing Index (PDI). The PDI helps people understand how the coronavirus is changing human activity in real-time around the world. Using our cutting-edge computer vision models and live video streams from some of the most visited streets in the world, the PDI captures the average amount of human activity and social distancing behaviors in major cities over time.”

What worries me is how law enforcement might use Voxel51 to fine or arrest people for not observing government-mandated social distancing.

Despite what Voxel51 claims about anonymizing identifiable data, they are still collecting their data from public/government cameras.

A 2019 article from Michigan News, the University of Michigan’s news service, revealed that Voxel51 uses artificial intelligence to identify and follow people and objects.

“Voxel51 has set out to overcome those obstacles with their video analytics platform and open source software libraries that, together, enable state-of-the-art video recognition. It identifies and follows objects and actions in each clip. As co-founder Brian Moore says, ‘We transform video into value.’”

I find it hard to believe that cities and governments would pay money to simply look at anonymized data, especially when Voxel51’s business model is built around identifying people and objects on a mass scale.

A perfect example of how the Feds view mass surveillance can best be summed up by a line in the NSCAI presentation: “American companies have a lot to gain by adopting ideas from Chinese companies.”

Every day, it seems, Americans are being told we need more national surveillance programs to keep everyone safe.

Our governments’ obsession with monitoring everyone is only going to grow as the coronavirus grips the country. It is our job to keep these surveillance programs from being implemented, or we risk becoming an authoritarian state like China.

Read full story here…

Mind-Reading AI Uses Brain Implant For Thoughts-To-Words

While medically encouraging for people who are paralyzed, the ability to translate thoughts into words has grave consequences for the age of Technocracy where nothing is hidden from the Scientific Dictatorship. ⁃ TN Editor

An artificial intelligence can accurately translate thoughts into sentences, at least for a limited vocabulary of 250 words. The system may bring us a step closer to restoring speech to people who have lost the ability because of paralysis.

Joseph Makin at the University of California, San Francisco, and his colleagues used deep learning algorithms to study the brain signals of four women as they spoke. The women, who all have epilepsy, already had electrodes attached to their brains to monitor seizures.

Each woman was asked to read aloud from a set of sentences as the team measured brain activity. The largest group of sentences contained 250 unique words.

The team fed this brain activity to a neural network algorithm, training it to identify regularly occurring patterns that could be linked to repeated aspects of speech, such as vowels or consonants. These patterns were then fed to a second neural network, which tried to turn them into words to form a sentence.

Each woman repeated the sentences at least twice, and the final repetition didn’t form part of the training data, allowing the researchers to test the system.

Each time a person speaks the same sentence, the associated brain activity will be similar but not identical. “Memorising the brain activity of these sentences wouldn’t help, so the network instead has to learn what’s similar about them so that it can generalise to this final example,” says Makin. Across the four women, the AI’s best performance was an average translation error rate of 3 per cent.
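A translation error rate of this kind is typically computed as a word error rate: the word-level edit distance between the decoded sentence and the reference, divided by the reference length. A standard implementation follows; the example sentences are made up for illustration, not taken from the study’s stimulus set.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    # Word-level Levenshtein distance divided by reference length.
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                       # all deletions
    for j in range(len(h) + 1):
        d[0][j] = j                       # all insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

# One substituted word out of four -> 25% error rate.
print(word_error_rate("tina turner is here", "tina turner was here"))  # prints 0.25
```

On this metric, a 3 per cent rate means roughly one wrong word in every 33, which is why the result was considered so striking for a 250-word vocabulary.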

Makin says that using a small number of sentences made it easier for the AI to learn which words tend to follow others.

For example, the AI was able to decode that the word “Turner” was always likely to follow the word “Tina” in this set of sentences, from brain activity alone.

Read full story here…

Snowden: AI Plus Coronavirus Is ‘Turnkey To Tyranny’

Technocrat-minded surveillance companies are ‘in the zone’ with governments more willing than ever to buy their AI and surveillance technologies. Once embedded into society, they will be used against citizens long after the coronavirus has subsided. ⁃ TN Editor

Governments around the world are using high-tech surveillance measures to combat the coronavirus outbreak. But are they worth it?

Edward Snowden doesn’t think so.

The former CIA contractor, whose leaks exposed the scale of spying programs in the US, warns that once this tech is taken out of the box, it will be hard to put it back.

“When we see emergency measures passed, particularly today, they tend to be sticky,” Snowden said in an interview with the Copenhagen International Documentary Film Festival.

“The emergency tends to be expanded. Then the authorities become comfortable with some new power. They start to like it.”

Supporters of the draconian measures argue that normal rules are not enough during a pandemic and that the long-term risks can be addressed once the outbreak is contained. But a brief suspension of civil liberties can quickly be extended.

Security services will soon find new uses for the tech. And when the crisis passes, governments can impose new laws that make the emergency rules permanent and exploit them to crack down on dissent and political opposition.

Take the proposals to monitor the outbreak by tracking mobile phone location data.

This could prove a powerful method of tracing the spread of the virus and the movements of people who have it. But it will also be a tempting tool to track terrorists — or any other potential enemies of the state.

AI becoming ‘turnkey to tyranny’

Artificial intelligence has become a particularly popular way of monitoring life under the pandemic. In China, thermal scanners installed at train stations identify patients with fevers, while in Russia, facial recognition systems spot people breaking quarantine rules.

The coronavirus has even given Clearview AI a chance to repair its reputation. The controversial social media-scraping startup is in talks with governments about using its tech to track infected patients, according to the Wall Street Journal.

A big attraction of AI is its efficiency in assigning probabilities to different groups of people. But too much efficiency can be a threat to freedom, which is why we limit police powers through measures such as warrants and probable cause for arrest.

The alternative is algorithmic policing that justifies excessive force and perpetuates racial profiling.

Snowden is especially concerned about security services adding AI to all the other surveillance tech they have.

“They already know what you’re looking at on the internet,” he said. “They already know where your phone is moving. Now they know what your heart rate is, what your pulse is. What happens when they start to mix these and apply artificial intelligence to it?”

Read full story here…

Robert Epstein

Does Big Tech Really Have The Power To Unseat Donald Trump?

Dr. Robert Epstein, a Democrat, has been writing that Big Tech will make it impossible for Trump to be re-elected in 2020. He misses the point that Big Tech are Technocrats intending to completely dominate society, everywhere. ⁃ TN Editor

When it comes to election manipulation, left-leaning American technology companies make the Russians look like rank amateurs.

No matter which weak candidate the Democrats ultimately nominate, and even with Russia’s help, President Donald Trump can’t win the 2020 election. For that matter, in races nationwide in which the projected winning margins are small—say, under 5 percent or so—Republicans, in general, are likely to lose.

That’s because of new forces of influence that the internet has made possible in recent decades and that Big Tech companies—Google more aggressively than any other—have been determined to perfect since Armageddon Day—oh, sorry, Election Day—in 2016.

For the record, I’m neither a conservative nor a Trump supporter. But I love democracy and America more than I love any particular party or candidate, and rigorous research that I have been conducting since 2013 shows that Big Tech companies now have unprecedented power to sway elections.

While I cheer the fact that 95 percent of donations from tech companies and their employees go to Democrats, I can’t stand by and watch these companies undermine democracy. As long as I’m still breathing, I will do everything I can to stop that from happening—and, for the record, I’m NOT suicidal.

The threat these companies pose is far from trivial. For one thing, they can shift opinions and votes in numerous ways that people can’t detect.

Remember the rumors about that movie theater in New Jersey that got people to buy more Coke and popcorn using subliminal messages embedded into a film? Well, those rumors were a bit exaggerated—those messages actually had a minimal effect—but Google-and-the-Gang are now controlling a wide variety of subliminal methods of persuasion that can, in minutes, shift the voting preferences of 20 percent or more of undecided voters without anyone having the slightest idea they’ve been manipulated.

Worse still, they can use these techniques without leaving a paper trail for authorities to trace. In a leak of Google emails to the Wall Street Journal in 2018, one Googler asks his colleagues how the company can use “ephemeral experiences” to change people’s views about Trump’s travel ban.

Ephemeral experiences are those fleeting ones we have every day when we view online content that’s generated on-the-fly and isn’t stored anywhere: newsfeeds, search suggestions, search results, and so on. No authority can go back in time to see what search suggestions or search results you were shown, but dozens of randomized, controlled, double-blind experiments I’ve conducted show that such content can dramatically shift opinions and voting preferences. See the problem?

Speaking of content, I’m getting sick of seeing headlines about Russian interference in our elections. Unless the Russians suddenly figure out how to massively hack our voting machines—and shame on us if we’re incompetent enough to let that happen—there’s no evidence that bad actors such as Russia or the now-defunct Cambridge Analytica can shift more than a few thousand votes here and there. Generally speaking, all they can do is throw some biased content onto the internet. But content isn’t the problem anymore.

All that matters now is who has the power to decide what content people will see or will not see (censorship), and what order that content is presented in. That power is almost entirely in the hands of the arrogant executives at two U.S. companies. Their algorithms decide which content gets suppressed, the order in which content is shown, and which content goes viral. You can counter a TV ad with another TV ad, but if the tech execs are supporting one candidate or party, you can’t counteract their manipulations.

Forget the Russians. As I said when I testified before Congress last summer, if our own tech companies all favor the same presidential candidate this year—and that seems likely—I calculate that they can easily shift 15 million votes to that candidate without people knowing and without leaving a paper trail.

By the way, the more you know about someone, the easier it is to manipulate him or her. Google and Facebook have millions of pieces of information about every American voter, and they will be targeting their manipulations at the individual level for every single voter in every swing state. No one in the world except Google and Facebook can do that.

In President Eisenhower’s famous 1961 farewell address, he warned not only about the rise of a military-industrial complex; he also warned about the rise of a “technological elite” who could someday control our country without us knowing.

That day has come, my friends, and it’s too late for any law or regulation to make a difference—at least in the upcoming election. There’s only one way at this point to get these companies to take their digits off the scale, and that’s to do to them what they do to us and our children every day: monitor them aggressively.

Read full story here…

Google, The ‘Creepy Line’ And The 2020 Elections

Google has the power to sway elections, but is it already using it in the 2020 election cycle? The nature of the ‘creepy line’ is comparable to the ‘Twilight Zone’, where reality and illusion are blurred to the point that it is impossible to know for sure. ⁃ TN Editor

The Creepy Line is a particularly sinister term used in an unguarded remark by former Google CEO Eric Schmidt in 2010. In hindsight, what is most disturbing about the comment is how casually he explained Google’s policy regarding invading the privacy of its customers and clients.

“Google policy on a lot of these things,” Schmidt says about 45 seconds into the introduction, “is to get right up to the creepy line and not cross it.”

The Creepy Line is an 80-minute documentary available through several options at the link below. For now, it can be streamed for free on Amazon Prime, though I’m not sure how long it will remain there given current concerns about censorship of anti-establishment themes on various social media platforms. This film offers a very frank look at the leading sources of news in our country: Facebook and Google.

Early in the film, you will discover how Google acquired an enormous and permanent cache of data about its users. Initially, the data was used to refine the search algorithms that index the websites and information uploaded to the world wide web. Now, however, it is used to fine-tune ads and content to suit your interests, with the information stored to better provide content suggestions for you. But this film will also give you a really disquieting idea (at least it should) of what else they may be doing with that data.

Initially, Google was simply the most popular search engine, basically the largest available “indexing” algorithm on the net. Then Google came up with Google Chrome, a browser, to track and log not only what you look for but also where you go and every keystroke you make while there. In fact, Google realized it could serve you best if it knew what you were doing even when offline, which is why the Android system can track you everywhere you take your phone. With all the free apps available and used globally, Google has a very accurate picture of what everyone’s daily life looks like anywhere in the world.

At intervals during the presentation, Professor Jordan Peterson offers insight from his own experience with social media and agenda setting. For those unfamiliar with Peterson, he was propelled into fame when he very publicly refused to use the new gender pronouns mandated by Canadian political-correctness policy. Peterson’s outspoken refusal to yield to the thought police led to his being interviewed as a spokesman for the millennial mindset, especially millennials’ willingness to accept new technology without questioning it.

“These are all free services but obviously they’re not,” notes Peterson during his commentary, as he discusses the impact on his life of his sudden notoriety and the negative publicity Google and YouTube caused him. He discusses his own battle with depression as well as insights into his daughter’s experiences with social media, which perhaps gives him special psychiatric insight into teenage (millennial) angst. Some may find his frank openness about these issues off-putting, but he comes across to me as a man who has walked through hell and doesn’t want to talk about it, yet has decided he will do so if you are interested. I find Peterson’s point of view extremely relevant, especially in light of the news regarding Peak Prosperity’s de-platforming today and the implications for our own sources of information going forward.

He is not the main speaker in the film, but Peterson does an excellent job explaining how the surveillance business model works. This leads to a discussion of how Google Maps, Google Docs, and Gmail (even drafts of emails you don’t send!) combine to form and shape your thoughts and behavior, as if a roomful of people with dials were monitoring and controlling your every interaction with the world. (15:28)

Less than ten minutes into the movie, you might have already decided to switch to non-Google search engines, but there is no hope of retrieving any information they already have on you. It belongs to them, a legal point discussed several times during the presentation.

Read full story here…

Super-AI Might Emerge Like Coronavirus To Destroy Civilization

The very same Technocrats who brought us AI in the first place are now pondering whether or not a ‘super-AI’ could suddenly emerge that would destroy civilization. For a more reliable answer, perhaps they should ask Alexa. ⁃ TN Editor

Professor Oren Etzioni, CEO of the Allen Institute for AI, has outlined a series of warning signs that would alert us to “super-intelligence” being around the corner.

Humans must be ready for signs of robotic super-intelligence but should have enough time to address them, a top computer scientist has warned.

Oren Etzioni, CEO of Allen Institute for AI, penned a recent paper titled: “How to know if artificial intelligence is about to destroy civilisation.”

He wrote: “Could we wake up one morning dumbstruck that a super-powerful AI has emerged, with disastrous consequences?

“Books like Superintelligence by Nick Bostrom and Life 3.0 by Max Tegmark, as well as more recent articles, argue that malevolent super-intelligence is an existential risk for humanity.

“But one can speculate endlessly. It’s better to ask a more concrete, empirical question: What would alert us that super-intelligence is indeed around the corner?”

He likened these warning signs to canaries in coal mines, which were used to detect carbon monoxide because they would collapse before the miners were harmed.

Prof Etzioni argued these warning signs come when AI programmes develop a new capability.

He continued for MIT Review: “Could the famous Turing test serve as a canary? The test, invented by Alan Turing in 1950, posits that human-level AI will be achieved when a person can’t distinguish conversing with a human from conversing with a computer.

“It’s an important test, but it’s not a canary; it is, rather, the sign that human-level AI has already arrived.

“Many computer scientists believe that if that moment does arrive, superintelligence will quickly follow. We need more intermediate milestones.”

He warned that the “automatic formulation of learning problems” would be the first canary, followed by self-driving cars.

Read full story here…