The Focus Of AI Is Shifting From ‘Data’ To ‘Knowledge’

Artificial Intelligence naturally produces Artificial Knowledge. Do you see anything wrong with this proposition? One definition of artificial is “made by people to look very like something natural”, which means it is fake while being designed to convince you it is real. ⁃ TN Editor

The Artificial Intelligence (AI) revolution was ignited more than half a century ago. In the last decade, AI has grown from an academic scientific field to become a practical part of our everyday lives. The most common AI business strategies we see are built around data. We believe that proprietary data is currently the most strategic moat for AI companies, but in the coming years, it will become less of a unique asset, making proprietary data differentiation less sustainable. Therefore, we expect a shift in focus from data-based AI strategies to knowledge-based AI strategies.

The big data advancement, facilitated by the deployment of numerous sensors, internet connectivity, and improvements in computational power, communication and digital storage, has enabled AI to scale from small academic research projects to large enterprise production applications. Essentially, big data required sophisticated AI models to analyze it and derive knowledge and insights, while the AI models needed the critical mass of big data for training and optimization. Hence, at present, data is often perceived as a sufficient strategic moat for AI startups. As venture capital investors, we see this phenomenon routinely. In recent years, we have seen many startups that place data acquisition at the heart of their business strategy. An increasing number of such companies emphasize the unique data sets they have acquired and their long-term strategy for acquiring additional proprietary data – as a sustainable barrier to entry. Moreover, as AI tools and AI-as-a-service platforms have commoditized the development of AI models, and public data has become ubiquitous, the perceived need to build and defend a data moat has become palpable.

In today’s technology ecosystem, the markets have increasingly rewarded companies with leading AI programs and control over proprietary data – as a substantial and sustainable competitive advantage. Companies such as Google and Netflix have developed and curated massive and authoritative datasets over a long period of time, while many other companies struggled in vain to match their success. An example is the massive disruption of rival media service providers and production companies, which were outmaneuvered by Netflix’s sophisticated data strategy.

Nevertheless, due to expected advancements in the ability and willingness to exchange data, we believe that within a decade, proprietary data moats will be less sustainable. While data will still fuel the AI value engine, AI business strategies will be increasingly focused on knowledge.

Moving Up the AI Value Pyramid, Towards the Knowledge Layer

The AI value pyramid is based on data and driven by knowledge. While today “we are drowning in information but starved for knowledge”, we expect a move up the AI value pyramid, towards the knowledge layer. Indeed, we have begun to see advances that will foster and accelerate this trend through the creation of data exchanges. We expect that data exchange will be facilitated by a combination of increased feasibility and a willingness to share commoditized data in return for valuable knowledge. In summary, data will become more plentiful, available, reliable, standardized and inexpensive – the perfect definition of an ideal commodity. Using data as a sustainable barrier to entry will be more difficult in the future.

The increased feasibility of sharing data will be accelerated by the proliferation of data sources through the Internet of Things (IoT), along with new techniques, protocols and standards for pooling, sharing and exchanging data. Looking ahead, the increased ability to share data will become truly significant when there is an incentive and a growing inclination to do so. As AI undermines and disrupts legacy competitive barriers to entry, many organizations relentlessly attempt to collect their own proprietary data and monetize it. Alas, this data acquisition and utilization is neither easy nor fruitful, and it therefore creates strategic dissonance: while AI is increasingly indispensable for most organizations, it is not part of their legacy skills or core expertise. In addition, the chronic shortage of engineers, developers, product leads and managers trained in AI sharpens this dissonance and leads organizations to prefer sharing data in exchange for knowledge.

An example of this combination of ability and willingness to exchange data for knowledge generation is the European Union’s new proposal to create “a single market for data,” intended to empower people, businesses and organizations to make better decisions based on insights from non-personal data and to compete with the current tech giants.

Another factor contributing to data moats becoming less sustainable is the invention of novel data solutions that enable training models on smaller data sets. Synthetic data solutions (for example, those built on Generative Adversarial Networks) and other data-minimization techniques, like data augmentation, might allow companies to create disruptive AI products without huge amounts of data.
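
To make the data-augmentation idea concrete, here is a minimal Python sketch (using the torchvision library) that turns one labelled image into many training variants; the particular transforms and the file name are illustrative assumptions, not a recipe from the article:

```python
from PIL import Image
from torchvision import transforms

# One labelled photo in, many slightly different training images out.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                # mirror half the time
    transforms.RandomRotation(degrees=10),                 # small random tilt
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # lighting variation
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random crop and zoom
])

image = Image.open("sample.jpg")                # hypothetical input image
variants = [augment(image) for _ in range(10)]  # ten variants, same label
```

Each variant keeps the original label, so a model sees ten training examples where only one was collected.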

Read full story here…




China’s Dystopian AI Development Incorporates Population Control

Anyone who thinks that AI learns by itself and is ethically neutral has bought the Technocrat lie. AI is just a computer algorithm programmed by humans to do what humans want it to do. Biases cannot be excluded. China’s AI seeks to reduce global population and is being exported to other countries.

As a Technocracy, China is bent on perfecting the “science of social engineering” so that all of society can be monitored and controlled to suit Technocrat goals. One big goal is to reduce global population in order to consume fewer resources. ⁃ TN Editor

Ever since artificial intelligence was first conceived, people have worried that it would turn against humanity and threaten our lives. While that may be a danger several years in the future, the more pressing danger right now is AI being used to oppress millions of people and entrench a controlling regime.

Homebound in the pandemic quarantine, my daughter and I have been rewatching Person of Interest on Netflix.

In essence, the former network series is about a man who created a nearly omniscient artificial intelligence that watches everyone through networks of cameras, computers and smartphones. Each week, our team of heroes, assembled by the AI creator, tries to help a person whom the AI has identified as a likely soon-to-be murder victim. Because this AI was invented by the show’s protagonist, it demonstrates empathy and values human life.

However, in the third season, a second AI comes into operation on behalf of the government, and this AI does not value human life. It works under orders from and for the benefit of a shadowy corporation, which begins to organize people’s lives for its own purposes, including killing people who won’t fit into its program. It is frightening.

This show is good television, and I recommend it to anyone who regularly reads this blog and is thus interested in how technology can affect our lives. Person of Interest may also introduce you to what is now happening in the world’s most populous country.

I have written much recently on video surveillance and facial recognition in the U.S., but American police are limited by Constitutional requirements, and rules can easily be applied to new technologies to limit the government’s ability to use them indiscriminately. The U.S. also boasts protections for the rights of individuals to assemble and protest, a government that can regularly be changed, and a court system that generally protects those rights. An independent press shines light on government behavior deemed abusive of these rights.

So think about a country without any of those checks and balances, with no individual rights guaranteed under law, no free press, no independently operative court system, and a single-party dictatorship holding all power. Now think about what would happen if you give that country unlimited electronic and physical surveillance, from hundreds of millions of cameras to drones to the capture of all phone, text and internet traffic, including searches and social media. Then give this society ever-increasing sophistication in artificial intelligence to manage the information flow and assign meaning to all the acts it captures, and even aggregate the full view of a person’s behavior into a score that determines all important aspects of that person’s life. Receive a good score from the government, and you are awarded that apartment you desire or permission to have a baby. A bad score means roadblocks in your life. This is what China is rapidly becoming.

China is not only instituting a surveillance society, including a social scoring system for every resident, but it is investing heavily in the artificial intelligence needed to manage it all and make evaluations of what cameras, biometric readers and internet filters capture. According to U.S. military estimates, China will be spending $70 billion in government funds on AI development in 2020 compared to $17 billion in 2017. U.S. non-defense spending on AI this year will be about one billion dollars.

Not only is China building government laboratories to develop the next several generations of AI, but the government’s close coordination with companies like Huawei and Alibaba provides the surveillance state with top private commercial research as well. All of it is powered by the supermassive amounts of data produced in the world’s largest surveillance state, because huge data sets are the building blocks of effective AI. China, as The Economist recently observed, is the Saudi Arabia of data.

The other recommendation I will make in this column is to read Ross Anderson’s article in The Atlantic called The Panopticon is Already Here, which explains how surveillance, AI, social scoring, a one-party state, and political oppression are combining in China both to create the first all-knowing social system and to export it to other countries. Anderson reports on how the entire system is being tested right now in the “open air prison” of Xinjiang province, where Muslim Uighurs are monitored every minute of their lives.

Anderson writes of Chinese President, Party leader and effective dictator, Xi Jinping, “With AI, Xi can build history’s most oppressive authoritarian apparatus, without the manpower Mao needed to keep information about dissent flowing to a single, centralized node. In China’s most prominent AI start-ups—SenseTime, CloudWalk, Megvii, Hikvision, iFlytek, Meiya Pico—Xi has found willing commercial partners. And in Xinjiang’s Muslim minority, he has found his test population.”

More than a million Uighurs have been imprisoned, more political prisoners than at any time since the Nazi concentration camps. John Oliver discussed life in these prisons and re-education camps on his show this week. But the Uighurs still living in Xinjiang Province are subject to checkpoints, constant video and other surveillance, and the introduction of Han Chinese “big brothers and sisters” to monitor their forced assimilation into communist culture. According to Anderson, “At these checks, police extract all the data they can from Uighurs’ bodies. They measure height and take a blood sample. They record voices and swab DNA.”

And this surveillance and political compliance testing ground can be easily exported to the rest of the country. Anderson notes, “Once Xi perfects this system in Xinjiang, no technological limitations will prevent him from extending AI surveillance across China. He could also export it beyond the country’s borders, entrenching the power of a whole generation of autocrats.”

The investment in AI drives the entire process. The Atlantic article states, “Much of the footage collected by China’s cameras is parsed by algorithms for security threats of one kind or another. In the near future, every person who enters a public space could be identified, instantly, by AI matching them to an ocean of personal data, including their every text communication, and their body’s one-of-a-kind protein-construction schema. In time, algorithms will be able to string together data points from a broad range of sources—travel records, friends and associates, reading habits, purchases—to predict political resistance before it happens. China’s government could soon achieve an unprecedented political stranglehold on more than 1 billion people.”

So new surveillance tools like robot bird surveillance drones, good enough to fool other birds into flying with them, are already being introduced in China to feed more data about people’s behavior into state-run AI. As stated by CNET, “China also employs facial recognition, artificial intelligence, smart glasses and other technologies to monitor its 1.4 billion citizens with the aim of one day giving each of them a personal score based on how they behave.”

Imagine a credit score that, instead of simply measuring financial behavior and ability, measures all aspects of your life and interactions with society. And then imagine that your score can determine what kind of apartment you are allowed to have – or whether the government will allow you to live in an apartment at all. The same is true for your job, education opportunities, reproduction and other core aspects of your life. This is the Chinese social score. As pointed out in Wired UK, while some of the current system is voluntary, there are incentives for participating and penalties for not participating.

Finally, and maybe most frightening, China is using its economic clout and private industry to export population control technology to dictators around the world. Anderson observes, “China is already developing powerful new surveillance tools, and exporting them to dozens of the world’s actual and would-be autocracies. Over the next few years, those technologies will be refined and integrated into all-encompassing surveillance systems that dictators can plug and play.”

Read full story here…




Facial Recognition Algorithm Caused Wrongful Arrest

This story reveals why Amazon, IBM and Microsoft have pulled out of the facial recognition business in order to deflect certain criticism over racial bias. In this instance, the algorithm nailed the wrong black man for a crime he did not commit. ⁃ TN Editor

On a Thursday afternoon in January, Robert Julian-Borchak Williams was in his office at an automotive supply company when he got a call from the Detroit Police Department telling him to come to the station to be arrested. He thought at first that it was a prank.

An hour later, when he pulled into his driveway in a quiet subdivision in Farmington Hills, Mich., a police car pulled up behind, blocking him in. Two officers got out and handcuffed Mr. Williams on his front lawn, in front of his wife and two young daughters, who were distraught. The police wouldn’t say why he was being arrested, only showing him a piece of paper with his photo and the words “felony warrant” and “larceny.”

His wife, Melissa, asked where he was being taken. “Google it,” she recalls an officer replying.

The police drove Mr. Williams to a detention center. He had his mug shot, fingerprints and DNA taken, and was held overnight. Around noon on Friday, two detectives took him to an interrogation room and placed three pieces of paper on the table, face down.

“When’s the last time you went to a Shinola store?” one of the detectives asked, in Mr. Williams’s recollection. Shinola is an upscale boutique that sells watches, bicycles and leather goods in the trendy Midtown neighborhood of Detroit. Mr. Williams said he and his wife had checked it out when the store first opened in 2014.

The detective turned over the first piece of paper. It was a still image from a surveillance video, showing a heavyset man, dressed in black and wearing a red St. Louis Cardinals cap, standing in front of a watch display. Five timepieces, worth $3,800, were shoplifted.

“Is this you?” asked the detective.

The second piece of paper was a close-up. The photo was blurry, but it was clearly not Mr. Williams. He picked up the image and held it next to his face.

“No, this is not me,” Mr. Williams said. “You think all Black men look alike?”

Mr. Williams knew that he had not committed the crime in question. What he could not have known, as he sat in the interrogation room, is that his case may be the first known account of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm, according to experts on technology and the law.

A faulty system

A nationwide debate is raging about racism in law enforcement. Across the country, millions are protesting not just the actions of individual officers, but bias in the systems used to surveil communities and identify people for prosecution.

Facial recognition systems have been used by police forces for more than two decades. Recent studies by M.I.T. and the National Institute of Standards and Technology, or NIST, have found that while the technology works relatively well on white men, the results are less accurate for other demographics, in part because of a lack of diversity in the images used to develop the underlying databases.

Last year, during a public hearing about the use of facial recognition in Detroit, an assistant police chief was among those who raised concerns. “On the question of false positives — that is absolutely factual, and it’s well-documented,” James White said. “So that concerns me as an African-American male.”

This month, Amazon, Microsoft and IBM announced they would stop or pause their facial recognition offerings for law enforcement. The gestures were largely symbolic, given that the companies are not big players in the industry. The technology police departments use is supplied by companies that aren’t household names, such as Vigilant Solutions, Cognitec, NEC, Rank One Computing and Clearview AI.

Read full story here…




Neil Ferguson’s Computer Model Is Ripped To Shreds

Professor Neil Ferguson of Imperial College in London started the Great Panic of 2020 with a thoroughly flawed computer model that was unfit for scientific use. Was he instead virtue-signalling to his radical left-wing married lover? ⁃ TN Editor

A LOT of attention has been given to Professor Neil Ferguson’s dubious track record on epidemics and his equally dubious judgment in meeting his lover on at least two occasions during the lockdown introduced on his own advice.

On closer examination, there are two especially significant elements crying out for independent investigation: the quality and reliability of Ferguson’s computer model, and the political affiliations of his lover.

Firstly, the computer model. The source code behind the Ferguson model has finally been made available to the public via the GitHub website. Mark E Jeftovic, on his Axis of Easy website, says: ‘A code review has been undertaken by an anonymous ex-Google software engineer here, who tells us the GitHub repository code has been heavily massaged by Microsoft engineers, and others, in an effort to whip the code into shape to safely expose it to the public. Alas, they seem to have failed and numerous flaws and bugs from the original software persist in the released version. Requests for the unedited version of the original code behind the model have gone unanswered.’

Jeftovic believes the most worrisome outcome of the model review is that the code produces ‘non-deterministic outputs’. This means that owing to bugs, the code can produce very different results given identical inputs, making the code unsuitable for scientific purposes. Jeftovic says the documentation provided wants the reader to accept that given a ‘starting seed’, the model will always produce the same results. ‘Investigation reveals the truth: the code produces critically different results, even for identical starting seeds and parameters.’

He says that a team even found that the output varied depending on which type of computer it was run on.
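
For readers wondering how a program can give different answers from identical seeds, here is a minimal Python sketch of that class of bug. It is an illustration of the failure mode the reviewers describe (unsynchronised threads racing on shared state), not the Imperial College code itself:

```python
import random
import threading

random.seed(42)                  # identical "starting seed" on every run
values = [random.random() for _ in range(100_000)]

total = 0.0

def accumulate(chunk):
    global total
    for v in chunk:
        total += v               # unsynchronised read-modify-write: a data race

threads = [threading.Thread(target=accumulate, args=(values[i::4],))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)                     # can differ between runs despite the fixed seed
```

Because thread scheduling changes from run to run, updates can be lost and the floating-point additions happen in a different order, so the printed total can vary even though the seed and the inputs never change.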

Jeftovic cannot understand why the Imperial team failed to realise that their software was so flawed. He quotes the usual computing adage ‘Garbage In/Garbage Out’ which the untrained reader may think is what is being asserted in the code review. In fact, he says, ‘it’s not. What’s being asserted is that output is garbage, regardless of the input. In this case, the output we’re experiencing as a result is a worldwide lockdown and shutdown of the global economy, and we don’t really know if this was necessary or not, because we have no actual data (aside from Sweden) and severely flawed models.’

Another expert, Martin Armstrong (who has a controversial record) also reviews the Ferguson model code and comes to very similar conclusions. He says that it ‘is such a joke it is either an outright fraud, or it is the most inept piece of programming I have ever seen in my life . . . This is the most unprofessional operation perhaps in computer science. The entire team should be disbanded and an independent team put in place to review the work of Neil Ferguson . . . The only reasonable conclusion I can reach is that this has been deliberately used to justify bogus forecasts intent for political activism . . . There seems to have been no independent review of Ferguson’s work, which is unimaginable!’

Which leads us neatly to the second facet of the affair – the actual ‘affair’ and the politically radical lover. Professor Ferguson, 51, is said to be estranged from his wife Kim, with whom he has a 17-year-old son. He is reported to have used the match-finding website OkCupid a year ago to meet Antonia Staats, 38, currently married and living with her husband and two children. Ms Staats is a Left-wing campaigner who works for the US-based online network Avaaz, an organisation that promotes global activism on, among other things, climate change. The Guardian has called Avaaz the globe’s largest and most powerful online activist network, and it has a worldwide following of around 10 million people. It is loosely connected with Bill Gates, through the World Economic Forum, which also lists Al Gore and Christine Lagarde on its board. Staats works as a senior campaigner on climate change for the group, and is said to be sympathetic towards the aims of Extinction Rebellion. Indirectly, on the surface at least, this ties Ferguson to climate change, a cause that the lockdown has served very well by managing to shut down the world economy.

Read full story here…




Facial Recognition AI Predicts Criminals Based On Face?

The racist pseudoscience of phrenology was debunked in the early 1900s, but Technocrat software developers have given it new life with AI-based facial recognition algorithms, claiming they can spot a likely criminal with 80% accuracy and without racial bias. ⁃ TN Editor

A team from the University of Harrisburg, PA, has developed automated computer facial recognition software that they claim can predict with 80 percent accuracy and “no racial bias” whether a person is likely going to be a criminal, purely by looking at a picture of them. “By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications,” they said, declaring that they were looking for “strategic partners” to work with to implement their product.

In a worrying use of words, the team, in their own press release, moves from referring to those the software recognizes as being “likely criminals” to “criminals” in the space of just one sentence, suggesting they are confident in the discredited racist pseudoscience of phrenology that they appear to have updated for the 21st century.

Public reaction to the project was less than enthusiastic, judging by comments left on Facebook, which included “Societies have been trying to push the idea of ‘born criminals’ for centuries,” “and this isn’t profiling because……?” and “20 percent getting tailed by police constantly because they have the ‘crime face.’” Indeed, the response was so negative that the university pulled the press release from the internet. However, it is still visible using the Internet Wayback Machine.

While the research team claims to be removing bias and racism from decision making, leaving it up to a faceless algorithm, those who write the code, and those who get to decide who constitutes a criminal in the first place, certainly do have their own biases. Why are the homeless or people of color who “loiter” on sidewalks criminalized, but senators and congresspersons who vote for wars and regime change operations not? And who is more likely to be arrested? Wall Street executives doing cocaine in their offices or working-class people smoking marijuana or crack? The higher the level of a person in society, the more serious and harmful their crimes become, but the likelihood of an arrest and a custodial sentence decreases. Black people are more likely to be arrested for the same crime as white people and are sentenced to longer stays in prison, too. Furthermore, facial recognition software is notorious for being unable to tell people of color apart, raising further concerns.

Read full story here…




Harvard: Using AI For Personalized Predictive Quarantine

If your predictive AI doesn’t work with crime prevention, why not try it out on predictive quarantines instead? Harvard says all it needs is more data, where government “can certainly ramp up national health data gathering by creating or rolling out more comprehensive electronic medical records.” ⁃ TN Editor

Over the past few months the world has experienced a series of Covid-19 outbreaks that have generally followed the same pathway: an initial phase with few infections and limited response, followed by a take-off of the famous epidemic curve accompanied by a country-wide lockdown to flatten the curve.  Then, once the curve peaks, governments have to address what President Trump has called “the biggest decision” of his life: when and how to manage de-confinement.

Throughout the pandemic, great emphasis has been placed on the sharing (or lack of it) of critical information across countries — in particular from China — about the spread of the disease.  By contrast, relatively little has been said about how Covid-19 could have been better managed by leveraging the advanced data technologies that have transformed businesses over the past 20 years. In this article we discuss one way that governments could  leverage those technologies in managing a future pandemic — and perhaps even the closing phases of the current one.

The Power of Personalized Prediction

An alternative approach for policy makers to consider adding to their mix for battling Covid-19 is based on the technology of personalized prediction, which has transformed many industries over the last 20 years. Using machine learning and artificial intelligence (AI) technology, data-driven firms (from “Big Tech” to financial services, travel, insurance, retail, and media) make personalized recommendations for what to buy, and practice personalized pricing, risk, credit, and the like, using the data that they have amassed about their customers.

In a recent HBR article, for example, Ming Zeng, Alibaba’s former chief strategy officer, described how Ant Financial, his company’s small business lending operation, can assess loan applicants in real time by analyzing their transaction and communications data on Alibaba’s e-commerce platforms. Meanwhile, companies like Netflix evaluate consumers’ past choices and characteristics to make predictions about what they’ll watch next.

The same approach could work for pandemics — and even the future of Covid-19. Using multiple sources of data, machine-learning models would be trained to measure an individual’s clinical risk of suffering severe outcomes (if infected with Covid): what is the probability they will need intensive care, for which there are limited resources? How likely is it that they will die? The data could include individuals’ basic medical histories (for Covid-19, the severity of the symptoms seems to increase with age and with the presence of co-morbidities such as diabetes or hypertension) as well as other data, such as household composition. For example, a young, healthy individual (who might otherwise be classified as “low risk”) could be classified as “high risk” if he or she lives with old or infirm people who would likely need intensive care should they get infected.

These clinical risk predictions could then be used to customize policies and resource allocation at the individual/household level, appropriately accounting for standard medical liabilities and risks. It could, for instance, enable us to target social distancing and protection for those with high clinical risk scores, while allowing those with low scores to live more or less normally. The criteria for assigning individuals to high or low risk groups would, of course, need to be determined, also considering available resources, medical liability risks, and other risk trade-offs, but the data science approaches for this are standard and used in numerous applications.
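
As a rough illustration of what such a model might look like, here is a minimal Python sketch using scikit-learn. The features, weights and training data are synthetic and invented purely for demonstration; a real system would need vetted clinical records and proper validation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical features per person: age, diabetes, hypertension, and
# whether they live with a high-risk household member (0/1 flags).
X = np.column_stack([
    rng.integers(18, 95, n),   # age
    rng.integers(0, 2, n),     # diabetes
    rng.integers(0, 2, n),     # hypertension
    rng.integers(0, 2, n),     # high-risk household member
]).astype(float)

# Synthetic "severe outcome" labels that rise with age, co-morbidities and
# household exposure -- made up solely so the sketch runs end to end.
logit = 0.06 * (X[:, 0] - 60) + X[:, 1] + X[:, 2] + 1.5 * X[:, 3] - 2.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)

# The article's example: a young, healthy person living with someone old or
# infirm scores higher than the same person living alone.
print(model.predict_proba([[25, 0, 0, 1]])[0, 1])
print(model.predict_proba([[25, 0, 0, 0]])[0, 1])
```

The model's output is a per-person probability, which policy makers could then threshold into "high" and "low" risk groups according to available resources.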

A personalized approach has multiple benefits. It may help build herd immunity with lower mortality — and fast. It would also allow better — and fairer — resource allocation, for example of scarce medical equipment (such as test kits, protective masks, and hospital beds) or other resources.

De-confinement strategies at later stages of a pandemic — a next key step for Covid-19 in most countries — can benefit in a similar way. Deciding which people to start the de-confinement process with is, by nature, a classification problem similar to the classification problems familiar to most data-driven firms. Some governments are already approaching de-confinement by using age as a proxy for risk, a relatively crude classification that potentially misses other high-risk individuals (such as the above example of healthy young people living with the elderly).

Performing classification based on data and AI prediction models could lead to de-confinement decisions that are safe at the community level and far less costly for the individual and the economy. We know that a key feature of Covid-19 is an exceptionally high transmission rate combined with a relatively low rate of severe symptoms and mortality. Data indicate that possibly more than 90% of infected people are either asymptomatic or experience mild symptoms when infected.

In theory, with a reliable prediction of who these 90% are we could de-confine all these individuals. Even if they were to infect each other, they would not have severe symptoms and would not overwhelm the medical system or die. These 90% low clinical risk de-confined people would also help the rapid build up of high herd immunity, at which point the remaining 10% could also be de-confined.

Read full story here…




FOIA Docs: Feds Excited To Create Mass Surveillance Network

Technocrats operating within the U.S. government are stampeding to impose total surveillance networks in America, similar to those seen in China but with one twist: The race to dominance requires us to leapfrog over China’s AI and do it even better.

Eric Schmidt is the Chairman of the National Security Commission on Artificial Intelligence. Schmidt is former Chairman of Google and Alphabet, and is a member of the elitist Trilateral Commission. ⁃ TN Editor

A FOIA request by the Electronic Privacy Information Center revealed how excited the National Security Commission on Artificial Intelligence (NSCAI) is about using CCTV cameras to create a national surveillance network.

An NSCAI presentation titled “Chinese Tech Landscape Overview” discusses China’s facial recognition CCTV camera network in glowing terms.

“When we talk about data resources, really the largest data source is the government.”

The presentation discusses how the Chinese government profits from encouraging companies to use facial recognition on visitors and employees.

“Now that these companies are operating at scale they are building a host of other services (e.g. facial recognition for office buildings, augmented reality)”

In America things are not all that different.

In the United States, the Feds encourage private companies like Clearview AI, Amazon Ring and Flock Safety to use facial recognition and automatic license plate readers to identify everyone.

Under the section “State Datasets: Surveillance = Smart Cities”, the presentation extols China’s smart city surveillance, saying, “it turns out that having streets carpeted with cameras is good infrastructure for smart cities as well.”

Americans do not need more government surveillance and we certainly do not need our smart cities carpeted with government surveillance devices.

The NSCAI says, “mass surveillance is a killer application for deep learning.”

As our government applies AI deep learning to things like CCTV cameras, cellphone locations, and license plate readers, a person’s entire life can be predicted.

AIs will use deep learning to accurately guess where you work, eat, shop, sleep, worship and vacation. Basically, mass surveillance is a killer application for knowing all there is to know about everyone.

Last week MLive revealed that a startup AI company co-founded by the University of Michigan is helping governments use CCTV cameras to monitor people for social distancing, as indicated by a professor of electrical and computer engineering at the University of Michigan (UM):

“Two weeks ago, Corso said he and his team began tracking physical distancing at locations like Times Square in New York, Miami Beach, Abbey Road in London and the Ruthven Museums Building at UM.”

Police in New York City use CCTV cameras to fine people up to $1,000 for not social distancing, while police in Florida set up checkpoints on highways and police in the United Kingdom use CCTV cameras to enforce stay-at-home orders.

Voxel51 uses their “physical distancing index” to track social distancing in major cities around the globe.

“Voxel51 is tracking the impact of the coronavirus global pandemic on social behavior, using a metric we developed called the Voxel51 Physical Distancing Index (PDI). The PDI helps people understand how the coronavirus is changing human activity in real-time around the world. Using our cutting-edge computer vision models and live video streams from some of the most visited streets in the world, the PDI captures the average amount of human activity and social distancing behaviors in major cities over time.”
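
The excerpt does not disclose how the PDI is actually computed, but the general shape of such a metric is easy to sketch. Assuming a pedestrian detector has already produced (x, y) pixel positions for each frame, a toy index might score crowd size and close contact like this (the threshold and weights are pure assumptions):

```python
from itertools import combinations
from math import dist

def frame_index(centroids, too_close_px=50):
    """Score one video frame. `centroids` holds the (x, y) pixel positions
    of detected pedestrians; the detection step itself is assumed."""
    close_pairs = sum(1 for a, b in combinations(centroids, 2)
                      if dist(a, b) < too_close_px)
    # More people, and more pairs standing close together, raise the score.
    return len(centroids) + 2.0 * close_pairs

# Example frame: four detections, two of them standing near each other.
print(frame_index([(10, 12), (40, 20), (300, 310), (600, 50)]))  # 6.0
```

Averaging such frame scores over a day would produce the kind of city-level activity curve Voxel51 describes.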

What worries me is how law enforcement might use Voxel51 to fine or arrest people for not observing government-mandated social distancing.

Despite what Voxel51 claims about anonymizing identifiable data, they are still collecting their data from public/government cameras.

A 2019 article from the University of Michigan’s Michigan News revealed that Voxel51 uses artificial intelligence to identify and follow people and objects.

“Voxel51 has set out to overcome those obstacles with their video analytics platform and open source software libraries that, together, enable state-of-the-art video recognition. It identifies and follows objects and actions in each clip. As co-founder Brian Moore says, ‘We transform video into value.’”

I find it hard to believe that cities and governments would pay money to simply look at anonymized data, especially when Voxel51’s business model is built around identifying people and objects on a mass scale.

How the Feds view mass surveillance is perhaps best summed up by a line in the NSCAI presentation: “American companies have a lot to gain by adopting ideas from Chinese companies.”

Every day, it seems, Americans are being told we need more national surveillance programs to keep everyone safe.

Our government’s obsession with monitoring everyone is only going to grow as the coronavirus grips the country. It is our job to keep these surveillance programs from being implemented, or we risk becoming an authoritarian state like China.

Read full story here…




Mind-Reading AI Uses Brain Implant For Thoughts-To-Words

While medically encouraging for people who are paralyzed, the ability to translate thoughts into words has grave consequences for the age of Technocracy where nothing is hidden from the Scientific Dictatorship. ⁃ TN Editor

An artificial intelligence can accurately translate thoughts into sentences, at least for a limited vocabulary of 250 words. The system may bring us a step closer to restoring speech to people who have lost the ability because of paralysis.

Joseph Makin at the University of California, San Francisco, and his colleagues used deep learning algorithms to study the brain signals of four women as they spoke. The women, who all have epilepsy, already had electrodes attached to their brains to monitor seizures.

Each woman was asked to read aloud from a set of sentences as the team measured brain activity. The largest group of sentences contained 250 unique words.

The team fed this brain activity to a neural network algorithm, training it to identify regularly occurring patterns that could be linked to repeated aspects of speech, such as vowels or consonants. These patterns were then fed to a second neural network, which tried to turn them into words to form a sentence.
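
The excerpt describes the architecture only in outline. As a loose sketch of that two-stage idea (not the researchers’ actual model), here is a minimal PyTorch example in which one network compresses electrode recordings into feature vectors and a second turns the feature sequence into word scores over the 250-word vocabulary; every layer type and size here is an assumption:

```python
import torch
import torch.nn as nn

class SignalEncoder(nn.Module):
    """Stage 1: map raw electrode windows to speech-related features."""
    def __init__(self, n_channels, feat_dim=128):
        super().__init__()
        # A temporal convolution looks for recurring patterns (such as vowel
        # or consonant signatures) across the electrode channels.
        self.conv = nn.Conv1d(n_channels, feat_dim, kernel_size=9, stride=4)

    def forward(self, x):                      # x: (batch, channels, time)
        return self.conv(x).transpose(1, 2)    # -> (batch, steps, feat_dim)

class WordDecoder(nn.Module):
    """Stage 2: turn the feature sequence into per-step word scores."""
    def __init__(self, feat_dim=128, vocab_size=250):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, 256, batch_first=True)
        self.out = nn.Linear(256, vocab_size)

    def forward(self, feats):
        hidden, _ = self.rnn(feats)
        return self.out(hidden)                # -> (batch, steps, vocab) logits

encoder = SignalEncoder(n_channels=64)         # 64 electrodes: an assumption
decoder = WordDecoder()
dummy = torch.randn(1, 64, 400)                # one 400-sample recording window
print(decoder(encoder(dummy)).shape)           # torch.Size([1, 98, 250])
```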

Each woman repeated the sentences at least twice, and the final repetition didn’t form part of the training data, allowing the researchers to test the system.

Each time a person speaks the same sentence, the associated brain activity will be similar but not identical. “Memorising the brain activity of these sentences wouldn’t help, so the network instead has to learn what’s similar about them so that it can generalise to this final example,” says Makin. Across the four women, the AI’s best performance was an average translation error rate of 3 per cent.
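
The excerpt does not define “translation error rate”, but the standard metric in speech decoding is word error rate: the word-level edit distance between the decoded sentence and the true one, divided by the true sentence’s length. A minimal sketch:

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # substitute/match
                d[i - 1][j] + 1,                               # delete a word
                d[i][j - 1] + 1,                               # insert a word
            )
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("tina turner is here", "tina turner was here"))  # 0.25
```

On that scale, a 3 per cent rate means roughly one wrong word in every 33.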

Makin says that using a small number of sentences made it easier for the AI to learn which words tend to follow others.

For example, the AI was able to decode that the word “Turner” was always likely to follow the word “Tina” in this set of sentences, from brain activity alone.

Read full story here…




Snowden: AI Plus Coronavirus Is ‘Turnkey To Tyranny’

Technocrat-minded surveillance companies are ‘in the zone’ with governments more willing than ever to buy their AI and surveillance technologies. Once embedded into society, they will be used against citizens long after the coronavirus has subsided. ⁃ TN Editor

Governments around the world are using high-tech surveillance measures to combat the coronavirus outbreak. But are they worth it?

Edward Snowden doesn’t think so.

The former CIA contractor, whose leaks exposed the scale of spying programs in the US, warns that once this tech is taken out of the box, it will be hard to put it back.

“When we see emergency measures passed, particularly today, they tend to be sticky,” Snowden said in an interview with the Copenhagen International Documentary Film Festival.

“The emergency tends to be expanded. Then the authorities become comfortable with some new power. They start to like it.”

Supporters of the draconian measures argue that normal rules are not enough during a pandemic and that the long-term risks can be addressed once the outbreak is contained. But a brief suspension of civil liberties can quickly be extended.

Security services will soon find new uses for the tech. And when the crisis passes, governments can impose new laws that make the emergency rules permanent and exploit them to crack down on dissent and political opposition.

Take the proposals to monitor the outbreak by tracking mobile phone location data.

This could prove a powerful method of tracing the spread of the virus and the movements of people who have it. But it will also be a tempting tool to track terrorists — or any other potential enemies of the state.

AI becoming ‘turnkey to tyranny’

Artificial intelligence has become a particularly popular way of monitoring life under the pandemic. In China, thermal scanners installed at train stations identify patients with fevers, while in Russia, facial recognition systems spot people breaking quarantine rules.

The coronavirus has even given Clearview AI a chance to repair its reputation. The controversial social media-scraping startup is in talks with governments about using its tech to track infected patients, according to the Wall Street Journal.

A big attraction of AI is the efficiency it gains by assigning probabilities to different groups of people. But too much efficiency can be a threat to freedom, which is why we limit police powers through measures such as warrants and probable cause for arrest.

The alternative is algorithmic policing that justifies excessive force and perpetuates racial profiling.

Snowden is especially concerned about security services adding AI to all the other surveillance tech they have.

“They already know what you’re looking at on the internet,” he said. “They already know where your phone is moving. Now they know what your heart rate is, what your pulse is. What happens when they start to mix these and apply artificial intelligence to it?”

Read full story here…





Does Big Tech Really Have The Power To Unseat Donald Trump?

Dr. Robert Epstein, a Democrat, has been writing that Big Tech will make it impossible for Trump to be re-elected in 2020. He misses the point that Big Tech are Technocrats intending to completely dominate society, everywhere. ⁃ TN Editor

When it comes to election manipulation, left-leaning American technology companies make the Russians look like rank amateurs.

No matter which weak candidate the Democrats ultimately nominate, and even with Russia’s help, President Donald Trump can’t win the 2020 election. For that matter, in races nationwide in which the projected winning margins are small—say, under 5 percent or so—Republicans, in general, are likely to lose.

That’s because of new forces of influence that the internet has made possible in recent decades and that Big Tech companies—Google more aggressively than any other—have been determined to perfect since Armageddon Day—oh, sorry, Election Day—in 2016.

For the record, I’m neither a conservative nor a Trump supporter. But I love democracy and America more than I love any particular party or candidate, and rigorous research that I have been conducting since 2013 shows that Big Tech companies now have unprecedented power to sway elections.

While I cheer the fact that 95 percent of donations from tech companies and their employees go to Democrats, I can’t stand by and watch these companies undermine democracy. As long as I’m still breathing, I will do everything I can to stop that from happening—and, for the record, I’m NOT suicidal.

The threat these companies pose is far from trivial. For one thing, they can shift opinions and votes in numerous ways that people can’t detect.

Remember the rumors about that movie theater in New Jersey that got people to buy more Coke and popcorn using subliminal messages embedded into a film? Well, those rumors were a bit exaggerated—those messages actually had a minimal effect—but Google-and-the-Gang are now controlling a wide variety of subliminal methods of persuasion that can, in minutes, shift the voting preferences of 20 percent or more of undecided voters without anyone having the slightest idea they’ve been manipulated.

Worse still, they can use these techniques without leaving a paper trail for authorities to trace. In a leak of Google emails to the Wall Street Journal in 2018, one Googler asks his colleagues how the company can use “ephemeral experiences” to change people’s views about Trump’s travel ban.

Ephemeral experiences are those fleeting ones we have every day when we view online content that’s generated on-the-fly and isn’t stored anywhere: newsfeeds, search suggestions, search results, and so on. No authority can go back in time to see what search suggestions or search results you were shown, but dozens of randomized, controlled, double-blind experiments I’ve conducted show that such content can dramatically shift opinions and voting preferences. See the problem?

Speaking of content, I’m getting sick of seeing headlines about Russian interference in our elections. Unless the Russians suddenly figure out how to massively hack our voting machines—and shame on us if we’re incompetent enough to let that happen—there’s no evidence that bad actors such as Russia or the now-defunct Cambridge Analytica can shift more than a few thousand votes here and there. Generally speaking, all they can do is throw some biased content onto the internet. But content isn’t the problem anymore.

All that matters now is who has the power to decide what content people will see or will not see (censorship), and what order that content is presented in. That power is almost entirely in the hands of the arrogant executives at two U.S. companies. Their algorithms decide which content gets suppressed, the order in which content is shown, and which content goes viral. You can counter a TV ad with another TV ad, but if the tech execs are supporting one candidate or party, you can’t counteract their manipulations.

Forget the Russians. As I said when I testified before Congress last summer, if our own tech companies all favor the same presidential candidate this year—and that seems likely—I calculate that they can easily shift 15 million votes to that candidate without people knowing and without leaving a paper trail.

By the way, the more you know about someone, the easier it is to manipulate him or her. Google and Facebook have millions of pieces of information about every American voter, and they will be targeting their manipulations at the individual level for every single voter in every swing state. No one in the world except Google and Facebook can do that.

In President Eisenhower’s famous 1961 farewell address, he warned not only about the rise of a military-industrial complex; he also warned about the rise of a “technological elite” who could someday control our country without us knowing.

That day has come, my friends, and it’s too late for any law or regulation to make a difference—at least in the upcoming election. There’s only one way at this point to get these companies to take their digits off the scale, and that’s to do to them what they do to us and our children every day: monitor them aggressively.

Read full story here…