
Ripped: Pre-Crime AI Bigger Scam Than Psychic Detectives

Technocrats believed their own twisted logic, and it came around to bite them. For all the hype, promises, and millions spent, predictive policing is finally being recognized as a total scam. ⁃ TN Editor

Law enforcement agencies around the world have recently begun extricating themselves from expensive, ineffective predictive policing systems. The machine learning equivalent of psychic detectives, it turns out, simply doesn’t work.

AI can’t predict crime

In Palo Alto, California, a three-year-long program using predictive policing is finally being shuttered. Police spokesperson Janine De la Vega told the LA Times: “We didn’t get any value out of it. It didn’t help us solve crime.” In nearby Mountain View, a spokesperson for the police department likewise said, “we tested the software and eventually subscribed to the service for a few years, but ultimately the results were mixed and we discontinued the service.”

Predictive policing is a black box AI technology purported to take years of historical policing data (maps, arrest records, etc.) and convert it into actionable insights which predict “hot spots” for future criminal activity. The big idea here is that the AI tells law enforcement leaders where and when to deploy officers in order to prevent crimes from happening.

Another way of putting it: an AI determines that locations where crimes have already happened are good places for cops to hang out in order to deter more crimes from happening.
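To make concrete just how little “prediction” is involved, here is a minimal sketch assuming a simple grid-and-recency scoring scheme. Vendors such as PredPol do not publish their exact algorithms, so the cell size, half-life, and incident data below are invented for illustration only:

```python
from collections import defaultdict
from datetime import datetime

# Invented historical incident reports: (latitude, longitude, report date).
incidents = [
    (37.4419, -122.1430, datetime(2019, 5, 1)),
    (37.4422, -122.1435, datetime(2019, 5, 20)),
    (37.4000, -122.1100, datetime(2019, 3, 2)),
]

CELL_SIZE = 0.005       # roughly 500 m grid cells; arbitrary for the sketch
HALF_LIFE_DAYS = 30     # recent incidents count for more than old ones

def cell(lat, lon):
    """Snap a coordinate onto a coarse grid cell."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def hot_spots(incidents, today, top_n=5):
    """Score each cell by recency-weighted incident count and return the top cells."""
    scores = defaultdict(float)
    for lat, lon, when in incidents:
        age_days = (today - when).days
        # Exponential decay: an incident loses half its weight every HALF_LIFE_DAYS.
        scores[cell(lat, lon)] += 0.5 ** (age_days / HALF_LIFE_DAYS)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(hot_spots(incidents, today=datetime(2019, 6, 1)))
```

Nothing in a model like this foresees anything; it restates where incidents have already been reported, which is precisely why departments found the output redundant.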

We could point out that departments should already be deploying officers to high-crime areas as a method of proactive policing in between reacting to calls, but many agencies are getting hip to that on their own.

In Rio Rancho, New Mexico, for example, according to the LA Times, police realized the system was ineffective and redundant. Captain Andrew Rodriguez said:

It never panned out. It didn’t really make much sense to us. It wasn’t telling us anything we didn’t know.

AI-flavored snake oil

Psychic detectives don’t have real psychic abilities. They’re a scam. Whether intentional or not, the perpetrators of these ridiculous claims waste taxpayer dollars, police resources, and valuable time that could be spent on actual investigations. They were all the rage as recently as the early 2000s. Most police departments, thankfully, now recognize that humans absolutely don’t have psychic powers.

But, even as frauds, human psychics are still better than AI-powered predictive policing systems. It would almost certainly be more cost-effective, and equally accurate, if police were to outsource crime prediction to psychics for a reasonable salary rather than continue paying companies like PredPol for their predictive policing products, installations, updates, and subscriptions (yes, subscriptions).

This is because AI can’t predict crime; it just analyzes risk. Furthermore, it’s a confirmation-bias scam. If, for example, the AI says a particular area is at the highest possible risk for crime, and officers deploy there but find no crime, the AI is working because the police presence deterred crime. And if the officers do spot crime? Of course the algorithm is working, because it knew there’d be crime. It can’t lose!

Black box AI can’t explain why it makes a given prediction, and its ability to predict crime cannot be measured in any meaningful way.

Read full story here…




Google Being Sued Over Illegally Obtained Health Records

University of Chicago Medical Center conspired to sell sensitive patient health records to Google, violating HIPAA and privacy laws. The lawsuit calls this “likely the greatest heist of consumer medical records in history.”

Google has been sucking up medical records for several years. In 2016, New Scientist discovered that Google secured access to 1.6 million patient records from the UK’s National Health Service.  ⁃ TN Editor

A former University of Chicago medical patient filed a class-action lawsuit against the University of Chicago and Google, claiming that the University of Chicago Medical Center is giving private patient information to the tech giant without patients’ consent.

About two years ago, the university medical center partnered with Google with the hope of identifying patterns in patient health records to help predict future medical issues.

Now, former patient Matt Dinerstein is filing a lawsuit on behalf of the medical center’s patients, alleging that the university violated privacy laws by sharing sensitive health records with Google from 2009 to 2016, aiding Google’s goal of creating a digital health record system, according to the Chicago Maroon.

The suit alleges that the university deceived its patients by telling them that their medical records would be protected, but ultimately violated the Health Insurance Portability and Accountability Act (HIPAA), a federal law that ensures privacy and security for personal medical data. It also claims that UChicago violated Illinois state laws that make it illegal for companies to engage in deceptive business practices.

The complaint details Google’s alleged two-part plan: obtain the Electronic Health Record (EHR) of almost every patient at the UChicago Medical Center, then use the information to create its own lucrative commercial EHR system.

“While tech giants have dominated the news over the last few years for repeatedly violating consumers’ privacy, Google managed to fly under the radar as it pulled off what is likely the greatest heist of consumer medical records in history,” the complaint stated. “The compromised personal information is not just run-of-the-mill like credit card numbers, usernames and passwords, or even social security numbers, which nowadays seem to be the subject of daily hacks.”

“Rather, the personal medical information obtained by Google is the most sensitive and intimate information in an individual’s life, and its unauthorized disclosure is far more damaging to an individual’s privacy.”

Dinerstein’s lawsuit claims that EHRs contain patient information ranging from height and weight to conditions such as AIDS or diabetes and medical procedures patients have undergone.

The medical records include the demographics of patients, along with their diagnoses, prescribed medicine, and past procedures, the lawsuit alleges. According to the Department of Health and Human Services, HIPAA protects patients’ “individually identifiable health information,” which includes “demographic data, that relates to…the individual’s past, present or future physical or mental health or condition, the provision of health care to the individual, or the past, present, or future payment for the provision of health care to the individual.”

“The disclosure of EHRs here is even more egregious because the University promised in its patient admission forms that it would not disclose patients’ records to third parties, like Google, for commercial purposes,” the lawsuit continued. “Nevertheless, the University did not notify its patients, let alone obtain their express consent, before turning over their confidential medical records to Google for its own commercial gain.”

Google detailed its use of EHRs, including ones obtained from the University of Chicago, in a 2018 research paper. The Big Tech company claimed that there are no privacy concerns because the records did not include the identities of patients.

Although Google claims it lacks the personal identity associated with each set of records, the complaint calls this a “false sense of security” for patients, since Google’s comprehensive data-mining abilities, along with the time and date of each treatment and the medical providers’ notes that the records allegedly contained, would allow it to identify each individual.
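The re-identification argument rests on a well-documented property of “de-identified” data: a few quasi-identifiers, such as admission and discharge dates, are often enough to single a person out. As a hedged illustration only (the records and fields below are invented, not taken from the case), a uniqueness check of this kind looks like:

```python
from collections import Counter

# Invented de-identified records: (admission date, discharge date, department).
records = [
    ("2015-06-01", "2015-06-04", "cardiology"),
    ("2015-06-01", "2015-06-04", "oncology"),
    ("2015-06-02", "2015-06-02", "cardiology"),
    ("2015-06-01", "2015-06-04", "cardiology"),
]

def uniqueness_rate(records):
    """Fraction of records whose quasi-identifier combination appears only once.

    A unique combination means anyone who already knows those few facts about a
    person (say, from location history) could pick out that person's record
    exactly, even with the name removed.
    """
    counts = Counter(records)
    unique = sum(1 for r in records if counts[r] == 1)
    return unique / len(records)

print(f"{uniqueness_rate(records):.0%} of records are uniquely identifiable")
```

The higher that uniqueness rate, the weaker the claim that stripping names protects anyone whose records can be matched against outside data.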

“While this type of public misinformation campaign may be expected from a tech company that has been known to play fast and loose with the information of its customers, the fact that a prominent institution like The University of Chicago would act in such a way is truly stunning,” the complaint said.

According to the lawsuit, Google has been interested in using algorithms to predict looming health issues. To gain the necessary information, Google first developed a personal health information storage platform that it later discontinued because few consumers participated. The company then bought DeepMind, a startup that uses artificial intelligence (AI) to study health care, reported the Chicago Tribune.

Read full story here…





Raging Against The Algorithm: Google And Persuasive Technology

Google dominates because it panders to human weaknesses, not strengths. Is the problem then simply systemic, accidental or predestined, or is it specifically designed and directed by humans who have lost their moral and ethical compass?

As you listen to the video by Tristan Harris, you must ask the question, can technocrats solve the problem that technocrats created in the first place? Put another way, is more technology the answer to overcoming current technology? ⁃ TN Editor

Monsters and titans share the stage of mythology across cultures as the necessary realisations of the human imagination. From stone cave to urban dwelling, the theme is unremitting; kept in the imagination, such creatures perform, innocently enough, benign functions. The catch here is the human tendency to realise such creatures. They take the form of social engineering and utopia. Folly bound, such projects and ventures wind up corrupting and degrading. The monster is born, and the awful truth comes to the fore: the concentration camp, the surveillance state, newspeak, the armies of censorship.

The technology giants of the current era are the modern utopians, indulging human hunger and interests by shaping them. One company gives us the archetype. It is Google, which has the unusual distinction of being both noun and verb, entity and action. Google’s power is disproportionately vast, a creepy sprawl that cherishes transparency while lacking it, and treasures information while regulating its reach. It is also an entity that has gone beyond being a mere repository of searches and data, attempting to induce behavioural change on the part of users.

Google always gives the impression that its users are in the lead, autonomous, independent in a verdant land of digital frolicking. The idea that the company itself fosters such change, teasing out alterations in behaviour, is placed to one side. There are no Svengalis in Googleland, because we are all free. Free, but needing assistance amidst chaos and “multitasking”.

People have what the company calls “micro-moments”, those “on-the-go mobile moments”, as behavioural economist Dan Ariely describes them, where decisions are reached by a user while engaged, simultaneously, in a range of tasks: hotels to book, travel choices to make, work schedules to fulfil. While Ariely is writing more broadly from the perspective of the ubiquitous digital marketer, the language is pure Googlese, smacking of part persuasion and part imposition. “Want to develop a strategy to shape your consumer decisions?” asks Google. “Start by understanding the key micro-moments in their journey.” Understand them; feed their mind; hold their hand.

The addiction to Google produces what can no longer be seen as retarding, but fostering. A generation is growing up without a hard copy research library, a ready-to-hand list of classics, and the means to search through records without resorting to those damnable digital keys. Debates are bound to be had (some already pollute the digital space) about whether this is necessarily a condition to lament. Embrace digital amnesia! To Google is to exist.

What is undeniable is that the means to find information – instantaneous, glut-filled, desperately quick – has created users who inhabit a space that guides their thinking, pre-empting, cajoling and adjusting. One form of literacy, we might kindly say, is being supplanted by another: the Google imbecile is upon us.

Given the nature of such effects, it is little wonder that politicians find Google threatening to their mouldy and rusted-on craft. The politician’s preserve is sound – or unsound – communication; success at the next election is dependent upon the idea that the electors understand, and approve, what has been relayed to them (whether that material is factual or not, a lie or otherwise, is beside the point: the politician yearns to convince in order to win).

The old search engine titan supplies something of a snag in this regard. On the one hand, it offers the political classes the means to reach a global audience, an avenue to screech and promote the next harebrained scheme that comes into the mind of the political apparat. But what if the message is stymied on the way, delayed by the workings of what is called “search engine optimisation”? Is Google to blame, or bog-standard ordinariness on the part of the politician?

US politicians think they have an answer. Only they are permitted to control the narrative and disseminate the lie. Of late they have been trying to sketch out a path they are not used to: regulating industries once hailed as sentinels of freedom, promoters of liberty. Their complaints tend to lack consistency. On the one hand, they find various Google algorithms problematic (preference for alt-right sites and conspiratorial gruel as damaging), but their slant is wonky and skewed. Had these algorithms been driving favourable search terms (conformist, steady, unquestioning, anti-Trump), the matter would be a non-starter. Our message, they would say, is getting out there.

This week, the US Senate Committee on Commerce, Science and Transportation tried to make sense, in rather accusing fashion, of “persuasive technology”. Nanette Byrnes furnishes us with a definition: “the idea that computers, mobile phones, websites, and other technologies could be designed to influence people’s behaviour and even attitudes”. The Pope does remain resolutely Catholic.

The committee hearing featured such opinions as those of Senator John Thune (R-SD), who wished to use the proceedings to draft legislation that would “require internet platforms to give consumers the option to engage with the platform without having the experience shaped by algorithms.” The Senator is happy to accept that artificial intelligence “powers automations to display content to optimize engagement” but sees a devil in the works, as “AI algorithms can have an unintended and possibly even dangerous downside”. This is tantamount to wanting a Formula One Grand Prix without fast cars and an athletics competition in slow motion.

Facing the senators from Google’s side was Maggie Stanphill, director of Google User Experience. Her testimony was couched in words more akin to the glossiness of a travel brochure with a complimentary sprinkling of cocaine. “Google’s Digital Wellbeing Initiative is a top company goal, focusing on providing our users with insights about their digital habits and tools to support an intentional relationship with technology.” Google merely “creates products that improve the lives of the people who use them.” The company has provided access that has “democratized information and provided services for billions of people around the world.” When asked about whether Google was doing its bit in the persuasion business, Stanphill was unequivocal. “We do not use persuasive technology.”

The session’s theme was clear: oodles and masses of content are good, but must be appropriate. In Information Utopia, where digital Adam and Eve still run naked, wickedness will not be allowed. If people want to seek content that is “negative” (this horrendously arbitrary category keeps appearing), they should not be allowed to do so. Gag them, and make sure the popular terms sought are whitewashed of any offensive or dangerous import. Impose upon the tech titans a responsibility to control the negative.

Senator Brian Schatz (D-Hawaii) complained of those companies “letting these algorithms run wild […] leaving humans to clean up the mess. Algorithms are amoral.” Tristan Harris, co-founder and executive director of the Centre for Humane Technology, spoke of the competition between companies to use algorithms which “more accurately predict what will keep users there the longest.” If you want to maximise the time spent searching terms or, in the case of YouTube, watching a video, focus “the entire ant colony of humanity towards crazytown.” For Harris, “technology hacks human weaknesses.” The moral? Do not give people what they want.

The rage against the algorithm, and the belief that no behavioural pushing is taking place in search technology, is misplaced on a few fronts. On a certain level, all accept how such modes of retrieving information work. Disagreement arises as to their consequences, a concession, effectively, to the Google user as imbecile. Stanphill is being disingenuous in claiming that persuasive technology is not a function of Google’s work (it patently is, given the company’s intention of improving the “intentional relationship with technology”). In her testimony, she spoke of building “products with privacy, transparency and control for the users, and we build a lifelong relationship with the user, which is primary.” The Senators, in turn, are concerned that the users, diapered by encouragements in their search interests, are incapable of making up their own fragile minds.

The nature of managed information in the digital experience is not, as Google, YouTube and like companies show, a case of broadening knowledge but reaffirming existing assumptions. The echo chamber bristles with confirmations not challenges, with the comforts of prejudice rather than the discomforts of heavy-artillery learning. But the elected citizens on the Hill, and the cyber utopians, continue to struggle and flounder in the digital jungle they had seen as an information utopia equal to all. For the Big Tech giants, it’s all rather simple: the attention grabbing spectacle, bums on seats, and downloads galore.

Read full story here…





When AI Becomes Your Boss, You Become The Robot

AI sees humanity itself as a thing to be optimized, squeezing out all inefficiency. When unleashed as your boss, AI effectively turns you into a living robot to be controlled like a puppet on a string. Employees will never tolerate this indefinitely. ⁃ TN Editor

Critics have accused companies of using algorithms for managerial tasks, saying that automated systems can dehumanize and unfairly punish employees.

When Conor Sprouls, a customer service representative in the call center of insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, A.I. tells him how he’s doing.

Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.

Sound sleepy? The software displays an “energy cue,” with a picture of a coffee cup.

Not empathetic enough? A heart icon pops up.

For decades, people have fearfully imagined armies of hyper-efficient robots invading offices and factories, gobbling up jobs once done by humans. But in all of the worry about the potential of artificial intelligence to replace rank-and-file workers, we may have overlooked the possibility it will replace the bosses, too.

Sprouls and the other call center workers at his office in Warwick, Rhode Island, still have plenty of human supervisors. But the software on their screens — made by Cogito, an A.I. company in Boston — has become a kind of adjunct manager, always watching them. At the end of every call, Sprouls’ Cogito notifications are tallied and added to a statistics dashboard that his supervisor can view. If he hides the Cogito window by minimizing it, the program notifies his supervisor.
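Cogito has not published how its scoring works, but the behavior described here (real-time cues during the call, plus a tally the supervisor can review) can be pictured as simple thresholds applied to streaming voice features. A minimal sketch, with invented feature names and cutoffs:

```python
from dataclasses import dataclass, field

@dataclass
class CallStats:
    """Running tally of cues shown during an agent's calls."""
    cues: dict = field(default_factory=dict)

    def record(self, cue):
        self.cues[cue] = self.cues.get(cue, 0) + 1

def realtime_cues(words_per_minute, energy, empathy):
    """Map streaming voice features to on-screen cues; thresholds are invented."""
    cues = []
    if words_per_minute > 170:
        cues.append("speedometer")   # talking too fast: slow down
    if energy < 0.3:
        cues.append("coffee_cup")    # sounding sleepy: energy cue
    if empathy < 0.4:
        cues.append("heart")         # low warmth: empathy cue
    return cues

dashboard = CallStats()
for cue in realtime_cues(words_per_minute=185, energy=0.25, empathy=0.6):
    dashboard.record(cue)            # tallied after each call for the supervisor
print(dashboard.cues)                # {'speedometer': 1, 'coffee_cup': 1}
```

The point of the sketch is how thin the layer between "coaching tool" and "always-on monitor" really is: whatever triggers a cue for the agent also lands on the supervisor's dashboard.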

Cogito is one of several A.I. programs used in call centers and other workplaces. The goal, according to Joshua Feast, Cogito’s chief executive, is to make workers more effective by giving them real-time feedback.

“There is variability in human performance,” Feast said. “We can infer from the way people are speaking with each other whether things are going well or not.”

The goal of automation has always been efficiency, but in this new kind of workplace, A.I. sees humanity itself as the thing to be optimized. Amazon uses complex algorithms to track worker productivity in its fulfillment centers, and can automatically generate the paperwork to fire workers who don’t meet their targets, as The Verge uncovered this year. (Amazon has disputed that it fires workers without human input, saying that managers can intervene in the process.) IBM has used Watson, its A.I. platform, during employee reviews to predict future performance and claims it has a 96% accuracy rate.

Then there are the startups. Cogito, which works with large insurance companies like MetLife and Humana as well as financial and retail firms, says it has 20,000 users. Percolata, a Silicon Valley company that counts Uniqlo and 7-Eleven among its clients, uses in-store sensors to calculate a “true productivity” score for each worker, and rank workers from most to least productive.

Management by algorithm is not a new concept. In the early 20th century, Frederick Winslow Taylor revolutionized the manufacturing world with his “scientific management” theory, which tried to wring inefficiency out of factories by timing and measuring each aspect of a job. More recently, Uber, Lyft and other on-demand platforms have made billions of dollars by outsourcing conventional tasks of human resources — scheduling, payroll, performance reviews — to computers.

But using A.I. to manage workers in conventional, 9-to-5 jobs has been more controversial. Critics have accused companies of using algorithms for managerial tasks, saying that automated systems can dehumanize and unfairly punish employees. And while it’s clear why executives would want A.I. that can track everything their workers do, it’s less clear why workers would.

“It is surreal to think that any company could fire their own workers without any human involvement,” Marc Perrone, the president of United Food and Commercial Workers International Union, which represents food and retail workers, said in a statement about Amazon in April.

Read full story here…





Walmart Using AI-Powered Cameras In 1,000 Stores To Track Shoppers

Walmart has hired a social engineering company, Everseen, to control and change the behavior of its shoppers through the use of AI-based surveillance systems. By surveilling everyone, Walmart hopes to catch a few cheaters.

According to Everseen’s website, “process mining” “amplifies awareness of scenes unfolding daily in retail, pinpointing the “moments that matter”, in order to nudge a behavior one seeks to change, and/or transform the underlying process.” ⁃ TN Editor

 

  • Walmart is using computer vision technology to monitor checkouts and deter potential theft and other causes of shrink in more than 1,000 stores, the company confirmed to Business Insider.
  • The surveillance program, internally called Missed Scan Detection, uses cameras to help identify and correct checkout scanning errors and failures.
  • Ireland-based Everseen is one of several companies supplying Walmart with the technology for its Missed Scan Detection program.
  • “We are continuously investing in people, programs and technology to keep our stores and communities safe,” a Walmart spokeswoman said.

Walmart is using computer vision technology to monitor checkouts and deter potential theft in more than 1,000 stores, the company confirmed to Business Insider.

The surveillance program, which Walmart refers to internally as Missed Scan Detection, uses cameras to help identify checkout scanning errors and failures.

The cameras track and analyze activities at both self-checkout registers and those manned by Walmart cashiers. When a potential issue arises, such as an item moving past a checkout scanner without getting scanned, the technology notifies checkout attendants so they can intervene.
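Neither Walmart nor Everseen has disclosed the internals of Missed Scan Detection, but the behavior described (reconcile what the camera sees crossing the scan zone with what the register recorded, then alert an attendant) can be sketched roughly as follows; the item names and alert function are placeholders:

```python
from collections import Counter

def missed_scans(items_seen_by_camera, items_scanned):
    """Items the vision system saw cross the scan zone that the register never recorded."""
    return list((Counter(items_seen_by_camera) - Counter(items_scanned)).elements())

def notify_attendant(register_id, missing):
    # Placeholder for whatever alerting channel the real system uses.
    print(f"Register {register_id}: possible missed scan(s): {missing}")

camera_feed = ["cereal", "milk", "batteries", "milk"]   # invented example data
register_log = ["cereal", "milk"]

missing = missed_scans(camera_feed, register_log)
if missing:
    notify_attendant(register_id=7, missing=missing)    # -> ['milk', 'batteries']
```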

The program is designed to reduce shrinkage, which is the term retailers use to define losses due to theft, scanning errors, fraud, and other causes.

US retailers lost an estimated 1.33% of revenues to shrinkage in 2017, totalling an estimated $47 billion, according to the National Retail Federation. If Walmart’s shrink rates match the industry average, the company’s US business would have lost more than $4 billion last year to theft and other related losses.
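That $4 billion estimate is easy to reproduce. Assuming Walmart’s U.S. segment revenue of roughly $318 billion for the fiscal year in question (the exact base depends on which year is meant), the industry-average shrink rate implies:

```latex
\text{estimated shrink} \approx 0.0133 \times \$318\ \text{billion} \approx \$4.2\ \text{billion per year}
```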

“Walmart is making a true investment to ensure the safety of our customers and associates,” Walmart spokeswoman LeMia Jenkins said. “Over the last three years, the company has invested over half a billion dollars in an effort to prevent, reduce and deter crime in our stores and parking lots. We are continuously investing in people, programs and technology to keep our stores and communities safe.”

Walmart began rolling out Missed Scan Detection technology to stores two years ago, and it appears to be working successfully so far. Shrink rates have declined at stores where it’s deployed, Jenkins said.

Ireland-based Everseen is one of several companies supplying Walmart with the technology for the program.

“Everseen overcomes human limitations. By using state of the art artificial intelligence, computer vision systems, and big data, we can detect abnormal activity and other threats,” an Everseen video advertises. “Our digital eye has perfect vision and it never needs a coffee break or a day off.”

Read full story here…





Georgetown Law: Face Surveillance In The U.S.

Georgetown Law lays down the total surveillance society with skilled and documented precision. Detroit and Chicago may be the first to have China-like surveillance with cameras everywhere, but more cities are close behind. Technocracy is coming, and is dangerously close. ⁃ TN Editor

Authorities in Guiyang have eyes everywhere. Thanks to a vast, sophisticated camera system blanketing this Southwest Chinese city, police are purportedly able to locate and identify anyone who shows their face in public—in a matter of minutes. They can trace where you have been over the past week. If you are a citizen they can “match your face with your car, match you with your relatives and people you’re in touch with … know who you frequently meet.”

This is a reality made possible by real-time face surveillance. Thanks to face recognition technology, authorities are able to conduct biometric surveillance—pick you out from a crowd, identify you, trace your movements across a city with the network of cameras capturing your face—all completely in secret. No longer is video surveillance limited to recording what happens; it may now identify who is where, doing what, at any point in time.

It’s tempting to think that it is a remote, future concern for the United States. But for the millions of people living in Detroit and Chicago, face surveillance may be an imminent reality. Detroit’s million-dollar system affords police the ability to scan live video from cameras located at businesses, health clinics, schools, and apartment buildings. Chicago police insist that they do not use face surveillance, but the city nonetheless has paid to acquire and maintain the capability for years.

For millions of others in New York City, Orlando, and Washington, D.C., face surveillance is also on the horizon. And for the rest of the country, there are no practical restrictions against the deployment of face surveillance by federal, state, or local law enforcement.

There is no current analog for the kind of police surveillance made possible by pervasive, video-based face recognition technology. By enabling the secret and mass identification of anyone enrolled in a police—or other government—database, it risks fundamentally changing the nature of our public spaces.

Free Speech. When used on public gatherings, face surveillance may have a chilling effect on our First Amendment rights to unabridged free speech and peaceful assembly. This is something law enforcement agencies themselves have recognized, cautioning: “The public could consider the use of facial recognition in the field as a form of surveillance …. The mere possibility of surveillance has the potential to make people feel extremely uncomfortable, cause people to alter their behavior, and lead to self-censorship and inhibition.”3

Privacy. Supreme Court Chief Justice John Roberts wrote for the majority in Carpenter v. United States: “A person does not surrender all Fourth Amendment protection by venturing into the public sphere.”4 The Court, examining police use of historic cell-site location information, noted that for the government to “secretly monitor and catalogue every single movement” of someone across time unconstitutionally violated society’s expectations about what law enforcement can and should be able to do.5

Face surveillance technology can facilitate precisely this type of tracking, locating where someone is and has been by the location of the camera that captures that person’s face. If mounted on churches, health clinics, community centers, and schools, face surveillance cameras risk revealing a person’s “familial, political, professional, religious, and sexual associations,” the very “privacies of life” that the Supreme Court in Carpenter suggested receive protection under the U.S. Constitution.6

Bias. The risks of face surveillance are likely to be borne disproportionately by communities of color. African Americans are simultaneously more likely to be enrolled in face recognition databases and the targets of police surveillance use.7 Compounding this, studies continue to show that face recognition performs differently depending on the age, gender, and race of the person being searched.8 This creates the risk that African Americans will disproportionately bear the harms of face recognition misidentification.

Detroit’s “Real-Time Video Feed Facial Recognition”

A sign on the side of Summit Medical Center designates it as a Green Light Partner with the Detroit Police Department (DPD). It informs the public that this women’s health care clinic is monitored by video cameras whose feeds are viewed down at DPD headquarters.9

This sign, and ones just like it at more than 500 locations across Detroit, is meant to deter crime and make residents feel safe, informing the public that the area is being watched.10

What the signs do not say is that many of these video cameras may also be connected to a face surveillance system, enabling them to record not only what is happening at a given location, but who is at that location at any given moment. DPD has purchased the capability to locate and identify anyone who has an arrest record, in real-time, using video cameras mounted across the city.11

From the perspective of quickly solving the crimes that aren’t deterred by the Project Green Light Signs, this may sound like a good thing. Police are able to more quickly identify repeat offenders and make arrests.

But face surveillance doesn’t identify crime; it identifies people. With such a system, all people caught on camera—passersby, patrons, or patients—are scanned, their faces compared against the face recognition database on file. For patients visiting Summit Medical Center to terminate a pregnancy, receive HIV treatment, counseling, or another service, this probably sounds less like a guarantee of safety and more like an invasion into a deeply personal moment in their lives.

At Odds With The Will Of The Public

“The technology is here and being used by police departments already. There’s an article … from China, just this week at a concert, where a concertgoer went and was arrested within minutes based off of facial recognition data at that concert. It is here. We have to balance public safety versus people’s Constitutional rights. That’s our job. But our job is to uphold the Constitution, the Constitution of Illinois and the Constitution of the United States. And I’ll remind people of the First Amendment of the U.S. Constitution, for the right of people to peaceably assemble.” — Rep. Tim Butler (R)

Chicago’s expansive face surveillance capabilities run counter to the strong interest in protecting citizens’ biometric data expressed by the state legislature. Illinois passed the first, and the country’s most protective, biometric privacy law in 2008. The Biometric Information Privacy Act (BIPA) guards state citizens against the collection, use, and dissemination of their biometric information without their express, written consent—but only by commercial entities.

Public agencies, such as law enforcement, are notably exempt from BIPA’s requirements. However, recent debate in the state House over a police drone bill suggests that legislators, and by extension the public, may be similarly uncomfortable with the prospect of biometric surveillance by the police. House members repeatedly voiced alarm at the prospect of the drones being equipped with facial recognition capabilities. Members characterized the prospect as “truly terrifying” and “somewhat of an Orwellian reach into crowd control”—a capability that may run afoul of the First and Fourth Amendments of the Constitution.

Nonetheless, Chicago authorities appear intent on operating at odds with the concerns that many state lawmakers have expressed regarding biometric privacy. The limited information that is available suggests that Chicago is home to the most widespread face surveillance system in the United States today. An amendment proposed last year to Chicago’s Municipal Code additionally attempted to circumvent the protections BIPA afforded to citizens. The amendment would have permitted commercial entities that have signed an agreement with police to be able to use face recognition systems to meet whatever “security needs” they may have.54

This proposal—and CPD’s secretive face surveillance system—creates a stark divide between the privacy protections for Illinois residents outside Chicago, and those within.

Read full story here…

Ed note: more analysis on Orlando, Washington and New York City





DARPA: Funding Wearable Brain-Machine Interfaces

Technocrats at DARPA are intent on creating a non-surgical brain-machine interface as a force-multiplier for soldiers. The research will require “Investigational Device Exemptions” from the Food and Drug Administration. ⁃ TN Editor

DARPA has awarded funding to six organizations to support the Next-Generation Nonsurgical Neurotechnology (N3) program, first announced in March 2018. Battelle Memorial Institute, Carnegie Mellon University, Johns Hopkins University Applied Physics Laboratory, Palo Alto Research Center (PARC), Rice University, and Teledyne Scientific are leading multidisciplinary teams to develop high-resolution, bidirectional brain-machine interfaces for use by able-bodied service members. These wearable interfaces could ultimately enable diverse national security applications such as control of active cyber defense systems and swarms of unmanned aerial vehicles, or teaming with computer systems to multitask during complex missions.

“DARPA is preparing for a future in which a combination of unmanned systems, artificial intelligence, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager. “By creating a more accessible brain-machine interface that doesn’t require surgery to use, DARPA could deliver tools that allow mission commanders to remain meaningfully involved in dynamic operations that unfold at rapid speed.”

Over the past 18 years, DARPA has demonstrated increasingly sophisticated neurotechnologies that rely on surgically implanted electrodes to interface with the central or peripheral nervous systems. The agency has demonstrated achievements such as neural control of prosthetic limbs and restoration of the sense of touch to the users of those limbs, relief of otherwise intractable neuropsychiatric illnesses such as depression, and improvement of memory formation and recall. Due to the inherent risks of surgery, these technologies have so far been limited to use by volunteers with clinical need.

For the military’s primarily able-bodied population to benefit from neurotechnology, nonsurgical interfaces are required. Yet, in fact, similar technology could greatly benefit clinical populations as well. By removing the need for surgery, N3 systems seek to expand the pool of patients who can access treatments such as deep brain stimulation to manage neurological illnesses.

The N3 teams are pursuing a range of approaches that use optics, acoustics, and electromagnetics to record neural activity and/or send signals back to the brain at high speed and resolution. The research is split between two tracks. Teams are pursuing either completely noninvasive interfaces that are entirely external to the body or minutely invasive interface systems that include nanotransducers that can be temporarily and nonsurgically delivered to the brain to improve signal resolution.

  • The Battelle team, under principal investigator Dr. Gaurav Sharma, aims to develop a minutely invasive interface system that pairs an external transceiver with electromagnetic nanotransducers that are nonsurgically delivered to neurons of interest. The nanotransducers would convert electrical signals from the neurons into magnetic signals that can be recorded and processed by the external transceiver, and vice versa, to enable bidirectional communication.
  • The Carnegie Mellon University team, under principal investigator Dr. Pulkit Grover, aims to develop a completely noninvasive device that uses an acousto-optical approach to record from the brain and interfering electrical fields to write to specific neurons. The team will use ultrasound waves to guide light into and out of the brain to detect neural activity. The team’s write approach exploits the non-linear response of neurons to electric fields to enable localized stimulation of specific cell types.
  • The Johns Hopkins University Applied Physics Laboratory team, under principal investigator Dr. David Blodgett, aims to develop a completely noninvasive, coherent optical system for recording from the brain. The system will directly measure optical path-length changes in neural tissue that correlate with neural activity.
  • The PARC team, under principal investigator Dr. Krishnan Thyagarajan, aims to develop a completely noninvasive acousto-magnetic device for writing to the brain. Their approach pairs ultrasound waves with magnetic fields to generate localized electric currents for neuromodulation. The hybrid approach offers the potential for localized neuromodulation deeper in the brain.
  • The Rice University team, under principal investigator Dr. Jacob Robinson, aims to develop a minutely invasive, bidirectional system for recording from and writing to the brain. For the recording function, the interface will use diffuse optical tomography to infer neural activity by measuring light scattering in neural tissue. To enable the write function, the team will use a magneto-genetic approach to make neurons sensitive to magnetic fields.
  • The Teledyne team, under principal investigator Dr. Patrick Connolly, aims to develop a completely noninvasive, integrated device that uses micro optically pumped magnetometers to detect small, localized magnetic fields that correlate with neural activity. The team will use focused ultrasound for writing to neurons.

Throughout the program, the research will benefit from insights provided by independent legal and ethical experts who have agreed to provide insights on N3 progress and consider potential future military and civilian applications and implications of the technology. Additionally, federal regulators are cooperating with DARPA to help the teams better understand human-use clearance as research gets underway. As the work progresses, these regulators will help guide strategies for submitting applications for Investigational Device Exemptions and Investigational New Drugs to enable human trials of N3 systems during the last phase of the four-year program.

“If N3 is successful, we’ll end up with wearable neural interface systems that can communicate with the brain from a range of just a few millimeters, moving neurotechnology beyond the clinic and into practical use for national security,” Emondi said. “Just as service members put on protective and tactical gear in preparation for a mission, in the future they might put on a headset containing a neural interface, use the technology however it’s needed, then put the tool aside when the mission is complete.”

Read full story here…




Experts: The Only Defense Against Killer AI Is Not Developing It

Out-of-control killer AI in warfare is inevitable because it will become too complex for humans to manage and control. The only real answer is to not develop it in the first place. ⁃ TN Editor

A recent analysis on the future of warfare indicates that countries that continue to develop AI for military use risk losing control of the battlefield. Those that don’t risk eradication. Whether you’re for or against the AI arms race: it’s happening. Here’s what that means, according to a trio of experts.

Researchers from ASRC Federal, a private company that provides support for the intelligence and defense communities, and the University of Maryland recently published a paper on the preprint server arXiv discussing the potential ramifications of integrating AI systems into modern warfare.

The paper – read here – focuses on the near-future consequences of the AI arms race, under the assumption that AI will not somehow run amok or take over. In essence, it’s a short, sober, and terrifying look at how all of this machine learning technology will play out, based on analysis of current cutting-edge military AI technologies and their predicted integration at scale.

The paper begins with a warning about impending catastrophe, explaining there will almost certainly be a “normal accident” concerning AI – an expected incident of a nature and scope we cannot predict. Basically, the militaries of the world will break some civilian eggs making the AI arms-race omelet:

Study of this field began with accidents such as Three Mile Island, but AI technologies embody similar risks. Finding and exploiting these weaknesses to induce defective behavior will become a permanent feature of military strategy.

If you’re thinking killer robots duking it out in our cities while civilians run screaming for shelter, you’re not wrong – but robots as a proxy for soldiers isn’t humanity’s biggest concern when it comes to AI warfare. This paper discusses what happens after we reach the point at which it becomes obvious humans are holding machines back in warfare.

According to the researchers, the problem isn’t one we can frame as good and evil. Sure, it’s easy to say we shouldn’t allow robots to murder humans with autonomy, but that’s not how the decision-making process of the future is going to work.

The researchers describe it as a slippery slope:

If AI systems are effective, pressure to increase the level of assistance to the warfighter would be inevitable. Continued success would mean gradually pushing the human out of the loop, first to a supervisory role and then finally to the role of a “killswitch operator” monitoring an always-on LAWS.

LAWS, or lethal autonomous weapons systems, will almost immediately scale beyond humans’ ability to work with computers and machines — and probably sooner than most people think. Hand-to-hand combat between machines, for example, will be entirely autonomous by necessity:

Over time, as AI becomes more capable of reflective and integrative thinking, the human component will have to be eliminated altogether as the speed and dimensionality become incomprehensible, even accounting for cognitive assistance.

And, eventually, the tactics and responsiveness required to trade blows with AI will be beyond the ken of humans altogether:

Given a battlespace so overwhelming that humans cannot manually engage with the system, the human role will be limited to post-hoc forensic analysis, once hostilities have ceased, or treaties have been signed.

If this sounds a bit grim, it’s because it is. As Import AI’s Jack Clark points out, “This is a quick paper that lays out the concerns of AI+War from a community we don’t frequently hear from: people that work as direct suppliers of government technology.”

It might be in everyone’s best interest to pay careful attention to how both academics and the government continue to frame the problem going forward.

Read full story here…




EU On AI Ethics: Must ‘Enhance Positive Social Change’

EU Technocrats define ethics for AI: “Environmental and societal well-being — AI systems should be sustainable and “enhance positive social change.” This is the coveted ‘Science of Social Engineering’ that harkens back to the original Technocracy movement in the 1930s. ⁃ TN Editor
 

The European Union today published a set of guidelines on how companies and governments should develop ethical applications of artificial intelligence.

These rules aren’t like Isaac Asimov’s “Three Laws of Robotics.” They don’t offer a snappy, moral framework that will help us control murderous robots. Instead, they address the murky and diffuse problems that will affect society as we integrate AI into sectors like health care, education, and consumer technology.

So, for example, if an AI system diagnoses you with cancer sometime in the future, the EU’s guidelines would want to make sure that a number of things take place: that the software wasn’t biased by your race or gender, that it didn’t override the objections of a human doctor, and that it gave the patient the option to have their diagnosis explained to them.

So, yes, these guidelines are about stopping AI from running amok, but on the level of admin and bureaucracy, not Asimov-style murder mysteries.

To help with this goal, the EU convened a group of 52 experts who came up with seven requirements they think future AI systems should meet. They are as follows:

  • Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes.
  • Technical robustness and safety — AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.
  • Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.
  • Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.
  • Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.
  • Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change.”
  • Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.

You’ll notice that some of these requirements are pretty abstract and would be hard to assess in an objective sense. (Definitions of “positive social change,” for example, vary hugely from person to person and country to country.) But others are more straightforward and could be tested via government oversight. Sharing the data used to train government AI systems, for example, could be a good way to fight against biased algorithms.
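For the testable requirements, the checks themselves are not exotic. As one hedged illustration of what an auditor might run against a deployed system under the non-discrimination requirement, a basic demographic-parity comparison looks like this (the sample data and the 10% threshold are invented for the example):

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """decisions: list of (group label, approved?) pairs sampled from an audited system."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates_by_group(decisions)
    return max(rates.values()) - min(rates.values())

# Invented audit sample: (group, was the service granted?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

gap = demographic_parity_gap(sample)
print(f"approval-rate gap between groups: {gap:.0%}")
if gap > 0.10:   # 10% is an arbitrary illustrative threshold
    print("flag for review under the non-discrimination requirement")
```

Real audits combine several complementary metrics, but even a crude gap measurement like this is the kind of objective test government oversight could apply.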

These guidelines aren’t legally binding, but they could shape any future legislation drafted by the European Union. The EU has repeatedly said it wants to be a leader in ethical AI, and it has shown with GDPR that it’s willing to create far-reaching laws that protect digital rights.

But this role has been partly forced on the EU by circumstance. It can’t compete with America and China — the world’s leaders in AI — when it comes to investment and cutting-edge research, so it’s chosen ethics as its best bet to shape the technology’s future.

Read full story here…




China Claims Its Social Credit System Has ‘Restored Morality’

All of China’s 1.4 billion citizens are enrolled in its facial recognition and Social Credit System, which has already blacklisted 13 million of them; as a result, China is bragging that it has ‘restored morality’. ⁃ TN Editor

China’s state-run newspaper Global Times revealed in a column defending the nation’s authoritarian “social credit system” Monday that the communist regime had blacklisted 13.49 million Chinese citizens for being “untrustworthy.”

The article did not specify what these individuals did to find themselves on the list, though the regime has revealed the system assigns a numerical score to every Chinese citizen based on how much the Communist Party approves of his or her behavior. Anything from jaywalking and walking a dog without a leash to criticizing the government on the internet to more serious, violent, and corrupt crimes can hurt a person’s score. The consequences of a low credit score vary, but most commonly appear to be travel restrictions at the moment.

China is set to complete the implementation of the system in the country in 2020. As the date approaches, the government’s propaganda arms have escalated their promotion of it as necessary to live in a civilized society. Last week, the Chinese Communist Youth League released a music video titled “Live Up to Your Word” featuring well-known Chinese actors and musicians who cater to a teenage audience. The song in the video urged listeners to “be a trustworthy youth” and “give thumbs up to integrity” by abiding by the rules of the Communist Party. While it did not explicitly say the words “social credit system,” observers considered it a way to promote the behavior rewarded with social credit points.

Monday’s Global Times piece claimed the system will “restore morality” by holding bad citizens accountable, with “bad” solely defined in the parameters set by Communist Party totalitarian chief Xi Jinping. The central authorities in Beijing are also establishing a points-based metric for monitoring the performance of local governments, making it easier to keep local officials in line with Xi’s agenda.

“As of March, 13.49 million individuals have been classified as untrustworthy and rejected access to 20.47 million plane tickets and 5.71 million high-speed train tickets for being dishonest,” the Global Times reported, citing the government’s National Development and Reform Commission (NDRC). Among the new examples the newspaper highlights as dishonest behavior are failing to pay municipal parking fees, “eating on the train,” and changing jobs with “malicious intent.”

China had previously revealed that, as of March, the system blocked an unspecified number of travelers from buying over 23 million airplane, train, and bus tickets nationwide. That report did not say how many people the travel bans affected, as the same person could presumably attempt to buy more than one ticket or tickets for multiple means of transportation. The system blocked over three times the number of plane tickets as train tickets, suggesting the government is suppressing international travel far more than use of domestic vehicles. At the time of the release of the initial numbers in March, estimates found China had tripled the number of people on its no-fly list, which predates the social credit system.

Chinese authorities also reportedly found that some of the populations with the highest number of system violations lived in wealthy areas, suggesting Xi is targeting influential businesspeople with the system to keep them under his command.

In addition to limited access to travel, another punishment the Chinese government rolled out in March was the use of an embarrassing ringtone to alert individuals of a low-credit person in their midst. The ringtone would tell those around a person with low credit to be “careful in their business dealings” with them.

In the system, all public behavior, the Global Times explained Monday, will be divided into “administrative affairs, commercial activities, social behavior, and the judicial system” once the system is complete. No action will be too small to impact the score.

“China’s ongoing construction of the world’s largest social credit system will help the country restore social trust,” the article argued.

Read full story here…