
Why You Should Be Worried About Machines Reading Your Emotions

Reading emotions is akin to phrenology, or reading the bumps on your head to predict mental traits. Both are based on simplistic and faulty assumptions which could falsely scar an individual for life. ⁃ TN Editor

Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, as it began testing a new surveillance program called the Screening of Passengers by Observation Techniques program, or Spot for short.

While developing the program, they consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train “behavior detection officers” to scan faces for signs of deception.

But when the program was rolled out in 2007, it was beset with problems. Officers were referring passengers for interrogation more or less at random, and the small number of arrests that came about were on charges unrelated to terrorism. Even more concerning was the fact that the program was allegedly used to justify racial profiling.

Ekman tried to distance himself from Spot, claiming his method was being misapplied. But others suggested that the program’s failure was due to an outdated scientific theory that underpinned Ekman’s method; namely, that emotions can be deduced objectively through analysis of the face.

In recent years, technology companies have started using Ekman’s method to train algorithms to detect emotion from facial expressions. Some developers claim that automatic emotion detection systems will not only be better than humans at discovering true emotions by analyzing the face, but that these algorithms will become attuned to our innermost feelings, vastly improving interaction with our devices.

But many experts studying the science of emotion are concerned that these algorithms will fail once again, making high-stakes decisions about our lives based on faulty science.

Emotion detection technology requires two techniques: computer vision, to precisely identify facial expressions, and machine learning algorithms to analyze and interpret the emotional content of those facial features.

Typically, the second step employs a technique called supervised learning, a process by which an algorithm is trained to recognize things it has seen before. The basic idea is that if you show the algorithm thousands upon thousands of images of happy faces labeled “happy”, then when it sees a new picture of a happy face, it will identify that one as “happy” too.
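As a rough illustration of that second step (a minimal sketch, not any vendor’s actual pipeline), the code below trains a scikit-learn classifier on labeled face data; the random arrays are stand-ins for real cropped-face images and human-assigned emotion labels:

```python
# Minimal sketch of the supervised-learning step described above:
# train a classifier on labeled examples, then ask it to label a new
# face. Random arrays stand in for real cropped-face pixel data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_pixels = 1000, 48 * 48           # e.g. 48x48 grayscale crops
X = rng.random((n_samples, n_pixels))         # stand-in face images
y = rng.choice(["happy", "sad", "angry"], n_samples)  # human-assigned labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)                     # learn a pixels -> label mapping

# A new, unseen face gets whichever label the training data suggests.
print(clf.predict(X_test[:1]))                # e.g. ['happy']
```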

A graduate student, Rana el Kaliouby, was one of the first people to start experimenting with this approach. In 2001, after moving from Egypt to Cambridge University to undertake a PhD in computer science, she found that she was spending more time with her computer than with other people. She figured that if she could teach the computer to recognize and react to her emotional state, her time spent far away from family and friends would be less lonely.

Kaliouby dedicated the rest of her doctoral studies to this problem, eventually developing a device that helped children with Asperger syndrome read and respond to facial expressions. She called it the “emotional hearing aid”.

In 2006, Kaliouby joined the Affective Computing lab at the Massachusetts Institute of Technology, where together with the lab’s director, Rosalind Picard, she continued to improve and refine the technology. Then, in 2009, they co-founded a startup called Affectiva, the first business to market “artificial emotional intelligence”.

Read full story here…




‘Google Inside Your Head’: Brain Implants To Revolutionize AI For Humans

Part of the Transhuman dream is to achieve God-like omniscience, which is the goal with brain implants. Who would control the knowledge available to you? Google. Proceed at your own risk. ⁃ TN Editor
 

GOOGLE will be inside our heads as brain implants are developed to revolutionize AI for humans, according to an artificial intelligence expert.

Top AI expert Nikolas Kairinos believes that within 20 years, implants placed in humans’ heads will mean we no longer have to memorise a thing.

The CEO and Founder of Fountech.ai exclusively told Daily Star Online: “You won’t need to memorise anything.

Nick said that humans, without making a sound or typing a single thing, will hear the answer to any question they may have inside their heads.

“Without making a sound or typing anything, you can ask something like ‘how do you say this in French?’ and instantly you’ll hear the information from the AI implant and be able to say it.”

Nick says the need to learn things in “parrot fashion” as we are taught in schools will disappear completely.

He revealed: “The need to actually learn something parrot fashion is going to disappear because we will have access to that instantly.

“Google will be in your head, and that’s not far-fetched.”

“It’ll be like having a really smart assistant that will almost think like you.”

Nick has more than 20 years’ experience of working with start-ups and focuses on problem solving using artificial intelligence.

He revealed his thoughts on the future of AI to Daily Star Online and told us the massive changes he believes robots will have on our everyday lives.

Read full story here…

Northwestern Neuroscientist Researching Brain Chips To Make People Superintelligent

What if you could make money, or type something, just by thinking about it? It sounds like science fiction, but it might be close to reality.

In as little as five years, super smart people could be walking down the street: men and women who’ve paid to increase their intelligence.

Northwestern University neuroscientist and business professor Dr. Moran Cerf made that prediction, because he’s working on a smart chip for the brain.

“Make it so that it has an internet connection, and goes to Wikipedia, and when I think this particular thought, it gives me the answer,” he said.

Cerf is collaborating with Silicon Valley bigwigs he’d rather not name.

Facebook also has been working on building a brain-computer interface, and SpaceX and Tesla CEO Elon Musk is backing a brain-computer interface company called Neuralink.

“Everyone is spending a lot of time right now trying to find ways to get things into the brain without drilling a hole in your skull,” Cerf said. “Can you eat something that will actually get to your brain? Can you eat things in parts that will assemble inside your head?”

It sounds mind-blowing. Relationships might be on the line.

“This is no longer a science problem. This is a social problem,” Cerf said.

Cerf worries about creating intelligence gaps in society, on top of existing gender, racial, and financial inequalities.

“They can make money by just thinking about the right investments, and we cannot; so they’re going to get richer, they’re going to get healthier, they’re going to live longer,” he said.

Read full story here…




In-Store Cameras Spot Shoplifters Before They Steal

Vaak’s website claims to “Analyze more than 100 person feature quantities such as face, clothes, movement direction, attribute and estimate behavior and purpose.” Basically, it tags you ‘guilty’ before you commit a crime. ⁃ TN Editor

It’s watching, and it knows a crime is about to take place before it happens. Vaak, a Japanese startup, has developed artificial intelligence software that hunts for potential shoplifters, scanning security-camera footage for fidgeting, restlessness and other potentially suspicious body language.

While AI is usually envisioned as a smart personal assistant or self-driving car, it turns out the technology is pretty good at spotting nefarious behavior. Like a scene out of the movie “Minority Report,” algorithms analyze security-camera footage and alert staff about potential thieves via a smartphone app. The goal is prevention; if the target is approached and asked if they need help, there’s a good chance the theft never happens.
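The article doesn’t disclose how Vaak’s software works internally. Purely as a hedged sketch of the detect-and-alert loop it describes — with a stub standing in for the real behavior model, and an invented alert endpoint — the flow might look like this:

```python
# Hypothetical sketch of a detect-and-alert loop like the one described
# above. The scoring stub and alert URL are illustrative placeholders,
# not Vaak's actual software.
import cv2        # OpenCV, to read the security-camera stream
import requests   # to push alerts to a staff smartphone app

SUSPICION_THRESHOLD = 0.9

def score_frame(frame) -> float:
    """Placeholder for a trained behavior model. A real system would run
    pose estimation plus a sequence classifier over many frames to score
    fidgeting, restlessness and similar cues."""
    return 0.0

cap = cv2.VideoCapture("camera_feed.mp4")  # stand-in for a live feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    score = score_frame(frame)
    if score > SUSPICION_THRESHOLD:
        # The goal is prevention: staff approach the target and offer help.
        requests.post("https://staff-app.example/alerts",
                      json={"camera": "entrance", "score": score})
```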

Vaak made headlines last year when it helped to nab a shoplifter at a convenience store in Yokohama. Vaak had set up its software in the shop as a test case, which picked up on previously undetected shoplifting activity. The perpetrator was arrested a few days later.

“I thought then, ‘Ah, at last!’” said Vaak founder Ryo Tanaka, 30. “We took an important step closer to a society where crime can be prevented with AI.”

Shoplifting cost the global retail industry about $34 billion in lost sales in 2017 — the biggest source of shrinkage, according to a report from Tyco Retail Solutions. While that amounts to approximately 2 percent of revenue, it can make a huge difference in an industry known for razor-thin margins.

The opportunity is huge. Retailers are projected to invest $200 billion in new technology this year, according to Gartner Inc., as they become more open to embracing technology to meet consumer needs, as well as improve bottom lines.

“If we go into many retailers whether in the U.S. or U.K., there are very often going to be CCTV cameras or some form of cameras within the store operation,” said Thomas O’Connor, a retail analyst at Gartner. “That’s being leveraged by linking it to an analytics tool, which can then do the actual analysis in a more efficient and effective way.”

Because it involves security, retailers have asked AI-software suppliers such as Vaak and London-based Third Eye not to disclose their use of the anti-shoplifting systems. It’s safe to assume, however, that several big-name store chains in Japan have deployed the technology in some form or another. Vaak has met with or been approached by the biggest publicly traded convenience-store and drugstore chains in Japan, according to Tanaka.

Read full story here…




Robot Deliveries Grew By 40% In 2018

With robot delivery growth rates of 40-50% per year, the displacement of human workers will soar during the next 10 years. As a result, the robotics industry will be one of the hottest investment areas, further exacerbating the displacements. ⁃ TN Editor

Robots took on a record number of jobs in North American firms last year, the Robotic Industries Association (RIA) said Thursday.

According to the RIA’s data, 35,880 robots were shipped in 2018 to the U.S., Canada and Mexico, up 7 percent from the previous year. Of those shipments, 16,702 were to non-automotive companies — a year-on-year increase of 41 percent.

The consumer goods sector purchased almost 50 percent more robots in 2018 than in 2017, while life sciences saw an increase of a third.

However, shipments to the automotive industry slowed by 12 percent. The industry accounted for 53 percent of total robot shipments to North American companies — its lowest share since 2010.

“These sales and shipments aren’t just to large, multinational companies anymore. Small and medium-sized companies are using robots to solve real-world challenges, which is helping them be more competitive on a global scale,” said Jeff Burnstein, president of the Association for Advancing Automation — the RIA’s parent company.

U.S. record

In the U.S. alone, robot shipments across all sectors increased by more than 15 percent, marking a record number of shipments to American companies.

Every sector included in the RIA’s analysis saw an increase, with the exception of the automotive industry, where robot shipments to American vehicle makers fell by 30 percent.

Despite an increasing uptake of automation in the workplace, some have argued that companies should be doing more to preserve human jobs.

Last month, South African President Cyril Ramaphosa told a press conference that policymakers needed to “deliver a human-centered agenda.”

In its 2018 “Future of Work” report, the World Economic Forum noted that businesses “will need to recognize human capital investment as an asset rather than a liability.”

Read full story here…




Orlando To Introduce Driverless Buses

Buses like these are being rolled out around the world, in Australia, Japan, Sweden, Finland, France, China and elsewhere. Like it or not, self-driving technology is here to stay and will help drive the 4th Industrial Revolution. ⁃ TN Editor

Officials Tuesday revealed a glimpse of what could one day be the future of transit in Orlando by unveiling a small driverless bus that soon will maneuver around Lake Nona.

The battery-powered vehicle, run by Beep software, is one of two that are expected to begin operating in southeast Orlando this spring. The shuttles are said to be quiet and smooth-riding, and can carry up to 15 passengers at speeds of up to 16 mph.

For several years, city officials have studied autonomous vehicles — including embarking on a $300,000 study of the technology — in hopes of launching it one day within the city.





‘Digisexuals’ Demand Human Rights At UN To Have Sex With AI Robots

Is technology causing humanity to go mad? The rise of ‘digisexuals’ will lead to demographic disaster as well as the most dysfunctional human relationships in the history of the world.  ⁃ TN Editor

Tech-savvy “digisexuals” who lust after AI software and realistic robots are demanding human rights.

An emerging sexual identity known as “digisexuality” is said to be gaining traction among open-minded youngsters in Britain, Japan, Russia and the United States.

Research by academics Neil McArthur and Markie Twist, who co-authored a paper titled “The Rise of Digisexuality”, suggests the trend is becoming more commonplace.

These digisexuals are forgoing humans in favour of intimate, and even sexual, relationships with advanced computer software and lifelike robots, according to Markie and McArthur.

One digisexual, Akihiko Kondo, a 35-year-old school administrator in Japan who married a virtual reality singer, deems himself to be a sexual minority facing discrimination.

Markie and McArthur believe that those who identify as digisexuals may face resistance akin to the pushback against other sexual minorities, such as homosexual, transgender and bisexual people.

Pressuring for human rights protections could be one way in which digisexuals attempt to achieve recognition.

The campaign, it seems, has already begun online.

“I think we are moving towards a system that grants broad sexual freedom and recognises the value of alternative sexual identities in general,” Dr McArthur, a philosophy professor at the University of Manitoba, told Daily Star Online.

“Canada and the Nordic countries are the leaders at this but the rest of Europe and America are not far behind.”

Read full story here…




Pentagon Releases Blueprint For Accelerating Artificial Intelligence

Pentagon Technocrats are having a heyday with AI. The report states, “AI is poised to transform every industry, and is expected to impact every corner of the Department, spanning operations, training, sustainment, force protection, recruiting, healthcare, and many others.” The problem is obvious: “artificial intelligence” is an oxymoron. ⁃ TN Editor

The Pentagon made public for the first time on Feb. 12 the outlines of its master plan for speeding the injection of artificial intelligence (AI) into military equipment, including advanced technologies destined for the battlefield.

By declassifying key elements of a strategy it had adopted last summer, the Defense Department appeared to be trying to address disparate criticism that it was not being heedful enough of the risks of using AI in its weaponry or not being aggressive enough in the face of rival nations’ efforts to embrace AI.

The 17-page strategy summary said that AI — a shorthand term for machine-driven learning and decision-making — held out great promise for military applications, and that it “is expected to impact every corner of the Department, spanning operations, training, sustainment, force protection, recruiting, healthcare, and many others.”

It depicted AI’s embrace in solely positive terms, asserting that “with the application of AI to defense, we have an opportunity to improve support for and protection of U.S. service members, safeguard our citizens, defend our allies and partners, and improve the affordability and speed of our operations.”

Stepping back from AI in the face of aggressive AI research efforts by potential rivals would have dire — even apocalyptic — consequences, it further warned. It would “result in legacy systems irrelevant to the defense of our people, eroding cohesion among allies and partners, reduced access to markets that will contribute to a decline in our prosperity and standard of living, and growing challenges to societies that have been built upon individual freedoms.”

The publication of the Pentagon strategy’s core concepts comes eight months after a Silicon Valley revolt against the military’s premier AI research program. After thousands of Google employees signed a petition protesting the company’s involvement in an effort known as Project Maven, meant to speed up the analysis of videos taken by a drone so that military personnel could more readily identify potential targets, Google announced on June 1 that it would back out of it.

But the release of the strategy makes clear that the Trump administration isn’t having second thoughts about the utility of AI. It says the focus of the Defense Department’s Joint Artificial Intelligence Center (JAIC), created last June, will be on “near-term execution and AI adoption.” And in a section describing image analysis, the document suggests there are some things machines can do better than humans can. It says that “AI can generate and help commanders explore new options so that they can select courses of action that best achieve mission outcomes, minimizing risks to both deployed forces and civilians.”

Read full story here…





Creators Say New AI Text Generator Too Dangerous To Release

While it is commendable that the creators see the dangerous results of their AI and are willing to withhold it pending further analysis, it is just a matter of time before it slips out. There is no human intelligence used when generating artificial stories. ⁃ TN Editor

The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed “deepfakes for text” – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.
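OpenAI withheld the full model, but a smaller set of GPT2 weights was released publicly. Purely as a sketch of the predict-what-comes-next loop described above — using the Hugging Face transformers library, which is an assumption here rather than OpenAI’s own tooling — generation looks roughly like this:

```python
# Minimal sketch of autoregressive text generation with the smaller,
# publicly released GPT-2 weights, via the Hugging Face transformers
# library (an assumption -- not OpenAI's withheld full model).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("It was a bright cold day in April, "
          "and the clocks were striking thirteen")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts the next token and appends it to the
# context -- which is all "writing the next few sentences" amounts to.
output = model.generate(
    input_ids,
    max_length=100,   # prompt plus roughly a paragraph of continuation
    do_sample=True,   # sample, rather than always taking the top token
    top_k=40,         # restrict sampling to the 40 likeliest tokens
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```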

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with:

“I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”

Feed it the first few paragraphs of a Guardian story about Brexit, and its output is plausible newspaper prose, replete with “quotes” from Jeremy Corbyn, mentions of the Irish border, and answers from the prime minister’s spokesman.

One such, completely artificial, paragraph reads: “Asked to clarify the reports, a spokesman for May said: ‘The PM has made it absolutely clear her intention is to leave the EU as quickly as is possible and that will be under her negotiating mandate as confirmed in the Queen’s speech last week.’”

Read full story here…




Police Across The US Are Training Crime-Predicting AIs On Falsified Data

The entire criminal justice system across America is being corrupted by the blatant misuse of AI technology. Police are not ignorant when they seek the results they want instead of the objective facts of a matter. This is comparable to the false global warming science community.  ⁃ TN Editor

In May of 2010, prompted by a series of high-profile scandals, the mayor of New Orleans asked the US Department of Justice to investigate the city police department (NOPD). Ten months later, the DOJ offered its blistering analysis: during the period of its review from 2005 onwards, the NOPD had repeatedly violated constitutional and federal law.

It used excessive force, and disproportionately against black residents; targeted racial minorities, non-native English speakers, and LGBTQ individuals; and failed to address violence against women. The problems, said assistant attorney general Thomas Perez at the time, were “serious, wide-ranging, systemic and deeply rooted within the culture of the department.”

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.

Predictive policing algorithms are becoming common practice in cities across the US. Though lack of transparency makes exact statistics hard to pin down, PredPol, a leading vendor, boasts that it helps “protect” 1 in 33 Americans. The software is often touted as a way to help thinly stretched police departments make more efficient, data-driven decisions.

But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study. “If the data itself is incorrect, it will cause more police resources to be focused on the same over-surveilled and often racially targeted communities. So what you’ve done is actually a type of tech-washing where people who use these systems assume that they are somehow more neutral or objective, but in actual fact they have ingrained a form of unconstitutionality or illegality.”

The researchers examined 13 jurisdictions, focusing on those that have used predictive policing systems and been subject to a government-commissioned investigation. The latter requirement ensured that the policing practices had legally verifiable documentation. In nine of the jurisdictions, they found strong evidence that the systems had been trained on “dirty data.”

The problem wasn’t just data skewed by disproportionate targeting of minorities, as in New Orleans. In some cases, police departments had a culture of purposely manipulating or falsifying data under intense political pressure to bring down official crime rates. In New York, for example, in order to artificially deflate crime statistics, precinct commanders regularly asked victims at crime scenes not to file complaints. Some police officers even planted drugs on innocent people to meet their quotas for arrests. In modern-day predictive policing systems, which rely on machine learning to forecast crime, those corrupted data points become legitimate predictors.
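The mechanism behind that warning can be made concrete with a toy simulation (an illustration only, not taken from the study): give two neighborhoods an identical true crime rate but a skewed historical record, patrol wherever the record is highest, and the skew compounds, because only the patrolled neighborhood’s crime ever gets recorded.

```python
# Toy illustration (not from the study) of the feedback loop: a skewed
# historical record steers patrols, and patrols generate the only new
# records, so the skew reinforces itself.
import random

random.seed(0)
TRUE_CRIME_RATE = 0.3   # identical in both neighborhoods
records = [30, 10]      # neighborhood 0 was historically over-policed

for year in range(20):
    patrolled = 0 if records[0] >= records[1] else 1  # the "prediction"
    for hood in (0, 1):
        crime_occurred = random.random() < TRUE_CRIME_RATE
        # Only crime that police are present to observe enters the data.
        if crime_occurred and hood == patrolled:
            records[hood] += 1

print(records)  # e.g. [36, 10] -- the initial skew has only grown
```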

Read full story here…




IBM’s ‘Debater AI’ Loses Debate To Human

Now computer AI can argue with you. IBM again pushes the envelope with its advanced AI software/hardware to take on a champion human debater, but it lost. The next debate may turn out differently. ⁃ TN Editor

The subject under debate was whether the government should subsidize preschools. But the real question was whether a machine called IBM Debater could out-argue a top-ranked human debater.

The answer, on Monday night, was no.

Harish Natarajan, the grand finalist at the 2016 World Debating Championships, swayed more of an audience of hundreds toward his point of view than the AI-powered IBM Debater did toward its. Humans, at least those equipped with degrees from Oxford and Cambridge universities, can still prevail when it comes to the subtleties of knowledge, persuasion and argument.

It wasn’t a momentous headline victory like we saw when IBM’s Deep Blue computer beat the best human chess player in 1997 or Google’s AlphaGo vanquished the world’s best human players of the ancient game of Go in 2017. But IBM still showed that artificial intelligence can be useful in situations where there’s ambiguity and debate, not just a simple score to judge who won a game.

“What really struck me is the potential value of IBM Debater when [combined] with a human being,” Natarajan said after the debate. IBM’s AI was able to dig through mountains of information and offer useful context for that knowledge, he said.

It was the second time IBM Debater took on humans in public, though it’s taken part in dozens of debates behind Big Blue’s walls. In the first IBM Debater competition, the AI defeated one human debater soundly while losing a closer competition with another. This time, though, the human opponent was tougher — indeed, IBM researchers involved in the years-long project expected their AI would lose.

IBM Debater lost, but there’s no question it won in a way: Listening to it, you evaluate what it’s saying, not just that it’s a computer saying something. The machine marshaled its argument, broke that down into a few points and backed them up with data from various studies. It wasn’t perfect, but it was on point.

And, weirdly for an AI, it told us how Homo sapiens ought to behave.

“Giving opportunities to the less fortunate should be a moral obligation for any human being,” IBM Debater said.

Read full story here…