Fool You: Creepy Nvidia AI Generates Authentic-Looking Humans

Nvidia’s technology is creepy enough by itself, but it can be used to take your image and make it say anything its operators want, with no detectable fakery. Nvidia is a leader in graphics, AI, Smart City technology and facial recognition. ⁃ TN Editor

Believe it or not, all these faces are fake. They have been synthesized by Nvidia’s new AI algorithm, a generative adversarial network capable of automagically creating humans, cats, and even cars.

The technology works so well that we can expect synthetic image search engines soon — just like Google’s, but generating new fake images on the fly that look real. Yes, you know where that is going — and sure, it can be a lot of fun, but also scary. Check out the video. It truly defies belief:

According to Nvidia, its GAN is built around a concept called “style transfer.” Rather than trying to copy and paste elements of different faces into a frankenperson, the system analyzes three basic levels of style — coarse, middle, and fine — and merges them transparently into something completely new.

Coarse styles cover parameters such as pose, face shape, and hairstyle. Middle styles cover facial features, like the shape of the nose, cheeks, or mouth. Finally, fine styles affect color details, such as skin tone and hair color.
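To make the coarse/middle/fine idea concrete, here is a minimal sketch of “style mixing” in Python. It is not Nvidia’s code: the toy mapping network, the layer count, and the crossover points are assumptions chosen only to show how feeding different style vectors to different layers lets one source face contribute pose while another contributes color.

```python
import numpy as np

# A toy illustration of per-layer 'style' control -- NOT Nvidia's StyleGAN.
# The mapping network, layer count, and crossover points are invented
# purely to show the mechanics of style mixing.

rng = np.random.default_rng(0)

LATENT_DIM = 512   # dimensionality of the latent and style vectors
N_LAYERS = 18      # StyleGAN's 1024x1024 generator uses 18 synthesis layers

# Toy stand-in for the mapping network (an 8-layer MLP in the real model)
# that turns a random latent z into an intermediate style vector w.
W_map = rng.standard_normal((LATENT_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def mapping(z):
    return np.tanh(z @ W_map)

def mix_styles(w_a, w_b, crossover):
    """Assign style A to layers before `crossover` and style B to the rest.

    Early (coarse) layers govern pose and face shape, middle layers
    facial features, and late (fine) layers color details, so the
    crossover point decides which source face controls which attributes.
    """
    per_layer = np.tile(w_a, (N_LAYERS, 1))   # one style vector per layer
    per_layer[crossover:] = w_b
    return per_layer                           # shape: (N_LAYERS, LATENT_DIM)

z1, z2 = rng.standard_normal(LATENT_DIM), rng.standard_normal(LATENT_DIM)
w1, w2 = mapping(z1), mapping(z2)

pose_from_1 = mix_styles(w1, w2, crossover=4)     # coarse styles from face 1
colors_from_2 = mix_styles(w1, w2, crossover=14)  # only fine styles from face 2
print(pose_from_1.shape, colors_from_2.shape)
```

In the real generator, each synthesis layer modulates its feature maps with the style vector it receives, which is why early layers end up controlling coarse attributes and late layers fine ones.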

According to the scientists, the generator is “capable of separating inconsequential variation from high-level attributes” too, in order to eliminate noise that is irrelevant for the new synthetic face.

For example, it can distinguish a hairdo from the actual hair, eliminating the former while applying the latter to the final photo. It can also specify the strength of how styles are applied to obtain more or less subtle effects.

Not only is the generative adversarial network capable of autonomously creating human faces, it can do the same with animals like cats. It can even generate new cars and bedrooms.

Nvidia’s system is not only capable of generating completely new synthetic faces; it can also seamlessly modify specific features of real people, such as age, hair color, or skin color.

Read full story here…




Expert: AI Soldiers Will Develop ‘Moral Compass’ And Defy Orders

It has already been demonstrated that AI algorithms exhibit the biases of their creators, so why not murderous intent as well? Technocrats are so absorbed with their Pollyanna inventions that they cannot see the logical end of their existence. ⁃ TN Editor

Murderous robot soldiers will become so advanced they could develop their own moral code to violently defy orders, an AI expert claims.

Ex-cybernetics engineer Dr Ian Pearson predicts we are veering towards a future of conscious machines.

But if robots are thrust into action by military powers, the futurologist warns they will be capable of conjuring up their own “moral viewpoint”.

And if they do, the ex-rocket scientist claims they may turn against the very people sending them out to battle.

Dr Pearson, who blogs for Futurizon, told Daily Star Online: “As AI continues to develop and as we head down the road towards consciousness – and it isn’t going to be an overnight thing, but we’re gradually making computers more and more sophisticated – at some point you’re giving them access to moral education so they can learn morals themselves.

“You can give them reasoning capabilities and they might come up with a different moral code, which puts them on a higher pedestal than the humans they are supposed to be serving.

Asked if this could prove fatal, he responded: “Yes, of course.

“If they are in control of weapons and they decide that they are a superior moral being than the humans they are supposed to be guarding, they might make decisions that certain people ought to be killed in order to protect the larger population.

“Who knows what decisions they might take?

“If you have a guy on a battlefield, telling soldiers to shoot this bunch of people, for whatever reason, but the computer thinks otherwise, the computer is not convinced by it, it might conclude that soldier giving the orders is the worst offender rather than the people he’s trying to kill, so it might turn around and kill him instead.

“It’s entirely possible, it depends on how the systems are written.”

Dr Pearson’s warning comes amid growing concerns of fully autonomous robots being used in war.

Read full story here…




Like “Blade Runner”, Eye-Scanning Lie Detector Is Here to Stay

By Converus’ own admission, its system is wrong 14% of the time, i.e., it tells the truth only about 86% of the time. Yes, the lie detector lies when it misjudges you, and that can negatively change the course of your life. ⁃ TN Editor

Sitting in front of a Converus EyeDetect station, it’s impossible not to think of Blade Runner. In the 1982 sci-fi classic, Harrison Ford’s rumpled detective identifies artificial humans using a steam-punk Voight-Kampff device that watches their eyes while they answer surreal questions. EyeDetect’s questions are less philosophical, and the penalty for failure is less fatal (Ford’s character would whip out a gun and shoot). But the basic idea is the same: By capturing imperceptible changes in a participant’s eyes—measuring things like pupil dilation and reaction time—the device aims to sort deceptive humanoids from genuine ones.

It claims to be, in short, a next-generation lie detector. Polygraph tests are a $2 billion industry in the US and, despite their inaccuracy, are widely used to screen candidates for government jobs. Released in 2014 by Converus, a Mark Cuban–funded startup, EyeDetect is pitched by its makers as a faster, cheaper, and more accurate alternative to the notoriously unreliable polygraph. By many measures, EyeDetect appears to be the future of lie detection—and it’s already being used by local and federal agencies to screen job applicants. Which is why I traveled to a testing center, just north of Seattle, to see exactly how it works.

Jon Walters makes an unlikely Blade Runner. Smartly dressed and clean cut, the former police chief runs Public Safety Testing, a company that conducts preemployment tests for police forces, fire departments, and paramedics in Washington State and beyond. Screening new hires used to involve lengthy, expensive polygraph tests, which typically require certified examiners to facilitate them. Increasingly, however, Walters tells me, law enforcement agencies are opting for EyeDetect.

Unlike a polygraph, EyeDetect is fast and largely automatic. This bypasses one of the pitfalls of polygraphs: human examiners, who can carry their biases when they interpret tests. According to Walters, biases don’t really “come into play” with EyeDetect, and the test takes a brisk 30 minutes as opposed to the polygraph’s 2- to 4-hour-long slog. Moreover, EyeDetect is a comfortable experience for the test subject. “When I was wired up for the polygraph, it was kind of intimidating,” Walters told me. “Here you just sit and look into the machine.”

I settle in for a swift 15-minute demonstration in which the test will guess a number I’m thinking of. An infrared camera observes my eye, capturing images 60 times a second while I answer questions on a Microsoft Surface tablet. That data is fed to Converus’ servers, where an algorithm, tuned and altered using machine learning, calculates whether or not I’m being truthful.

The widely accepted assumption underlying all of this is that deception is cognitively more demanding than telling the truth. Converus believes that emotional arousal manifests itself in telltale eye motions and behaviors when a person lies.
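As a purely hypothetical sketch of that kind of pipeline (ocular measurements in, a deception score out), consider the following. The feature names, the synthetic numbers, and the choice of a logistic-regression classifier are all assumptions for illustration; Converus has not published EyeDetect’s actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A hypothetical mock-up of an 'ocular features in, deception score out'
# classifier. The features, the synthetic data, and the model choice
# are all assumptions; EyeDetect's real algorithm is proprietary.

rng = np.random.default_rng(1)
n = 400  # simulated question responses per class

# Per-question features an eye tracker sampling at 60 Hz might summarize:
# mean pupil dilation (mm), response time (s), and fixation count.
truthful = np.column_stack([rng.normal(3.0, 0.3, n),
                            rng.normal(1.2, 0.2, n),
                            rng.normal(4.0, 1.0, n)])
# Assume lying is more cognitively demanding, shifting every feature upward.
deceptive = np.column_stack([rng.normal(3.4, 0.3, n),
                             rng.normal(1.5, 0.2, n),
                             rng.normal(5.5, 1.0, n)])

X = np.vstack([truthful, deceptive])
y = np.array([0] * n + [1] * n)  # 0 = truthful, 1 = deceptive

clf = LogisticRegression().fit(X, y)
print("deception probability:", clf.predict_proba([[3.3, 1.4, 5.0]])[0, 1])
```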

Converus claims that EyeDetect is “the most accurate lie detector available,” boasting 86 percent accuracy. By comparison, many academics consider polygraph tests to be 65 to 75 percent accurate. The company already claims close to 500 customers in 40 countries, largely using EyeDetect for job screening. In the US, this includes the federal government as well as 21 state and local law enforcement agencies, according to Converus. The Department of State recently paid Converus $25,000 to use EyeDetect when vetting local hires at the US Embassy in Guatemala, WIRED’s reporting revealed. Converus says its technology has also been used in an internal investigation at the US Embassy in Paraguay.

In documents obtained through public records requests, Converus says that the Defense Intelligence Agency and the US Customs and Border Protection are also trialing the technology. Converus says that individual locations of Best Western, FedEx, Four Points by Sheraton, McDonald’s, and IHOP chains have used the tech in Guatemala and Panama within the last three years. (A 1988 federal law prohibits most private companies from using any kind of lie detector on staff or recruits in America.) WIRED reached out to all five companies, but none were able to confirm that they had used EyeDetect.

Read full story here…




Historic? DeepMind’s AlphaZero AI Shows Human-Like Intuition

It’s only a chess game this time, but in real life do we really want AI that “readily sacrificed its soldiers for a better position in the skirmish… placing far less value on individual pieces”? Intuition is a characteristic of the human soul, which inert AI can never duplicate. ⁃ TN Editor

DeepMind’s artificial intelligence programme AlphaZero is now showing signs of human-like intuition and creativity, in what developers have hailed as a ‘turning point’ in history.

The computer system amazed the world last year when it mastered the game of chess from scratch within just four hours, despite never being taught how to win.

But now, after a year of testing and analysis by chess grandmasters, the machine has developed a new style of play unlike anything ever seen before, suggesting the programme is now improvising like a human.

Unlike the world’s best chess machine – Stockfish – which calculates millions of possible outcomes as it plays, AlphaZero learns from its past successes and failures, making its moves based on ‘a nebulous sense that it is all going to work out in the long run,’ according to experts at DeepMind.

When AlphaZero was pitted against Stockfish in 1,000 games, it lost just six, winning convincingly 155 times, and drawing the remaining bouts.

Yet it was the way that it played that has amazed developers. While chess computers predominately like to hold on to their pieces, AlphaZero readily sacrificed its soldiers for a better position in the skirmish.

Speaking to The Telegraph, Prof David Silver, who leads the reinforcement learning research group at DeepMind, said: “It’s got a very subtle sense of intuition which helps it balance out all the different factors.

“It’s got a neural network with millions of different tunable parameters, each learning its own rules of what is good in chess, and when you put them all together you have something that expresses, in quite a brain-like way, our human ability to glance at a position and say ‘ah ha this is the right thing to do’.

“My personal belief is that we’ve seen something of a turning point where we’re starting to understand that many abilities, like intuition and creativity, that we previously thought were in the domain only of the human mind, are actually accessible to machine intelligence as well. And I think that’s a really exciting moment in history.”

AlphaZero started as a ‘tabula rasa’ or blank slate system, programmed with only the basic rules of chess and learned to win by playing millions of games against itself in a process of trial and error known as reinforcement learning.

This mirrors the way the human brain learns, adjusting tactics based on previous wins and losses, and it means AlphaZero needs to search just 60 thousand positions per second, compared to the roughly 60 million of Stockfish.
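As a rough illustration of learning purely from self-play, here is a minimal tabular sketch using the toy game of Nim rather than chess. AlphaZero’s real system pairs deep neural networks with Monte Carlo tree search; none of that is attempted here, only the core loop of playing against yourself and updating values from the outcome.

```python
import random
from collections import defaultdict

# Self-play on a toy game (Nim): players alternate taking 1-3 stones,
# and whoever takes the last stone wins. A tabular stand-in for the
# self-play idea only; AlphaZero itself uses deep networks plus search.

PILE, ALPHA, EPSILON, EPISODES = 10, 0.2, 0.1, 50_000
Q = defaultdict(float)  # Q[(pile, take)] = value for the player to move

def legal_moves(pile):
    return [t for t in (1, 2, 3) if t <= pile]

def best_move(pile):
    return max(legal_moves(pile), key=lambda t: Q[(pile, t)])

for _ in range(EPISODES):
    pile = PILE
    while pile > 0:
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
        take = (random.choice(legal_moves(pile)) if random.random() < EPSILON
                else best_move(pile))
        if take == pile:
            target = 1.0  # taking the last stone wins immediately
        else:
            # The opponent moves next, so this position is worth the
            # negation of the opponent's best achievable value.
            target = -max(Q[(pile - take, t)] for t in legal_moves(pile - take))
        Q[(pile, take)] += ALPHA * (target - Q[(pile, take)])
        pile -= take

# The learned policy: best number of stones to take from each pile size.
print({p: best_move(p) for p in range(1, PILE + 1)})
```

After enough games, the printed policy leaves the opponent a multiple of four stones whenever possible, a strategy the program was never told: the toy analogue of AlphaZero rediscovering chess openings on its own.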

Within just a few hours the programme had independently discovered and played common human openings and strategies before moving on to develop its own ideas, such as quickly swarming around the opponent’s king and placing far less value on individual pieces.

The new style of play has been analysed by Chess Grandmaster Matthew Sadler and Women’s International Master Natasha Regan, who say it is unlike any traditional chess engine.

“It’s like discovering the secret notebooks of some great player from the past,” said Sadler.

Regan added: “It was fascinating to see how AlphaZero’s analysis differed from that of top chess engines and even top Grandmaster play. AlphaZero could be a powerful teaching tool for the whole community.”

Garry Kasparov, former World Chess Champion, who famously lost to chess machine Deep Blue in 1997, said: “Instead of processing human instructions and knowledge at tremendous speed, as all previous chess machines, AlphaZero generates its own knowledge.

“It plays with a very dynamic style, much like my own. The implications go far beyond my beloved chessboard.”

Read full story here…




Air Force Wants AI Tools to Solve Surveillance Data Glut

Herein lies the problem: Surveillance produces a tsunami of data that cannot be analyzed fast enough without a) supercomputers and b) Artificial Intelligence. Technocrats are in their element. ⁃ TN Editor

Like other military services and Department of Defense components, the Air Force is finding itself overloaded these days with streaming intelligence data, and is looking to machine learning and artificial intelligence to help its analysts quickly put all that information to practical use.

Specifically, the service is looking to fuse Multi-intelligence, or Multi-INT, which can consist of data in multiple formats from manned and unmanned aircraft, satellites, and ground stations, as well as other sources. The volume and variety of that data can leave analysts unable to parse it all and knowledgeably help inform the decision-making process. So the Air Force Research Laboratory (AFRL) has issued a Request for Information looking for input from industry, academia, and other government labs on applicable tools that are available or in development.

An overabundance of data is nothing new – the Air Force has been complaining about the dangers of sensor-driven overload since the early 2000s – but the need to solve the problem is becoming more urgent. The Air Force is moving to a new exploitation paradigm called Sense, Identify, Attribute, Share (SIAS) that requires new approaches to exploiting Multi-INT, according to the RFI.

The Air Force’s Next Generation ISR Dominance Flight Plan, signed in July this year, states that the service “must have the architecture and infrastructure to enable machine intelligence, including automation, human-machine teaming, and ultimately, artificial intelligence,” which will define the service’s Intelligence, Surveillance, and Reconnaissance (ISR) efforts going forward.

“Technology components designed to support SIAS will need to ingest, reason over, and inform both analysts and other emerging technologies designed to automate both ISR database queries and physical collection,” the RFI states.

The Air Force is far from alone in looking to use AI and machine learning to deal with the onslaught of intelligence data. The National Geospatial-Intelligence Agency (NGA) wants to use the technologies to get a handle on the massive amounts of geospatial intelligence (GEOINT) it collects, focusing on the geospatial content within its Multi-INT data sources. NGA most recently awarded seven one-year research contracts for applying advanced algorithms and machine learning to characterize geospatial data. The awards were part of the agency’s three-year Boosting Innovative GEOINT Broad Agency Announcement (BIG BAA) initiative, which since 2016 has awarded a series of contracts targeting specific topic areas.

The Department of Defense’s Project Maven is taking an algorithmic approach to analyzing millions of hours of full-motion video from drones and other sources (and was the center of controversy when some Google employees objected to the company’s involvement; Google eventually decided to leave the project). The Intelligence Advanced Research Projects Activity (IARPA) also is developing AI systems in other areas of what it calls anticipatory intelligence, such as its Deep Intermodal Video Activity (DIVA) program to automate the monitoring and analysis of endless hours of surveillance video.

Read full story here…




Intelligent Robots To Power China’s Factories

As a Technocracy, China seeks maximum efficiency and maximum human displacement. These policies, coupled with draconian social engineering, are anti-human, eliminating human values and dignity. ⁃ TN Editor

Robots powered by artificial intelligence are set to replace Chinese factory workers in a move aimed at boosting the manufacturing industry which has been hit hard by a rise in wages.

The machines, which are capable of making, assembling and inspecting goods on production lines, have already been rolled out, with one factory laying off 30 workers to make way for the robots. The robots were displayed at China’s Hi-Tech fair in Shenzhen earlier this month, an annual event which showcases new development ideas with the aim of driving growth in a number of industries. But the news has annoyed Washington, as it is expected to put international competitors at a disadvantage while the two countries’ bitter trade war continues to escalate.

Speaking to the Financial Times, Sabrina Li, a senior manager at IngDan, said: “We incubated this platform so we can meet the (Made in China 2025) policy.

“One noodle factory was able to dismiss 30 people, making it more productive and efficient.”

Giving the suffering manufacturing industry a leg up is a key part of the Chinese government’s Made in China 2025 policy.

Zhangli Xing, deputy manager of Suzhou Govian Technology, which sells the quality control robots, said they are more reliable than human labour.

Mr Xing said: “A person looking by eye would take 5-6 seconds for each object, versus 2-3 seconds by machine. And humans will get tired and make more errors.”

This year the US announced three rounds of tariffs on $250bn worth of Chinese products while China retaliated with levies on $50bn of US products.

President Trump is set to meet with President Xi Jinping at the G-20 meeting in Buenos Aires next week and investors expect their relationship to remain frosty behind closed doors, regardless of cordial handshakes and smiles for the cameras.

Read full story here…




2001 Space Odyssey? Scientists Built An AI Inspired By HAL 9000

Technocrats who are obsessed with becoming aliens in the galaxy are turning to AI (think, HAL 9000) to operate all the technological systems required to keep fragile humans alive. Mars is their first major target. What could possibly go wrong? ⁃ TN Editor

Humans are going places. NASA’s newest plan is to launch crewed missions to Mars in the 2030s, and we’ll need the most advanced and reliable space technology to help get us there safe and sound.

That’s where HAL 9000 – the villainous, insane killbot from 2001: A Space Odyssey – comes in. Believe it or not, sci-fi’s most notorious murder machine was the inspirational basis for a new HAL-like cognitive computer system designed to autonomously run planetary space stations for real one day.

If you’re thinking oh god no please god no, don’t worry.

AI and robotics developer Pete Bonasso from Houston-based TRACLabs says his new CASE prototype (“cognitive architecture for space agents”) mimics HAL purely in a technological sense – i.e., minus the paranoia and betrayal.

Putting those psychological flaws aside, the iconic character’s advanced computational power and scope made a vivid impression on Bonasso half a century ago.

“I saw Stanley Kubrick’s 2001: A Space Odyssey in my senior year at West Point in 1968,” Bonasso recalls in a new paper outlining the research.

Back then, the engineering student only had access to a single computer (the whole university only had one): a General Electric 225 with just 125 kilobytes of memory.

Despite the machine’s limitations, Bonasso figured out how to program it to play games of virtual pool, but witnessing HAL was a revelation on a whole other level.

“When I saw 2001, I knew I had to make the computer into another being, a being like HAL 9000,” Bonasso explains.

Decades later, the programmer has effectively achieved just that.

The AI prototype developed by Bonasso has so far only managed a planetary space station in a simulation lasting four hours, but initial results are promising: in the experiment, zero virtual astronauts were ruthlessly double-crossed and slaughtered.

“Our colleagues and NASA counterparts are not concerned that our HAL might get out of control,” Bonasso explained to Space.com.

“That’s because it can’t do anything it’s not programmed to do.”

What CASE can do is plan and control activities and technological operations to keep a colony base running around the clock.

Read full story here…




Australia: Festival Of Dangerous Ideas Lights The Fuse

Technocrat Toby Walsh spoke on the future of AI: “Society shapes technology and technology can shape society.” Technocracy is all about social engineering, or shaping society, into man’s image. AI is not the solution to the world’s “wicked” problems. ⁃ TN Editor

What will happen in the next sexual revolution? When will machines become smarter than humans? The Festival of Dangerous Ideas, presented this year for the first time in conjunction with UNSW, asked some fascinating questions.

The first year of UNSW’s Centre for Ideas co-presenting the Festival of Dangerous Ideas with The Ethics Centre has been deemed an overwhelming success.

Over two days, 16,500 curious minds travelled by ferry to Sydney Harbour’s Cockatoo Island to feast on ideas informing our future. Thirty-one sessions interspersed with art installations created space for critical thinking and constructive disagreement on issues facing humanity.

Ann Mossop, Director of the Centre for Ideas and co-curator of the Festival of Dangerous Ideas, says the Festival is all about bringing ideas to public audiences in a fresh and engaging way.

“This is very much the mission of the Centre for Ideas and goes to the heart of the University’s strategy of social engagement. The Festival also values freedom of expression, independent thought and open debate, which are core values for the University.”

To help spark that debate, the Festival invited leading thinkers from around the world, including Romanian-American New York Times correspondent Rukmini Callimachi, who delves deeply into ISIS and is well known for her podcast Caliphate; Chuck Klosterman, American author and essayist who focuses on pop culture; and Germaine Greer, who has been expounding dangerous ideas for some decades.

One speaker – Stephen Fry – made a whirlwind trip to Sydney of less than 48 hours to deliver his keynote “The Hitch”, an homage to his friend Christopher Hitchens who delivered the opening address at the first Festival of Dangerous Ideas in Sydney in 2009.

Local leading thinkers drawn from the UNSW community had the opportunity to reach a wider and different audience at the Festival, and numerous events were aligned with UNSW’s Grand Challenge topic of Living with Technology in the 21st Century.

Toby Walsh, Scientia Professor of Artificial Intelligence, for example, predicted 2062 as the date when AI will match human intelligence, and raised the accompanying issues of data manipulation and the ethics of killer robots.

Scientia Professor Rob Brooks, from the School of Biological, Earth and Environmental Sciences and Director of the Evolution & Ecology Centre, suggested sexbots could be the next sexual revolution, making us more relaxed about sex. He also warned us to beware the free sexbot and its implications.

This is the first year the Festival has been held on Cockatoo Island, and Ms Mossop says the setting made it a different kind of event.

“Rather than dipping into the Festival, most people came for an extended session and enjoyed different speakers, as well as the art installations and just being on the island. It made it more lively, because they saw speakers they didn’t already know and immersed themselves in the Festival experience.

“The program of cabaret, art, talks, ethics workshops and unique environment made for very rich experiences. Stephen Fry was amazing at the Town Hall, and our speakers on the island really delivered. [Social media activist and former Westboro Baptist Church member] Megan Phelps-Roper made a huge impression on audiences, [British historian] Niall Ferguson is a truly extraordinary speaker and Toby Walsh unleashed a drone. What more can you ask for?”

The Festival was at capacity, with tickets for the island events and Stephen Fry at the Sydney Town Hall sold out.

The Centre for Ideas team and UNSW Events were supported by 130 UNSW volunteers, mostly students, over the two days, while the Social Media team extended the Festival’s reach – the event hashtag #fodi was trending in the top 5 on Twitter throughout the day.

Read full story here…




Restaurant Robots May Upend The Food Industry, Or Not

Restaurants have traditionally provided a solid foundation for entry-level jobs in society. As robots displace young workers, the possibility exists that, after the novelty wears off, the public will completely reject restaurant automation, leaving Technocrat inventors shaking their heads. ⁃ TN Editor

Is the rise of the robot the demise of the restaurant server, chef and bartender? Restaurants like Spyce are leading the way with robots that cook complex meals on demand. Companies such as OTG have reimagined the airport restaurant experience and replaced servers who take customer orders with self-ordering tablets. A study by the Center for an Urban Future found that the automation potential for waiters and waitresses is 77%.

That figure increases to 87% when you factor in workers that prep food. This doesn’t mean all these jobs will be automated, but it is a stark reminder that automation has and will continue to reshape the workforce in ways that impact workers and change the customer experience.

The type of experience a business wants to provide its customers, combined with forces like labor and real estate costs, will influence the rate at which automation replaces humans and disrupts the traditional workflow. For example, technology is available to automatically pull espresso shots and make cappuccinos. But the number of coffee shops employing baristas appears to have grown. This seems counterintuitive, but is influenced by the consumers’ desire to enjoy an authentic coffee shop experience that includes a handcrafted espresso drink with a touch of human interaction.

But operational efficiencies for businesses, consumers’ desire for convenience, and their evolving purchasing habits could turn your bearded barista into an endangered species. Increasing labor costs, the absence of a tip credit (or the potential loss of it) for full-service restaurants, and greater efficiencies have already been cited by restaurateurs as reasons they’ve digitized some of their workforce.

Although some restaurants have replaced servers or counter staff with self-ordering tablets, they still rely on humans to bring the food to your table or pack your food to go. Companies like Bear Robotics could change that; just read their tagline: ‘Reshaping the Restaurant with Robotics & AI.’

The robot they’ve developed can be seen on their website delivering cuisine to diners’ tables. And while the robot may not provide the same human hospitality as a person, it can create a futuristic experience. With further advances in technology, these robots will be able to deliver aspects of hospitality like being courteous and helpful, just without the actual human touch.

Restaurant delivery personnel aren’t safe from technology if you live in a city. Imagine if you ordered a pizza from your local pizzeria and instead of a delivery person standing at your door with pizza in hand, a small car-like vehicle appears, its top pops open and you take your pizza out. Companies like Marble have built sidewalk delivery robots, which they are developing to become fully autonomous in dense urban environments, where they’ll navigate around people and all the obstructions found on city streets and sidewalks.

It’s not far-fetched to imagine that instead of a high school student delivering your pizza in the suburbs, a self-driving car will pull down your driveway, a little robot will get out and bring the food up to your front door. And while customers may not crave human interaction from restaurant delivery, there’s a different expectation when it comes to seeing your barkeep at the local pub.

Read full story here…




Robot Manufacturers Warn ‘Bug In AI Code’ Will Lead To Murder Sprees

Autonomous weaponry is poised to be thrust into the military market and used by the world’s leading nations.

But an expert in artificial intelligence believes their introduction will come at a truly devastating cost.

Oklahoma State University lecturer Dr Subhash Kak has warned a flaw in their design could result in a large number of deaths.

Dr Kak told Daily Star Online: “The manufacturers are cognisant of such malfunction or fault, which they will do their best to minimise or eliminate.

“At the same time they would pressure parliament or other legislative bodies to give them exemption from liability.

“There could be a bug in the code of the robot that promotes such behaviour.”

His comments came after he previously told us: “Killer robots could easily go wrong.

“They may be used by crazed individuals or religious extremists to terrorise and kill people.

“They could go wrong due to a bug, or an unknown coding flaw that showed up as response to an unforeseen or unanticipated environment or situation.

“Or they could be hacked.”

Senior arms researcher at Campaign to Stop Killer Robots, Bonnie Docherty, previously warned killer robots will “proliferate around the world”.

And she believes they would violate ethical and legal standards.

She said: “Permitting the development and use of killer robots would undermine established moral and legal standards.

“Countries should work together to preemptively ban these weapons systems before they proliferate around the world.

“The groundswell of opposition among scientists, faith leaders, tech companies, nongovernmental groups, and ordinary citizens shows that the public understands that killer robots cross a moral threshold.

“Their concerns, shared by many governments, deserve an immediate response.”

A number of countries, including the US and Russia, have recently opposed a treaty banning killer robots.

It led Noel Sharkey, a roboticist, to slam them as “shameful”.

He said: “The two main options on the table for next year’s work were binding regulations in the form of a political declaration led by Germany and France and negotiations towards a new international law to prohibit the use and development of autonomous weapons systems led by Austria, Brazil and Chile.

Read full story here…