AI Reads Mind, Creates Video Of Your Thoughts

Scientists claim that AI can learn what you are thinking by analyzing brain impulses, and then reproduce the images as video. Rudimentary as it is, it could break down the most private enclave of humanity: your personal thoughts. ⁃ TN Editor

A mind-reading tool powered by artificial intelligence has produced a staggering video of human thoughts in real time.

Russian researchers trained the programme to guess what people are thinking based on their brain waves.

They trained the AI using clips of different objects and the brainwave activity of participants watching them.

Participants were then shown clips of nature scenes, people on jet skis and human expressions.

The AI then recreated the videos from brain activity recorded with an electroencephalogram (EEG) cap, reports New Scientist.

Out of 234 attempts, 210 videos were successfully categorised.
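Mechanically, what the researchers describe is supervised learning on pairs of brain recordings and the video frames being watched at the time. Below is a minimal sketch of that idea in Python with PyTorch; the feature sizes, the tiny network and the random data are illustrative assumptions, not the actual system used in the study.

import torch
import torch.nn as nn

# Toy decoder: a window of EEG features in, one flattened 64x64 RGB frame out.
decoder = nn.Sequential(
    nn.Linear(128, 512),
    nn.ReLU(),
    nn.Linear(512, 64 * 64 * 3),
    nn.Sigmoid(),  # pixel intensities in [0, 1]
)

eeg = torch.randn(32, 128)             # 32 hypothetical EEG feature windows
frames = torch.rand(32, 64 * 64 * 3)   # the video frames shown at those moments

# Training: fit the decoder to the (EEG, frame) pairs.
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(decoder(eeg), frames)
    loss.backward()
    optimizer.step()

# Test: a fresh EEG recording alone is decoded into an image.
reconstruction = decoder(torch.randn(1, 128)).reshape(64, 64, 3)

At test time the decoder sees only brain activity, which is what lets such a system produce video of what a participant is watching without access to the clip itself.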

Colours and large shapes were reproduced most successfully, the report adds.

But human faces were harder to recreate, with many coming out distorted, researchers said.

The video first surfaced last month.

Since then, Victor Sharmas of the University of Arizona has commented on the video, saying we are still seeing only the surface of human thought.

He said: “What we are currently seeing is a caricature of human experience, but nothing remotely resembling an accurate recreation.”

Dr Ian Pearson, a futurologist and ex-cybernetics engineer, told us: “I would think in a lab we’re probably a decade or 15 years away from that (technology).

“I don’t think it will be very long after that before police are using it in interrogations, getting somebody in for questioning.

“Instead of a police officer asking questions, they’ll stick a helmet on to decide what it is that’s going through your mind.”

Read full story here…




Woof: Spot The Robot Police Dog

Short of laws or regulations to control police departments, robotics will play a huge part in future enforcement. Spot the robot dog is seen as a viable replacement for police dogs. Future weaponization is almost certain. ⁃ TN Editor

Cops have long had dogs, and robots, to help them do their jobs. And now, they have a robot dog.

Massachusetts State Police is the first law enforcement agency in the country to use Boston Dynamics’ dog-like robot, called Spot. While the use of robotic technology is not new for state police, the temporary acquisition of Spot — a customizable robot some have called “terrifying” — is raising questions from civil rights advocates about how much oversight there should be over police robotics programs.

The state’s bomb squad had Spot on loan from the Waltham-based Boston Dynamics for three months, from August until November, according to records obtained by the American Civil Liberties Union of Massachusetts and reviewed by WBUR.

The documents do not reveal a lot of details on the robot dog’s exact use, but a state police spokesman said Spot, like the department’s other robots, was used as a “mobile remote observation device” to provide troopers with images of suspicious devices or potentially hazardous locations, like where an armed suspect might be hiding.

“Robot technology is a valuable tool for law enforcement because of its ability to provide situational awareness of potentially dangerous environments,” state police spokesman David Procopio wrote.

State police say Spot was used in two incidents, in addition to testing.

Boston Dynamics vice president for business development Michael Perry said the company wants Spot to have lots of different uses, in industries ranging from oil and gas companies, to construction, to entertainment. He envisions police sending Spot into areas that are too hazardous for a human — a chemical spill, or near a suspected bomb, or into a hostage situation.

“Right now, our primary interest is sending the robot into situations where you want to collect information in an environment where it’s too dangerous to send a person, but not actually physically interacting with the space,” Perry said.

Spot is a “general purpose” robot, with an open API. That means customers — whether a police department or warehouse operator — can customize Spot with its own software. (State police say they didn’t use this feature.) It has a 360-degree, low-light camera, and an arm.
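To make “open API” concrete: it means a customer’s own software can issue commands to the robot and read back its sensor data. The sketch below is purely hypothetical; the SpotClient class, its methods and the address are invented for illustration and are not Boston Dynamics’ actual SDK.

# Purely hypothetical illustration of driving a robot through an open API.
# SpotClient and its methods are invented for this sketch; they are not
# Boston Dynamics' real SDK.

class SpotClient:
    """Stand-in for a vendor SDK client wrapping the robot's network API."""
    def __init__(self, address: str):
        self.address = address

    def stand(self):
        print(f"[{self.address}] stand command sent")

    def walk_to(self, x: float, y: float):
        print(f"[{self.address}] walking to ({x}, {y})")

    def capture_image(self) -> bytes:
        print(f"[{self.address}] low-light 360-degree image captured")
        return b"...jpeg bytes..."

# A "mobile remote observation" routine of the kind state police describe:
# approach a suspicious object, photograph it, retreat, never touch anything.
robot = SpotClient("192.168.1.10")
robot.stand()
robot.walk_to(12.0, 4.5)
image = robot.capture_image()
robot.walk_to(0.0, 0.0)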

For all of its potential, Boston Dynamics doesn’t want Spot weaponized. Perry said the lease agreements have a clause requiring the robot not be used in a way that would “physically harm or intimidate people.”

“Part of our early evaluation process with customers is making sure that we’re on the same page for the usage of the robot,” he said. “So upfront, we’re very clear with our customers that we don’t want the robot being used in a way that can physically harm somebody.”

That’s one of the reasons why the company is opting for lease agreements, rather than a sale, Perry said. Boston Dynamics wants to be selective in which companies get access to Spot — and have the ability to take the equipment back if the lease is violated.

Worries About Weaponized Robots

Through Procopio, state police said the department never weaponized any of its robots, including Spot.

But while Spot and other tactical robots aren’t designed to kill, they still can. In 2016, Dallas Police sent a bomb disposal robot armed with explosives to kill a sniper who had shot at police officers and killed five. Experts said it was the first time a non-military robot had been used to intentionally kill a person.

That deadly potential, and lack of transparency about the state police’s overall robotics program, worries Kade Crockford, director of the technology for liberty program at the ACLU of Massachusetts. Crockford said they want to see a policy from state police about its use of robotics and a conversation about how and when robots should be used. State police didn’t say whether there’s a current policy about the use of robots, and the ACLU’s records request to the agency didn’t turn one up.

“We just really don’t know enough about how the state police are using this,” Crockford said. “And the technology that can be used in concert with a robotic system like this is almost limitless in terms of what kinds of surveillance and potentially even weaponization operations may be allowed.”

Read full story here…





AI Could Most Affect ‘Knowledge Workers’

Technocrats who are inventing AI algorithms by the hundreds may inadvertently see their own work replaced by their inventions. Knowledge and white-collar workers are now in the crosshairs for job replacement. ⁃ TN Editor

The robot revolution has long been thought of as apocalyptic for blue-collar workers whose tasks are manual and repetitive. A widely cited 2017 McKinsey study said 50 percent of work activities were already automatable using current technology and those activities were most prevalent in manufacturing. New data suggests white-collar workers — even those whose work presumes more analytic thinking, higher paychecks, and relative job security — may not be safe from the relentless drumbeat of automation.

That’s because artificial intelligence — powerful computer tech like machine learning that can make human-like decisions and use real-time data to learn and improve — has white-collar work in its sights, according to a new study by Stanford University economist Michael Webb, published by the Brookings Institution. The scope of jobs potentially impacted by AI reaches far beyond white-collar jobs like telemarketing, a field that has already been decimated by bots, into jobs previously thought to be squarely in the province of humans: knowledge workers like chemical engineers, physicists, and market-research analysts.

The new research looks at the overlap between the subject-noun pairs in AI patents and job descriptions to see which jobs are most likely to be affected by AI technology. So for example, job descriptions for market-research analyst — a relatively common position with a high rate of AI exposure — share numerous terms in common with existing patents, which similarly seek to “analyze data,” “track marketing,” and “identify markets.”
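As a rough illustration of that overlap method, one can extract task-like phrases from both corpora and score an occupation by the share of its phrases that also appear in AI patents. The phrases, counts and scoring rule below are invented for illustration and are not Webb’s actual pipeline or data.

from collections import Counter

# Hypothetical verb-object phrases pulled from AI patent abstracts.
patent_phrases = Counter({
    ("analyze", "data"): 120,
    ("track", "marketing"): 35,
    ("identify", "market"): 40,
    ("recognize", "image"): 80,
})

# Hypothetical phrases from one occupation's job descriptions.
job_phrases = {
    "market-research analyst": [
        ("analyze", "data"),
        ("identify", "market"),
        ("prepare", "report"),
        ("present", "finding"),
    ],
}

def exposure_score(occupation: str) -> float:
    """Fraction of an occupation's task phrases that also appear in AI patents."""
    tasks = job_phrases[occupation]
    overlapping = sum(1 for phrase in tasks if phrase in patent_phrases)
    return overlapping / len(tasks)

print(exposure_score("market-research analyst"))  # 0.5 under these toy inputs

Under this toy scoring, half of the analyst’s tasks overlap with patented AI capabilities, which is the kind of signal the study uses to rank occupations by AI exposure.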

It’s more forward-looking than other studies in that it analyzes patents for technology that might not yet be fully developed or deployed.

Typically, estimates of automation effects on the workforce, which vary widely depending on the study, have focused on what jobs could be automated using existing technologies. The findings have generally been most damning for lower-wage, lower-education workers, where robotics and software have often eliminated part or all of certain jobs.

The specter of increased automation has raised concerns about how large swaths of Americans will be able to support themselves when their jobs become mechanized and whether the loss of low-income jobs will increase wealth inequality. This new patent research suggests automation’s impact could be much broader and affect high-paying white-collar jobs as well.

A caveat: Some AI patents might never be used, and they might not be used for their initial intentions. Also, one’s actual job is not wholly defined by the text of the original job description. But this study does provide a framework with which to view general exposure to automation.

As Adam Ozimek, chief economist at freelancing platform Upwork, put it, “Just because someone patented a device, for example, that used artificial intelligence to do market research does not mean that AI will in fact be successful at this for practical business use.”

The Stanford study also doesn’t say whether these workers will actually lose their jobs, only that their work could be impacted. So it’s perfectly possible these technologies will be used to augment jobs rather than supplant them.

Read full story here…





Amazon’s HAL 9000: Dave? What is it, Alexa?

Amazon is set to leapfrog Facebook’s dystopia and go directly to personal control; just think HAL in the movie 2001: A Space Odyssey. Alexa’s AI will be tailored to mold your actions, consumption and relationships. ⁃ TN Editor

Amazon has big plans for its virtual assistant. One day, perhaps sooner than you think, Alexa will take a proactive role in directing our lives. It’ll interpret our data, make decisions for us, and summon us when it has something to say.

Rohit Prasad, the scientist in charge of Alexa’s development, recently gave MIT Technology Review’s Karen Hao one of the most terrifying interviews in modern journalism. We know how dangerous it is to let bad actors run amok with AI and our data – if you need a refresher, recall the Cambridge Analytica scandal.

That’s not to say Prasad is a bad actor or anything other than a talented scientist. But he and the company he works for probably have access to more of our data than ten Facebooks and Twitters combined. And, to paraphrase Kanye West, no one person or company should have all that power.

Hao writes:

Speaking with MIT Technology Review, Rohit Prasad, Alexa’s head scientist, has now revealed further details about where Alexa is headed next. The crux of the plan is for the voice assistant to move from passive to proactive interactions. Rather than wait for and respond to requests, Alexa will anticipate what the user might want. The idea is to turn Alexa into an omnipresent companion that actively shapes and orchestrates your life. This will require Alexa to get to know you better than ever before.

The idea of Alexa being an omnipresent companion looking to orchestrate your life should probably alarm you. But, for now, the work Prasad and the Alexa team are doing isn’t scary on its own merit. If you’re one of the eight or nine people on the planet who has never interacted with Alexa, you’re both missing out and not really missing out. Virtual assistants, today, are equal parts miraculously intuitive and frustratingly limited.

With one interaction, you’ll say “Alexa, play some music” and the assistant will ‘randomly’ select a playlist that touches the depths of your soul, as if it knew better than you did what you needed to hear.

But the next time you use it, you might find yourself in a three-minute-long argument over whether you wanted to listen to music by Cher or purchase a beige chair (with free two-day shipping).

From a consumer point of view, it’s hard to imagine Alexa becoming so useful that we’d come running when it summons us. But Alexa’s primary mission will always be to gather data. Simply put: Amazon, Microsoft, and Google are all trillion-dollar companies because data is the most valuable resource in the world, and Alexa is among the world’s greatest data collectors.

Once Alexa stops listening for commands and starts making suggestions, it means Amazon’s no longer focused on building a handful of giant training databases composed of data from hundreds of millions of users. Instead, it indicates that it’s focused on building millions of training databases composed of data gleaned from single individuals or very small user groups.

Read full story here…





Only Human Arrogance Says AI Can Become Sentient

You cannot know what you do not know, and scientists cannot describe, much less explain, exactly what human consciousness or soul is, so how can they brag that they will create it in a computer algorithm? This is the height of arrogance with a strong desire to play God.

Both Technocracy and Transhumanism are based on Scientism, a religious belief that truth is the exclusive product of science, and that no truth can exist outside of scientific discovery. It pointedly excludes all other religious thought and especially Biblical Christianity. It is ironic that they want to imitate the powers of a God they disdain in the first place. ⁃ TN Editor

 

“Cogito, ergo sum,” wrote René Descartes. Translation: “I think, therefore I am.”

What makes us, us? How is it that we’re able to look at a tree and see beauty, hear a song and feel moved, or take comfort in the smell of rain or the taste of coffee? How do we know we still exist when we close our eyes and lie in silence? To date, science doesn’t have an answer to those questions.

In fact, it doesn’t even have a unified theory. And that’s because we can’t simulate consciousness. All we can do is try to reverse-engineer it by studying living beings. Artificial intelligence, coupled with quantum computing, could solve this problem and provide the breakthrough insight scientists need to unravel the mysteries of consciousness. But first we need to take the solution seriously.

There’s been a rash of recent articles written by experts claiming definitively that a machine will never have consciousness. This represents a healthy level of skepticism, which is necessary for science to thrive, but there isn’t a lot of room for absolutes when theoretical future-tech is involved.

An untold number of experts have weighed in on the idea of sentient machines – computers with the capacity to feel alive – and, for the most part, they all believe the idea of a living robot is science fiction, at least for now. And it is. But so too are the ideas of warp drives, teleportation, and time travel.

Yet each of these far-out ideas is not only plausible, but grounded in serious research.

We could be hundreds or thousands of years away from conscious AI, but that’s a drop in the ocean of time compared to “never.”

The prehistoric scientists working on the problem of replicating naturally occurring fire and harnessing it as an energy source may have been the brightest minds of their time, but their collective knowledge on thermodynamics would pale beside an average 5th grader’s today. Recent work in the fields of quantum computing and artificial intelligence may not show a direct path to machine consciousness, but theories that say it cannot happen are trying to prove a negative.

We cannot definitively say that intelligent extraterrestrial life does not exist simply because, so far, all the evidence suggests life on Earth is a universal anomaly. And, equally, we cannot logically say machines will never have consciousness simply because we haven’t figured out how to imbue them with it yet. Citing the difficulty of a problem isn’t evidence that it’s unsolvable.

Somehow, consciousness as we understand it manifested in the universe once. It seems arrogant to imagine we understand its limits and boundaries or that it cannot emerge as part of a quantum function in a machine system by the direction or invention of a human.

But, before we can even consider the problem of building machines that feel, we need to figure out what consciousness actually is.

Scientists tend to agree that consciousness is the feeling of being alive. While we can’t be sure, we like to think that animals are living and conscious, and plants are just living. We generally assume non-living things are not “conscious” or aware of their existence. But we don’t know.

Read full story here…





Humanoid Androids Have Entered The Workplace

The Technocrat and Transhuman holy grail is not to augment the human experience, but to replace it altogether, with AI hosting the souls of would-be immortals. The android race now has a myriad of startups working feverishly to be first. ⁃ TN Editor
 

  • Russian start-up Promobot recently unveiled what it calls the world’s first android that looks just like a real person and can serve in a business capacity.
  • Robo-C can be made to look like anyone, so it’s like an android clone.
  • It comes with an artificial intelligence system that has more than 100,000 speech modules.
  • It can perform workplace tasks, such as answering customer questions at offices, airports, banks and museums, while accepting payments.

November 2019 is a landmark month in the history of the future. That’s when humanoid robots that are indistinguishable from people start running amok in Los Angeles. Well, at least they do in the seminal sci-fi film “Blade Runner.” Thirty-seven years after its release, we don’t have murderous androids running around. But we do have androids like Hanson Robotics’ Sophia, and they could soon start working in jobs traditionally performed by people.

Russian start-up Promobot recently unveiled what it calls the world’s first autonomous android. It closely resembles a real person and can serve in a business capacity. Robo-C can be made to look like anyone, so it’s like an android clone. It comes with an artificial intelligence system that has more than 100,000 speech modules, according to the company.

It can operate at home, acting as a companion robot and reading out the news or managing smart appliances — basically, an anthropomorphic smart speaker. It can also perform workplace tasks such as answering customer questions in places like offices, airports, banks and museums, while accepting payments and performing other functions.

Digital immortality?

“We analyzed the needs of our customers, and there was a demand,” says Promobot co-founder and development director Oleg Kivokurtsev. “But, of course, we started the development of an anthropomorphic robot a long time ago, since in robotics there is the concept of the ‘Uncanny Valley,’ and the most positive perception of the robot arises when it looks like a person. Now we have more than 10 orders from companies and private clients from around the world.”

Postulated by Japanese roboticist Masahiro Mori in 1970, the Uncanny Valley is a hypothesis related to the design of robots. It holds that the more humanlike a robot appears, the more people will notice its flaws. This can create a feeling akin to looking at zombies, and can creep people out. A properly designed android that’s as faithful as possible to the human original, however, can overcome this “valley” (a dip when the effect is imagined as a graph) and the zombie factor.

While it can’t walk around, Robo-C has 18 moving parts in its face, giving it 36 degrees of freedom. The company says it has over 600 micro facial expressions, the most on the market. It also has three degrees of freedom in its neck and torso, offering limited movement. Still, Promobot says it can be useful in homes and workplaces. The price of the robot is $20,000 to $50,000 depending on options and customized appearance.


The company says it’s building four Robo-Cs: one for a government service center, where the machine will scan passports and perform other functions, one that will look like Einstein and be part of a robot exhibition, and two for a family in the Middle East that wants android versions of the father and his wife to greet guests.

“The key moment in development [of Robo-C] is the digitization of personality and the creation of an individual appearance,” says Kivokurtsev. “As a result, digital immortality, which we can offer our customers.”

Read full story here…





BCI: The Final Frontier Of Mind Reading Tech Is Very Close

Technocrats are pushing hard to penetrate your mind to read your thoughts. Brain-Computer Interfaces coupled with AI will have huge corporate demand and you will lose all privacy. ⁃ TN Editor
 

Social media companies can already use online data to make reliable guesses about pregnancy or suicidal ideation – and new BCI technology will push this even further.

It’s raining on your walk to the station after work, but you don’t have an umbrella. Out of the corner of your eye, you see a rain jacket in a shop window. You think to yourself: “A rain jacket like that would be perfect for weather like this.”

Later, as you’re scrolling on Instagram on the train, you see a similar-looking jacket. You take a closer look. Actually, it’s exactly the same one – and it’s a sponsored post. You feel a sudden wave of paranoia: did you say something out loud about the jacket? Had Instagram somehow read your mind?

While social media’s algorithms sometimes appear to “know” us in ways that can feel almost telepathic, ultimately their insights are the result of a triangulation of millions of recorded externalized online actions: clicks, searches, likes, conversations, purchases and so on. This is life under surveillance capitalism.

As powerful as the recommendation algorithms have become, we still assume that our innermost dialogue is internal unless otherwise disclosed. But recent advances in brain-computer interface (BCI) technology, which integrates cognitive activity with a computer, might challenge this.

In the past year, researchers have demonstrated that it is possible to translate directly from brain activity into synthetic speech or text by recording and decoding a person’s neural signals, using sophisticated AI algorithms.

While such technology offers a promising horizon for those suffering from neurological conditions that affect speech, this research is also being followed closely, and occasionally funded, by technology companies like Facebook. A shift to brain-computer interfaces, they propose, will offer a revolutionary way to communicate with our machines and each other, a direct line between mind and device.

But will the price we pay for these cognitive devices be an incursion into our last bastion of real privacy? Are we ready to surrender our cognitive liberty for more streamlined online services and better targeted ads?

A BCI is a device that allows for direct communication between the brain and a machine. Foundational to this technology is the ability to decode neural signals that arise in the brain into commands that can be recognized by the machine.

Because neural signals in the brain are often noisy, decoding is extremely difficult. While the past two decades have seen some success decoding sensory-motor signals into computational commands – allowing for impressive feats like moving a cursor across a screen with the mind or manipulating a robotic arm – brain activity associated with other forms of cognition, like speech, has remained too complex to decode.

But advances in deep learning, an AI technique that mimics the brain’s ability to learn from experience, are changing what’s possible. In April this year, a research team at the University of California, San Francisco, published results of a successful attempt at translating neural activity into speech via a deep-learning powered BCI.

The team placed small electronic arrays directly on the brains of five people and recorded their brain activity, as well as the movement of their jaws, mouths and tongues as they read out loud from children’s books. This data was then used to train two algorithms: one learned how brain signals instructed the facial muscles to move; the other learned how these facial movements became audible speech.

Once the algorithms were trained, the participants were again asked to read out from the children’s books, this time merely miming the words. Using only data collected from neural activity, the algorithmic systems could decipher what was being said, and produce intelligible synthetic versions of the mimed sentences.
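To make the two-algorithm pipeline concrete, here is a minimal sketch in Python with PyTorch: one network maps neural recordings to articulator kinematics, a second maps kinematics to acoustic features, and chaining them decodes speech from brain activity alone, as in the mimed-speech test above. The layer sizes, feature dimensions and choice of GRUs are illustrative assumptions, not the UCSF team’s actual architecture.

import torch
import torch.nn as nn

class BrainToArticulation(nn.Module):
    """Stage 1: neural recordings -> articulator (jaw/lip/tongue) kinematics."""
    def __init__(self, n_channels=256, n_kinematics=33):
        super().__init__()
        self.rnn = nn.GRU(n_channels, 128, batch_first=True, bidirectional=True)
        self.out = nn.Linear(256, n_kinematics)

    def forward(self, neural):          # (batch, time, channels)
        h, _ = self.rnn(neural)
        return self.out(h)              # (batch, time, kinematics)

class ArticulationToSpeech(nn.Module):
    """Stage 2: articulator kinematics -> acoustic (spectrogram-like) features."""
    def __init__(self, n_kinematics=33, n_acoustic=80):
        super().__init__()
        self.rnn = nn.GRU(n_kinematics, 128, batch_first=True, bidirectional=True)
        self.out = nn.Linear(256, n_acoustic)

    def forward(self, kinematics):
        h, _ = self.rnn(kinematics)
        return self.out(h)

# Chained inference on dummy data: acoustic features decoded directly from
# neural activity, with the intermediate kinematics never observed at test time.
stage1, stage2 = BrainToArticulation(), ArticulationToSpeech()
neural = torch.randn(1, 500, 256)       # 1 trial, 500 time steps, 256 electrodes
acoustic = stage2(stage1(neural))       # (1, 500, 80) spectrogram-like output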

Read full story here…




California Bans Law Enforcement From Using Facial Recognition Software For 3 Years

With all the craziness happening in California politics, it is hard to determine the real reason why the State Legislature banned facial recognition software for the next 3 years. Nevertheless, Californians will enjoy a greater measure of privacy than in other states. ⁃ TN Editor

California lawmakers today passed a bill placing a three-year state-wide moratorium on the use of facial recognition technology by law enforcement agencies.

AB 1215, The Body Camera Accountability Act, was introduced earlier this year by Assemblymember Phil Ting, a Democrat. Both San Francisco and Oakland previously passed similar bills preventing the use of facial recognition by law enforcement agencies; now the ban has gone state-wide.

The bill goes into effect on 1 January, 2020, and will be reviewed under a “sunset provision” in 2023.

Ting, according to an ACLU statement, says the bill will protect Californians:

Without my bill, facial recognition technology essentially turns body cameras into a 24-hour surveillance tool, giving law enforcement the ability to track our every movement. Let’s not become a police state and keep body cameras as they were originally intended – to provide police accountability and transparency.

US citizens have the right to privacy and the reasonable expectation that public surveillance systems are in place to protect us in the event that a crime is committed.

But AI-powered facial recognition systems aren’t designed to monitor public spaces for crimes. As we’ve seen in leaked Palantir documents, these systems are meant to connect to a database wherein police officers have access to the private details of any citizen.

In essence, these tools give police officers the kind of data and information that a detective 20 years ago couldn’t have gleaned with a search warrant and six months to investigate – today there’s literally an app for that.

Read full story here…




How Google’s Search Engine Determines Winners And Losers

Run by the Technocrat mindset, Google practices its Science of Social Engineering at every level, putting companies in a position to live or die by search engine placement of ads and keywords. If you run afoul of it, Google can crush you. ⁃ TN Editor

“Where’s the best place to hide a body? The second page of a Google search.”

The gallows humor shows that people rarely look beyond the first few results of a search, but Lee Griffin isn’t laughing.

In the 13 years since he co-founded British price comparison website GoCompare, the 41-year-old has tried to keep his company at the top of search results, doing everything from using a “For Dummies” guide in the early days to later hiring a team of engineers, marketers and mathematicians. That’s put him on the front lines of a battle challenging the dominance of Alphabet Inc.’s Google in the search market — with regulators in the U.S. and across Europe taking a closer look.

Most of the sales at GoCompare, which helps customers find deals on everything from car and travel insurance to energy plans, come from Google searches, making its appearance at the top critical. With Google — whose search market share is more than 80% — frequently changing its algorithms, buying ads has become the only way to ensure a top spot on a page. Companies like GoCompare have to outbid competitors for paid spots even when customers search for their brand name.

“Google’s brought on as this thing that wanted to serve information to the world,” Griffin said in an interview from the company’s offices in Newport, Wales. “But actually what it’s doing is to show you information that people have paid it to show you.”

Market Dominance

GoCompare is far from the only one to suffer from Google’s search dominance. John Lewis, a high-end British retailer, last month alluded to the rising cost of climbing up in Google search results. In the U.S., IAC/InterActive Corp., which owns internet services like Tinder, and ride-hailing company Lyft Inc. have pointed to Google’s stranglehold on the market.

The clamor from companies prompted the U.K. competition watchdog to launch a study of online platforms and digital advertising in July, aiming to examine the market power of companies like Google over online marketing. The European Union has been trying to rein in Google, fining the company 1.5 billion euros ($1.6 billion) this year for thwarting advertising rivals. In the U.S. there’s a rising chorus of voices on the political left and right demanding Google be cut down to size, somehow.

Searching Game

The case of GoCompare shows just how difficult it is to win the search game.

GoCompare is known locally for its off-beat ads where an opera singer belts out its name in restaurants, taxis and, more controversially, crawls out of a flipped car in a recreation of an accident. When customers look for the company’s name after seeing an ad or type in a query for auto insurance, what appears is a combination of paid advertisements, Google’s own blurbs and then so-called natural search results, a list of what the tech giant deems are the most reliable sources of the information. But even ranking highly on natural search results can be costly.

“The way the algorithm works is constantly changing and you don’t get insight into it,” said Lexi Mills, chief executive officer of Shift6, a marketing consulting firm that helps clients improve their search results. “The people who get to optimize tend to be the people with the most money.”

Nowhere is Google’s power more evident — and potentially damaging to businesses — than in the market for “branded keywords.” This is where businesses buy ads based on their brand names. So GoCompare bids on the word “GoCompare” and when people search for that, Google runs an ad at the top of results usually linking to the company’s website.

‘Odd Place’

Some businesses say they have to buy these ads — whatever the cost — because rivals can bid on the keywords too.

If GoCompare decides not to bid for its own brand, Google can legally sell the ad placements with its name to a competitor, with the top bidders getting the best spots on the page and taking away customers.
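A toy model makes the bind clear: if the brand owner sits out the auction for its own name, rivals simply take the top of the page. The bids and ranking rule below are invented for illustration; Google’s real ad auction also weighs ad quality and uses second-price-style mechanics.

# Toy model of a branded-keyword auction. All bids are invented.
bids = {
    "GoCompare": 0.0,      # the brand owner declines to bid on its own name
    "RivalCompare": 2.10,  # a competitor bids on the keyword "GoCompare"
    "OtherRival": 1.80,
}

def top_slots(bids: dict, n: int = 2) -> list:
    """Highest bidders win the most prominent positions on the results page."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, bid in ranked[:n] if bid > 0]

print(top_slots(bids))  # ['RivalCompare', 'OtherRival']: rivals outrank the brand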

“That seems like an odd place to be that I have to bid on my own brand,” said Griffin. When the company confronted Google about it, the tech giant said “tell your competitors to stop bidding on you,” according to Griffin.

Read full story here…





Microsoft Head Says Rise Of Killer Robots Is ‘Unstoppable’

A new global arms race? Forget nukes, it’s killer robots. Any nation or terrorist group with a screwdriver can join the melee to build killer robots. To the Technocrat mindset, it’s a much more efficient way to destroy things and kill people. ⁃ TN Editor
 

The rise of killer robots is now unstoppable and a new digital Geneva Convention is essential to protect the world from the growing threat they pose, according to the President of the world’s biggest technology company.

In an interview with The Telegraph, Brad Smith, president of Microsoft, said the use of ‘lethal autonomous weapon systems’ poses a host of new ethical questions which need to be considered by governments as a matter of urgency.

He said the rapidly advancing technology, in which flying, swimming or walking drones can be equipped with lethal weapons systems – missiles, bombs or guns – which could be programmed to operate entirely or partially autonomously, “ultimately will spread… to many countries”.

The US, China, Israel, South Korea, Russia and the UK are all developing weapon systems with a significant degree of autonomy in the critical functions of selecting and attacking targets.

The technology is a growing focus for many militaries because replacing troops with machines can make the decision to go to war easier.

But it remains unclear who is responsible for deaths or injuries caused by a machine – the developer, manufacturer, commander or the device itself.

Smith said killer robots must “not be allowed to decide on their own to engage in combat and who to kill” and argued that a new international convention needed to be drawn up to govern the use of the technology.

“The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers.”

Speaking at the launch of his new book, Tools and Weapons, at the Microsoft store in London’s Oxford Circus, Smith said there was also a need for stricter international rules over the use of facial recognition technology and other emerging forms of artificial intelligence.

“There needs to be a new law in this space; we need regulation in the world of facial recognition in order to protect against potential abuse.”

Read full story here…