DARPA Accelerates Tech Work On Enhancing Human Senses

Regardless of benevolent-sounding platitudes about helping medical patients, DARPA’s singular military mission is to create technology-enhanced super-soldiers.  TN Editor

DARPA, the research arm of the U.S. Department of Defense, awarded contracts to six teams working on developing better brain-computer interface technology. The goal, according to DARPA, is to repair and enhance the senses of those with disabilities.

REPAIRING AND ENHANCING

The U.S. Department of Defense is moving forward with its work on brain-computer interfaces (BCIs), with the Defense Advanced Research Projects Agency (DARPA) awarding contracts to five research groups and one private company on Tuesday.

BCIs have been around for a while now, but the potential for expanding their capabilities is relatively recent. The primary purpose of BCIs has been the development of better neuroprostheses, which is also one of the goals of DARPA’s Neural Engineering System Design (NESD) program. Through NESD, DARPA wants to develop high-resolution brain interfaces that could restore and enhance human senses.

“The NESD program looks ahead to a future in which advanced neural devices offer improved fidelity, resolution, and precision sensory interface for therapeutic applications,” founding NESD program manager Phillip Alvelda said at the announcement, the Singularity Archive reports. Of the six awarded contracts, four will work on vision enhancement while the other two will focus on hearing and speech.

Back in 2016, DARPA announced that NESD would develop neural interface systems to improve communication between the brain and the digital world. The idea is to convert the brain’s electrochemical signals into the binary ones and zeros used by computers. Braintree founder Bryan Johnson even thinks it’s possible to make our neural code programmable through such systems.

Not only would this help treat people with sensory disabilities by feeding various senses directly into the brain as digital signals, it could also enhance those senses. That potential has led others working on similar technologies — like Elon Musk with his Neuralink — to consider BCIs as humanity’s way of keeping up with the development of intelligent machines. Indeed, the power of BCIs to meld the human mind with machines has given rise to the emerging field of neuroreality, a transformation of how we see and interact with the world around us.

Read full story here…




Top US General Warns Against Rogue Killer Robots

Hat tip to Gen. Paul Selva for bringing ethics into the argument to not give the military autonomous killing robots. However, he acknowledges the infighting within the military over the issue and describes it as ‘raucous’.  This is a battle between Technocrats and non-Technocrats.  TN Editor

The second highest-ranking general in the U.S. military on Tuesday warned lawmakers against equipping the military with autonomous weapons systems that humans could lose control of and advocated for keeping the “ethical rules of war” in place.

In a Senate Armed Services Committee hearing on Tuesday, Gen. Paul Selva responded to a question from Sen. Gary Peters (D-MI) about a Defense Department directive that requires a human operator to be involved in the decision-making process when it comes to taking lives with autonomous weapons systems.

Selva warned lawmakers that the military should keep “the ethical rules of war in place lest we unleash on humanity a set of robots that we don’t know how to control.”

“I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life,” Selva told the committee.

Peters mentioned that the directive expires later this year, and told Selva that America’s enemies would not hesitate to employ such technology.

“Our adversaries often do not consider the same moral and ethical issues that we consider each and every day,” Peters told Selva.

Selva responded, saying that America does and should always “take our values to war.”

“There will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action,” he told Peters, but added that he was in favor of “keeping that restriction.”

Read full story here…





Jack Ma: Rapidly Rising Technology Could Easily Lead To WWIII

Ma runs Alibaba, China’s equivalent of Amazon, and is one of the richest men in the world. He concludes that if nothing is done to prevent it, we are headed to WWIII. Ma does not regret being part of the Technocracy that brought us to this point, but apparently he is scaring himself along with everyone else. ⁃ TN Editor

Chinese business magnate Jack Ma said that evolving technologies are likely to pose a threat to more than just the job market and could in fact trigger a Third World War.

In an interview with CNBC, the billionaire chairman of Alibaba said that world leaders have a duty to educate people to prevent the pain caused by a rapid rise in automation and artificial intelligence.

“The first technology revolution caused World War I,” he said. “The second technology revolution caused World War II. This is the third technology revolution.”

He said that the opportunities that he sees in the evolution of AI and in globalisation are the reasons he travels so much.  He says that on his many trips he is “talking to all the government and state leaders and telling them move fast”.

“If they do not move fast, there’s going to be trouble,” he said. “My belief is that you have to repair the roof while it is still functioning.”

But he also said that progress in AI could have the benefit of allowing people to travel more and spend less time working in future.

“I think in the next 30 years, people only work four hours a day and maybe four days a week,” Mr Ma said. “My grandfather worked 16 hours a day in the farmland and [thought he was] very busy. We work eight hours, five days a week and think we are very busy.”

Read full story here…





Army Mounts High-Power Laser On Apache Helicopter

Lasers abound in military applications and have been mounted on trucks, ships, airplanes, and now helicopters. This is Technocrats run amok, inventing and creating because they can.  TN Editor

The United States Special Operations Command just tested a high-energy laser on the AH-64 Apache attack helicopter, marking the first time such a weapon has been deployed aboard a rotary-wing aircraft.

According to a press release from defense company Raytheon, the test was a complete success, “providing solid experimental evidence for the feasibility of high resolution, multi-band targeting sensor performance and beam propagation supportive of High Energy Laser capability for the rotary-wing attack mission.”

“This data collection shows we’re on the right track. By combining combat-proven sensors, like the MTS, with multiple laser technologies, we can bring this capability to the battlefield sooner rather than later,” said Art Morrish, vice president of Advanced Concept and Technologies for Raytheon Space and Airborne Systems, in the release.

Matthew Ketner, branch chief of the High Energy Laser Controls and Integration Directorate at the Naval Surface Warfare Center Dahlgren Division, Virginia, shows the effects of laser hits on materials during Lab Day in the Pentagon, May 18, 2017. (Photo Credit: Mr. David Vergun, Army News Service)

The Apache used a HEL mated with a version of Raytheon’s Multi-Spectral Targeting System, which combines electro-optical and infrared sensors, against a number of targets. The data from this test will be used to develop future HEL systems that address the unique challenges of installation on rotary-wing aircraft, including the effects of vibration, downwash, and dust.

The Apache has had laser systems since it entered service in 1984, but those were low-power systems used to guide AGM-114 Hellfire air-to-ground missiles. A HEL will have the ability to destroy targets.

An Army release noted that the service has also tested lasers on the Heavy Expanded Mobility Tactical Truck in April 2016 and the Stryker this past February and March. In both cases, the lasers downed a number of unmanned aerial vehicles. The Navy has a laser on board USS Ponce (AFSB(I) 15, formerly LPD 15), which is currently operating in the Persian Gulf.

 

Read full story here…




More Hack Attacks Coming As Microsoft Slams Government Secrecy

The NSA discovered how to secretly hack Windows, but didn’t tell anyone. When the hack was leaked, real hackers jumped on it and turned the world upside-down. This makes the NSA a ‘Black Hat’ hacker. White Hat hackers have a moral code of ethics: always disclose a known vulnerability as soon as it is discovered. Technocrats at the NSA see no need for disclosure of any kind, and will likely never be held accountable for the billions in damages caused by the WannaCry ransomware worm.  TN Editor

Officials across the globe scrambled over the weekend to catch the culprits behind a massive ransomware worm that disrupted operations at car factories, hospitals, shops and schools, while Microsoft on Sunday pinned blame on the U.S. government for not disclosing more software vulnerabilities.

Cyber security experts said the spread of the worm dubbed WannaCry – “ransomware” that locked up more than 200,000 computers in more than 150 countries – had slowed but that the respite might only be brief amid fears new versions of the worm will strike.

In a blog post on Sunday, Microsoft President Brad Smith appeared to tacitly acknowledge what researchers had already widely concluded: The ransomware attack leveraged a hacking tool, built by the U.S. National Security Agency, that leaked online in April.

“This is an emerging pattern in 2017,” Smith wrote. “We have seen vulnerabilities stored by the CIA show up on WikiLeaks, and now this vulnerability stolen from the NSA has affected customers around the world.”

He also poured fuel on a long-running debate over how government intelligence services should balance their desire to keep software flaws secret – in order to conduct espionage and cyber warfare – against sharing those flaws with technology companies to better secure the internet.

“This attack provides yet another example of why the stockpiling of vulnerabilities by governments is such a problem,” Smith wrote. He added that governments around the world should “treat this attack as a wake-up call” and “consider the damage to civilians that comes from hoarding these vulnerabilities and the use of these exploits.”

The NSA and White House did not immediately respond to requests for comment about the Microsoft statement.

Economic experts offered differing views on how much the attack, and associated computer outages, would cost businesses and governments.

The non-profit U.S. Cyber Consequences Unit research institute estimated that total losses would range in the hundreds of millions of dollars, but not exceed $1 billion.

Most victims were quickly able to recover infected systems with backups, said the group’s chief economist, Scott Borg.

California-based cyber risk modeling firm Cyence put the total economic damage at $4 billion, citing costs associated with business interruption.

Read full story here…




Military Using Human Brain Waves To Teach Robots How To Shoot

Three years ago, the military told us that robots would not be used to kill people, so why are they teaching them to shoot? Lesson: Technocrats lie to perpetuate their inventions.  TN Editor

Without even realizing it, soldiers could soon be training robot sharpshooters to take their jobs.

Modern sensors can see farther than humans. Electronic circuits can shoot faster than nerves and muscles can pull a trigger. Humans still outperform armed robots in knowing what to shoot at — but new research funded in part by the Army may soon narrow that gap.

Researchers from DCS Corp and the Army Research Lab fed datasets of human brain waves into a neural network — a type of artificial intelligence — which learned to recognize when a human is making a targeting decision. They presented their paper on it at the annual Intelligent User Interface conference in Cyprus in March.

Why is this a big deal? Machine learning relies on highly structured data, numbers in rows that software can read. But identifying a target in the chaotic real world is incredibly difficult for computers. The human brain does it easily, structuring data in the form of memories, but not in a language machines can understand. It’s a problem that the military has been grappling with for years.

“We often talk about deep learning. The challenge there for the military is that that involves huge datasets and a well-defined problem,” Thomas Russell, the chief scientist for the Army, said at a recent National Defense Industrial Association event. “Like Google just solved the Go game problem.”

Last year, Google’s DeepMind lab showed that an AI could beat the world’s top player in the game of Go, a game considered exponentially harder than chess. “You can train the system to do deep learning in a [highly structured] environment but if the Go game board changed dynamically over time, the AI would never be able to solve that problem. You have to figure out…in that dynamic environment we have in the military world, how do we retrain this learning process from a systems perspective? Right now, I don’t think there’s any way to do that without having the humans train those systems.”

Their research grew out of a multi-year, multi-pronged program called the Cognition and Neuroergonomics Collaborative Technology Alliance.

“We know that there are signals in the brain that show up when you perceive something that’s salient,” said researcher Matthew Jaswa, one of the authors on the paper. These are called P300 responses, bursts of electric activity that the parietal lobe of the brain emits in response to stimuli. Discovered in the 1960s, the P300 response is basically the brain’s answer to a quick-decision task, such as whether an object that appears suddenly is a target.

The researchers hope their new neural net will enable experiments in which a computer can easily understand when a soldier is evaluating targets in a virtual scenario, as opposed to having to spend lots of time teaching the system to understand how to structure different individuals’ data, eye movements, their P300 responses, etc. The goal, one day, is a neural net that can learn instantaneously, continuously, and in real-time, by observing the brainwaves and eye movement of highly trained soldiers doing their jobs.
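The P300-based detection described above can be sketched in a toy example. To be clear, this is an illustration only: the simulated signal, sampling rate, analysis window, and threshold are all assumptions for demonstration, not the researchers’ actual pipeline, which trained a neural network on real EEG data. The idea is simply that a salient target stimulus produces a positive voltage deflection around 300 ms after the stimulus, so averaging the amplitude in that window separates target from non-target epochs.

```python
import math
import random

FS = 250          # assumed sampling rate in Hz (illustrative)
EPOCH_SECS = 0.8  # length of one post-stimulus epoch

def make_epoch(is_target, rng):
    """Simulate one EEG epoch (toy model, not real data).

    Target epochs carry a positive 'P300-like' deflection:
    a Gaussian bump peaking ~300 ms after the stimulus."""
    samples = []
    for i in range(int(FS * EPOCH_SECS)):
        t = i / FS
        noise = rng.gauss(0.0, 1.0)
        bump = 4.0 * math.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2)) if is_target else 0.0
        samples.append(noise + bump)
    return samples

def p300_score(epoch):
    """Mean amplitude in the 250-500 ms window, where P300 energy concentrates."""
    lo, hi = int(0.25 * FS), int(0.50 * FS)
    return sum(epoch[lo:hi]) / (hi - lo)

def is_targeting_decision(epoch, threshold=1.0):
    """Crude classifier: flag the epoch if the windowed mean exceeds a threshold."""
    return p300_score(epoch) > threshold

rng = random.Random(0)
targets = [make_epoch(True, rng) for _ in range(200)]
nontargets = [make_epoch(False, rng) for _ in range(200)]
hit_rate = sum(is_targeting_decision(e) for e in targets) / len(targets)
false_alarm_rate = sum(is_targeting_decision(e) for e in nontargets) / len(nontargets)
print(f"hit rate: {hit_rate:.2f}, false-alarm rate: {false_alarm_rate:.2f}")
```

A fixed window-and-threshold detector like this is one of the simplest ways to spot a P300; the neural-network approach the researchers describe generalizes it by learning the discriminative features for each individual instead of hard-coding them.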

Read full story here…





DARPA Is Hacking The Human Brain To ‘Upload’ Skills Directly

DARPA is a Technocrat-run agency of the Department of Defense, wholly dedicated to the Transhuman idea of improving man through technology. This is dangerous on a number of levels, including the fact that the resulting technology will be in the hands of the government and not its citizens.  TN Editor

 The DARPA Targeted Neuroplasticity Training (TNT) program is exploring ways to speed up skill acquisition by activating synaptic plasticity. If the program succeeds, downloadable learning that happens in a flash may be the result.

In March 2016, DARPA — the U.S. military’s “mad science” branch — announced their Targeted Neuroplasticity Training (TNT) program. The TNT program aims to explore various safe neurostimulation methods for activating synaptic plasticity, which is the brain’s ability to alter the connecting points between neurons — a requirement for learning. DARPA hopes that building up that ability by subjecting the nervous system to a kind of workout regimen will enable the brain to learn more quickly.

The ideal end benefit for this kind of breakthrough would be downloadable learning. Rather than needing to learn, for example, a new language through rigorous study and practice over a long period of time, we could basically “download” the knowledge after putting our minds into a highly receptive, neuroplastic state. Clearly, this kind of research would benefit anyone, but urgent military missions can succeed or fail based on the timing. In those situations, a faster way to train personnel would be a tremendous boon.

FIRST NEUROSTIMULATION, THEN APPLICATION

As part of the TNT program, DARPA is funding eight projects at seven institutions. All projects are part of a coordinated effort that will first study the fundamental science undergirding brain plasticity and will conclude with human trials. The first portion of the TNT program will work to unravel the neural mechanisms that allow nerve stimulation to influence brain plasticity. The second portion of the program will practically apply what has been learned in a variety of training exercises.

To ensure the work stays practical, foreign language specialists, intelligence analysts, and others who train personnel now will work with researchers to help refine the TNT platform to suit military training needs. Researchers will compare the efficacy of using an implanted device to stimulate the brain versus non-invasive stimulation. They will also explore both the ethics of enhanced learning through neurostimulation and ways to avoid side effects and potential risks.

Read full story here…





Russian Army Testing 14.7 Ton Robot Tank With 30mm Automatic Gun

The robot warfare arms race is red hot, with larger and more destructive killing machines under development. As AI meets armed robots, terminator scenarios won’t seem too distant.  TN Editor

The Russian army has been putting one of the world’s biggest military robots through its paces, acting as a further reminder of Russia’s increasing military strength.

The Vikhr remote-controlled tank was being tested on a military testing ground at a classified location in Russia.

The clip shows the unmanned tank traveling along a dusty path, navigating through stretches of water, and then firing into the distance from its machine gun, all without a human in sight.

The huge machine is 6.7 metres (22 feet) long, stands 3.3 metres (10.8 feet) tall and weighs about 14.7 tonnes.

The state-of-the-art robot features a 30 mm automatic gun and six anti-tank guided missiles. It can also be equipped with machine guns or a flamethrower.

It is operated remotely by a driver who can see where it is going through a camera link up, as well as the wider picture via a drone.

The robot can be operated from a distance and can manoeuvre its way across a variety of terrains.

It is capable of travelling at speeds up to 60 kph (37 mph) on dry land and 10 kph (6.2 mph) through water.

The Vikhr can lock onto a target and can follow it, while repeatedly firing, until it is completely destroyed.

The video of the huge robot tackling an assault course, which also included hills, is proving popular with viewers after being posted online.

Read full story here…





Self-Flying F-16 Can Navigate And Take Out A Target All By Itself

When humans can be autonomously killed by machines, humanity is in grave danger. The U.S. Air Force has just crossed the ethical boundary, leaving the path wide-open for anybody with killing authority to jump on the bandwagon.  TN Editor

In its quest to meet and exceed the challenges of the future, the US Air Force has been increasingly looking to unmanned systems — and a recent test proved that an unmanned F-16 can now think and fight on its own.

The US has used F-16 drones before as realistic targets for the F-35 to blow up in training, but on Monday it announced fully autonomous air-to-air and ground-strike capabilities as a new milestone, thanks to joint research between the service and Lockheed Martin’s legendary Skunk Works.

Not only did the F-16 drone figure out the best way to get there and execute a ground strike mission by itself, it was interrupted by an air threat, responded, and kept going.

“We’ve not only shown how an Unmanned Combat Air Vehicle can perform its mission when things go as planned, but also how it will react and adapt to unforeseen obstacles along the way,” said Capt. Andrew Petry of the Air Force Research Laboratory in a Lockheed Martin statement.

But having F-16 drones plan and fly their own missions is only part of a much larger picture. The future of the US Air Force may well depend on advanced platforms like F-35s commanding fleets of unmanned drones which can act as additional ears, eyes, and shooters in the sky during battles.

Read full story here…




The Future Of Warfare: Mini-Nukes And Mosquito-Like Robot Weapons

The future of warfare will be decided by technology-crazed Technocrats who invent because they can, not because they need to. Now that a global arms race has broken out, there is no way to stop the one-upmanship competition to build smaller, cleverer, and deadlier weaponized technology designed to kill humans.  TN Editor

Several countries are developing nanoweapons that could unleash attacks using mini-nuclear bombs and insect-like lethal robots.

While it may be the stuff of science fiction today, the advancement of nanotechnology in the coming years will make it a bigger threat to humanity than conventional nuclear weapons, according to an expert. The U.S., Russia and China are believed to be investing billions on nanoweapons research.

“Nanobots are the real concern about wiping out humanity because they can be weapons of mass destruction,” said Louis Del Monte, a Minnesota-based physicist and futurist. He is the author of a just-released book titled “Nanoweapons: A Growing Threat To Humanity.”

One unsettling prediction Del Monte made is that terrorists could get their hands on nanoweapons as early as the late 2020s through black market sources.

According to Del Monte, nanoweapons are much smaller than a strand of human hair and the insect-like nanobots could be programmed to perform various tasks, including injecting toxins into people or contaminating the water supply of a major city.

In another future scenario he suggested, a nanodrone could fly into a room and drop poison onto something, such as food, presumably to target a particular individual.

The federal government defines nanotechnology as the science, technology and engineering of things so small they are measured on a nanoscale, or about 1 to 100 nanometers. A single nanometer is about 10 times smaller than the width of a human’s DNA molecule.

While nanotechnology has produced major benefits for medicine, electronics and industrial applications, federal research is currently underway that could ultimately produce nanobots.

For one, the Defense Advanced Research Projects Agency, or DARPA, has a Fast Lightweight Autonomy program intended to allow autonomous drones to enter a building and avoid hitting walls or objects. DARPA announced a breakthrough last year after tests in a hangar in Massachusetts.

Previously, the Army Research Laboratory announced it created an advanced drone the size of a fly complete with a set of “tiny robotic legs” — a major achievement since it presumably might be capable of entering a building undetected to perform surveillance, or used for more nefarious actions.

Frightening details about military nanotechnologies were outlined in a 2010 report from the Pentagon’s Defense Threat Reduction Agency, including how “transgenic insects could be developed to produce and deliver protein-based biological warfare agents, and be used offensively against targets in a foreign country.”

It also forecast “microexplosives” along with “nanobots serving as [bioweapons] delivery systems or as micro-weapons themselves, and inhalable micro-particles to cripple personnel.”

Read full story here…