
DARPA: Funding Wearable Brain-Machine Interfaces

Technocrats at DARPA are intent on creating a non-surgical brain-machine interface as a force-multiplier for soldiers. The research will require “Investigational Device Exemptions” from the Food and Drug Administration. ⁃ TN Editor

DARPA has awarded funding to six organizations to support the Next-Generation Nonsurgical Neurotechnology (N3) program, first announced in March 2018. Battelle Memorial Institute, Carnegie Mellon University, Johns Hopkins University Applied Physics Laboratory, Palo Alto Research Center (PARC), Rice University, and Teledyne Scientific are leading multidisciplinary teams to develop high-resolution, bidirectional brain-machine interfaces for use by able-bodied service members. These wearable interfaces could ultimately enable diverse national security applications such as control of active cyber defense systems and swarms of unmanned aerial vehicles, or teaming with computer systems to multitask during complex missions.

“DARPA is preparing for a future in which a combination of unmanned systems, artificial intelligence, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager. “By creating a more accessible brain-machine interface that doesn’t require surgery to use, DARPA could deliver tools that allow mission commanders to remain meaningfully involved in dynamic operations that unfold at rapid speed.”

Over the past 18 years, DARPA has demonstrated increasingly sophisticated neurotechnologies that rely on surgically implanted electrodes to interface with the central or peripheral nervous systems. The agency has demonstrated achievements such as neural control of prosthetic limbs and restoration of the sense of touch to the users of those limbs, relief of otherwise intractable neuropsychiatric illnesses such as depression, and improvement of memory formation and recall. Due to the inherent risks of surgery, these technologies have so far been limited to use by volunteers with clinical need.

For the military’s primarily able-bodied population to benefit from neurotechnology, nonsurgical interfaces are required. Yet similar technology could greatly benefit clinical populations as well. By removing the need for surgery, N3 systems seek to expand the pool of patients who can access treatments such as deep brain stimulation to manage neurological illnesses.

The N3 teams are pursuing a range of approaches that use optics, acoustics, and electromagnetics to record neural activity and/or send signals back to the brain at high speed and resolution. The research is split between two tracks. Teams are pursuing either completely noninvasive interfaces that are entirely external to the body or minutely invasive interface systems that include nanotransducers that can be temporarily and nonsurgically delivered to the brain to improve signal resolution.

  • The Battelle team, under principal investigator Dr. Gaurav Sharma, aims to develop a minutely invasive interface system that pairs an external transceiver with electromagnetic nanotransducers that are nonsurgically delivered to neurons of interest. The nanotransducers would convert electrical signals from the neurons into magnetic signals that can be recorded and processed by the external transceiver, and vice versa, to enable bidirectional communication.
  • The Carnegie Mellon University team, under principal investigator Dr. Pulkit Grover, aims to develop a completely noninvasive device that uses an acousto-optical approach to record from the brain and interfering electrical fields to write to specific neurons. The team will use ultrasound waves to guide light into and out of the brain to detect neural activity. The team’s write approach exploits the non-linear response of neurons to electric fields to enable localized stimulation of specific cell types.
  • The Johns Hopkins University Applied Physics Laboratory team, under principal investigator Dr. David Blodgett, aims to develop a completely noninvasive, coherent optical system for recording from the brain. The system will directly measure optical path-length changes in neural tissue that correlate with neural activity.
  • The PARC team, under principal investigator Dr. Krishnan Thyagarajan, aims to develop a completely noninvasive acousto-magnetic device for writing to the brain. Their approach pairs ultrasound waves with magnetic fields to generate localized electric currents for neuromodulation. The hybrid approach offers the potential for localized neuromodulation deeper in the brain.
  • The Rice University team, under principal investigator Dr. Jacob Robinson, aims to develop a minutely invasive, bidirectional system for recording from and writing to the brain. For the recording function, the interface will use diffuse optical tomography to infer neural activity by measuring light scattering in neural tissue. To enable the write function, the team will use a magneto-genetic approach to make neurons sensitive to magnetic fields.
  • The Teledyne team, under principal investigator Dr. Patrick Connolly, aims to develop a completely noninvasive, integrated device that uses micro optically pumped magnetometers to detect small, localized magnetic fields that correlate with neural activity. The team will use focused ultrasound for writing to neurons.

Throughout the program, the research will benefit from independent legal and ethical experts who have agreed to provide insights on N3 progress and to consider potential future military and civilian applications and implications of the technology. Additionally, federal regulators are cooperating with DARPA to help the teams better understand human-use clearance as research gets underway. As the work progresses, these regulators will help guide strategies for submitting applications for Investigational Device Exemptions and Investigational New Drugs to enable human trials of N3 systems during the last phase of the four-year program.

“If N3 is successful, we’ll end up with wearable neural interface systems that can communicate with the brain from a range of just a few millimeters, moving neurotechnology beyond the clinic and into practical use for national security,” Emondi said. “Just as service members put on protective and tactical gear in preparation for a mission, in the future they might put on a headset containing a neural interface, use the technology however it’s needed, then put the tool aside when the mission is complete.”

Read full story here…




Experts: The Only Defense Against Killer AI Is Not Developing It

Out-of-control killer AI in warfare is inevitable because it will become too complex for human management and control. The only real answer is not to develop it in the first place. ⁃ TN Editor

A recent analysis of the future of warfare indicates that countries that continue to develop AI for military use risk losing control of the battlefield. Those that don’t risk eradication. Whether you’re for or against the AI arms race: it’s happening. Here’s what that means, according to a trio of experts.

Researchers from ASRC Federal, a private company that provides support for the intelligence and defense communities, and the University of Maryland recently published a paper on the pre-print server arXiv discussing the potential ramifications of integrating AI systems into modern warfare.

The paper – read here – focuses on the near-future consequences of the AI arms race under the assumption that AI will not somehow run amok or take over. In essence it’s a short, sober, and terrifying look at how all of this machine learning technology will play out, based on an analysis of current cutting-edge military AI technologies and their predicted integration at scale.

The paper begins with a warning about impending catastrophe, explaining that there will almost certainly be a “normal accident” concerning AI: an expected incident of a nature and scope we cannot predict. Basically, the militaries of the world will break some civilian eggs making the AI arms-race omelet:

Study of this field began with accidents such as Three Mile Island, but AI technologies embody similar risks. Finding and exploiting these weaknesses to induce defective behavior will become a permanent feature of military strategy.

If you’re thinking of killer robots duking it out in our cities while civilians run screaming for shelter, you’re not wrong – but robots as proxies for soldiers aren’t humanity’s biggest concern when it comes to AI warfare. This paper discusses what happens after we reach the point at which it becomes obvious humans are holding machines back in warfare.

According to the researchers, the problem isn’t one we can frame as good and evil. Sure, it’s easy to say we shouldn’t allow robots to autonomously murder humans, but that’s not how the decision-making process of the future is going to work.

The researchers describe it as a slippery slope:

If AI systems are effective, pressure to increase the level of assistance to the warfighter would be inevitable. Continued success would mean gradually pushing the human out of the loop, first to a supervisory role and then finally to the role of a “killswitch operator” monitoring an always-on LAWS.

LAWS, or lethal autonomous weapons systems, will almost immediately scale beyond humans’ ability to work with computers and machines — and probably sooner than most people think. Hand-to-hand combat between machines, for example, will be entirely autonomous by necessity:

Over time, as AI becomes more capable of reflective and integrative thinking, the human component will have to be eliminated altogether as the speed and dimensionality become incomprehensible, even accounting for cognitive assistance.

And, eventually, the tactics and responsiveness required to trade blows with AI will be beyond the ken of humans altogether:

Given a battlespace so overwhelming that humans cannot manually engage with the system, the human role will be limited to post-hoc forensic analysis, once hostilities have ceased, or treaties have been signed.

If this sounds a bit grim, it’s because it is. As Import AI’s Jack Clark points out, “This is a quick paper that lays out the concerns of AI+War from a community we don’t frequently hear from: people that work as direct suppliers of government technology.”

It might be in everyone’s best interest to pay careful attention to how both academics and the government continue to frame the problem going forward.

Read full story here…




EU On AI Ethics: Must ‘Enhance Positive Social Change’

EU Technocrats define ethics for AI: “Environmental and societal well-being — AI systems should be sustainable and “enhance positive social change.” This is the coveted ‘Science of Social Engineering’ that harkens back to the original Technocracy movement in the 1930s. ⁃ TN Editor

The European Union today published a set of guidelines on how companies and governments should develop ethical applications of artificial intelligence.

These rules aren’t like Isaac Asimov’s “Three Laws of Robotics.” They don’t offer a snappy moral framework that will help us control murderous robots. Instead, they address the murky and diffuse problems that will affect society as we integrate AI into sectors like health care, education, and consumer technology.

So, for example, if an AI system diagnoses you with cancer sometime in the future, the EU’s guidelines would want to make sure that a number of things take place: that the software isn’t biased by your race or gender, that it doesn’t override the objections of a human doctor, and that it gives you the option to have the diagnosis explained to you.

So, yes, these guidelines are about stopping AI from running amok, but at the level of admin and bureaucracy, not Asimov-style murder mysteries.

To help with this goal, the EU convened a group of 52 experts who came up with seven requirements they think future AI systems should meet. They are as follows:

  • Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes.
  • Technical robustness and safety — AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples; see the sketch just after this list), and it should be reasonably reliable.
  • Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.
  • Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.
  • Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.
  • Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change.”
  • Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.
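
The “adversarial examples” mentioned in the robustness requirement above can be made concrete with a toy sketch: a linear classifier with made-up weights, and a small, deliberately chosen perturbation that flips its decision. Everything here is hypothetical; real attacks such as FGSM apply the same idea to neural networks via the gradient of the loss with respect to the input.

```python
# Illustrative only: a toy linear "model" with hypothetical weights, plus a
# small perturbation chosen to flip its decision. Real attacks such as FGSM
# use the gradient of the loss w.r.t. the input in exactly the same spirit.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # hypothetical model weights
b = 0.1                          # hypothetical bias

def predict(x):
    """Probability of the positive class under a logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.5, 0.1, 0.4])    # a benign input, classified positive
# For a linear model the gradient of the score w.r.t. the input is just w,
# so the most damaging small perturbation moves each feature against sign(w).
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)

print(f"original score:    {predict(x):.2f}")      # ~0.65 -> class 1
print(f"adversarial score: {predict(x_adv):.2f}")  # ~0.43 -> class 0
```

The per-feature change is small (0.25 here), yet the predicted class flips, which is exactly the failure mode the robustness requirement is meant to guard against.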

You’ll notice that some of these requirements are pretty abstract and would be hard to assess in an objective sense. (Definitions of “positive social change,” for example, vary hugely from person to person and country to country.) But others are more straightforward and could be tested via government oversight. Sharing the data used to train government AI systems, for example, could be a good way to fight against biased algorithms.
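
As a sketch of what such objective testing might look like, a regulator with access to a system’s decisions and the corresponding demographic labels could compute something as simple as the gap in positive-decision rates between groups. The data, function name, and tolerance below are all hypothetical; real audits would combine several fairness metrics.

```python
# Hypothetical audit check: the gap in positive-decision rates between two
# demographic groups ("demographic parity"). Data and the 0.10 tolerance are
# made up for illustration.
import numpy as np

def demographic_parity_gap(decisions, group):
    """Absolute gap in positive-decision rate between group 0 and group 1."""
    decisions = np.asarray(decisions)
    group = np.asarray(group)
    rate_0 = decisions[group == 0].mean()
    rate_1 = decisions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Example: model decisions for eight people, four in each group.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

gap = demographic_parity_gap(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")   # 0.50 here
if gap > 0.10:                                # an agreed tolerance
    print("flag for review: decision rates differ markedly between groups")
```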

These guidelines aren’t legally binding, but they could shape any future legislation drafted by the European Union. The EU has repeatedly said it wants to be a leader in ethical AI, and it has shown with GDPR that it’s willing to create far-reaching laws that protect digital rights.

But this role has been partly forced on the EU by circumstance. It can’t compete with America and China — the world’s leaders in AI — when it comes to investment and cutting-edge research, so it’s chosen ethics as its best bet to shape the technology’s future.

Read full story here…




China Claims Its Social Credit System Has ‘Restored Morality’

All of China’s 1.4 billion citizens are enrolled in its facial recognition and Social Credit System, and 13 million of them have already been blacklisted; China is now bragging that it has ‘restored morality’. ⁃ TN Editor

China’s state-run newspaper Global Times revealed in a column defending the nation’s authoritarian “social credit system” Monday that the communist regime had blacklisted 13.49 million Chinese citizens for being “untrustworthy.”

The article did not specify what these individuals did to find themselves on the list, though the regime has revealed the system assigns a numerical score to every Chinese citizen based on how much the Communist Party approves of his or her behavior. Anything from jaywalking and walking a dog without a leash to criticizing the government on the internet to more serious, violent, and corrupt crimes can hurt a person’s score. The consequences of a low credit score vary, but most commonly appear to be travel restrictions at the moment.

China is set to complete the nationwide implementation of the system in 2020. As the date approaches, the government’s propaganda arms have escalated their promotion of it as necessary to live in a civilized society. Last week, the Chinese Communist Youth League released a music video titled “Live Up to Your Word” featuring well-known Chinese actors and musicians who cater to a teenage audience. The song in the video urged listeners to “be a trustworthy youth” and “give thumbs up to integrity” by abiding by the rules of the Communist Party. While it did not explicitly say the words “social credit system,” observers considered it a way to promote the behavior rewarded with social credit points.

Monday’s Global Times piece claimed the system will “restore morality” by holding bad citizens accountable, with “bad” defined solely by the parameters set by Communist Party totalitarian chief Xi Jinping. The central government in Beijing is also establishing a points-based metric for monitoring the performance of local governments, making it easier to keep local officials in line with Xi’s agenda.

“As of March, 13.49 million individuals have been classified as untrustworthy and rejected access to 20.47 million plane tickets and 5.71 million high-speed train tickets for being dishonest,” the Global Times reported, citing the government’s National Development and Reform Commission (NDRC). Among the new examples the newspaper highlights as dishonest behavior are failing to pay municipal parking fees, “eating on the train,” and changing jobs with “malicious intent.”

China had previously revealed that, as of March, the system blocked an unspecified number of travelers from buying over 23 million airplane, train, and bus tickets nationwide. That report did not say how many people the travel bans affected, as the same person could presumably attempt to buy more than one ticket or tickets for multiple means of transportation. The system blocked over three times as many plane tickets as train tickets, suggesting the government is suppressing international travel far more than domestic travel. At the time of the release of the initial numbers in March, estimates found China had tripled the number of people on its no-fly list, which predates the social credit system.

The Chinese also reportedly found that some of the populations with the highest number of system violations lived in wealthy areas, suggesting Xi is targeting influential businesspeople with the system to keep them under his command.

In addition to limiting access to travel, another punishment the Chinese government rolled out in March was the use of an embarrassing ringtone to alert people that a low-credit person is in their midst. The ringtone would tell those around a person with low credit to be “careful in their business dealings” with them.

All public behavior in the system, the Global Times explained Monday, will be divided into “administrative affairs, commercial activities, social behavior, and the judicial system” once the system is complete. No action will be too small to impact the score.

“China’s ongoing construction of the world’s largest social credit system will help the country restore social trust,” the article argued.

Read full story here…




AI 90% Accurate For Predicting Death By Heart Attack?

When insurance companies, HMOs, Medicare, etc., implement this technology, patients will see rampant discrimination based on their AI health score; after all, who would sell a life insurance policy to someone who is going to die soon? ⁃ TN Editor

Algorithms similar to those employed by Netflix and Spotify to customise services are now better than human doctors at spotting who will die or have a heart attack.

Machine learning was used to train LogitBoost, which its developers say can predict death or heart attacks with 90 per cent accuracy.

It was programmed to use 85 variables to calculate the health risks of the 950 patients whose scans and data it was fed.

Patients complaining of chest pain were subjected to a host of scans and tests before being treated by traditional methods.

Their data was later used to train the algorithm.

It ‘learned’ the risks and, during the six-year follow-up, had a 90 per cent success rate at predicting 24 heart attacks and 49 deaths from any cause.

LogitBoost was programmed to use 85 variables to calculate the risks to the health of a person complaining of chest pain. Each patient had a coronary computed tomography angiography (CCTA) scan, which gathered 58 of the data points.
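
To make the setup concrete, here is a rough sketch of that kind of pipeline: a gradient-boosting classifier trained on a synthetic table of 950 patients and 85 variables and scored on held-out cases. Only the dataset dimensions come from the article; the data are fabricated, and scikit-learn’s GradientBoostingClassifier merely stands in for the researchers’ LogitBoost model.

```python
# Sketch only: a synthetic stand-in for the 950 patients x 85 variables
# described above, with scikit-learn's GradientBoostingClassifier in place of
# the proprietary LogitBoost model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_variables = 950, 85                 # sizes quoted in the article
X = rng.normal(size=(n_patients, n_variables))

# Fabricated outcome: death or heart attack during follow-up, loosely driven
# by a few of the variables plus noise (roughly an 8% event rate).
risk = X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(scale=2.0, size=n_patients)
y = (risk > np.percentile(risk, 92)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print(f"held-out AUC: {roc_auc_score(y_test, probs):.2f}")
```

The 90 per cent figure above refers to the study’s own patients and model; the score printed here only illustrates the train-and-evaluate workflow, not the published result.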

Services like Netflix and Spotify use algorithms in a similar way to adapt to individual users and offer a more personalised experience.

Study author Dr Luis Eduardo Juarez-Orozco, of the Turku PET Centre, Finland, said these advances go beyond medicine.

He said: ‘These advances are far beyond what has been done in medicine, where we need to be cautious about how we evaluate risk and outcomes.

‘We have the data but we are not using it to its full potential yet.’

Doctors use risk scores to make treatment decisions – but these scores are based on just a ‘handful’ of variables in patients.

Through repetition and adjustment, machines use large amounts of data to identify complex patterns not evident to humans.

Dr Juarez-Orozco said: ‘Humans have a very hard time thinking further than three dimensions or four dimensions.

‘The moment we jump into the fifth dimension we’re lost.

‘Our study shows that very high dimensional patterns are more useful than single dimensional patterns to predict outcomes in individuals and for that we need machine learning.’

Read full story here…




Swarms Of AI Drones To Patrol Europe’s Borders

Threat analysis and decisions will be made autonomously, with border patrol agents merely notified; this is a slippery slope that could all too easily be extended into broader law enforcement practices. ⁃ TN Editor

Imagine you’re hiking through the woods near a border. Suddenly, you hear a mechanical buzzing, like a gigantic bee. Two quadcopters have spotted you and swoop in for a closer look. Antennae on both drones and on a nearby autonomous ground vehicle pick up the radio frequencies coming from the cell phone in your pocket. They send the signals to a central server, which triangulates your exact location and feeds it back to the drones. The robots close in.

Cameras and other sensors on the machines recognize you as human and try to ascertain your intentions. Are you a threat? Are you illegally crossing a border? Do you have a gun? Are you engaging in acts of terrorism or organized crime? The machines send video feeds to their human operator, a border guard in an office miles away, who checks the videos and decides that you are not a risk. The border guard pushes a button, and the robots disengage and continue on their patrol.

This is not science fiction. The European Union is financing a project to develop drones piloted by artificial intelligence and designed to autonomously patrol Europe’s borders. The drones will operate in swarms, coordinating and corroborating information among fleets of quadcopters, small fixed-wing airplanes, ground vehicles, submarines, and boats. Developers of the project, known as Roborder, say the robots will be able to identify humans and independently decide whether they represent a threat. If they determine that you may have committed a crime, they will notify border police.

President Donald Trump has used the specter of criminals crossing the southern border to stir nationalist political sentiment and energize his base. In Europe, two years after the height of the migration crisis that brought more than a million people to the continent, mostly from the Middle East and Africa, immigration remains a hot-button issue, even as the number of new arrivals has dropped. Political parties across the European Union are winning elections on anti-immigrant platforms and enacting increasingly restrictive border policies. Tech ethicists and privacy advocates worry that Roborder and projects like it outsource too much law enforcement work to nonhuman actors and could easily be weaponized against people in border areas.

“The development of these systems is a dark step into morally dangerous territory,” said Noel Sharkey, emeritus professor of robotics and artificial intelligence at Sheffield University in the U.K. and one of the founders of the International Committee for Robot Arms Control, a nonprofit that advocates against the military use of robotics. Sharkey lists examples of weaponized drones currently on the market: flying robots equipped with Tasers, pepper spray, rubber bullets, and other weapons. He warns of the implications of combining that technology with AI-based decision-making and using it in politically charged border zones. “It’s only a matter of time before a drone will be able to take action to stop people,” Sharkey told The Intercept.

Roborder’s developers also may be violating the terms of their funding, according to documents about the project obtained via European Union transparency regulations. The initiative is mostly financed by an €8 million EU research and innovation grant designed for projects that are exclusively nonmilitary, but Roborder’s developers acknowledge that parts of their proposed system involve military technology or could easily be converted for military use.

Much of the development of Roborder is classified, but The Intercept obtained internal reports related to ethical considerations and concerns about the program. That documentation was improperly redacted and inadvertently released in full.

In one of the reports, Roborder’s developers sought to address ethical criteria that are tied to their EU funding. Developers considered whether their work could be modified or enhanced to harm humans and what could happen if the technology or knowledge developed in the project “ended up in the wrong hands.” These ethical issues are raised, wrote the developers, when “research makes use of classified information, materials or techniques; dangerous or restricted materials[;] and if specific results of the research could present a danger to participants or to society as a whole.”

Roborder’s developers argued that these ethical concerns did not apply to their work, stating that their only goal was to develop and test the new technology, and that it would not be sold or transferred outside of the European Union during the life cycle of the project. But in interviews with The Intercept, project developers acknowledged that their technology could be repurposed and sold, even outside of Europe, after the European project cycle has finished, which is expected to happen next year.

Beyond the Roborder project, the ethics reports filed with the European Commission suggest a larger question: When it comes to new technology with the potential to be used against vulnerable people in places with few human rights protections, who decides what we should and should not develop?

Roborder won its funding grant in 2017 and has set out to develop a marketable prototype — “a swarm of robotics to support border monitoring” — by mid-2020. Its developers hope to build and equip a collection of air, sea, and land drones that can be combined and sent out on border patrol missions, scanning for “threats” autonomously based on information provided by human operators, said Stefanos Vrochidis, Roborder’s project manager.

The drones will employ optical, infrared, and thermal cameras; radar; and radio frequency sensors to determine threats along the border. Cell phone frequencies will be used to triangulate the location of people suspected of criminal activity, and cameras will identify humans, guns, vehicles, and other objects. “The main objective is to have as many sensors in the field as possible to assist patrol personnel,” said Kostas Ioannidis, Roborder’s technical manager.
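
The triangulation the developers describe can be illustrated with a few lines of linear algebra: given estimated distances to a phone from several receivers at known positions, the intersection of the range circles can be solved by least squares. The positions, ranges, and function below are a hypothetical toy, not Roborder’s actual signal-processing pipeline.

```python
# Toy multilateration sketch: estimate a transmitter's 2D position from range
# estimates taken at known receiver positions. Illustrative only.
import numpy as np

def locate(receivers, ranges):
    """Least-squares position from receiver coordinates (n x 2) and ranges (n,)."""
    receivers = np.asarray(receivers, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    # Subtracting the first range equation from the others linearises the
    # circle-intersection problem into A @ [x, y] = b.
    A = 2.0 * (receivers[1:] - receivers[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(receivers[1:] ** 2, axis=1)
         - np.sum(receivers[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Example: three receivers, and a phone actually located at (30, 40) metres.
receivers = [(0, 0), (100, 0), (0, 100)]
ranges = [50.0, 80.62, 67.08]             # estimated distances to the phone
print(locate(receivers, ranges))           # approximately [30. 40.]
```

In practice the distances would be inferred from noisy signal-strength or time-of-arrival measurements, which is why such systems typically use more receivers than unknowns and a least-squares or filtering solution.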

Read full story here…




Where Technocrats Play: U.S. Department Of Energy

Ex-Governor of Texas Rick Perry was appointed Secretary of Energy by President Donald Trump on March 2, 2017.

Even though President Trump has withdrawn from the Paris Climate Accord on global warming, apparently Secretary Perry has not gotten word, for the Department of Energy’s web page on Climate Change is still present on the Energy.gov website, which prominently states, 

Addressing the effects of climate change is a top priority of the Energy Department. As global temperatures rise, wildfires, drought, and high electricity demand put stress on the nation’s energy infrastructure. And severe weather — the leading cause of power outages and fuel supply disruption in the United States — is projected to worsen, with eight of the 10 most destructive hurricanes of all time having happened in the last 10 years.

To fight climate change, the Energy Department supports research and innovation that makes fossil energy technologies cleaner and less harmful to the people and the environment. We’re taking responsible steps to cut carbon pollution, develop domestic renewable energy production and win the global race for clean energy innovation. We’re also working to dramatically increase the efficiency of appliances, homes, businesses and vehicles.

The Climate Change page presents a map of “How Climate Change Threatens America’s Energy Infrastructure in Every Region.” Then it displays a globe with the heading, “Energy Exascale Earth System Model” (E3SM) and explains that, 

E3SM is a modeling, simulation, and prediction project that optimizes the use of DoE laboratory resources to meet the science needs of the nation.

Even in light of this, it is still puzzling to some why the world’s most powerful supercomputer is being created by the DoE to realize the “full potential of AI.”  According to an article in The Next Web,

The US Department of Energy (DoE) has announced it’s setting aside $600 million to build the world’s fastest supercomputer called Frontier. It will be jointly developed by AMD and Seattle-based supercomputer specialist Cray.

The Frontier supercomputer will be capable of completing more than 1.5 quintillion calculations per second, and will join Aurora to become the second of the two exascale systems planned by US DoE for 2021.
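
For scale, a quick worked conversion (taking, as a rough assumption, that a desktop processor sustains on the order of 10^12 floating-point operations per second):

\[
1.5\ \text{quintillion} \;=\; 1.5 \times 10^{18}\ \text{FLOP/s},
\qquad
\frac{1.5 \times 10^{18}}{10^{12}} \;=\; 1.5 \times 10^{6}.
\]

In other words, “1.5 quintillion calculations per second” is 1.5 exaFLOPS, roughly a million times the throughput of a teraflop-class desktop machine.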

This must somehow be reconciled with the DoE’s very simple Mission Statement, also displayed on its website:

The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.

Certainly the Frontier project is “transformative”, but why on earth does the DoE need the fastest AI computer in the world? And what does it intend to do with it?

Enter Smart Grid. We know from the above that the DoE is “working to dramatically increase the efficiency of appliances, homes, businesses and vehicles.” The continuous and personal data stream collected from the nation’s grid is the perfect input for such an AI super-computer to control the whole system from a single location. When President Obama (a Democrat) unleashed the Smart Grid program in 2009, the goal of micro-managing the country’s energy usage was in plain view, and it soon may finally be realized in practice, with Secretary Rick Perry (a Republican) taking the credit.

The origin of the entire Smart Grid concept is found in the DoE’s Bonneville Power Administration (BPA), created in 1937 and located in the Pacific Northwest. Historically, the BPA was saturated with early Technocrats because of its deep involvement with hydroelectric power. In the early 1990s, the BPA took credit for having coined the term “Energy Web”, which later became “Smart Grid”.

The 1934 Technocracy Study Course, principally authored by Technocracy, Inc. co-founder M. King Hubbert, clearly specified control over energy in its seven-point requirements list: 

  • Register on a continuous 24 hour-per-day basis the total net conversion of energy
  • By means of the registration of energy converted and consumed, make possible a balanced load.

The Trans-Texas Corridor Fiasco

Another globalist project that Rick Perry took credit for dates back to his tenure as Governor of Texas, when he tried to force the so-called Trans-Texas Corridor (TTC) into existence between 2001 and 2010 as part of the North American Union (NAU) initiative engineered by President George Bush, Mexican President Vicente Fox and Canadian Prime Minister Paul Martin.

The NAU was a springboard from the North American Free Trade Agreement (NAFTA) to create a continental area comparable in design to the European Union. The TTC was to be a super-corridor 1,200 feet wide that carried tollways, rail and utility lines, stretching from the Mexican border to Oklahoma and ultimately to Kansas City. 

The TTC was envisioned as an extensive network of Public-Private Partnerships that would have allowed global corporations to charge tolls for transit of not only freight delivered up by rail from Mexico, but also for services such as water, electricity, natural gas and fiber optic lines. It would have been one of the largest eminent domain land-grabs ever conducted, with the ultimate seizure of 584,000 acres of land held privately, mostly by ranchers and farmers.

After the plan was exposed to the American public, the uproar and protest was so great that the entire TTC project was ultimately terminated along with the NAU. Much of the credit went to Oklahoma as its legislature refused to let the super-corridor pass into its territory from Texas.

Although Governor Perry tried to dodge the political bullet for the whole episode, Texas Department of Transportation documents released in 2002 stated that,

Governor Rick Perry wrote Transportation Commission Chairman John W. Johnson on January 30, 2002 to outline his vision for the Trans Texas Corridor. The governor asked the three-member commission to assemble the Texas Department of Transportation’s top talent to create and deliver a Trans Texas Corridor implementation plan in 90 days.

This is the same Rick Perry who today, as Secretary of Energy, is attempting to build the world’s fastest super-computer to realize the “full potential of AI.” Although Frontier will undoubtedly be shared with other government agencies, NGOs and private researchers, Technocrats within the DoE will have the ultimate tool to assert total control over all energy converted and consumed in the United States. 





Goodbye World: AI Arms Race Headed Toward Autonomous Killer Robots

The rapidly emerging AI arms race to create autonomous killing machines is Technocrat insanity at its highest peak. To the Technocrat mind, every problem has a scientific solution; so why not let an armed and lethal AI robot do all the work of human soldiers? ⁃ TN Editor

When it comes to deciding to kill a human in a time of war, should a machine make that decision or should another human?

The question is a moral one, brought to the foreground by the techniques and incentives of modern technology. It is a question whose scope falls squarely under the auspices of international law, and one which nations have debated for years. Yet it’s also a collective action problem, one that requires not just states, but also companies and workers in companies, to come to an agreement to forgo a perceived advantage. The danger is not so much in making a weapon, but in making a weapon that can choose targets independently of the human responsible for initiating its action.

In a May 8 report by Pax — a nonprofit with the explicit goal of protecting civilians from violence, reducing armed conflict, and building a just peace — authors look at the existing state of artificial intelligence in weaponry and urge nations, companies and workers to think about how to prevent an AI arms race, instead of thinking about how to win one. Without corrective action, the report warns, the status quo could lead all participants into a no-win situation, with any advantage gained from developing an autonomous weapon temporary and limited.

“We see this emerging AI arms race and we think if nothing happens that that is a major threat to humanity,” said Frank Slijper, one of the authors on the report. “There is a window of opportunity to stop an AI arms race from happening. States should try to prevent an AI arms race and work toward international regulation. In the meantime, companies and research institutes have a major responsibility themselves to make sure that that work in AI and related fields is not contributing to potential lethal autonomous weapons.”

The report is written with a specific eye toward the seven leading AI powers. These include the five permanent members of the UN Security Council: China, France, Russia, the United Kingdom and the United States. In addition, the report details the artificial intelligence research of Israel and South Korea, both countries whose geographic and political postures have encouraged the development of military AI.

“We identified the main players in terms of use and research and development efforts on both AI and military use of AI in increasingly autonomous weapons. I couldn’t think of anyone, any state we would have missed out from these seven,” says Slijper. “Of course, there’s always a number eight and the number nine.”

For each covered AI power, the report examines the state of AI, the role of AI in the military, and what is known of cooperation between AI developers in the private sector or universities and the military. For countries like the United States, where military AI programs are named, governing policies can be pointed to, and debates over the relationship of commercial AI to military use are known, the report details that process. The thoroughness of the research is used to underscore Pax’s explicitly activist mission, though it also provides a valuable survey of the state of AI in the world.

As the report maintains throughout, this role of AI in weaponry isn’t just a question for governments. It’s a question for the people in charge of companies, and a question for the workers creating AI for companies.

“Much of it has to do with the rather unique character of AI-infused weapons technology,” says Slijper. “Traditionally, a lot of the companies now working on AI were working on it from a purely civilian perspective to do good and to help humanity. These companies weren’t traditionally military producers or dominant suppliers to the military. If you work for an arms company, you know what you’re working for.”

In the United States, workers in the tech sector have expressed resistance to contributing to Pentagon contracts. After an outcry from Google workers who learned of the company’s commitment to Project Maven, which developed a drone-footage-processing AI for the military, the company’s leadership agreed to sunset the project. (Project Maven is now managed by the Peter Thiel-backed Anduril.)

Microsoft, too, experienced worker resistance to military use of its augmented reality tool HoloLens, with some workers writing a letter stating that in the Pentagon’s hands, the sensors and processing of the headset made it dangerously close to a weapon component. The workers specifically noted that they had built HoloLens “to help teach people how to perform surgery or play the piano, to push the boundaries of gaming, and to connect with the Mars Rover,” all of which is a far cry from aiding the military in threat identification on patrol.

“And I think it is for a lot of people working in the tech sector quite disturbing that, while initially, that company was mainly or only working on civilian applications of that technology, now more and more they see these technologies also being used for military projects or even lethal weaponry,” said Slijper.

Slijper points to the Protocol on Blinding Laser Weapons as a way the international community regulated a technology with both civilian and military applications to ensure its use fell within the laws of war.

Read full story here…




Augmented Or Artificial: When Reality Isn’t Real

Like it or not, Augmented Reality is rapidly advancing to the point that people will not be able to tell the difference between real and unreal. Where boundaries and privacy do not exist, this will create a dystopian world. ⁃ TN Editor

The martial arts actor Jet Li turned down a role in The Matrix and has been invisible on our screens because he does not want his fighting moves 3D-captured and owned by someone else. Soon everyone will be wearing 3D-capable cameras to support augmented reality (often referred to as mixed reality) applications. Everyone will have to deal, across every part of our lives, with the sorts of digital-capture issues that Jet Li avoided in key roles and that musicians have struggled with since Napster. AR means anyone can rip, mix and burn reality itself.

Tim Cook has warned the industry about “the data industrial complex” and advocated for privacy as a human right. It doesn’t take too much thinking about where some parts of the tech industry are headed to see AR ushering in a dystopian future where we are bombarded with unwelcome visual distractions, and our every eye movement and emotional reaction is tracked for ad targeting. But as Tim Cook also said, “it doesn’t have to be creepy.” The industry has made data-capture mistakes while building today’s tech platforms, and it shouldn’t repeat them.

Dystopia is easy for us to imagine, as humans are hard-wired for loss aversion. This hard-wiring refers to people’s tendency to prefer avoiding a loss over securing an equivalent gain. It’s better to avoid losing $5 than to find $5. It’s an evolutionary survival mechanism that made us hyper-alert for threats. The loss of being eaten by a tiger was more impactful than the gain of finding some food to eat. When it comes to thinking about the future, we instinctively overreact to the downside risk and underappreciate the upside benefits.

How can we get a sense of what AR will mean in our everyday lives that is (ironically) grounded in reality?

When we look at the tech stack enabling AR, it’s important to note there is now a new type of data being captured, unique to AR. It’s the computer vision-generated, machine-readable 3D map of the world. AR systems use it to synchronize or localize themselves in 3D space (and with each other). The operating system services based on this data are referred to as the “AR Cloud.” This data has never been captured at scale before, and the AR Cloud is 100 percent necessary for AR experiences to work at all, at scale.

Fundamental capabilities such as persistence, multi-user experiences, and outdoor occlusion all need it. Imagine a super version of Google Earth, but one that machines use instead of people. This data set is entirely separate from the content and user data used by AR apps (e.g. login account details, user analytics, 3D assets, etc.).

The AR Cloud services are often thought of as just being a “point cloud,” which leads people to imagine simplistic solutions to manage this data. This data actually has potentially many layers, all of them providing varying degrees of usefulness to different use cases. The term “point” is just a shorthand way of referring to a concept, a 3D point in space. The data format for how that point is selected and described is unique to every state-of-the-art AR system.
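
As a purely illustrative sketch of that idea, one entry in such a map might pair a 3D position with the feature descriptor used to recognise that point again from a new camera frame. The field names, descriptor size, and matching function below are assumptions; as noted above, each vendor’s real format and algorithms are proprietary.

```python
# Illustrative sketch of one "point" in an AR-Cloud-style map: a 3D position
# plus a feature descriptor for re-recognising it. Not any vendor's real format.
from dataclasses import dataclass

import numpy as np

@dataclass
class MapPoint:
    xyz: np.ndarray          # position in the shared world frame, metres
    descriptor: np.ndarray   # e.g. a 32-byte feature descriptor
    observations: int = 0    # how many camera frames have seen this point

def closest_point(query_descriptor, cloud):
    """Return the map point whose descriptor is nearest to the query (L1 distance)."""
    return min(cloud, key=lambda p: np.abs(p.descriptor.astype(int)
                                           - query_descriptor.astype(int)).sum())

# A tiny two-point "cloud" and a query descriptor from a new camera frame.
rng = np.random.default_rng(1)
cloud = [
    MapPoint(np.array([0.0, 1.2, 3.4]), rng.integers(0, 256, 32, dtype=np.uint8)),
    MapPoint(np.array([2.0, 0.1, 5.0]), rng.integers(0, 256, 32, dtype=np.uint8)),
]
query = cloud[1].descriptor.copy()
print(closest_point(query, cloud).xyz)     # -> [2.  0.1 5. ]
```

A localization service would match many such points against features extracted from a live frame and then solve for the device’s pose; the descriptor format and matching strategy are exactly the parts each vendor keeps tightly coupled to its own algorithms.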

The critical thing to note is that for an AR system to work best, the computer vision algorithms are tied so tightly to the data that they effectively become the same thing. Apple’s ARKit algorithms wouldn’t work with Google’s ARCore data even if Google gave them access. The same goes for HoloLens, Magic Leap and all the startups in the space. The performance of open-source mapping solutions is generations behind leading commercial systems.

So we’ve established that these “AR Clouds” will remain proprietary for some time, but exactly what data is in there, and should I be worried that it is being collected?

Read full story here…




China Scores Ideological Coup As It Exports Social Engineering Technologies

Cheap manufactured products are not China’s primary exports, but social engineering technologies are. Authoritarian regimes are eagerly adopting China’s dystopian surveillance and censorship technologies, using data and manipulation to maintain their power. ⁃ TN Editor

A swathe of the world is adopting China’s vision for a tightly controlled internet over the unfettered American approach, a stunning ideological coup for Beijing that would have been unthinkable less than a decade ago.

Vietnam and Thailand are among the Southeast Asian nations warming to a governance model that twins sweeping content curbs with uncompromising data controls — because it helps preserve the regime in power. A growing number of the region’s increasingly autocratic governments have watched enviously as Chinese corporate titans from Tencent Holdings Ltd. to Alibaba Group Holding Ltd. emerged in spite of draconian online curbs. And now they want the same.

The more free-wheeling Silicon Valley model once seemed unquestionably the best approach, with stars from Google to Facebook vouching for its superiority. Now, a re-molding of the internet into a tightly controlled and scrubbed sphere in China’s image is taking place from Russia to India. Yet it’s Southeast Asia that’s the economic and geopolitical linchpin to Chinese ambitions and where U.S.-Chinese tensions will come to a head: a region home to more than half a billion people whose internet economy is expected to triple to $240 billion by 2025.

“For authoritarian countries in general, the idea of the state being able to wall off to some extent its internet is deeply appealing,” said Howard French, author of “Everything Under the Heavens: How the Past Helps Shape China’s Push for Global Power.” “This is about the regimes’ survival in an authoritarian situation. So that’s why they like to do this. They want to be able to insulate themselves against shocks.”

The Chinese model is gaining traction just as the American one comes under fire. Facebook and Twitter were used to manipulate the 2016 U.S. election, YouTube was criticized for failing to detect child porn, and American social media allowed a gunman to live-stream the worst mass shooting in New Zealand’s history for 10 minutes or more before severing it. Against the backdrop of wider fears about U.S. social media failings, Beijing’s approach now seems a reasonable alternative, or reasonable enough that self-serving governments can justify its adoption.

Vietnam’s controversial version went into effect Jan. 1 — a law that BSA/The Software Alliance, which counts Apple Inc. and Microsoft Corp. among its members, called chilling and ineffectual. Indonesia, the region’s largest economy, already requires data be stored locally. The Philippines has stepped up what critics call a media crackdown, arresting the head of media outlet Rappler Inc. after it grew critical of President Rodrigo Duterte. And last year, the government of former Malaysian Prime Minister Najib Razak introduced a fake news law used to probe his chief opponent, though the current government may yet repeal it.

One of the latest to buy into the rationale is Thailand, which on Feb. 28 passed a cyber security bill modeled on China’s that grants the government the right to seize data and electronic equipment without a court order in the interests of national security. Introduced just weeks ahead of Thailand’s first democratic election since a 2014 military coup, it stoked concerns it could be used to stifle dissent, though the government says it shouldn’t affect companies “with good conduct.” The Asia Internet Coalition, an organization that groups the likes of Alphabet Inc.’s Google, Amazon.com Inc., Facebook Inc. and Twitter Inc., condemned the bill, which Amnesty International warns could be used to “cage the internet.”

The crux of a Chinese internet model is data sovereignty: information of citizens must be stored in-country and accessible on demand to the authorities, a concept enshrined in Chinese law since 2017. That’s raising hackles in Washington, which aims to counter Beijing’s sway — a longer-term struggle that may be the single most important episode in world affairs since the collapse of the Soviet Union. Escalating tensions between the two richest economies will impact just about every country across the planet — economically and socially.

Read full story here…