Army Developing AI Missiles That Identify Their Own Targets

Technocrats at defense contractors have developed a hybrid targeting system that uses drones and AI to find targets on its own, then coordinates with artillery-launched missiles to destroy them.

There has never been a weapon created in the history of mankind that was not used in battle. ⁃ TN Editor

The U.S. Army is working on a new artillery shell capable of locating enemy targets, including moving tanks and armored vehicles. The shell, called Cannon-Delivered Area Effects Munition (C-DAEM), is designed to replace older weapons that leave behind unexploded cluster bomblets on the battlefield that might pose a threat to civilians. The shell is designed to hit targets even in situations where GPS is jammed and friendly forces are not entirely sure where the enemy is.

In the 1980s, the U.S. Army fielded dual purpose improved conventional munition (DPICM) artillery rounds. DPICM was basically the concept of cluster bombs applied to artillery, with a single shell packing dozens of tennis ball-sized grenades or bomblets. DPICM shells were designed to eject the bomblets over the battlefield, dispersing them over a wide area. The bomblets were useful against unprotected infantry troops and could knock out a tank or armored vehicle’s treads, weapons, or sensors, disabling it.

DPICM made artillery more lethal than ever, but there was a cost nobody foresaw: unexploded dud bomblets often littered battlefields, becoming a danger to civilians long after the war was over. An international movement to ban cluster bombs and cluster artillery munitions came about, and though the U.S. isn’t a signatory, it has pledged not to use munitions with a dud rate greater than one percent. Dud rates for such weapons often reach five percent or more.

Hitting tanks and armored vehicles with artillery from long range is hard, but DPICM made it easy. Now that DPICM is gone, the Army wants something new to replace it, something that trades showering an area with bomblets for an artillery round that intelligently seeks out enemy targets on its own. That new weapon is C-DAEM.

C-DAEM is a development of the Army’s Excalibur 155-millimeter artillery round. Excalibur is a GPS-guided artillery round, capable of hitting targets dozens of miles away using the Global Positioning System. Defense contractor Raytheon, maker of the Excalibur, claims it can land within 6.5 feet of the intended target—close enough to hit or damage a stationary armored vehicle.

C-DAEM will be able to hit moving tanks and other armored vehicles—something existing artillery shells can’t do. It will also be able to seek and destroy vehicle targets when their precise location isn’t known. As New Scientist explains, “The weapons will have a range of up to 60 kilometres, taking more than a minute to arrive, and will be able to search an area of more than 28 square kilometres for their targets. They will have a method for slowing down, such as a parachute or small wings, which they will use while scanning and classifying objects below.”

The new artillery round will also be capable of operating in so-called GPS-denied environments, where enemy forces may attempt to locally interfere with the Global Positioning System. Although U.S. forces lean heavily on GPS they are also training to operate without it. Russia, one potential adversary, is developing GPS jamming and spoofing capabilities that could make battlefield GPS useless or unreliable.

Read full story here…




Irish State Ordered To Delete ‘Unlawful’ Data On 3.2m Citizens

The National ID card is a holy grail for Technocrats who want to track all human activity, and is being heavily promoted in the U.S. as well. Here, Technocrats in the Irish government suffered a huge setback. ⁃ TN Editor

The State has been told it must delete data held on 3.2 million citizens, which was gathered as part of the roll-out of the Public Services Card, as there is no lawful basis for retaining it.

In a highly critical report on its investigation into the card, the Data Protection Commission found there was no legal reason to make individuals obtain the card in order to access State services such as renewing a driving licence or applying for a college grant.

While the card will still be sought from people accessing some services directly administered by the Department of Social Protection, such as claiming social welfare payments, the commission’s report represents a major blow to the scope of the project, which has proved politically contentious and faced strong opposition from data-privacy campaigners.

Helen Dixon, the Data Protection Commissioner, told The Irish Times that forcing people to obtain such a card for services other than those provided by the department was “unlawful from a data-processing point of view”.

The commission has directed that the department cease processing applications for cards needed for such functions.

Ms Dixon said there had been a “fundamental misunderstanding” of what was permitted by the legislation underpinning the card.

She said the department assumed that the legislation included a “legal basis for public sector bodies to mandatorily demand the card, and it doesn’t, once you conduct the legal analysis”.

“What we can see when we trace through it is that practice in implementation has now diverged from the legislation that underpins it,” she added.

Enforcement action

Ms Dixon found the retention of data gathered during the application process for the total of 3.2 million cards issued to date was unlawful.

“We’ve made significant findings around the data relating to the supporting documentation retained, and proposed to be retained indefinitely by the department,” she said. This documentation can include personal information on issues such as refugee status and changes to gender as well as people’s utility bills.

“There’s a whole range of documentation and the indefinite retention of it in circumstances where the Minister has satisfied herself as to identity already … We believe there is no lawful basis for that.”

The department would face enforcement action, including potentially being taken to court by the commission, if it fails to act on the recommendations of the report.

The data would still be required during the application process, but must be destroyed after that, she said.

The commission also found that there was insufficient transparency around the card, and that the department had not made enough easily understood information available to the public.

Read full story here…




Transhuman Quest: AI Chips Implanted In Brain

Make no mistake about this, implantable tech is about Transhumanism, not science. Transhumans like Musk and Ray Kurzweil dream of achieving immortality by uploading their brains to the cloud. ⁃ TN Editor

Last month, Elon Musk’s Neuralink, a neurotechnology company, revealed its plans to develop brain-reading technology over the next few years. One of the goals for Musk’s firm is to eventually implant microchip-devices into the brains of paralyzed people, allowing them to control smartphones and computers.

Although this Black Mirror-esque technology could hold potentially life-changing powers for those living with disabilities, according to cognitive psychologist Susan Schneider, it’s not such a great idea. I can’t help but feel relieved; I’m with Schneider on this.

Musk, who’s also the Chief Executive of both Tesla and SpaceX, aims to make implanting AI in the brain as safe and commonplace as laser eye surgery. But how would this work? In a video unveiled at the California Academy of Sciences, Musk said the implant would record information emitted by neurons in the brain.

The tiny processors will connect to your brain via threads significantly thinner than a human hair (about 4 to 6 μm in width). These sensors will sit on the surface of your skull and relay information to a wearable computer behind your ear, called The Link. With this all in place, your brain can then connect to your iPhone via an app — we are truly living in the future and it’s terrifying.

Musk isn’t the only person radically reimagining the future of our brains. For example, Ray Kurzweil, the futurist and Director of Engineering at Google, said he expects that we’ll be able to back our brains up to the cloud by 2045 — ultimately making us immortal.

But as Schneider points out, we shouldn’t fully invest our trust in the suggestion that humans can merge with AI. Instead, more research should be done around the possibilities and consequences of merging technology with the human brain.

Read full story here…





The Whistleblower Who Exposed Google’s Deep Conspiracy To Overthrow The U.S. Government

Zachary Vorhies discovered the pure evil intent of Google when he realized that it intended to overthrow the U.S. government. He put his career on the line to expose hundreds of internal Google documents. ⁃ TN Editor
 

A Google insider who anonymously leaked internal documents to Project Veritas made the decision to go public in an on-the-record video interview. The insider, Zachary Vorhies, decided to go public after receiving a letter from Google, and after he says Google allegedly called the police to perform a “wellness check” on him.

Along with the interview, Vorhies asked Project Veritas to publish more of the internal Google documents he had previously leaked. Said Vorhies:

“I gave the documents to Project Veritas, I had been collecting the documents for over a year. And the reason why I collected these documents was because I saw something dark and nefarious going on with the company and I realized that they were going to not only tamper with the elections, but use that tampering with the elections to essentially overthrow the United States.”

In June of 2019, Project Veritas published internal Google documents revealing “algorithmic unfairness.” Vorhies told Project Veritas these were documents that were widely available to full-time Google employees:

“These documents were available to every single employee within the company that was full-time. And so as a fulltime employee at the company, I just searched for some keywords and these documents started to pop up. And so once I started finding one document and started finding keywords for other documents and I would enter that in and continue this cycle until I had a treasure trove and archive of documents that clearly spelled out the system, what they’re attempting to do in very clear language.”

Shortly after the report including the “algorithmic unfairness” documents was published, Vorhies received a letter from Google containing several “demands.” Vorhies told Project Veritas that he complied with Google’s demands, which included a request for any internal Google documents he may have personally retained. Vorhies also said he sent those documents to the Department of Justice Antitrust Division.

After having been identified on social media as a “leaker” by an anonymous account (which Vorhies believes belongs to a Google employee), Vorhies was approached by law enforcement at his residence in California. According to Vorhies, San Francisco police received a call from Google which prompted a “wellness check.”

Vorhies described the incident to Project Veritas:

“they got inside the gate, the police, and they started banging on my door… And so the police decided that they were going to call in additional forces. They called in the FBI, they called in the SWAT team. And they called in a bomb squad.”

“[T]his is a large way in which [Google tries to] intimidate their employees that go rogue on the company…”

Partial video of the incident was provided to Project Veritas. San Francisco police confirmed to Project Veritas that they did receive a “mental health call,” and responded to Vorhies’ address that day.

“Google Snowden moment”

Project Veritas has released hundreds of internal Google documents leaked by Vorhies. Among those documents is a file called “news black list site for google now.” The document, according to Vorhies, is a “black list,” which restricts certain websites from appearing on news feeds for an Android Google product. The list includes conservative and progressive websites, such as newsbusters.org and mediamatters.org. The document says that some sites are listed because of a “high user block rate.”
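The mechanics of such a list are simple to sketch. The snippet below is a hypothetical illustration of how a feed-side blocklist could work; nothing here is Google's actual code. Only the two domain names come from the story, and the matching rule (exact host or subdomain) is an assumption.

```python
# Hypothetical sketch of a feed-side domain blocklist. Entries name a
# domain; any article whose host matches a listed domain, or is a
# subdomain of one, is dropped from the feed before display.

from urllib.parse import urlparse

BLOCKLIST = {"newsbusters.org", "mediamatters.org"}

def is_blocked(url, blocklist=BLOCKLIST):
    """Return True if the URL's host is a listed domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in blocklist)

articles = [
    "https://newsbusters.org/some-story",
    "https://example.com/unrelated-story",
]
visible = [a for a in articles if not is_blocked(a)]
print(visible)  # → ['https://example.com/unrelated-story']
```

A filter like this is invisible to users: the blocked article simply never appears, which is why the practice only came to light through leaked documents.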

Read full story here…




Amazon’s Facial Recognition Software Now Identifies Fear

Amazon’s Rekognition software, which is widely sold to law enforcement agencies, adds ‘Fear’ to its emotional recognition of ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’. ⁃ TN Editor

Amazon said this week its facial recognition software can detect a person’s fear.

Rekognition is one of many Amazon Web Services (AWS) cloud services available for developers. It can be used for facial analysis or sentiment analysis, which identifies different expressions and predicts emotions from images of people’s faces. The service uses artificial intelligence to “learn” from the reams of data it processes.

The tech giant revealed updates to the controversial tool on Monday that include improving the accuracy and functionality of its face analysis features such as identifying gender, emotions and age range.

“With this release, we have further improved the accuracy of gender identification,” Amazon said in a blog post. “In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear.’”
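Rekognition exposes these predictions to developers as a list of emotion labels with confidence scores in its DetectFaces response. The sketch below shows one way a client might pick the dominant emotion from a response shaped like AWS's documented format; the sample values are invented for illustration, and this is not code from any actual deployment.

```python
# Sketch: extracting the dominant emotion from a Rekognition-style
# DetectFaces response. The dict shape mirrors AWS's documented output
# (FaceDetails -> Emotions, each with Type and Confidence); the numbers
# below are made up.

def dominant_emotion(face_detail):
    """Return the (type, confidence) pair with the highest confidence,
    or None if no emotions were returned."""
    emotions = face_detail.get("Emotions", [])
    if not emotions:
        return None
    top = max(emotions, key=lambda e: e["Confidence"])
    return top["Type"], top["Confidence"]

sample_response = {
    "FaceDetails": [
        {
            "Emotions": [
                {"Type": "CALM", "Confidence": 62.1},
                {"Type": "FEAR", "Confidence": 21.7},
                {"Type": "SURPRISED", "Confidence": 9.4},
            ]
        }
    ]
}

for face in sample_response["FaceDetails"]:
    print(dominant_emotion(face))  # → ('CALM', 62.1)
```

Note that the confidence score describes the model's certainty in its label, not the certainty that the person actually feels that emotion, a distinction the researchers quoted below emphasize.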

Artificial intelligence researchers have invested plenty of resources to try and read a person’s emotions by analyzing their facial features, movements, voice and more. Some tech companies involved in the space include Microsoft, Affectiva and Kairos.

But some experts have pointed out that, while there is scientific evidence suggesting there are correlations between facial expressions and emotions, the way people communicate major emotions varies across cultures and situations. Sometimes, similar types of facial movements can express more than one category of emotions, and so researchers have warned “it is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be scientific facts.”

The availability of facial recognition technology has also raised concerns about its potential use in surveillance and for the possibility that it could intrude on privacy.

Read full story here…





FAA Approves Food Delivery Drones In North Carolina

Technocrats who would gladly replace human delivery drivers are populating the skies with noisy and intrusive drones. Convenience aside, people will consider them a massive nuisance. ⁃ TN Editor

The Federal Aviation Administration (FAA) has granted Israeli drone maker Flytrex and North Carolina-based drone services firm Causey Aviation Unmanned approval for a drone-based food delivery pilot, according to a press release emailed to Supply Chain Dive.

The team will deliver food via drone in Holly Springs, NC, as part of the FAA’s UAS Integration Pilot Program (IPP) in partnership with the North Carolina Department of Transportation (NCDOT) and the Town of Holly Springs.

The drones will travel along a single fixed route from a distribution center to an outdoor recreational area over mostly unpopulated areas, though the route does cross a highway. The FAA approved this route. Flytrex drones have been delivering food in Iceland in partnership with a local e-commerce site since 2017.

With this plan, drones are beginning to show similarities to other introductions of autonomous vehicles into supply chains. Repeated fixed routes, or “milk runs,” are quickly becoming a hallmark of early autonomous vehicle applications. Walmart, for one, is moving groceries between a Walmart grocery pickup location and a Walmart Neighborhood Market, in Bentonville, AR, via autonomous van.

Fixed routes substantially decrease the variables the vehicle may encounter, and in the case of the North Carolina flight plan, minimizes the number of people the drone will fly over. Another player looking to deliver food via drone is Uber Eats.

Read full story here…





Seattle Resolves To Launch Green New Deal

Seattle follows the lead from Los Angeles and New York to jump headlong into Green New Deal mania to replace Capitalism and Free Enterprise with Sustainable Development, aka Technocracy. ⁃ TN Editor

Councilmember Mike O’Brien (District 6, Northwest Seattle) and his Council colleagues unanimously passed in an 8-0 vote Resolution 31895 relating to a Green New Deal for Seattle.

The resolution calls for the passage of a federal Green New Deal, and affirms Seattle’s commitments to “…ensuring that our City can effectively respond to the climate crisis, transition away from its dependency on fossil fuels, and protect our most vulnerable residents while building Seattle’s climate resiliency.”

“We have 10 years to radically transform our city and our economy to eliminate fossil fuels,” said O’Brien. “If you don’t think the climate crisis is on our front step, remember the reality that people are developing asthma because of our air quality. Our tribal communities are having to move due to rising sea levels. All of us have to deal with summers filled with smoke due to forest fires. This Green New Deal resolution alone won’t solve the crisis, but I believe it is possible for Seattle to lead in solving the climate crisis by eliminating fossil fuel use in our city over the next decade and creating a clean economy.”

O’Brien reiterated the dire need to take bold action during the August 5 Council Briefing and reminded his colleagues of the grave consequences of inaction during a news conference August 6.

“We cannot continue to fight climate change with soft action. We have to be bold,” said Nancy Huizar, Climate Justice Organizer for Got Green. “Through canvassing efforts by Got Green, we heard our community’s demands for fair green jobs, transit, healthcare and childcare, healthy food, and renewable energy. This resolution establishes these goals, and ensures our community’s needs are being heard.”

The Sierra Club defines the Green New Deal as mobilizing “vast public resources to help us transition from an economy built on exploitation and fossil fuels to one driven by dignified work and clean energy.”

“Seattle is positioned to be a national leader in addressing climate change by setting the goal of being climate pollution-free by 2030,” said Matt Remle, a member of the Standing Rock Sioux Tribe and co-founder of Mazaska Talks. “Seattle is also setting a strong example by ensuring in its legislation its intent to work with local and regional indigenous tribes on assessing the impacts of climate change, and centering native voices when addressing those impacts.”

Selected highlights of the Resolution include making Seattle climate pollution-free by 2030; prioritizing public investments in neighborhoods that have historically been underinvested in and disproportionately burdened by environmental hazards and other injustices; exploring the creation of Free, Prior, and Informed consent policies with federally recognized tribal nations; and creating a fund and establishing dedicated revenue sources for achieving the Green New Deal, which will be used to make investments in communities, along with an associated accountability body.

Read full story here…




Facebook Busted: Audio Chats Transcribed By Paid Contractors

It is inconceivable that Mark Zuckerberg can lie with impunity to Congress about not snooping on users’ audio interactions while the company is doing exactly that. No accountability, no investigation and no indictments. ⁃ TN Editor

Facebook Inc. has been paying hundreds of outside contractors to transcribe clips of audio from users of its services, according to people with knowledge of the work.

The work has rattled the contract employees, who are not told where the audio was recorded or how it was obtained — only to transcribe it, said the people, who requested anonymity for fear of losing their jobs. They’re hearing Facebook users’ conversations, sometimes with vulgar content, but do not know why Facebook needs them transcribed, the people said.

On Wednesday, the Irish Data Protection Commission, which takes the lead in overseeing Facebook in Europe, said it was examining the activity for possible violations of the EU’s strict privacy rules.

Shares of the social-media giant were down 1.3% at 7:49 a.m. in New York during pre-market trading.

Facebook confirmed that it had been transcribing users’ audio and said it will no longer do so, following scrutiny of other companies. “Much like Apple and Google, we paused human review of audio more than a week ago,” the company said Tuesday. The company said the users who were affected chose the option in Facebook’s Messenger app to have their voice chats transcribed. The contractors were checking whether Facebook’s artificial intelligence correctly interpreted the messages, which were anonymized.

Big tech companies including Amazon.com Inc. and Apple Inc. have come under fire for collecting audio snippets from consumer computing devices and subjecting those clips to human review, a practice that critics say invades privacy. Bloomberg first reported in April that Amazon had a team of thousands of workers around the world listening to Alexa audio requests with the goal of improving the software, and that similar human review was used for Apple’s Siri and Alphabet Inc.’s Google Assistant. Apple and Google have since said they no longer engage in the practice and Amazon said it will let users opt out of human review.

The social networking giant, which just completed a $5 billion settlement with the U.S. Federal Trade Commission after a probe of its privacy practices, has long denied that it collects audio from users to inform ads or help determine what people see in their news feeds. Chief Executive Officer Mark Zuckerberg denied the idea directly in Congressional testimony.

“You’re talking about this conspiracy theory that gets passed around that we listen to what’s going on on your microphone and use that for ads,” Zuckerberg told U.S. Senator Gary Peters in April 2018. “We don’t do that.”

In follow-up answers for Congress, the company said it “only accesses users’ microphone if the user has given our app permission and if they are actively using a specific feature that requires audio (like voice messaging features).” The Menlo Park, California-based company doesn’t address what happens to the audio afterward.

Facebook hasn’t disclosed to users that third parties may review their audio. That’s led some contractors to feel their work is unethical, according to the people with knowledge of the matter.

Read full story here…





Japan Temple: Robot Takes Over Role As Buddhist Priest

Religious Technocrats in Japan have solved their boring human priest problems by creating a tireless robot who serves in the Buddhist temple to lead worshippers. ⁃ TN Editor

A 400-year-old temple in Japan is attempting to hot-wire interest in Buddhism with a robotic priest it believes will change the face of the religion — despite critics comparing the android to “Frankenstein’s monster.”

The android Kannon, based on the Buddhist deity of mercy, preaches sermons at Kodaiji temple in Kyoto, and its human colleagues predict that with artificial intelligence it could one day acquire unlimited wisdom.

“This robot will never die, it will just keep updating itself and evolving,” priest Tensho Goto told AFP.

“That’s the beauty of a robot. It can store knowledge forever and limitlessly.

“With AI we hope it will grow in wisdom to help people overcome even the most difficult troubles. It’s changing Buddhism,” added Goto.

The adult-sized robot began service earlier this year and is able to move its torso, arms and head.

But only its hands, face and shoulders are covered in silicone to replicate human skin.

Though it clasps its hands together in prayer and speaks in soothing tones, the rest of the droid’s mechanical parts are clearly visible.

Wiring and blinking lights fill the cranial cavity of its open-top head and snake around its gender-neutral, aluminium body.

A tiny video camera installed in the left eye completes an eerie, cyborg-like frame seemingly lifted straight out of a dystopian Hollywood sci-fi thriller.

Developed at a cost of almost $1m in a joint project between the Zen temple and renowned robotics professor Hiroshi Ishiguro at Osaka University, the humanoid — called Mindar — teaches about compassion and of the dangers of desire, anger and ego.

“You cling to a sense of selfish ego,” it warns worshippers. “Worldly desires are nothing other than a mind lost at sea.”

With religion’s influence on daily life flat-lining in Japan, Goto hopes Kodaiji’s robot priest will be able to reach younger generations in a way traditional monks can’t.

“Young people probably think a temple is a place for funerals or weddings,” he said, trying to explain the disconnect with religion.

Read full story here…





How Close Is Skynet AI? Too Close!

Mimicking Terminator’s science-fiction AI called Skynet, the National Reconnaissance Office’s Sentient system learns on its own and autonomously points diverse sensor and surveillance systems at whatever it wants to see, in real time.

The Deputy Director of the National Reconnaissance Office says that “Sentient catalogs normal patterns, detects anomalies, and helps forecast and model adversaries’ potential courses of action… Sentient is a thinking system.”

While this is amazing technology for the battlefield, the military is now turning it on American soil in conjunction with various law enforcement agencies, including the Department of Homeland Security. If not stopped, this will lead to a total Scientific Dictatorship, aka Technocracy. ⁃ TN Editor

At the final session of the 2019 Space Symposium in Colorado Springs, attendees straggled into a giant ballroom to listen to an Air Force official and a National Geospatial-Intelligence Agency (NGA) executive discuss, as the panel title put it, “Enterprise Disruption.” The presentation stayed as vague as the title until a direct question from the audience seemed to make the panelists squirm.

Just how good, the person wondered, had the military and intelligence communities’ algorithms gotten at interpreting data and taking action based on that analysis? They pointed out that the commercial satellite industry has software that can tally shipping containers on cargo ships and cars in parking lots soon after their pictures are snapped in space. “When will the Department of Defense have real-time, automated, global order of battle?” they asked.

“That’s a great question,” said Chirag Parikh, director of the NGA’s Office of Sciences and Methodologies. “And there’s a lot of really good classified answers.”

He paused and shifted in his seat. “What’s the next question?” he asked, smiling. But he continued talking, describing how “geospatial intelligence” no longer simply means pictures from satellites. It means anything with a timestamp and a location stamp, and the attempt to integrate all that sundry data.

Then, Parikh actually answered this question: When would that translate to near-instantaneous understanding and strategy development?

“If not now,” he said, “very soon.”

Parikh didn’t mention any particular programs that might help enable this kind of autonomous, real-time interpretation. But an initiative called Sentient has relevant capabilities. A product of the National Reconnaissance Office (NRO), Sentient is (or at least aims to be) an omnivorous analysis tool, capable of devouring data of all sorts, making sense of the past and present, anticipating the future, and pointing satellites toward what it determines will be the most interesting parts of that future. That, ideally, makes things simpler downstream for human analysts at other organizations, like the NGA, with which the satellite-centric NRO partners.

Until now, Sentient has been treated as a government secret, except for vague allusions in a few speeches and presentations. But recently released documents — many formerly classified secret or top secret — reveal new details about the program’s goals, progress, and reach.

Research related to Sentient has been going on since at least October 2010, when the agency posted a request for Sentient Enterprise white papers. A presentation says the program achieved its first R&D milestone in 2013, but details about what that milestone actually was remain redacted. (Deputy director of NRO’s Office of Public Affairs Karen Furgerson declined to comment on this timing in an email to The Verge.) A 2016 House Armed Services Committee hearing on national security space included a quick summary of this data-driven brain, but public meetings haven’t mentioned it since. In 2018, a presentation posted online claimed Sentient would go live that year, although Furgerson told The Verge it was currently under development.

“The NRO has not said much about Sentient publicly because it is a classified program,” says Furgerson in an email, “and NRO rarely appears before Congress in open hearings.”

The agency has been developing this artificial brain for years, but details available to the public remain scarce. “It ingests high volumes of data and processes it,” says Furgerson. “Sentient catalogs normal patterns, detects anomalies, and helps forecast and model adversaries’ potential courses of action.” The NRO did not provide examples of patterns or anomalies, but one could imagine that things like “not moving a missile” versus “moving a missile” might be on the list. Those forecasts in hand, Sentient could turn satellites’ sensors to the right place at the right time to catch ill will (or whatever else it wants to see) in action. “Sentient is a thinking system,” says Furgerson.
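The NRO has published no details of Sentient's methods, so any concrete example is speculative. As a loose illustration of what "cataloging normal patterns and detecting anomalies" can mean at its simplest, the sketch below flags observations that deviate sharply from a trailing baseline; a real system would be vastly more sophisticated, and the numbers here (say, daily vehicle counts at a monitored site) are invented.

```python
# Illustrative only: a minimal anomaly detector that treats the trailing
# window of observations as the "normal pattern" and flags any value
# more than `threshold` standard deviations away from that baseline.

from statistics import mean, stdev

def find_anomalies(series, window=5, threshold=3.0):
    """Return the indices whose value deviates from the mean of the
    preceding `window` observations by more than `threshold` sigmas."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Invented daily vehicle counts; day 7 spikes far above the baseline.
counts = [10, 11, 9, 10, 12, 11, 10, 48, 11, 10]
print(find_anomalies(counts))  # → [7]
```

The appeal of even this crude scheme is that it needs no labels: "normal" is defined by recent history, which matches the description of cataloging patterns and flagging departures from them.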

It’s not all dystopian: the documents released by the NRO also imply that Sentient can make satellites more efficient and productive. It could also free up humans to focus on deep analysis rather than tedious needle-finding. But it could also contain unquestioned biases, come to dubious conclusions, and raise civil liberties concerns. Because of its secretive nature, we don’t know much about those potential problems.

“The NRO’s and the Intelligence Community’s standard practice is to NOT disclose sensitive sources and methods, as such disclosure introduces high risk of adversary nations’ countering them,” says Furgerson. “Such loss harms our nation and its allies; it decreases U.S. information advantage and national security. For those reasons, details about Sentient remain classified and what we can say about it is limited.”

Read full story here…