Baltimore Surveillance

Baltimore To Fight Crime By Airplane Surveillance Of Entire City

Baltimore was busted in 2016 for conducting a secret aerial surveillance dragnet of the entire city with a single airplane. Now it is back at the table, looking for three airplanes to blanket the city in real time. ⁃ TN Editor
 

The head of an aerial surveillance company is pitching Baltimore officials on flying not one but three camera-laden planes above the city simultaneously, covering most of the city and most of its violent crime, he said in emails obtained by The Baltimore Sun.

A pair of Texas donors have stepped forward to help fund three planes and extra police, 40 local analysts and oversight personnel if there is city buy-in, the records and interviews show. The effort aims to “demonstrate the effectiveness” of such an all-seeing surveillance system in fighting crime in the city.

The enlarged scope of the three-year, $6.6 million surveillance pitch was welcomed by supporters and denounced by detractors contacted by The Sun.

Ross McNutt of Ohio-based Persistent Surveillance Systems said in emails to officials in Mayor Bernard C. “Jack” Young’s office that most City Council members had expressed their support for the surveillance planes, though several council members denied it. No decision has been made.

Each plane would be capable of recording up to 32 square miles at a time, and each would fly 45 to 50 hours a week, McNutt said.

“With these three coverage areas, we would be able to cover areas that include 80 to 90 percent of the murders and shootings in Baltimore,” McNutt wrote in an email last month to Sheryl Goldstein, Young’s deputy chief of staff.

The work would cost $2.2 million a year, said McNutt, whose company previously flew a single surveillance plane over Baltimore as part of a secret pilot program in 2016.
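The figures quoted in the pitch are internally consistent. A minimal sketch checking the arithmetic (the coverage and cost numbers are from the article; the multiplication is mine):

```python
# Cross-check the figures quoted for the three-plane proposal.
planes = 3
coverage_per_plane_sq_mi = 32   # recording area per plane, per McNutt
annual_cost_usd = 2.2e6         # quoted yearly cost
program_years = 3               # length of the proposed program

total_coverage = planes * coverage_per_plane_sq_mi
total_cost_millions = round(annual_cost_usd * program_years / 1e6, 1)

print(total_coverage)       # 96 square miles of simultaneous coverage
print(total_cost_millions)  # 6.6, matching the $6.6 million three-year pitch
```

Three planes at 32 square miles apiece give 96 square miles of simultaneous coverage, which squares with the claim of covering "most of the city."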

That funding would cover the cost of putting the planes up, additional police officers to work cases aided by the surveillance, independent oversight of the program’s privacy measures and a University of Baltimore-led evaluation of the program’s “effectiveness in supporting investigations and deterring crime in the community,” McNutt wrote.

McNutt said the program costs would be covered by Texas philanthropists Laura and John Arnold, who also funded the 2016 pilot program. John Arnold, in a statement, confirmed his strong interest in funding the program but said nothing is certain yet.

“While we have not formally committed to additional funding, we have expressed significant interest in a proposal to restart the program if it has support from Baltimore city leaders and the broader community,” he said. “We will wait to see a formal proposal before making a firm commitment.”

Read full story here…




AI

75 Nations Now Use AI Facial Recognition For Surveillance

Technocrats have run amok throughout the world while spreading dystopian technology for tracking citizens everywhere. Few have noticed their strategy, but now the shocking numbers are rolling in.  ⁃ TN Editor

A growing number of countries are following China’s lead in deploying artificial intelligence to track citizens, according to a research group’s report published Tuesday.

The Carnegie Endowment for International Peace says at least 75 countries are actively using AI tools such as facial recognition for surveillance.

The index of countries where some form of AI surveillance is used includes liberal democracies such as the United States and France as well as more autocratic regimes.

Relying on a survey of public records and media reports, the report says Chinese tech companies led by Huawei and Hikvision are supplying much of the AI surveillance technology to countries around the world. Other companies such as Japan’s NEC and U.S.-based IBM, Palantir and Cisco are also major international providers of AI surveillance tools.

Hikvision declined comment Tuesday. The other companies mentioned in the report didn’t immediately return requests for comment.

The report encompasses a broad range of AI tools that have some public safety component. The group’s index doesn’t distinguish between legitimate public safety tools and unlawful or harmful uses such as spying on political opponents.

“I hope citizens will ask tougher questions about how this type of technology is used and what type of impacts it will have,” said the report’s author, Steven Feldstein, a Carnegie Endowment fellow and associate professor at Boise State University.

Many of the projects cited in Feldstein’s report are “smart city” systems in which a municipal government installs an array of sensors, cameras and other internet-connected devices to gather information and communicate with one another. Huawei is a lead provider of such platforms, which can be used to manage traffic or save energy, but which are increasingly also used for public surveillance and security, Feldstein said.

Read full story here…




IARPA

IARPA Seeks Long-Range Biometric Identification Tech

Close-up biometric identification is not enough for the Technocrat-laden Intelligence community. Tech is being extended to use images from drones and long-range cameras to positively identify people. ⁃ TN Editor

The intelligence community is working to build biometric identification systems that can single out individuals from hundreds of yards away or more, a feat that’s virtually impossible using the technology that exists today.

Ultimately, the tech would let spy agencies rapidly identify people using cameras deployed on far off rooftops and unmanned aircraft, according to the Intelligence Advanced Research Projects Activity, the research arm for the CIA and other intelligence agencies.

Facial recognition and other types of biometric tech have improved significantly in recent years, but even today’s most advanced systems become less reliable without a crystal clear view of their subject. Even when the person is standing nearby and looking directly into the camera, facial recognition tech can be prone to errors.

But the intelligence community is trying to overcome those limitations in two ways: gathering more extensive training data and creating systems that lean on multiple types of data to identify people.

On Friday, IARPA started looking for researchers to participate in the Biometric Recognition and Identification at Altitude and Range, or BRIAR, program, which aims to develop identification tools that work from vantage points high above or far away from their subjects. While the program is still getting off the ground, the tech it seeks to develop could significantly enhance the government’s ability to surveil adversaries—and citizens—using biometric data.

“Further research in the area of biometric recognition and identification at altitude and range may support protection of critical infrastructure and transportation facilities, military force protection and border security,” officials wrote in the request for information.

Teams interested in participating in the program must respond by Oct. 21.

In the request for information, IARPA asked teams for a wide variety of datasets that could help train biometric technology to work in less than ideal conditions. Today, the range of facial recognition and other identification systems is limited by a lack of training data, they said, and more datasets would help researchers build more versatile and powerful tools.

Specifically, IARPA asked for images of individuals taken from more than 300 meters away or at pitch angles above 20 degrees, as well as biometric research datasets captured by drones and other aircraft.

Read full story here…




Big Brother Cometh: Massive License Plate Database Exceeds 150 Million

Throwing legality and the Constitution aside, Technocrats lust after data on society. Collecting data in real-time is the holy grail of AI that is used for instant analysis and reporting of actionable offenses. ⁃ TN Editor

Automatic license plate readers (ALPRs) are far worse than even our worst fears imagined.

Two months ago, I warned everyone that police in Arizona were using ALPRs to “grid” entire neighborhoods. But this story brings public surveillance to a whole new level.

Last month, Rekor Systems announced that they had launched the Rekor Public Safety Network (RPSN) which gives law enforcement real-time access to license plates.

“Any state or local law enforcement agency participating in the RPSN will be able to access real-time data from any part of the network at no cost. The Company is initially launching the network by aggregating vehicle data from customers in over 30 states. With thousands of automatic license plate reading cameras currently in service that capture approximately 150 million plate reads per month, the network is expected to be live by the first quarter of 2020.”

RPSN is a 30-state, real-time law enforcement license plate database covering more than 150 million people.

And the scary thing about it is: it is free.

“We don’t think our participants should be charged for accessing information from a network they contribute to, especially when it provides information that has proven its value in solving crimes and closing cases quickly,” said Robert A. Berman, President and CEO, Rekor.

Want to encourage law enforcement to spy on everyone? Give them free access to a massive license plate database.

RPSN’s AI software uses machine learning to predict when and where a hotlisted person or a person of interest will be.

“Rekor’s software, powered by artificial intelligence (“AI”) and machine learning, can also be added to existing law enforcement security camera networks to search for law enforcement related hotlists as well as Amber Alerts and registered sex offender motor vehicles.”

Rekor admits that police in thirty states are probably spying on more than 150 million license plates each month.

The Westchester County New York Police Department’s Real Time Crime Center alone collects “more than 25 million license plates each month.”

An article in Traffic Technology Today revealed that Rekor will go to great lengths to convince police departments to track millions of motorists. “In 2020, the RPSN will be fully compliant with the federal 2019 NDAA law, which bans the use of certain foreign manufactured cameras used in critical infrastructure.”

Rekor’s 2019 NDAA sales pitch is both disturbing and despicable. It reveals just where they and law enforcement stand when it comes to using ALPRs to spy on millions of motorists.

Police use license plate readers to track motorists in real-time

An article in The Newspaper revealed how police in Louisiana use license plate readers to track motorists in real-time.

Eric J. Richard had been driving his white Buick LaCrosse on Interstate 10, when he was stopped by Louisiana State Police Trooper Luke Leger for allegedly following a truck too closely. During the roadside interrogation, the trooper asked where Richard was coming from.

“I was coming from my job right there in Vinton,” Richard replied. The trooper had already looked up the travel records for Richard’s car and already knew it had crossed into Louisiana from Texas earlier in the day. Based on this “apparent lie,” the trooper extended the traffic stop by asking more questions and calling in a drug dog.

The article goes on to say that police had no reason to track Mr. Richard, but they did so because they could. And that should frighten everyone.

Rekor lets law enforcement know where your friends and family are, where your doctor’s office is, where you worship and where you buy groceries.

How is that for Orwellian?

It is time to face the facts: ALPRs are not about public safety; they are a massive surveillance system designed to let Big Brother track our every movement.

Read full story here…




Technocracy’s Final Frontier: The Takeover Of Your Body

Technocrats see your body as a holy grail of data collection that can in turn be weaponized against you to manipulate your behavior, thinking patterns, buying decisions and life planning. ⁃ TN Editor

Aram Sinnreich recently went grocery shopping at a Whole Foods Market in his hometown of Washington, D.C., and realized he had left his wallet at home. He had no cards and no cash, but he had no reason to worry — at least, not about paying for his food. “I used my iPhone to pay, and I unlocked it with my face,” he said.

That’s when it struck him: We are just one small step away from paying with our bodily features alone. With in-store facial-recognition machines, he wouldn’t even need his smartphone. Sinnreich, associate professor of communication studies at American University, said he got a glimpse of the future that day.

Biometric technology is infiltrating every other aspect of our digital lives. Next stop: replacing your wallet.

Biometric mobile wallets — payment technologies using our faces, fingerprints or retinas — already exist. Notable technology companies including Apple and Amazon await a day when a critical mass of consumers is sufficiently comfortable walking into a store and paying for goods without a card or device, according to Sinnreich, author of “The Essential Guide to Intellectual Property.”

Removing the last physical barrier — smartphones, watches, smart glasses and credit cards — between our bodies and corporate America is the final frontier in mobile payments. “The deeper the tie between the human body and the financial networks, the fewer intimate spaces will be left unconnected to those networks,” Sinnreich said.

Companies are refining biometric services

After a slow start, the global mobile-payment market is expected to record a compound annual growth rate of 33%, reaching $457 billion in 2026, according to market-research firm IT Intelligence Markets. As payments move from cash to credit cards to smartphones, financial-technology companies, known as fintechs, have been honing their biometric services.

Juniper Research forecasts that mobile biometrics will authenticate $2 trillion in in-store and remote mobile-payments transactions in 2023, 17 times more than the estimated $124 billion in such transactions last year.
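As a quick check, the two dollar figures Juniper cites line up with the quoted multiple (the figures are from the article; the division is mine):

```python
# Verify Juniper's "17 times" comparison from the two quoted totals.
forecast_2023_usd = 2.0e12   # $2 trillion forecast for 2023
prior_year_usd = 124e9       # ~$124 billion estimated for the prior year

multiple = forecast_2023_usd / prior_year_usd
print(round(multiple, 1))    # 16.1, roughly the "17 times" Juniper cites
```

The exact ratio is about 16x; the "17 times more" phrasing in the forecast is evidently rounded.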

Juniper, a U.K.-based firm that provides research on the global high-tech communications sector, said it expects growth to be driven both by “industry standardization initiatives” like Visa’s Secure Remote Commerce and by the introduction by smartphone vendors of different forms of biometric authentication.

“Using biometrics as a method of payment is going to be pretty popular in the future,” said Hannah Zimmerman, associate attorney with Fey LLC in Leawood, Kan. She said this will be propelled by “the globalization of commerce” and the fact that companies in the U.S. will want to find new ways to facilitate cross-border transactions.

Frictionless payments lead to more spending

It will make shopping easier for consumers and, if studies on mobile payments provide a barometer, more lucrative for companies. A study carried out by researchers at the University of Illinois at Urbana-Champaign found that the number of actual purchases increased by almost one quarter when people used Alipay mobile payments.

Using a mobile wallet made people likely to spend more on food, entertainment and travel, the university study found. In dollar terms, people using mobile payments spent an average of 2.4% more than those who did not use them. One theory: If we don’t handle credit cards or cash, we don’t consider a transaction’s consequences.

People who use Amazon’s Echo smart speaker spend 66% more on average at the online retailer than other consumers, according to a survey of 2,000 Amazon customers from Chicago-based research firm Consumer Intelligence Research Partners. Of course, people who have the money to buy smart speakers may also have more to spend.

Still, it provides a window into the world of frictionless spending: Echo owners spend $1,700 annually at Amazon versus $1,300 among Amazon Prime members — who must pay a $99 a year subscription — and $1,000 for all Amazon customers in the U.S. Some people may have both Echo devices and Prime accounts. (Amazon did not respond to a request for comment.)
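Taking the three spending figures above at face value, the relative gaps are easy to reproduce (note that the survey's 66% figure compares Echo owners against non-Echo customers, a slightly different baseline than the all-customer average used here):

```python
# Compare the quoted annual Amazon spend across the three customer groups.
spend = {
    "Echo owners": 1700,
    "Prime members": 1300,
    "all US customers": 1000,
}

baseline = spend["all US customers"]
for group, amount in spend.items():
    pct_above = (amount - baseline) / baseline * 100
    print(f"{group}: ${amount} ({pct_above:.0f}% above the overall average)")
```

Echo owners come out 70% above the all-customer average, in the same ballpark as the 66% premium the survey reports against non-Echo shoppers.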

Facial recognition is already widely used

Facial recognition has already made its way into financial services. Mastercard and Visa have security features that require people to use their faces to log into their accounts on their phones. Apple’s iPhone X enables people to use “Face ID” to unlock their phones, and Samsung’s Galaxy S8 and S8+ have an iris scanner. Amazon’s Rekognition facial-recognition service can also identify both objects and people.

The facial-recognition market is projected to double to $9 billion between 2018 and 2024, according to Mordor Intelligence, a consulting and analytics firm.

Juniper predicts that 80% of smartphones will have some form of biometric hardware by 2023, representing just over 5 billion smartphones. That has traditionally meant fingerprint sensors, but facial recognition and iris scanning will become more prominent over the next five years, with adoption surpassing 1 billion devices, Juniper forecasts.

China’s biggest mobile-payment platforms, Ant Financial Services Group (the Alibaba-controlled entity that operates Alipay) and Tencent Holdings Ltd. (which runs WeChat Pay), have already launched facial-recognition machines at points of sale. They typically require customers to register for the first time via SMS.

In 2017, KPro, a KFC brand in Hangzhou, China, introduced Alipay facial-recognition technology at points of sale. Today, KFC uses Alipay’s “Smile to Pay” facial-recognition technology in more than 700 stores across China. (Before making their very first payment, customers must log in using their phone.)

Read full story here…




Facial Recognition: Facebook Finally Gives Control Back To Users

After fines, legal battles and cancelled users, Facebook finally succumbs to pressure by giving control over facial recognition features to users by offering opt-in/out. New users will be set to opt-out by default. ⁃ TN Editor

Facebook on Tuesday said facial recognition technology applied to photos at the social network will be an opt-in feature.

The change that began rolling out to users around the world came as the leading social network remains under pressure to better protect privacy and user data, including biometric information.

Nearly two years ago, Facebook introduced a face recognition feature that went beyond suggesting friends to tag in pictures or videos: it could let users know when they appeared in images they had permission to see elsewhere on the service.

Facebook is doing away with a “tag” suggestion setting in favor of an overall facial recognition setting which will be off by default, according to a post by artificial intelligence applied research lead Srinivas Narayanan.

“Facebook’s face recognition technology still does not recognize you to strangers,” Narayanan said.

“We don’t share your face recognition information with third parties. We also don’t sell our technology.”

People new to Facebook or who had the “tag” feature operating will get word from the social network about the face recognition setting along with an easy way to turn it on if they wish, according to Narayanan.

“People will still be able to manually tag friends, but we won’t suggest you to be tagged if you do not have face recognition turned on,” Narayanan said.

Read full story here…

Related Story:

Facebook will no longer scan user faces by default




The Last Frontier: Big Tech Wants To Read Your Thoughts

Controlling what you do is one thing, but digging into what you think is an order of magnitude more disturbing, with ethical, moral and privacy considerations at the top of the list. ⁃ TN Editor

Not content with monitoring almost everything you do online, Facebook now wants to read your mind as well. The social media giant recently announced a breakthrough in its plan to create a device that reads people’s brainwaves to allow them to type just by thinking. And Elon Musk wants to go even further. One of the Tesla boss’s other companies, Neuralink, is developing a brain implant to connect people’s minds directly to a computer.

Musk admits that he takes inspiration from science fiction and that he wants to make sure humans can “keep up” with artificial intelligence. He seems to have missed the part of sci-fi that acts as a warning for the implications of technology.

These mind-reading systems could affect our privacy, security, identity, equality and personal safety. Do we really want all that left to companies with philosophies such as that of Facebook’s former mantra, “move fast and break things”?

Though they sound futuristic, the technologies needed to make brainwave-reading devices are not that dissimilar to the standard MRI (magnetic resonance imaging) and EEG (electroencephalography) neuroscience tools used in hospitals all over the world. You can already buy a kit to control a drone with your mind, so using one to type out words is, in some ways, not that much of a leap. The advance will likely be due to the use of machine learning to sift through huge quantities of data collected from our brains and find the patterns in neuron activity that link thoughts to specific words.

A brain implant is likely to take a lot longer to develop, and it’s important to separate out the actual achievements of Neuralink from media hype and promotion. But Neuralink has made simultaneous improvements in materials for electrodes and robot-assisted surgery to implant them, packaging the technology neatly so it can be read via USB.

Facebook and Neuralink’s plans may build on established medical practice. But when companies are collecting thoughts directly from our brains, the ethical issues are very different.

Any system that could collect data directly from our brains has clear privacy risks. Privacy is about consent. But it is very difficult to give proper consent if someone is tapping directly into our thoughts. Silicon Valley companies (and governments) already surreptitiously gather as much data on us as they can and use it in ways we’d rather they didn’t. How sure can we be that our random and personal thoughts won’t be captured and studied alongside the instructions we want to give the technology?

Read full story here…




Irish State Ordered To Delete ‘Unlawful’ Data On 3.2m Citizens

The National ID card is a holy grail for Technocrats who want to track all human activity, and is being heavily promoted in the U.S. as well. Here, Technocrats in the Irish government suffered a huge setback. ⁃ TN Editor

The State has been told it must delete data held on 3.2 million citizens, which was gathered as part of the roll-out of the Public Services Card, as there is no lawful basis for retaining it.

In a highly critical report on its investigation into the card, the Data Protection Commission found there was no legal reason to make individuals obtain the card in order to access State services such as renewing a driving licence or applying for a college grant.

While the card will still be sought from people accessing some services directly administered by the Department of Social Protection, such as claiming social welfare payments, the commission’s report represents a major blow to the scope of the project, which has proved politically contentious and faced strong opposition from data-privacy campaigners.

Helen Dixon, the Data Protection Commissioner, told The Irish Times that forcing people to obtain such a card for services other than those provided by the department was “unlawful from a data-processing point of view”.

It has directed that the department cease processing applications for cards needed for such functions.

Ms Dixon said there had been a “fundamental misunderstanding” of what was permitted by the legislation underpinning the card.

She said the department assumed that the legislation included a “legal basis for public sector bodies to mandatorily demand the card, and it doesn’t, once you conduct the legal analysis”.

“What we can see when we trace through it is that practice in implementation has now diverged from the legislation that underpins it,” she added.

Enforcement action

Ms Dixon found the retention of data gathered during the application process for the total of 3.2 million cards issued to date was unlawful.

“We’ve made significant findings around the data relating to the supporting documentation retained, and proposed to be retained indefinitely by the department,” she said. This documentation can include personal information on issues such as refugee status and changes to gender as well as people’s utility bills.

“There’s a whole range of documentation and the indefinite retention of it in circumstances where the Minister has satisfied herself as to identity already … We believe there is no lawful basis for that.”

The department would face enforcement action, including potentially being taken to court by the commission, if it fails to act on the recommendations of the report.

The data would still be required during the application process, but must be destroyed after that, she said.

The commission also found that there was insufficient transparency around the card, and that the department had not made enough easily understood information available to the public.

Read full story here…




Amazon’s Facial Recognition Software Now Identifies Fear

Amazon’s Rekognition software, which is widely sold to law enforcement agencies, adds ‘Fear’ to its emotion recognition alongside ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’. ⁃ TN Editor

Amazon said this week its facial recognition software can detect a person’s fear.

Rekognition is one of many Amazon Web Services (AWS) cloud services available for developers. It can be used for facial analysis or sentiment analysis, which identifies different expressions and predicts emotions from images of people’s faces. The service uses artificial intelligence to “learn” from the reams of data it processes.

The tech giant revealed updates to the controversial tool on Monday that include improving the accuracy and functionality of its face analysis features such as identifying gender, emotions and age range.

“With this release, we have further improved the accuracy of gender identification,” Amazon said in a blog post. “In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear.’”

Artificial intelligence researchers have invested plenty of resources to try to read a person’s emotions by analyzing their facial features, movements, voice and more. Some tech companies involved in the space include Microsoft, Affectiva and Kairos.

But some experts have pointed out that, while there is scientific evidence suggesting there are correlations between facial expressions and emotions, the way people communicate major emotions varies across cultures and situations. Sometimes, similar types of facial movements can express more than one category of emotions, and so researchers have warned “it is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be scientific facts.”

The availability of facial recognition technology has also raised concerns about its potential use in surveillance and for the possibility that it could intrude on privacy.

Read full story here…




Skynet

How Close Is Skynet AI? Too Close!

Mimicking Terminator’s science-fiction AI called Skynet, GEOINT’s Sentient system learns on its own and autonomously points diverse sensor/surveillance systems to get what it wants, in real time.

The Deputy Director of the National Reconnaissance Office says that “Sentient catalogs normal patterns, detects anomalies, and helps forecast and model adversaries’ potential courses of action… Sentient is a thinking system.”

While this is amazing technology for the battlefield, the military is currently turning it on American soil in conjunction with various law enforcement agencies, including the Department of Homeland Security. If not stopped, this will lead to a total Scientific Dictatorship, aka Technocracy. ⁃ TN Editor

At the final session of the 2019 Space Symposium in Colorado Springs, attendees straggled into a giant ballroom to listen to an Air Force official and a National Geospatial-Intelligence Agency (NGA) executive discuss, as the panel title put it, “Enterprise Disruption.” The presentation stayed as vague as the title until a direct question from the audience seemed to make the panelists squirm.

Just how good, the person wondered, had the military and intelligence communities’ algorithms gotten at interpreting data and taking action based on that analysis? They pointed out that the commercial satellite industry has software that can tally shipping containers on cargo ships and cars in parking lots soon after their pictures are snapped in space. “When will the Department of Defense have real-time, automated, global order of battle?” they asked.

“That’s a great question,” said Chirag Parikh, director of the NGA’s Office of Sciences and Methodologies. “And there’s a lot of really good classified answers.”

He paused and shifted in his seat. “What’s the next question?” he asked, smiling. But he continued talking, describing how “geospatial intelligence” no longer simply means pictures from satellites. It means anything with a timestamp and a location stamp, and the attempt to integrate all that sundry data.

Then, Parikh actually answered this question: When would that translate to near-instantaneous understanding and strategy development?

“If not now,” he said, “very soon.”

Parikh didn’t mention any particular programs that might help enable this kind of autonomous, real-time interpretation. But an initiative called Sentient has relevant capabilities. A product of the National Reconnaissance Office (NRO), Sentient is (or at least aims to be) an omnivorous analysis tool, capable of devouring data of all sorts, making sense of the past and present, anticipating the future, and pointing satellites toward what it determines will be the most interesting parts of that future. That, ideally, makes things simpler downstream for human analysts at other organizations, like the NGA, with which the satellite-centric NRO partners.

Until now, Sentient has been treated as a government secret, except for vague allusions in a few speeches and presentations. But recently released documents — many formerly classified secret or top secret — reveal new details about the program’s goals, progress, and reach.

Research related to Sentient has been going on since at least October 2010, when the agency posted a request for Sentient Enterprise white papers. A presentation says the program achieved its first R&D milestone in 2013, but details about what that milestone actually was remain redacted. (Deputy director of NRO’s Office of Public Affairs Karen Furgerson declined to comment on this timing in an email to The Verge.) A 2016 House Armed Services Committee hearing on national security space included a quick summary of this data-driven brain, but public meetings haven’t mentioned it since. In 2018, a presentation posted online claimed Sentient would go live that year, although Furgerson told The Verge it was currently under development.

“The NRO has not said much about Sentient publicly because it is a classified program,” says Furgerson in an email, “and NRO rarely appears before Congress in open hearings.”

The agency has been developing this artificial brain for years, but details available to the public remain scarce. “It ingests high volumes of data and processes it,” says Furgerson. “Sentient catalogs normal patterns, detects anomalies, and helps forecast and model adversaries’ potential courses of action.” The NRO did not provide examples of patterns or anomalies, but one could imagine that things like “not moving a missile” versus “moving a missile” might be on the list. Those forecasts in hand, Sentient could turn satellites’ sensors to the right place at the right time to catch ill will (or whatever else it wants to see) in action. “Sentient is a thinking system,” says Furgerson.

It’s not all dystopian: the documents released by the NRO also imply that Sentient can make satellites more efficient and productive. It could also free up humans to focus on deep analysis rather than tedious needle-finding. But it could also contain unquestioned biases, come to dubious conclusions, and raise civil liberties concerns. Because of its secretive nature, we don’t know much about those potential problems.

“The NRO’s and the Intelligence Community’s standard practice is to NOT disclose sensitive sources and methods, as such disclosure introduces high risk of adversary nations’ countering them,” says Furgerson. “Such loss harms our nation and its allies; it decreases U.S. information advantage and national security. For those reasons, details about Sentient remain classified and what we can say about it is limited.”

Read full story here…