Location Technology Is Key To Smart City Surveillance

This story’s sponsor, HERE, asks “Can data make cities more human?” Yet it’s all about them, not citizens: “It’s really a utopia or oblivion moment – it depends on us architects where we want to go.” In reality, citizens don’t care where Technocrats want to go. ⁃ TN Editor

Around the world a quiet revolution is transforming the way cities deliver services to their residents.

Although cities have long used isolated Internet of Things (IoT) technologies like smart streetlights or meters, the information they collect has typically been siloed within departments, which has created inefficiencies and made services tough to coordinate.

But today’s technology is changing the picture dramatically. Cities are now using location data and services as building blocks for applications that share information internally and interact with residents, nonprofits, and business partners. A dynamic new ecosystem has sprung up, improving everything from emergency response times to budgeting, traffic management, public health, and the environment.

“Location technology is bringing cities a digital canvas of reality, helping them to make better sense of operations, identify gaps in services, and create new solutions,” says Edzard Overbeek, CEO of HERE Technologies, a leader in mapping and location technology.

“In the past, urban design was top down — architects, engineers, and planners implemented their solutions,” he says. “In the 21st century, we need a new approach. A city should evolve in a natural way, by a system of trial and error, letting citizens decide which projects they want.”

Here are some of the ways location technology is transforming city services.

1. Emergency response.

In the past, emergency operators determined callers’ location by looking up the address where the phone was registered, then relaying the information to responders. Addresses were often out of date or irrelevant to the incident location. Location and sensor data have changed everything.

Now cities get GPS information from cell phones. Many have city vehicle tracking, cameras on streetlights and utility poles, and microphones that detect the location and intensity of gunshots.

Some first responders use indoor venue maps from HERE that guide them on the fastest route to someone in need and the locations of fire extinguishers, defibrillators, and medical kits. Police officers wear holster sensors that tell the department when they have drawn a gun, which can speed backup response.

Cities are also using IoT sensors to coordinate services after hurricanes or floods. Some use machine learning to predict when and where the next disaster might occur.

In the future, connected cars may automatically generate accident reports to responders when they collide. Ambulances may control traffic lights to get to the scene faster, and responders may send out robots to defuse bombs or gather more information.

2. Utilities.

With smart meters and geolocation, cities can “see” and analyze energy use and water consumption in real time and make better decisions about managing resources. Sensors can detect a water leak and send a technician to fix it before the customer is hit with a sky-high bill.
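To make the leak-detection example concrete, here is a minimal sketch of the kind of rule a utility might run against hourly smart-meter readings, on the common assumption that household water use should fall to near zero overnight. The threshold, hours, and data layout are illustrative assumptions, not any city’s or vendor’s actual logic.

```python
# Minimal sketch of rule-based leak detection on hourly smart-meter readings.
# Assumption (illustrative, not a vendor's real logic): normal households use
# almost no water between 2am and 4am, so sustained overnight flow suggests a leak.
from datetime import datetime

OVERNIGHT_HOURS = {2, 3, 4}      # hours when usage should be near zero
LEAK_THRESHOLD_LITERS = 5.0      # per-hour flow that counts as "water still running"

def flag_possible_leak(readings: list[tuple[datetime, float]]) -> bool:
    """readings: (timestamp, liters used in that hour) for one household meter."""
    overnight = [liters for ts, liters in readings if ts.hour in OVERNIGHT_HOURS]
    # Flag only if water flowed through *every* overnight hour, i.e. it never stopped.
    return bool(overnight) and all(liters > LEAK_THRESHOLD_LITERS for liters in overnight)
```

A meter that never drops below a few liters per hour overnight would be flagged, and a technician could be dispatched before the customer’s bill spikes.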

In developing countries, clean water is especially precious, and leaky pipes are the largest source of water waste. With sensors and analytics, cities can cut those losses by up to 25%, saving up to 80 liters of water per person each day, a McKinsey report found. It’s just one way technology can improve the lives of the underserved.

“In the future, social justice and equity will be a central focus of urban planning,” says Julian Agyeman, a professor of urban and environmental policy and planning at Tufts University.

“The thing that really excites me now is seeing synergies. And the best example I know is the transformation of Medellin, Colombia, where the public utility and private companies have worked together with a philosophy of empowering people, especially in lower-income neighborhoods.”

Medellin’s mobile data portal lets citizens view and communicate information about utilities, traffic, public transit, and more, bringing in data and feedback from socioeconomic groups often ignored.

3. Environment.

Cities are using location data in a wide range of applications to improve the environment. Some are placing sensors on trash cans to make garbage collection more efficient, while Cambridge, Massachusetts, is collaborating with the MIT Senseable City Lab to do much more.

Sensors mounted on the city’s garbage trucks collect and transmit information about potholes, gas leaks, and air quality along their routes. “With only three garbage trucks you can cover the whole city at least once a week,” says Carlo Ratti, director of the MIT Senseable City Lab. “It allows city officials to more accurately detect abnormalities in the environment and be more responsive.”

In Baltimore, where asthma rates are among the highest in the US, 250 pollution sensors measure temperature, relative humidity, ozone, and nitrogen dioxide throughout the city. They send real-time information to city officials, who can then address air quality on a hyper-local level.

MIT’s Open Agriculture Initiative studies how to increase food production in urban areas and make its transport more efficient, lowering carbon emissions.

Location data can also be used to spot and prevent environmental degradation on a wide scale. In Colombia, the InfoAmazonia platform uses information from satellites and crowdsourcing to track construction projects that threaten the Amazon’s sensitive ecosystem. It could help the country meet its goal of reducing forest clearing to net zero by 2030.

4. Public health.

Electronic health records and apps may be common in advanced nations, but poorer countries lack these technologies, making it hard to create accurate epidemiological profiles and suitable facility development plans.

That’s where the IoT and location data come in. In Cartagena, Colombia, where many people live far from healthcare providers, authorities are using remote patient monitoring to keep people in touch with doctors and capture more knowledge about local populations, which could lead to better disease prevention and proactive care. Developing cities that use location-based infectious disease surveillance systems can reduce premature deaths and disabilities by 5%, according to the McKinsey report.

5. Civic engagement.

Cities are adopting IT platforms allowing residents to get information and engage with officials without having to attend evening meetings.

Dublin’s CiviQ platform tracks opinions on public issues and planning proposals. A location-based commenting system gives officials and residents alike a sense of how political dynamics operate in different parts of the city.

Analyze Boston, the city’s open-data hub, posts information about city services ranging from how long it takes to fulfill service requests to how many people use city libraries. Residents can also use an app to send information about the location of potholes or other problems directly to the city’s road-repair department.

Cary, North Carolina, placed sensors in its community-center parking lot to tell officials how spaces are being used, which helps them plan smarter parking.

6. Participatory budgeting.

Participatory budgeting allows citizens to decide how certain segments of municipal money are spent. The concept originated in Brazil and has spread to cities across the US and in Canada. Participants work directly with elected officials and city administrators in deciding how to invest resources in their community.

A group in New York voted to spend $30 million on air-conditioning for school classrooms. Oakland, California, residents voted for block grants for homeless services, legal advice for tenants, support for non-native speakers, and youth-apprenticeship programs.

By using technology to bring citizens into the heart of their operations, cities are ditching their reputation as distant and inefficient bureaucracies and becoming responsive engines of change. For many people, including Agyeman, it can’t happen soon enough.

“The city is not produced — it is coproduced,” he says. “The sooner we realize that and enact the right policies, the better off we’ll be.”

Innovation in location technology and services is rapidly creating a new reality for companies and governments around the world. As the world’s leading location platform, HERE Technologies can help you unlock new opportunities to transform your business.




Cities Are Adopting Real-Time Facial Surveillance Systems

Because there is no federal legislation preventing its use, cities are gobbling up China-style facial recognition systems. Individual cities can easily block this surveillance technology, but citizens are sound asleep and completely oblivious to the destruction of their own civil liberties. ⁃ TN Editor

Civil liberties activists trying to inspire alarm about the authoritarian potential of facial recognition technology often point to China, where some police departments use systems that can spot suspects who show their faces in public. A report from Georgetown researchers on Thursday suggests Americans should also focus their concern closer to home.

The report says agencies in Chicago and Detroit have bought real-time facial recognition systems. Chicago claims it has not used its system; Detroit says it is not using its system currently. But no federal or state law would prevent use of the technology.

According to contracts obtained by the Georgetown researchers, the two cities purchased software from a South Carolina company, DataWorks Plus, that equips police with the ability to identify faces from surveillance footage in real time. A description on the company’s website says the technology, called FaceWatch Plus, “provides continuous screening and monitoring of live video streams.” DataWorks confirmed the existence of the systems, but did not elaborate further.

Facial recognition has long been used on static images to identify arrested suspects and detect driver’s license fraud, among other things. But using the technology with real-time video is less common. It has become practical only through recent advances in AI and computer vision, although it remains significantly less accurate than facial recognition under controlled circumstances.

Privacy advocates say ongoing use of the technology in this way would redefine the traditional anonymity of public spaces. “Historically we haven’t had to regulate privacy in public because it’s been too expensive for any entity to track our whereabouts,” says Evan Selinger, a professor at the Rochester Institute of Technology. “This is a game changer.”

According to the report, Detroit first purchased a facial recognition system capable of real-time analysis in July 2017 as part of a three-year contract related to an unusual community policing program called Project Greenlight. To deter late-night crime, gas stations and other businesses hooked up cameras that fed live surveillance footage to police department analysts. The program expanded over the years to stream footage to police from more than 500 locations, including churches and reproductive health clinics.

Documents unearthed by Georgetown show that real-time facial recognition was supposed to help automate elements of Project Greenlight. In a letter to the Georgetown researchers provided by the department to WIRED, police chief James Craig said officers were not using the technology’s real-time capabilities, limiting the use of facial recognition thus far to still images of suspects. The department did not say whether it used real-time facial recognition in the past.

Chicago’s adoption of FaceWatch Plus goes back to at least 2016, the report says. According to a description of the program—found in DataWorks Plus’ pitch to Detroit—the “project objective” involved tapping into Chicago’s 20,000 street and transit cameras. Chicago police told the researchers the system was never turned on. (The department did not respond to additional questions from WIRED.) Illinois is one of only three states with biometric privacy laws that require consent from people before companies collect biometric markers, like fingerprints and face data. But public agencies are exempted.

Georgetown’s findings show how the lack of federal rules on facial recognition may create a patchwork of surveillance regimes inside the US. San Francisco supervisors voted to ban city use of facial recognition on Tuesday. In Chicago and Detroit, citizens in public are watched by cameras that could be connected to software checking every face passing by. Police in Orlando and New York City are testing similar technology in pilot projects.

Read full story here…




UK Pedestrian Fined $115 For Avoiding Facial Recognition Camera

Britain has privacy laws similar to those in the U.S., but that didn’t restrain police from stopping and fining a resident who tried to cover his face to avoid being photographed by an AI camera on a public street. Every pedestrian was being photographed and compared against a master database of wanted persons. ⁃ TN Editor

Police fined a pedestrian £90 for disorderly behaviour after he tried to cover his face when he saw a controversial facial recognition camera on a street in London.

Officers set up the camera on a van in Romford, East London, which then cross-checked photos of faces of passers-by against a database of wanted criminals.

But one man was unimpressed about being filmed and covered his face with his hat and jacket, before being stopped by officers who took his picture anyway.

After being pulled aside, the man told police: ‘If I want to cover me face, I’ll cover me face. Don’t push me over when I’m walking down the street.’

It comes just weeks after it was claimed the new technology incorrectly identified members of the public in 96 per cent of matches made between 2016 and 2018.

The cameras have been rolled out in a trial in parts of Britain, with the Met making its first arrest last December when shoppers in London’s West End were scanned.

But their use has sparked a privacy debate, with civil liberties group Big Brother Watch branding the move a ‘breach of fundamental rights to privacy and freedom of assembly’. Police argue they are necessary to crack down on spiralling crime.

Officers previously insisted people could decline to be scanned, before later clarifying that anyone trying to avoid scanners may be stopped and searched.

The technology was first deployed by South Wales Police ahead of the Champions League final in Cardiff in 2017, but wrongly matched more than 2,000 people to possible criminals.

Police and security services worldwide are keen to use facial recognition technology to bolster their efforts to fight crime and identify suspects.

But they have been hampered by the unreliability of the software, with some trials failing to correctly identify a single person.

The technology made incorrect matches in every case during two deployments at Westfield shopping centre in Stratford last year, according to Big Brother Watch. It was also reportedly 96 per cent inaccurate in eight uses by the Met from 2016 to 2018.

In Romford, the man was fined £90 at the scene by officers, who also arrested three other people during the day thanks to the technology, according to BBC Click.

After being stopped he asked an officer: ‘How would you like it if you walked down the street and someone grabbed your shoulder? You wouldn’t like it, would you?’

The officer told him: ‘Calm yourself down or you’re going in handcuffs. It’s up to you. Wind your neck in.’ But the man replied: ‘You wind your neck in.’

After being fined, the man told a reporter: ‘The chap told me down the road – he said they’ve got facial recognition. So I walked past like that (covering my face).

‘It’s a cold day as well. As soon as I’ve done that, the police officer’s asked me to come to him. So I’ve got me back up. I said to him ‘f*** off’, basically.

‘I said ‘I don’t want me face shown on anything. If I want to cover me face, I’ll cover me face, it’s not for them to tell me not to cover me face.

‘I’ve got a now £90 fine, here you go, look at that. Thanks lads, £90. Well done.’

Silkie Carlo, the director of civil liberties group Big Brother Watch, was at the scene holding a placard saying ‘stop facial recognition’ – before she asked an officer about the man they had taken aside: ‘What’s your suspicion?’

The officer replied: ‘The fact that he’s walked past clearly masking his face from recognition and covered his face. It gives us grounds to stop him and verify.’

Ivan Balhatchet, the Metropolitan Police’s covert and intelligence lead, said: ‘We ought to explore all technology to see how it can make people safer, how it can make policing more effective.’

Read full story here…




China Claims Its Social Credit System Has ‘Restored Morality’

All of China’s 1.4 billion citizens are enrolled in its facial recognition and Social Credit System, and 13 million of them have already been blacklisted; China now brags that it has ‘restored morality’. ⁃ TN Editor

China’s state-run newspaper Global Times revealed in a column defending the nation’s authoritarian “social credit system” Monday that the communist regime had blacklisted 13.49 million Chinese citizens for being “untrustworthy.”

The article did not specify what these individuals did to find themselves on the list, though the regime has revealed the system assigns a numerical score to every Chinese citizen based on how much the Communist Party approves of his or her behavior. Anything from jaywalking and walking a dog without a leash to criticizing the government on the internet to more serious, violent, and corrupt crimes can hurt a person’s score. The consequences of a low credit score vary, but most commonly appear to be travel restrictions at the moment.

China is set to complete the implementation of the system in the country in 2020. As the date approaches, the government’s propaganda arms have escalated promotion of the system as necessary to life in a civilized society. Last week, the Chinese Communist Youth League released a music video titled “Live Up to Your Word” featuring well-known Chinese actors and musicians who cater to a teenage audience. The song in the video urged listeners to “be a trustworthy youth” and “give thumbs up to integrity” by abiding by the rules of the Communist Party. While it did not explicitly say the words “social credit system,” observers considered it a way to promote the behavior rewarded with social credit points.

Monday’s Global Times piece claimed the system will “restore morality” by holding bad citizens accountable, with “bad” solely defined in the parameters set by Communist Party totalitarian chief Xi Jinping. The central government in Beijing is also establishing a points-based metric for monitoring the performance of local governments, making it easier to keep local officials in line with Xi’s agenda.

“As of March, 13.49 million individuals have been classified as untrustworthy and rejected access to 20.47 million plane tickets and 5.71 million high-speed train tickets for being dishonest,” the Global Times reported, citing the government’s National Development and Reform Commission (NDRC). Among the new examples the newspaper highlights as dishonest behavior are failing to pay municipal parking fees, “eating on the train,” and changing jobs with “malicious intent.”

China had previously revealed that, as of March, the system blocked an unspecified number of travelers from buying over 23 million airplane, train, and bus tickets nationwide. That report did not say how many people the travel bans affected, as the same person could presumably attempt to buy more than one ticket or tickets for multiple means of transportation. The system blocked over three times the number of plane tickets as train tickets, suggesting the government is suppressing international travel far more than use of domestic vehicles. At the time of the release of the initial numbers in March, estimates found China had tripled the number of people on its no-fly list, which predates the social credit system.

The Chinese also reportedly found that some of the populations with the highest number of system violations lived in wealthy areas, suggesting Xi is targeting influential businesspeople with the system to keep them under his command.

In addition to limited access to travel, another punishment the Chinese government rolled out in March was the use of an embarrassing ringtone to alert individuals of a low-credit person in their midst. The ringtone would tell those around a person with low credit to be “careful in their business dealings” with them.

In the system, all public behavior, the Global Times explained Monday, will be divided into “administrative affairs, commercial activities, social behavior, and the judicial system” once the system is complete. No action will be too small to impact the score.

“China’s ongoing construction of the world’s largest social credit system will help the country restore social trust,” the article argued.

Read full story here…




US Police Capture 117 Million In Facial Recognition Systems

A massive nationwide study in 2016 revealed that thirty-six percent of Americans are in a facial recognition database, and the number is growing rapidly. Law enforcement use of the technology is mostly unregulated, and agencies are free to drift toward a police-state reality. ⁃ TN Editor

There is a knock on your door. It’s the police. There was a robbery in your neighborhood. They have a suspect in custody and an eyewitness. But they need your help: Will you come down to the station to stand in the line-up?

Most people would probably answer “no.” This summer, the Government Accountability Office revealed that close to 64 million Americans do not have a say in the matter: 16 states let the FBI use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm.

But the FBI is only part of the story. Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems. And we don’t know how any of these systems—local, state, or federal—affect racial and ethnic minorities.

This report closes these gaps. The result of a year-long investigation and over 100 records requests to police departments around the country, it is the most comprehensive survey to date of law enforcement face recognition and the risks that it poses to privacy, civil liberties, and civil rights. Combining FBI data with new information we obtained about state and local systems, we find that law enforcement face recognition affects over 117 million American adults. It is also unregulated. A few agencies have instituted meaningful protections to prevent the misuse of the technology. In many more cases, it is out of control.

The benefits of face recognition are real. It has been used to catch violent criminals and fugitives. The law enforcement officers who use the technology are men and women of good faith. They do not want to invade our privacy or create a police state. They are simply using every tool available to protect the people that they are sworn to serve. Police use of face recognition is inevitable. This report does not aim to stop it.

Rather, this report offers a framework to reason through the very real risks that face recognition creates. It urges Congress and state legislatures to address these risks through commonsense regulation comparable to the Wiretap Act. These reforms must be accompanied by key actions by law enforcement, the National Institute of Standards and Technology (NIST), face recognition companies, and community leaders.

Key Findings

Our general findings are set forth below. Specific findings for 25 local and state law enforcement agencies can be found in our Face Recognition Scorecard, which evaluates these agencies’ impact on privacy, civil liberties, civil rights, transparency and accountability. The records underlying all of our conclusions are available online.

Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos. Roughly one in two American adults has their photos searched this way.

A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible. While some agencies, like the San Diego Association of Governments, limit themselves to more targeted use of the technology, others are embracing high and very high risk deployments.

Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic.

Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras, bought technology that can do so, or expressed a written interest in buying it. Nearly all major face recognition companies offer real-time software.

No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences. The Maricopa County Sheriff’s Office enrolled all of Honduras’ driver’s licenses and mug shots into its database. The Pinellas County Sheriff’s Office system runs 8,000 monthly searches on the faces of seven million Florida drivers—without requiring that officers have even a reasonable suspicion before running a search. The county public defender reports that the Sheriff’s Office has never disclosed the use of face recognition in Brady evidence.

There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech.

Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing. One major face recognition company, FaceFirst, publicly advertises a 95% accuracy rate but disclaims liability for failing to meet that threshold in contracts with the San Diego Association of Governments. Unfortunately, independent accuracy tests are voluntary and infrequent.

Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time. We found only eight face recognition systems where specialized personnel reviewed and narrowed down potential matches. The training regime for examiners remains a work in progress.

Police face recognition will disproportionately affect African Americans. Many police departments do not realize that. In a Frequently Asked Questions document, the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either.

Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy.

Maryland’s system, which includes the license photos of over two million residents, was launched in 2011. It has never been audited. The Pinellas County Sheriff’s Office system is almost 15 years old and may be the most frequently used system in the country. When asked if his office audits searches for misuse, Sheriff Bob Gualtieri replied, “No, not really.” Despite assurances to Congress, the FBI has not audited use of its face recognition system, either. Only nine of 52 agencies (17%) indicated that they log and audit their officers’ face recognition searches for improper use. Of those, only one agency, the Michigan State Police, provided documentation showing that their audit regime was actually functional.

Read full story here…




Experts: Latin America Using Chinese Tech To ‘Exert Social Control’

China expands its Technocracy as it simultaneously undermines free governments. Aggressive Chinese suppliers with their sophisticated social control tech products have now penetrated Argentina, Bolivia, Ecuador, Panama and Venezuela. ⁃ TN Editor

The ongoing proliferation of Chinese surveillance and information technologies in Latin America can be used to exert social control, erode democratic governance, and challenge U.S. and regional strategic interests, expert witnesses told a House panel on Thursday.

Margaret Myers, the director of the Asia and Latin America program at the Inter-American Dialogue, told the House Foreign Affairs Subcommittee on the Western Hemisphere via written testimony that the governments of Argentina, Bolivia, Ecuador, Panama, and Venezuela have all implemented “Chinese-made intelligent monitoring” technologies.

Myers described the move as “exceedingly troubling,” adding:

These systems are described by Chinese suppliers as promoting citizen safety and security, but if used to exert social control (as they are in China or currently in Venezuela through the ZTE-backed “fatherland card”), they can have critical implications for privacy and democratic governance.

Most notably in Venezuela, Reuters recently reported that China’s ZTE technology had enabled socialist dictator Nicolás Maduro to use the so-called “fatherland card” to collect personal data and track the behavior of citizens.

The socialist policies of Chinese-backed Maduro and his predecessor plunged Venezuela into a humanitarian, security, and political abyss. The United States and about 50 other countries have come out in support of interim President Juan Guaido.

In his written testimony, Christopher Walker, the vice president for studies and analysis at the National Endowment for Democracy (NED), noted that despite the risks, the use of Chinese technology is likely to continue growing in Latin America.

He testified:

For many countries in Latin America, as in other developing economies around the world, the opportunity to import advanced technologies can be highly attractive. We can anticipate that governments across the region will continue to pursue such opportunities and welcome investments from China in this sphere. However, the wider societies of countries throughout the region must approach such technology-related deals with open eyes and with the information necessary to make fully informed decisions.

Brian Fonseca, the director of the Jack D. Gordon Institute for Public Policy at Florida International University, noted in his prepared remarks that the proliferation of Chinese surveillance and IT technologies is challenging the interests of the United States and of the Western Hemisphere as a whole.

Read full story here…




The Guardian: Is China-Style Surveillance Coming To The West?

More and more mainstream journalists are writing about China’s main dystopian export: all-seeing surveillance. While they may not perceive it as Technocracy per se, they are connecting the dots as they watch multiple nations follow in China’s footsteps. ⁃ TN Editor

In 2005 I was chased, by car, from Shanghai to Hangzhou by Chinese secret police. My crime? Setting up meetings with Chinese writers.

I was there working on a report for PEN International on the organizations that cater to literary writers. What issues did writers care about? What activities did they engage in?

The car tailing us bobbed in and out of traffic to keep up, and later slowed when it looked like it would overtake us. It was a frightening experience although my companion from PEN and I were not arrested, and we suffered no consequences from the surveillance and pursuit.

On the other hand, the Chinese writers we were to meet with the night before in a Shanghai restaurant had been detained and questioned. One was taken to tea, the other to dinner at KFC. Anything to prevent them meeting with us.

We could only hope that our efforts to learn more about these writers and support them in their work would not bring them any real harm. And the experience left me with an enduring admiration for their courage to even agree to meet with us in the first place.

But that was 15 years ago.

If we were to return to China to do a similar report today, who knows if we would even know we were being watched?

In a very short time, China’s surveillance capability has become immensely sophisticated and now extends beyond keeping tabs on political dissidents to developing a system for monitoring the behavior of the entire population.

You could, in fact, argue that the technologies that once promised to be a liberating force are now just as easily deployed to stifle dissent, entrench authoritarianism and shame and prosecute those the Orwellian government of President Xi Jinping deems out of line.

Since the massacre that ended the Tiananmen Square pro-democracy protests in 1989, digital technology has given the Chinese government new, more stealthy modes of silencing, oppressing and disappearing dissidents, and stifling historical discourse.

This includes censoring online even mentions of 4 June, and an ever-changing catalogue of words and phrases that, depending on circumstances, are deemed threatening, including “feminism”, “1984”, “I disagree” and certainly anything that might draw attention to Uighur or Tibetan rights, or the independence of Taiwan.

Twitter – and many social media platforms people use freely elsewhere – is banned in China, and many people who have found ways to work around its censorship have been detained as recently as this year.

According to Amnesty International, China “has the largest number of imprisoned journalists and cyber-dissidents in the world” which is, of course, related to it having “the world’s most sophisticated system for controlling and surveilling the web”, as CNN has reported.

While we once hoped the internet would deliver us freedom of expression, the ability to communicate freely across borders and even be a channel for dissenting views, we now see the very opposite is occurring.

Worse, the Chinese model is now being exported. Wired magazine has reported that China is “exporting its techno-dystopian model to other countries … Since January 2017, Freedom House counted 38 countries where Chinese firms have built internet infrastructure, and 18 countries using AI surveillance developed by the Chinese.”

Read full story here…




The Future Of Surveillance Is About Behaviors, Not Faces

With a lack of regulation and legislation, ubiquitous surveillance has moved well beyond simple biometric identification and now focuses on behaviors, including pre-crime analysis. Facial expressions, eye movements, gait, respiration, etc., are fed to AI algorithms to sense mood, personality and emotions. ⁃ TN Editor

In 1787, English philosopher Jeremy Bentham came up with an idea for a prison that would cost a fraction of what other contemporary jails cost to run, with virtually no internal crime. His theoretical prison, the panopticon, was curved, the cells facing inward toward a center point where a guard tower would stand. The windows in the guard tower were to be darkened on one side. This way, a single guard would be able to observe the behavior of all the prisoners. But more importantly, the prisoners would never know whether the guard had his or her gaze trained on them. The end result: every individual within the prison internalizes a sense of being watched all the time and behaves accordingly.

This idea of the panopticon has become a stand-in for the threat of ubiquitous surveillance, due mostly to Bentham’s choice of setting — a prison. But Bentham aimed not to frighten people, but to furnish a way to manage a scarce resource: the attention of law enforcement.

A new trend in video surveillance technology is turning Bentham’s panopticon into reality, but not in the way he imagined. Instead of a prison, the new panopticon would focus the attention of law enforcement on a person when her behavior becomes relevant to the guard tower. Imagine it were possible to recognize not the faces of people who had already committed crimes, but the behaviors indicating a crime that was about to occur.

Multiple vendors and startups attending ISC West, a recent security technology conference in Las Vegas, sought to serve a growing market for surveillance equipment and software that can find concealed guns, read license plates and other indicators of identity, and even decode human behavior.

A company called ZeroEyes out of Philadelphia markets a system to police departments that can detect when a person is entering a given facility carrying a gun. It integrates with any number of closed-circuit surveillance systems. But machine learning algorithms don’t just come out of a box knowing how to recognize a firearm any more than a drug dog arrives from the breeder knowing the difference between marijuana and oregano. To teach the algorithm, a team from the company shows up on location and proceeds to stage mock attacks. Slowly, the algorithm begins to learn what a gun looks like in that specific setting, depending on light, angles, and other conditions. They’re currently working with New York City Schools and have a contract with U.S. Customs and Border Protection, but are not yet deployed to the border, said Kenny Gregory, a software engineer at the company.
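As a rough illustration of the pattern described above (not ZeroEyes’ actual pipeline), a deployment like this typically runs a detector, fine-tuned on site-specific footage, over each frame of a CCTV stream and raises an alert when a firearm is detected with enough confidence. The weights file, class name, threshold, and camera URL below are hypothetical.

```python
# Illustrative sketch only: frame-by-frame firearm detection on a CCTV stream.
# The fine-tuned weights file, class label, threshold, and feed URL are hypothetical.
import cv2
from ultralytics import YOLO  # open-source object-detection package

model = YOLO("gun_detector.pt")   # hypothetical model fine-tuned on staged, on-site footage
CONFIDENCE_THRESHOLD = 0.80       # tuned per site: lighting, angles, camera placement

def send_alert(confidence: float) -> None:
    # Placeholder for notifying on-site security or police dispatch.
    print(f"ALERT: possible firearm detected (confidence {confidence:.2f})")

def monitor(stream_url: str) -> None:
    cap = cv2.VideoCapture(stream_url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = model(frame, verbose=False)[0]   # detections for this frame
        for box in result.boxes:
            if result.names[int(box.cls)] == "firearm" and float(box.conf) >= CONFIDENCE_THRESHOLD:
                send_alert(float(box.conf))

if __name__ == "__main__":
    monitor("rtsp://cameras.example/lobby")       # placeholder camera feed
```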

Automated firearm detection is one solution to a growing problem that has no clear policy cure: curbing mass shootings and gun violence. While some polls show that 70 percent of Americans support stricter gun laws, that number is far lower, about 31 percent, among conservatives. And so the political debate, while fiery, has stalled. Gun-detection algorithms that alert security personnel when an armed person arrives might reduce the number of victims — though likely not as much as if there were no armed shooter in the first place.

It’s a predictive indicator of potential violence, rather than a lagging indicator, such as facial recognition, and carries less political baggage. More and more, cities and police departments are experimenting with facial recognition to detect the presence of suspects in real time. They’re meeting with stiff resistance from privacy advocates in San Francisco, where some lawmakers are looking to block deployment, and elsewhere.

Read full story here…



Alexa Has Been Eavesdropping On You The Whole Time

Surveillance is always justified by some other benefit, like “training the AI program,” but once your data is recorded, stored, transcribed and analyzed, it can and will eventually be used against you. ⁃ TN Editor

Would you let a stranger eavesdrop in your home and keep the recordings? For most people, the answer is, “Are you crazy?”

Yet that’s essentially what Amazon has been doing to millions of us with its assistant Alexa in microphone-equipped Echo speakers. And it’s hardly alone: Bugging our homes is Silicon Valley’s next frontier.

Many smart-speaker owners don’t realize it, but Amazon keeps a copy of everything Alexa records after it hears its name. Apple’s Siri, and until recently Google’s Assistant, by default also keep recordings to help train their artificial intelligences.

So come with me on an unwelcome walk down memory lane. I listened to four years of my Alexa archive and found thousands of fragments of my life: spaghetti-timer requests, joking houseguests and random snippets of “Downton Abbey.” There were even sensitive conversations that somehow triggered Alexa’s “wake word” to start recording, including my family discussing medication and a friend conducting a business deal.

For as much as we fret about snooping apps on our computers and phones, our homes are where the rubber really hits the road for privacy. It’s easy to rationalize away concerns by thinking a single smart speaker or appliance couldn’t know enough to matter. But across the increasingly connected home, there’s a brazen data grab going on, and there are few regulations, watchdogs or common-sense practices to keep it in check.

Let’s not repeat the mistakes of Facebook in our smart homes. Any personal data that’s collected can and will be used against us. An obvious place to begin: Alexa, stop recording us.

“Eavesdropping” is a sensitive word for Amazon, which has battled lots of consumer confusion about when, how and even who is listening to us when we use an Alexa device. But much of this problem is of its own making.

Alexa keeps a record of what it hears every time an Echo speaker activates. It’s supposed to only record with a “wake word” – “Alexa!” – but anyone with one of these devices knows they go rogue. I counted dozens of times when mine recorded without a legitimate prompt. (Amazon says it has improved the accuracy of “Alexa” as a wake word by 50 percent over the past year.)
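To see why those false activations happen, here is a minimal sketch of the generic wake-word pattern, not Amazon’s code: the device scores audio continuously on-device and only begins recording and uploading once a score clears a threshold, so any sound that happens to score just above that threshold triggers a recording. The scoring function and threshold are stand-ins.

```python
# Generic wake-word pattern (a stand-in, not Amazon's implementation).
# Audio is scored continuously on the device; recording and upload begin only
# when the score crosses a threshold -- which is exactly where false triggers creep in.
import random

WAKE_THRESHOLD = 0.85   # hypothetical confidence cutoff

def wake_word_score(audio_chunk: bytes) -> float:
    """Placeholder for an on-device keyword-spotting model's confidence score."""
    return random.random()   # stand-in; a real model scores acoustic similarity to the wake word

def record_and_upload(mic_stream) -> None:
    """Placeholder: capture the rest of the utterance and send it to the cloud archive."""
    print("recording started -- from here the audio leaves the device")

def listen(mic_stream) -> None:
    for chunk in mic_stream:                     # the device is always listening locally
        if wake_word_score(chunk) >= WAKE_THRESHOLD:
            record_and_upload(mic_stream)        # a near-miss sound can trigger this too
```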

What can you do to stop Alexa from recording? Amazon’s answer is straight out of the Facebook playbook: “Customers have control,” it says – but the product’s design clearly isn’t meeting our needs. You can manually delete past recordings if you know exactly where to look and remember to keep going back. You cannot actually stop Amazon from making these recordings, aside from muting the Echo’s microphone (defeating its main purpose) or unplugging the darn thing.

Amazon founder and chief executive Jeff Bezos owns The Washington Post, but I review all tech with the same critical eye.

Amazon says it keeps our recordings to improve products, not to sell them. (That’s also a Facebook line.) But anytime personal data sticks around, it’s at risk. Remember the family that had Alexa accidentally send a recording of a conversation to a random contact? We’ve also seen judges issue warrants for Alexa recordings.

Alexa’s voice archive made headlines most recently when Bloomberg discovered Amazon employees listen to recordings to train its artificial intelligence. Amazon acknowledged some of those employees also have access to location information for the devices that made the recordings.

Saving our voices is not just an Amazon phenomenon. Apple, which is much more privacy-minded in other aspects of the smart home, also keeps copies of conversations with Siri. Apple says voice data is assigned a “random identifier and is not linked to individuals” – but exactly how anonymous can a recording of your voice be? I don’t understand why Apple doesn’t give us the ability to say not to store our recordings.

Read full story here…




Darwin, Australia Adopts China’s Tech for Total Social Control

TN has warned that China intends to aggressively export its draconian social control system. It has found an unlikely buyer in Australia, a country that would seem as unlikely as the U.S. to follow China’s lead toward Technocracy. China is marketing the system in the U.S. as well. ⁃ TN Editor

Australia is preparing to debut its version of the Chinese regime’s high-tech system for monitoring and controlling its citizens. The launch, to take place in the northern city of Darwin, will include systems to monitor people’s activity via their cell phones.

The new system is based on monitoring programs in Shenzhen, China, where the Chinese Communist Party (CCP) is testing its Social Credit System. Officials on the Darwin council traveled to Shenzhen, according to NT News, to “have a chance to see exactly how their Smart Technology works prior to being fully rolled out.”

In Darwin, they’ve already constructed “poles, fitted with speakers, cameras and Wi-Fi,” according to NT News, to monitor people, their movements around the city, the websites they visit, and what apps they use. The monitoring will be done mainly by artificial intelligence, but will alert authorities based on set triggers.

Just as in China, the surveillance system is being branded as a “smart city” program, and while Australian officials claim its operations are benign, they have announced that it will monitor cell phone activity and enforce “virtual fences” that trigger alerts when people cross them.

“We’ll be getting sent an alarm saying, ‘There’s a person in this area that you’ve put a virtual fence around.’ … Boom, an alert goes out to whatever authority, whether it’s us or police to say ‘look at camera five,’” said Josh Sattler, the Darwin council’s general manager for innovation, growth, and development services, according to NT News.

The nature of the “virtual fences” and what type of activity will sound an alarm still isn’t being made clear.
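The council has not published how the alerting works, so as a rough illustration only: a “virtual fence” can be as simple as a distance check of a detected position against a defined zone, with a notification sent when the check trips. The coordinates, radius, and alert hook below are hypothetical.

```python
# Illustrative geofence check (not Darwin's actual system): flag when a tracked
# position falls inside a circular "virtual fence" and notify an authority.
# The fence center, radius, and alert destination are hypothetical.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000
FENCE_CENTER = (-12.4634, 130.8456)   # hypothetical point in central Darwin
FENCE_RADIUS_M = 150.0                # hypothetical fence radius in meters

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def check_position(lat: float, lon: float, camera_id: str) -> None:
    # Placeholder for the alert described above: "look at camera five".
    if haversine_m(lat, lon, *FENCE_CENTER) <= FENCE_RADIUS_M:
        print(f"ALERT: person inside virtual fence -- look at camera {camera_id}")
```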

The system is being promoted as mostly benign. Sattler said it will tell the government “where people are using Wi-Fi, what they’re using Wi-Fi for, are they watching YouTube, etc. All these bits of information we can share with businesses. … We can let businesses know, ‘Hey, 80 percent of people actually use Instagram within this area of the city, between these hours.’”

The CCP’s smart city Social Credit System is able to monitor each person in the society, tracking every element of their lives—including their friends, online purchases, daily behavior, and other information—and assigns each person a citizen score that determines their level of freedom in society.

The tool is a core piece of the CCP’s programs to monitor and persecute dissidents, including religious believers and people who oppose the ruling communist system.

Chinese human rights lawyer Teng Biao, a visiting scholar at New York University, described the Social Credit System as a new form of tyranny, meant to reactivate the CCP’s totalitarian hold on society.

“In the past, there was the Nazi totalitarianism and Mao Zedong’s totalitarian system, but a totalitarian system powered by the internet and contemporary technology has not existed before,” Teng said in a recent interview with The Epoch Times.

“The CCP is now taking the first step to build such a high-tech totalitarian system, by using credit ratings and monitoring and recording every detail in people’s daily life, which is very frightening.”

The regime also isn’t interested in keeping the technology within its own borders. It’s exporting the system, and its “China model” of totalitarian government, as a service of its “One Belt, One Road” program. When the CCP builds its infrastructure abroad, its surveillance and social control programs are part of the package.

Read full story here…