Dystopia: Your Facial Record Follows You Everywhere

This is dystopia on steroids as cloud-based facial recognition builds databases of “offenders” and then shares them with other stores. If you get busted in one place, you might be barred from ever shopping there again. ⁃ TN Editor

At my bodega down the block, photos of shoplifters sometimes litter the windows, a warning to would-be thieves that they’re being watched.

Those unofficial wanted posters come and go, as incidents fade from the owner’s memory.

But with facial recognition, getting caught in one store could mean a digital record of your face is shared across the country. Stores are already using the technology for security purposes and can share that data — meaning that if one store considers you a threat, every business in that network could come to the same conclusion.

One mistake could mean never being able to shop again.

While that may be good news for shopkeepers, it raises concerns about potential overreach. It’s just one example of how facial recognition straddles the line between being a force for good and being a possible violation of personal privacy. Privacy advocates fear that regulations can’t keep up with the technology — found everywhere from your phone to selfie stations — leading to devastating consequences.

“Unless we really rein in this technology, there’s a risk that what we enjoy every day — the ability to walk around anonymous, without fearing that you’re being tracked and identified — could be a thing of the past,” said Neema Singh Guliani, the American Civil Liberties Union’s senior legislative counsel.

The technology is appearing in more places every day. Taylor Swift uses it at her concerts to spot potential stalkers, with cameras hidden in kiosks for selfies. It’s being used in schools in Sweden to mark attendance and at airports in Australia for passengers checking in. Supermarkets in the UK are using it to determine whether customers are old enough to buy beer. Millions of photos uploaded onto social media are being used to train facial recognition without people’s consent.

Revenue from facial recognition is expected to reach $10 billion by 2025, more than double the market’s total in 2018. But despite that forecast for rapid growth, there’s no nationwide regulation on the technology in the US. The gap in standards means that it’s possible the technology being used at US borders could have the same accuracy rate as facial recognition used to take selfies at a concert.

Accuracy rates matter — it’s the difference between facial recognition determining you’re a threat or an innocent bystander, but there’s no standard on how precise the technology needs to be.

Without any legal restrictions, companies can use facial recognition without limits. That means being able to log people’s faces without telling customers their data is being collected.

Two facial recognition providers told CNET that they don’t check on their customers to make sure they’re using the data properly. There are no laws requiring them to.

“So far, we haven’t been able to convince our legislators that this is a big problem and will be an even larger problem in the future,” said Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation. “The time is now to regulate this technology before it becomes embedded in our everyday lives.”

Faced everywhere

At the International Security Conference in New York last November, I walked past booths with hundreds of surveillance cameras. Many of them used facial recognition to log my gaze.

These companies want this technology to be part of our daily routines — in stores, in offices and in apartment buildings. One company, Kogniz, boasted it was capable of automatically enrolling people as they enter a camera’s view.

“Preemptively catalogues everyone ever seen by the camera so they can be placed on a watchlist,” Kogniz’s business card says.

This technology is available and advertised to stores as a benefit, with no privacy concerns in mind. As more stores adopt this dragnet approach to facial recognition, data on your appearance could be logged anywhere you go.

California-based video surveillance startup Kogniz launched in 2016 and now has about 30 retail and commercial customers, with thousands of security cameras using its facial recognition technology. Stores use Kogniz’s facial recognition to identify known shoplifters.

If a logged person tries entering the store, Kogniz’s facial recognition will be able to detect that and flag security, Daniel Putterman, the company’s co-founder and director, said in an interview.

And it’s not just for that one location.

“We are a cloud system, so we’re inherently multi-location,” Putterman said.

If someone is barred from one store because of facial recognition, that person could potentially be prevented from visiting another branch of that same store ever again.

Kogniz also offers a feature called “collaborative security,” which lets clients opt in to share facial recognition data with other customers and share potential threats across locations. That would mean facial recognition could detect you in a store you’ve never even visited before.
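
As a rough, hypothetical sketch of how such “collaborative security” could work (this is not Kogniz’s actual system, and all names and numbers below are invented), a cloud service might store each flagged face as a numeric embedding and compare every new visitor against the shared watchlist:

```python
import math

# Hypothetical shared watchlist: each entry maps an incident ID to a
# face embedding (a short numeric vector standing in for a real one).
SHARED_WATCHLIST = {
    "store_A_incident_17": [0.12, 0.80, 0.55, 0.05],
    "store_B_incident_03": [0.90, 0.10, 0.20, 0.35],
}

MATCH_THRESHOLD = 0.95  # similarity above this flags a "known" face


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def check_visitor(embedding):
    """Return the watchlist entry this visitor matches, or None."""
    for entry_id, stored in SHARED_WATCHLIST.items():
        if cosine_similarity(embedding, stored) >= MATCH_THRESHOLD:
            return entry_id
    return None
```

Because the watchlist lives in the cloud rather than in one store, a match recorded by one location would flag the same face at every other participating location.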

Read full story here…




Homeland Security To Scan Your Face At 20 Top Airports

TN has stated for years that intel agencies have gone rogue when forcing surveillance outside of legislative control or even advice and consent. They are building a comprehensive national biometric database of all citizens. ⁃ TN Editor

In March 2017, President Trump issued an executive order expediting the deployment of biometric verification of the identities of all travelers crossing US borders. That mandate stipulates facial recognition identification for “100 percent of all international passengers,” including American citizens, in the top 20 US airports by 2021. Now, the United States Department of Homeland Security is rushing to get those systems up and running at airports across the country. But it’s doing so in the absence of proper vetting and regulatory safeguards, and, some privacy advocates argue, in defiance of the law.

According to 346 pages of documents obtained by the nonprofit research organization Electronic Privacy Information Center — shared exclusively with BuzzFeed News and made public on Monday as part of Sunshine Week — US Customs and Border Protection is scrambling to implement this “biometric entry-exit system,” with the goal of using facial recognition technology on travelers aboard 16,300 flights per week — or more than 100 million passengers traveling on international flights out of the United States — in as little as two years, to meet Trump’s accelerated timeline for a biometric system that had initially been signed into law by the Obama administration. This, despite questionable biometric confirmation rates and few, if any, legal guardrails.

These same documents state — explicitly — that there were no limits on how partnering airlines can use this facial recognition data. CBP did not answer specific questions about whether there are any guidelines for how other technology companies involved in processing the data can potentially also use it. It was only during a data privacy meeting last December that CBP made a sharp turn and limited participating companies from using this data. But it is unclear to what extent it has enforced this new rule. CBP did not explain what its current policies around data sharing of biometric information with participating companies and third-party firms are, but it did say that the agency “retains photos … for up to 14 days” of non-US citizens departing the country, for “evaluation of the technology” and “assurance of the accuracy of the algorithms” — which implies such photos might be used for further training of its facial matching AI.

“CBP is solving a security challenge by adding a convenience for travelers,” a spokesperson said in an emailed response to a detailed list of questions from BuzzFeed News. “By partnering with airports and airlines to provide a secure stand-alone system that works quickly and reliably, which they will integrate into their boarding process, CBP does not have to rebuild everything from the ground up as we drive innovation across the travel experience.”

The documents also suggest that CBP skipped portions of a critical “rulemaking process,” which requires the agency to solicit public feedback before adopting technology intended to be broadly used on civilians, something privacy advocates back up. This is worrisome because — beyond its privacy, surveillance, and free speech implications — facial recognition technology is currently troubled by issues of inaccuracy and bias. Last summer, the American Civil Liberties Union reported that Amazon’s facial recognition technology falsely matched 28 members of Congress with arrest mugshots. These false matches were disproportionately people of color.

“I think it’s important to note what the use of facial recognition [in airports] means for American citizens,” Jeramie Scott, director of EPIC’s Domestic Surveillance Project, told BuzzFeed News in an interview. “It means the government, without consulting the public, a requirement by Congress, or consent from any individual, is using facial recognition to create a digital ID of millions of Americans.”

“CBP took images from the State Department that were submitted to obtain a passport and decided to use them to track travelers in and out of the country,” Scott said.

Read full story here…




Sanctuary No More: Surveillance Tech Is Invading Your Car

Millimeter-wave radar can ‘see’ things inside your car that a camera cannot detect, tracking even more of your driving habits and things like your respiration and heart rate. When cars communicate with other cars, how will this data be protected? ⁃ TN Editor

Your car has long been a sanctuary on wheels but that won’t be true for much longer. Car manufacturers will soon be adding radar and lasers inside the cabin to monitor who you are and what you do, and that will mean even more of your personal habits being tracked.

Millimeter-wave radar is perhaps the most intriguing in-car detection tech. The apparatus itself can be compact because its frequency is so high and therefore its emitted wavelength is short. I’ve seen several examples about the size of a card deck that could easily be mounted on the headliner of a car and look down at its cabin. Texas Instruments has been showing carmakers how that view can not only detect objects in the vehicle but also classify them as an adult, a child or even a dog.

Mount that same radar inside a seat and you can detect small movements in the driver’s body that, with enough computing power, can be translated into a reading of respiration rate.
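
As a loose illustration of the signal-processing idea (the vendors’ actual algorithms are not public, and the trace below is simulated), a respiration rate can be estimated from a chest-displacement signal by counting zero crossings of the mean-removed waveform:

```python
import math

def respiration_rate_bpm(displacement, sample_rate_hz):
    """Estimate breaths per minute from a displacement time series."""
    mean = sum(displacement) / len(displacement)
    centered = [d - mean for d in displacement]
    # Each breath cycle produces two zero crossings of the waveform.
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    duration_s = len(displacement) / sample_rate_hz
    return 60.0 * (crossings / 2) / duration_s

# Simulated 60-second trace: breathing at 0.25 Hz (15 breaths/minute),
# with a small phase offset so no sample lands exactly on a zero.
fs = 10  # samples per second
signal = [math.sin(2 * math.pi * 0.25 * (t / fs) + 0.3) for t in range(60 * fs)]
```

Running `respiration_rate_bpm(signal, fs)` on this synthetic trace recovers roughly 15 breaths per minute; a real system would of course work on noisy radar returns and need filtering first.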

Startup Caaresys imagines its radar-based system monitoring the respiration and heart rates of everyone in the car, with a particular focus on sensing a child that might be hidden from view in the back and potentially left behind in the car.

Unlike other smart sensing scenarios, like an Amazon Go store where cameras dominate, car interiors are cramped and filled with obstructions like seats. Radar has the benefit of being able to see through a lot of things that block a camera’s view. And while much of the current work is aimed at human-driven cars, the same radar technology could detect passenger orientation in future self-driving cars, which may not always be facing forward. Airbags and other passive safety systems could use smart radar sensing to configure themselves in a crash based on who is facing which direction.

Even biometrics that aren’t necessarily gathered by the car can be used to monitor you. BSecur said each of us has a unique ECG/EKG signature that can be used to unlock a car or monitor our condition while driving. That heart pattern could be tracked for the car’s processors by sensors in the steering wheel or via a smartwatch.

Not all car-cabin radar is about our bodies and faces: Vayyar is working with auto mechatronics company Brose to sense obstacles in the path of a car door, including nearby poles, walls, cyclists or other parked cars, and then block the door from opening too far via a limiter. That’s bad news for paintless dent-removal guys, but great news for cyclists who live in fear of being doored.

Radar is far from the only game in town though. Cameras are already used in a few cars to monitor a driver’s gaze and eye state for inattention and drowsiness. The Cadillac CT6 pushed this envelope with its Super Cruise partial autonomy that watches the driver’s face to determine if they are looking away from the road for too long. A huge number of similar applications are about to arrive from Mazda, Hyundai, Kia, BMW and autonomous EV startup Byton.

Read full story here…




Zuckerberg: Facebook Wants To Build A Mind-Reading Machine

Just wait until police get ahold of this technology and require you to ‘don the helmet’ during routine traffic stops. Oh wait, this is only Facebook, not the government. ⁃ TN Editor

For those of us who worry that Facebook may have serious boundary issues when it comes to the personal information of its users, Mark Zuckerberg’s recent comments at Harvard should get the heart racing.

Zuckerberg dropped by the university last month ostensibly as part of a year of conversations with experts about the role of technology in society, “the opportunities, the challenges, the hopes, and the anxieties.” His nearly two-hour interview with Harvard law school professor Jonathan Zittrain in front of Facebook cameras and a classroom of students centered on the company’s unprecedented position as a town square for perhaps 2 billion people. To hear the young CEO tell it, Facebook was taking shots from all sides—either it was indifferent to the ethnic hatred festering on its platforms or it was a heavy-handed censor deciding whether an idea was allowed to be expressed.

Zuckerberg confessed that he hadn’t sought out such an awesome responsibility. No one should, he said. “If I was a different person, what would I want the CEO of the company to be able to do?” he asked himself. “I would not want so many decisions about content to be concentrated with any individual.”

Instead, Facebook will establish its own Supreme Court, he told Zittrain, an outside panel entrusted to settle thorny questions about what appears on the platform. “I will not be able to make a decision that overturns what they say,” he promised, “which I think is good.”

All was going to plan. Zuckerberg had displayed a welcome humility about himself and his company. And then he described what really excited him about the future—and the familiar Silicon Valley hubris had returned. There was this promising new technology, he explained, a brain-computer interface, which Facebook has been researching.

The idea is to allow people to use their thoughts to navigate intuitively through augmented reality—the neuro-driven version of the world recently described by Kevin Kelly in these pages. No typing—no speaking, even—to distract you or slow you down as you interact with digital additions to the landscape: driving instructions superimposed over the freeway, short biographies floating next to attendees of a conference, 3D models of furniture you can move around your apartment.

The Harvard audience was a little taken aback by the conversation’s turn, and Zittrain made a law-professor joke about the constitutional right to remain silent in light of a technology that allows eavesdropping on thoughts. “Fifth amendment implications are staggering,” he said to laughter. Even this gentle pushback was met with the tried-and-true defense of big tech companies when criticized for trampling users’ privacy—users’ consent. “Presumably,” Zuckerberg said, “this would be something that someone would choose to use as a product.”

In short, he would not be diverted from his self-assigned mission to connect the people of the world for fun and profit. Not by the dystopian image of brain-probing police officers. Not by an extended apology tour. “I don’t know how we got onto that,” he said jovially. “But I think a little bit on future tech and research is interesting, too.”

Of course, Facebook already follows you around as you make your way through the world via the GPS in the smartphone in your pocket, and, likewise, follows you across the internet via code implanted in your browser. Would we really let Facebook inside those old noggins of ours just so we can order a pizza faster and with more toppings? Zuckerberg clearly is counting on it.

Read full story here…




Why You Should Be Worried About Machines Reading Your Emotions

Reading emotions is akin to phrenology, or reading the bumps on your head to predict mental traits. Both are based on simplistic and faulty assumptions which could falsely scar an individual for life. ⁃ TN Editor

Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, as it began testing a new surveillance program called the Screening of Passengers by Observation Techniques program, or Spot for short.

While developing the program, they consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train “behavior detection officers” to scan faces for signs of deception.

But when the program was rolled out in 2007, it was beset with problems. Officers were referring passengers for interrogation more or less at random, and the small number of arrests that came about were on charges unrelated to terrorism. Even more concerning was the fact that the program was allegedly used to justify racial profiling.

Ekman tried to distance himself from Spot, claiming his method was being misapplied. But others suggested that the program’s failure was due to an outdated scientific theory that underpinned Ekman’s method; namely, that emotions can be deduced objectively through analysis of the face.

In recent years, technology companies have started using Ekman’s method to train algorithms to detect emotion from facial expressions. Some developers claim that automatic emotion detection systems will not only be better than humans at discovering true emotions by analyzing the face, but that these algorithms will become attuned to our innermost feelings, vastly improving interaction with our devices.

But many experts studying the science of emotion are concerned that these algorithms will fail once again, making high-stakes decisions about our lives based on faulty science.

Emotion detection technology requires two techniques: computer vision, to precisely identify facial expressions, and machine learning algorithms to analyze and interpret the emotional content of those facial features.

Typically, the second step employs a technique called supervised learning, a process by which an algorithm is trained to recognize things it has seen before. The basic idea is that if you show the algorithm thousands and thousands of images of happy faces labeled “happy,” then when it sees a new picture of a happy face, it will again identify it as “happy.”
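
A toy version of that supervised-learning step, with made-up “facial feature” numbers rather than real images, might look like this nearest-centroid classifier (a deliberately simple stand-in for the deep networks such products actually use):

```python
def train(examples):
    """examples: list of (features, label) pairs.
    Returns a dict mapping each label to its centroid (mean vector)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {
        label: [v / counts[label] for v in acc]
        for label, acc in sums.items()
    }

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Fabricated features, e.g. (mouth curvature, brow lowering):
training_data = [
    ([0.9, 0.1], "happy"), ([0.8, 0.2], "happy"),
    ([0.1, 0.9], "sad"), ([0.2, 0.8], "sad"),
]
model = train(training_data)
```

The weakness the critics point to lives in the labels themselves: the classifier can only ever reproduce whatever mapping from faces to emotions its training labels assumed.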

A graduate student, Rana el Kaliouby, was one of the first people to start experimenting with this approach. In 2001, after moving from Egypt to Cambridge University to undertake a PhD in computer science, she found that she was spending more time with her computer than with other people. She figured that if she could teach the computer to recognize and react to her emotional state, her time spent far away from family and friends would be less lonely.

Kaliouby dedicated the rest of her doctoral studies to this problem, eventually developing a device that helped children with Asperger syndrome read and respond to facial expressions. She called it the “emotional hearing aid”.

In 2006, Kaliouby joined the Affective Computing lab at the Massachusetts Institute of Technology, where together with the lab’s director, Rosalind Picard, she continued to improve and refine the technology. Then, in 2009, they co-founded a startup called Affectiva, the first business to market “artificial emotional intelligence”.

Read full story here…




U.S. Post Office Spying On Everyone Who Sends Mail

The U.S. Post Office is a loose cannon when it comes to spying on Americans: it answers to no-one and can perform warrantless collection of data on every single piece of mail. Furthermore, it can and will give any data to other federal agencies who request it. ⁃ TN Editor
 

It’s called the “Mail Cover Program” and it’s run by the U.S. Postal Service (USPS). Yes, even the Post Office is spying on us, writes John Kiriakou.

You may remember that last year some nut was arrested for mailing bombs to prominent Democrats, media outlets, and opponents of Donald Trump. Less than a week after the bombs went out, a suspect was arrested. Almost immediately, video turned up of him at a Trump rally, wearing a “Make America Great Again” hat and chanting for the camera. He was soon tried, convicted, and jailed. End of story.

But it wasn’t the end of the story. The investigation into the bomb incidents focused attention on an almost unknown federal surveillance program—one that poses a direct threat to the privacy and constitutional rights of every American. It’s called the “Mail Cover Program” and it’s run by the U.S. Postal Service (USPS). Yes, even the Post Office is spying on us.

The Mail Cover Program allows postal employees to photograph and send to federal law enforcement organizations (FBI, DHS, Secret Service, etc.) the front and back of every piece of mail the Post Office processes. It also retains the information digitally and provides it to any government agency that wants it—without a warrant.

In 2015, the USPS Inspector General issued a report saying that, “Agencies must demonstrate a reasonable basis for requesting mail covers, send hard copies of request forms to the Criminal Investigative Service Center for processing, and treat mail covers as restricted and confidential…A mail cover should not be used as a routine investigative tool. Insufficient controls over the mail cover program could hinder the Postal Inspection Service’s ability to conduct effective investigations, lead to public concerns over privacy of mail, and harm the Postal Service’s brand.”

Return to Sender

Not only were the admonitions ignored, the mail cover program actually expanded after the report’s release. Indeed, in the months after that report was issued, there were 6,000 requests for mail cover collection. Only 10 were rejected, according to the Feb. 2019 edition of Prison Legal News (pp. 34-35).

I have some personal experience with the Mail Cover Program. I served 23 months in prison for blowing the whistle on the CIA’s illegal torture program. After having been locked up for two months, I decided to commission a card from a very artistically-inclined prisoner for my wife’s 40th birthday. I sent it about two weeks early, but she never received it. Finally, about four months later, the card was delivered back to me with a yellow “Return to Sender – Address Not Known” sticker on it. But underneath that sticker was a second yellow sticker. That one read, “Do Not Deliver. Hold For Supervisor. Cover Program.”

Why was I under Postal Service surveillance? I have no idea. I had had my day in court. The case was over. But remember, the Postal Service doesn’t have to answer to anybody – my attorneys, my judge, even its own Inspector General. It doesn’t need a warrant to spy on me (or my family) and it doesn’t have to answer even to a member of Congress who might inquire as to why the spying was happening in the first place.

The problem is not just the sinister nature of a government agency (or quasi-government agency) spying on individuals with no probable cause or due process, although those are serious problems. It’s that the program is handled so poorly and so haphazardly that in some cases surveillance was initiated against individuals for no apparent law enforcement reason and that surveillance was initiated by Postal Service employees not even authorized to do so. Again, there is no recourse because the people under surveillance don’t even know that any of this is happening.

Read full story here…

John Kiriakou is a former CIA counterterrorism officer and a former senior investigator with the Senate Foreign Relations Committee. John became the sixth whistleblower indicted by the Obama administration under the Espionage Act—a law designed to punish spies. He served 23 months in prison as a result of his attempts to oppose the Bush administration’s torture program.




As Your Phone And TV Track You, Political Campaigns Listen In

Who isn’t tracking you these days? Technocrats thrive on data, without which their precious AI programs will sit there like inert rocks. National privacy legislation is desperately needed. ⁃ TN Editor

It was a crowded primary field and Tony Evers, running for governor, was eager to win the support of officials gathered at a Wisconsin state Democratic party meeting, so the candidate did all the usual things: he read the room, he shook hands, he networked.

Evers’ team had also set up a digital fence around the venue, a virtual perimeter that captured the mobile devices inside it. That fence enabled the team to push ads onto the iPhones and Androids of all those attending the meeting. Not only that, but because the technology pulled the unique identification numbers off the phones, a data broker could use the digital signatures to follow the devices home. Once there, the campaign could use so-called cross-device tracking technology to find associated laptops, desktops and other devices to push even more ads.

Welcome to the new frontier of campaign tech — a loosely regulated world in which simply downloading a weather app or game, connecting to Wi-Fi at a coffee shop or powering up a home router can allow a data broker to monitor your movements with ease, then compile the location information and sell it to a political candidate who can use it to surround you with messages.
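
The fencing step itself is mechanically simple. As an illustrative sketch (the venue coordinates, radius, and function names here are invented), a broker only needs to test whether a device’s reported location falls within some distance of a target building:

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_fence(device_lat, device_lon, fence_lat, fence_lon, radius_m):
    """True if the device's reported position is within the fence."""
    return haversine_m(device_lat, device_lon, fence_lat, fence_lon) <= radius_m

# Fictional fence: 100 m around a meeting venue.
VENUE = (43.0731, -89.4012)
```

Every device whose ad-network location ping lands inside the circle gets its advertising ID logged; the hard part of the business is the scale of the location feeds, not the geometry.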

“We can put a pin on a building, and if you are in that building, we are going to get you,” said Democratic strategist Dane Strother, who advised Evers. And they can get you even if you aren’t in the building anymore, but were simply there at some point in the last six months.

Campaigns don’t match the names of voters with the personal information they scoop up — although that could be possible in many cases. Instead, they use the information to micro-target ads to appear on phones and other devices based on individual profiles that show where a voter goes, whether a gun range, a Whole Foods or a town hall debate over Medicare.

The spots would show up in all the digital places a person normally sees ads — whether on Facebook or an internet browser such as Chrome.

As a result, if you have been to a political rally, a town hall, or just fit a demographic a campaign is after, chances are good your movements are being tracked with unnerving accuracy by data vendors on the payroll of campaigns. The information gathering can quickly invade even the most private of moments.

Antiabortion groups, for example, used the technology to track women who entered waiting rooms of abortion clinics in more than a half dozen cities. RealOptions, a California-based network of so-called pregnancy crisis centers, along with a partner organization, had hired a firm to track cell phones in and around clinic lobbies and push ads touting alternatives to abortion. Even after the women left the clinics, the ads continued for a month.

That effort ended in 2017 under pressure from Massachusetts authorities, who warned it violated the state’s consumer protection laws. But such crackdowns are rare.

Data brokers and their political clients operate in an environment in which technology moves much faster than Congress or state legislatures, which are under pressure from Silicon Valley not to strengthen privacy laws. The RealOptions case turned out to be a harbinger for a new generation of political campaigning built around tracking and monitoring even the most private moments of people’s lives.

Read full story here…




Oops! Google Failed To Disclose ‘Secret’ Microphone In ‘Nest’ Security System

Shocking revelations about Big Tech deception is turning into a trend that seems endemic. The term “pathological liars” is gaining strength as  an accurate descriptor. Google, Facebook, Twitter, Amazon, etc., are openly revealing their love for Technocracy. ⁃ TN Editor

  • In early February, Google announced that Assistant would work with its home security and alarm system, Nest Secure.
  • The problem: Users didn’t know a microphone existed on their Nest security devices to begin with.
  • On Tuesday, a Google representative told Business Insider the company had made an “error.”
  • “The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the person said. “That was an error on our part.”

In early February, Google announced that its home security and alarm system Nest Secure would be getting an update. Users, the company said, could now enable its virtual-assistant technology, Google Assistant.

The problem: Nest users didn’t know a microphone existed on their security device to begin with.

The existence of a microphone on the Nest Guard, which is the alarm, keypad, and motion-sensor component in the Nest Secure offering, was never disclosed in any of the product material for the device.

On Tuesday, a Google representative told Business Insider the company had made an “error.”

Google says “the microphone has never been on and is only activated when users specifically enable the option.”

It also said the microphone was originally included in the Nest Guard for the possibility of adding new security features down the line, like the ability to detect broken glass.

Still, even if Google included the microphone in its Nest Guard device for future updates — like its Assistant integration — the news comes as consumers have grown increasingly wary of major tech companies and their commitment to consumer privacy.

Read full story here…




FBI Plotting To Keep DNA Of Entire US Population

All 17 intel agencies in the U.S., including the FBI, are captive to Technocrats who are on a rampage for data. All data. Once a national DNA database is built, it will leak to corporate giants who are already lusting to get their hands on it. In the meantime, a police state and Scientific Dictatorship are forming right before our eyes. ⁃ TN Editor

The FBI is creating a “nation of suspects” by putting every American citizen’s DNA on file, according to shocking claims by a US think tank.

President Donald Trump has signed the Rapid DNA Act into law which means the police can routinely take DNA samples from people who are arrested but not yet convicted of a crime.

The law, which was signed in 2017 and comes into effect this year, will require several states to connect Rapid DNA machines to Codis – the national DNA database controlled by the FBI.

These machines, which are portable and about the same size as a desktop printer, are expected to become as routine a process as taking fingerprints.

But John W. Whitehead from The Rutherford Institute believes it is a sinister development which will make everyone a suspect.

Speaking to Daily Star Online, he said: “The fact of the matter is that these machines are not foolproof.

“But we could look at a situation in which someone could be arrested, have their mouth swabbed and then be charged within hours after generating a DNA profile.

“We are looking at the erosion of the concept of innocent before proven guilty because it will allow police to go on fishing expeditions.

“When you sit on a park bench, you shed DNA. That is now up for grabs by police who could swab it, and run it through a DNA database. If they find a match, or if misconduct occurred anywhere in the vicinity where your DNA was found, you might find yourself charged with a crime you never committed merely because you happened to be in the wrong place at the wrong time.

“Even people who aren’t charged with major crimes could have their DNA put on file.

“People who are just seen as suspicious could have their genetic makeup stored in a criminal database.”

John added that until recently the government was required to adhere to certain restrictions on how, when and where it could access someone’s DNA.

But that has now changed with a US Supreme Court ruling, ushering in a loss of privacy, he claims.

Read full story here…




Claim: Amazon Has ‘No Limit’ On How It Can Listen To And Store Private Conversations

This might be a case of the pot calling the kettle black, but the point is well-taken: Amazon offers no guarantee that it isn’t listening to and storing every conversation you have in front of its Alexa device. ⁃ TN Editor
 

Sean Parker, founding president of Facebook, worries more about Amazon violating your privacy than Facebook.

Parker said on Wednesday there is “no limit” to how Amazon is storing and listening to private conversations, adding that these recordings “could potentially be used against you in a court of law or for other purposes.”

“If you’re having a conversation in front of an Alexa-enabled device, Amazon is not guaranteeing you any privacy,” Parker said in a discussion on stage with CNBC’s Hadley Gamble at the Milken Institute MENA Summit.

Amazon came under fire last year when an Echo device reportedly secretly recorded a family’s conversation and sent it to a random person. Amazon blamed the incident on Alexa misinterpreting a set of commands.

A spokesperson for Amazon told CNBC that customer trust is of the utmost importance, and that it takes privacy seriously.

“By default, Echo devices are designed to only capture audio after it detects the wake word. Only after the wake word is detected does audio get streamed to the cloud, and the stream closes immediately after Alexa processes a customer request,” the spokesperson said in an emailed statement.

“No audio is stored or saved on the device. Customers can also review and delete voice recordings.”
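
The wake-word gating Amazon describes can be sketched in a few lines. This is an illustrative mock, not Amazon’s code: simple string matching stands in for the on-device acoustic model, and the frame markers are invented.

```python
WAKE_WORD = "alexa"

def process_audio(frames):
    """frames: iterable of transcribed audio chunks (a toy stand-in
    for raw audio). Returns only the chunks that would be streamed
    to the cloud; everything else is dropped on-device."""
    streamed = []
    listening = False
    for frame in frames:
        if not listening:
            if WAKE_WORD in frame.lower():
                listening = True  # wake word heard: open the stream
            # otherwise the frame is discarded locally
        else:
            if frame == "<end-of-request>":
                listening = False  # request done: close the stream
            else:
                streamed.append(frame)
    return streamed
```

The privacy question raised above is exactly about the first branch: users have to trust that frames heard before the wake word really are discarded, since nothing observable distinguishes a device that drops them from one that keeps them.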

Speaking to CNBC last month, Amazon’s VP of Voice Pete Thompson said that his company was taking security and data privacy extremely seriously.

“Even when we put Alexa into our partner products that’s something that we mandate of how they can do, how they can use this stuff. Obviously it is early days on how voice works and some of the biggest challenges is when you speak to it hands free, and you are talking to it from a distance. We try very hard to tune it, to make sure we’ve only heard ‘Alexa’ and then that’s when it wakes up … we have to keep improving that,” he said.

Facebook, meanwhile, is facing mounting regulatory concerns amid questions over how it collects user data. Last week, Germany’s antitrust watchdog ruled Facebook cannot combine data from its various apps, like WhatsApp and Instagram, without voluntary user consent.

Read full story here…