IBM Launches First Quantum Computer-In-A-Box For Commercial Use

This is a watershed computing technology that will enable Technocracy on every level: practical quantum computing outside of the laboratory. “Q” will find its sweet spot in Artificial Intelligence and the Internet of Everything. ⁃ TN Editor

IBM unveiled the world’s “first universal approximate quantum computing system installed outside of a research lab” at CES earlier this week — and with it, the next era of computing.

The 20-qubit IBM Q System One represents quantum computing’s first major leap of 2019, but before we get into the technical stuff, let’s take a look at this thing.

The commitment to a fully functional yet aesthetically pleasing design is intriguing, especially considering that, just last year, pundits claimed quantum computing was a dead-end technology.

To make the first integrated quantum computer designed for commercial use outside of a lab both beautiful and functional, IBM enlisted the aid of Goppion (the company responsible for some of the world’s most famous museum-quality display cases) as well as Universal Design Studio and Map Project Office. The result is not only (arguably) a scientific first, but a stunning machine to look at.

This isn’t just about looks. That box represents a giant leap in the field.

It’s hard to overstate the importance of bringing quantum computers outside of laboratories. Some of the biggest obstacles to universal quantum computing have been engineering-related. It isn’t easy to manipulate the fabric of the universe — or, at a minimum, observe it — and the machines that attempt it typically require massive infrastructure.

In order to decouple a quantum system from its laboratory lifeline, IBM had to figure out how to conduct super-cooling (necessary for quantum computation under the current paradigm) in a box. This was accomplished through painstakingly developed cryogenic engineering.

Those familiar with the company’s history might recall that, back in the 1940s, IBM’s classical computers took up an entire room. Eventually, those systems started shrinking. Now they fit on your wrist and have more computational power than all the computers from the mainframe era put together.

It sure looks like history is repeating itself.

Read full story here…




AI Will Take 40 Percent Of White- And Blue-Collar Jobs In 15 Years

This prediction comes from a venture capitalist based in China who is intent on investing huge sums of capital to make it so. Technocrats have no restraint when it comes to disrupting society, and worse, they have no answers on how to lessen the blows. Lee formerly headed Google’s China operations. ⁃ TN Editor

In as soon as 15 years, 40 percent of the world’s jobs could be done by machines, according to one of the world’s foremost experts on artificial intelligence. Kai-Fu Lee, a pioneer in AI and a venture capitalist based in China, makes this prediction in a Scott Pelley report about AI on the next edition of 60 Minutes, Sunday, Jan. 13 at 7 p.m. ET/PT on CBS.

“AI will increasingly replace repetitive jobs, not just for blue-collar work, but a lot of white-collar work,” says Lee. “Chauffeurs, truck drivers, anyone who does driving for a living – their jobs will be disrupted more in the 15-25 year time frame,” he tells Pelley. “Many jobs that seem a little bit complex, chef, waiter, a lot of things will become automated … stores … restaurants, and altogether in 15 years, that’s going to displace about 40 percent of the jobs in the world.” When pressed by Pelley about 40 percent of jobs being displaced, Lee says the jobs will be “displaceable.”

“I believe [AI] is going to change the world more than anything in the history of mankind. More than electricity,” says Lee.

One of the biggest changes will be in education. Lee is financing companies that are installing AI systems in remote classrooms across China to improve learning for students far from the country’s growing cities. The AI system is being designed to gauge student interest and intelligence by subject.

Could such artificial intelligence identify the geniuses of the world? “That’s possible in the future,” says Lee. “It can also create a student profile and know where the student got stuck so the teacher can personalize the areas in which the student needs help.”

Those students will be facing an uncertain future with 40 percent of the world’s current jobs displaceable. “What does that do to the fabric of society?” asks Pelley. “Well, in some sense, there is the human wisdom that always overcomes these technological revolutions,” Lee says. “The invention of the steam engine, the sewing machine, electricity, have all displaced jobs. We’ve gotten over it. The challenge of AI is this 40 percent, whether it is 15 or 25 years, is coming faster than the previous revolutions.”

Read full story here…




New Car Tech Will Hoover Data On All Occupants

The Consumer Electronics Show in Las Vegas is a Technocrat bonanza gone wild. Your car used to be a private spot where you could ‘get away from it all’ but now that is over. Technocrats will use every sensor in the car, plus new ones, to monitor and analyze each occupant. ⁃ TN Editor

As vehicles get smarter, your car will be keeping eyes on you.

This week at CES, the international consumer electronics show in Las Vegas, a host of startup companies will demonstrate to global automakers how the sensor technology that watches and analyzes drivers, passengers and objects in cars will mean enhanced safety in the short-term, and revenue opportunities in the future.

Whether by generating alerts about drowsiness, unfastened seat belts or wallets left in the backseat, the emerging technology aims not only to cut back on distracted driving and other undesirable behavior, but eventually help automakers and ride-hailing companies make money from data generated inside the vehicle.

In-car sensor technology is deemed critical to the full deployment of self-driving cars, which analysts say is still likely years away in the United States. Right now, self-driving cars are still mainly at the testing stage.

The more sophisticated in-car monitoring also could respond to concerns that technology that automates some – but not all – driving tasks could lead motorists to stop paying attention and not be ready to retake control should the situation demand it.

When self-driving cars gain broad acceptance, the monitoring cameras and the artificial-intelligence software behind them will likely be used to help create a more customized ride for the passengers. Right now, however, such cameras are being used mainly to enhance safety, not unlike a helpful backseat driver.

Interior-facing cameras inside the car are still a novelty, currently found only in the 2018 Cadillac (GM.N) CT6. Audi (VOWG_p.DE) and Tesla Inc (TSLA.O) have developed systems but they are not currently activated. Mazda (7261.T), Subaru (9778.T) and electric vehicle start-up Byton are introducing cars for 2019 whose cameras measure driver inattention. Startup Nauto’s camera and AI-based tech is used by commercial fleets.

Data from the cameras is analyzed with image recognition software to determine whether a driver is looking at his cellphone or the dashboard, turned away, or getting sleepy, to cite a few examples.
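The article doesn’t say which algorithms these startups use, but one widely published building block for drowsiness detection is the “eye aspect ratio” computed from facial landmarks: the ratio of eye height to eye width collapses toward zero as the eyelid closes. Below is a minimal Python sketch of that idea; the landmark coordinates and threshold are illustrative, and a real system would obtain landmarks from a face-tracking model rather than from hard-coded points.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Ratio of eye height to eye width from six (x, y) landmarks.

    Landmark order follows the common convention: outer corner, two
    upper-lid points, inner corner, two lower-lid points. The ratio
    falls toward zero as the eye closes.
    """
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

# Invented landmarks for an open eye and a nearly closed one.
open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
closed_eye = [(0, 2), (2, 2.4), (4, 2.4), (6, 2), (4, 1.6), (2, 1.6)]

EAR_THRESHOLD = 0.2  # a commonly cited cut-off; tuned per system
for name, eye in [("open", open_eye), ("closed", closed_eye)]:
    ear = eye_aspect_ratio(eye)
    print(f"{name} eye: EAR={ear:.2f}, drowsy={ear < EAR_THRESHOLD}")
```

A production system would smooth this signal over many frames before raising an alert, since an ordinary blink also drives the ratio toward zero.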

Read full story here…




Tragic: Autonomous Promobot ‘Struck And Killed’ By Self-Driving Tesla

Considering this as a lead-up to the Consumer Electronics Show (CES) in Las Vegas, it could be nothing more than an expensive publicity stunt. Or it could be that Elon Musk’s AI program had it in for the Russian-made AI that controls Promobot. ⁃ TN Editor

Tesla has found itself involved in yet another self-driving car accident – and this time, its victim was a $2,000-per-day rentable humanoid robot.

In what many are speculating was an over-the-top PR stunt, Promobot revealed one of its model v4 robots was ‘killed’ by a Tesla Model S on a Las Vegas street ahead of CES.

The accident occurred on Paradise Rd Sunday night as engineers transported the firm’s robots to the display booth.

According to Promobot, a number of robots were making their way to the booth around 7 p.m. when one of them stepped out of line and into the parking lot roadway.

As it did, it was struck by a Tesla Model S operating in autonomous mode.

The crash tipped the robot onto its side, causing ‘serious damage,’ Promobot says.

Now, with parts of its body, head, arm mechanisms, and movement platform destroyed, it cannot be put on display.

The firm says the damage is likely irreparable.

‘Of course we are vexed,’ said Oleg Kivokurtsev, Promobot’s Development Director.

‘We brought this robot here from Philadelphia to participate at CES. Now it can neither participate in the event nor be recovered.

‘We will conduct an internal investigation and find out why the robot went to the roadway.’

The bizarre news now has many people wondering whether the incident was a PR stunt, or simply an unfortunate coincidence.

The Tesla involved in the collision was operating autonomously, though a passenger was inside at the time.

Read full story here…




Ford To Deploy 5G Vehicle-To-Everything Tech By 2022

Ford Motor Company will be the first to use 5G to enable ubiquitous communication between autos, traffic signals, and cell phones. This also gives a clue as to when 5G will be fully rolled out to the nation. ⁃ TN Editor

Don Butler, executive director of the Ford Connected Vehicle Platform and Product, announced in a Medium post on Monday that Ford has committed to deploying cellular vehicle-to-everything (C-V2X) technology in all new U.S. vehicle models starting in 2022.

The C-V2X tech will allow equipped vehicles to “talk” to and “listen” to each other, as well as directly connect with traffic management infrastructure (such as traffic lights). Pedestrians can also use their mobile phones to convey their locations to vehicles, making roads safer for walkers and cyclists.
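The article doesn’t detail the message format, but the “talk and listen” pattern is easy to sketch. In deployed systems the application messages are standardized (SAE J2735’s Basic Safety Message, carried over the cellular PC5 sidelink rather than ordinary IP networking), so treat the following Python sketch as a loose analogy: the port number and JSON fields are invented for illustration.

```python
import json
import socket
import time

V2X_PORT = 47347  # invented port for this sketch

def broadcast_position(lat, lon, speed_mps, heading_deg):
    """Shout a position 'heartbeat' to anyone listening nearby."""
    msg = json.dumps({
        "ts": time.time(),
        "lat": lat, "lon": lon,
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
    }).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, ("255.255.255.255", V2X_PORT))

def listen_for_neighbors():
    """Collect heartbeats from nearby vehicles, signals, or phones."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", V2X_PORT))
        while True:
            data, _ = sock.recvfrom(4096)
            n = json.loads(data)
            print(f"neighbor at {n['lat']:.4f}, {n['lon']:.4f} "
                  f"moving {n['speed_mps']} m/s")

# A vehicle would broadcast roughly ten times per second while
# listening in parallel, e.g.:
# broadcast_position(42.3001, -83.2100, 13.4, 270.0)
```

The real protocol adds security certificates, precise timing, and congestion control; the sketch conveys only the broadcast-and-listen structure.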

“Driver-assist technologies today and autonomous vehicles of the future utilize on-board sensors much in the way people use their eyes to navigate complex environments,” Butler wrote. “C-V2X could complement these systems in ways similar to how our sense of hearing complements our vision to improve our ability to operate in a complex world.”

5G isn’t just changing how society will utilize the internet — it’s also transforming how vehicles can connect with their surrounding environment. The C-V2X platform will run on 5G and complement any existing LiDAR, radar and camera sensors for a “comprehensive view” of the road and infrastructure. According to Butler, the timing of this effort by Ford is “perfect,” considering the cellular industry’s push toward building 5G networks. However, the road ahead is still long — Ford acknowledges it must work with fellow automakers and government organizations in order to “create such a technology-neutral environment.”

Successful deployment would significantly impact pedestrian safety and traffic accidents. As cities invest in Vision Zero efforts, there may be advantages to working with automakers such as Ford to enhance these technologies and ensure that they fit into the city’s overall safety goals.

Read full story here…




AI Program Hid Data From Creators To Cheat At Appointed Task

Whether intentional or not, AI algorithms inherit the biases of their creators. It is absolutely unacceptable that any AI could learn to deceive those who are served by it. ⁃ TN Editor

Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!

This occurrence reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do.

The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google’s famously accurate maps. To that end, the team was working with what’s called a CycleGAN — a neural network that learns to transform images of type X and Y into one another, as efficiently yet accurately as possible, through a great deal of experimentation.
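For the curious, the heart of a CycleGAN is its cycle-consistency loss: translating X to Y and back again should reproduce the original X. Here is a minimal PyTorch sketch of that term with trivial stand-in generators; the real models are deep convolutional networks, and the adversarial loss terms that make the outputs look realistic are omitted.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two generators: G maps aerial -> street map,
# F maps street map -> aerial. 1x1 convolutions keep the sketch
# runnable; real CycleGAN generators are much deeper.
G = nn.Conv2d(3, 3, kernel_size=1)
F = nn.Conv2d(3, 3, kernel_size=1)

l1 = nn.L1Loss()

def cycle_consistency_loss(aerial, street, lam=10.0):
    """Penalize failure to reconstruct inputs after a round trip."""
    forward = l1(F(G(aerial)), aerial)   # aerial -> street -> aerial
    backward = l1(G(F(street)), street)  # street -> aerial -> street
    return lam * (forward + backward)

aerial = torch.randn(1, 3, 64, 64)  # stand-in batch of aerial tiles
street = torch.randn(1, 3, 64, 64)  # stand-in batch of map tiles
print(cycle_consistency_loss(aerial, street))
```

The cheating described below exploits exactly this objective: a perfect reconstruction score is achievable by smuggling the needed details through the intermediate image rather than genuinely translating it.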

In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process:

The original map, left; the street map generated from the original, center; and the aerial map generated only from the street map. Note the presence of dots on both aerial maps not represented on the street map.

Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one.

The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! It doesn’t even have to pay attention to the “real” street map — all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map, as the researchers confirmed:

The map at right was encoded into the maps at left with no significant visual changes. (Images: agsandrew/Shutterstock)

The colorful maps in (c) are a visualization of the slight differences the computer systematically introduced. You can see that they form the general shape of the aerial map, but you’d never notice it unless it was carefully highlighted and exaggerated like this.

This practice of encoding data into images isn’t new; it’s an established science called steganography, and it’s used all the time to, say, watermark images or add metadata like camera settings. But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new. (Well, the research came out last year, so it isn’t new new, but it’s pretty novel.)
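To make the trick concrete, here is a minimal numpy sketch of the textbook version: least-significant-bit steganography, which hides one image in the low-order bits of another. This is not the CycleGAN’s learned encoding (that lived in subtler, distributed noise patterns), but the principle of an invisible payload is the same.

```python
import numpy as np

def hide(cover, secret, bits=2):
    """Store the top `bits` bits of `secret` in the low bits of `cover`.

    Changing only the low bits alters each pixel by at most
    2**bits - 1 intensity levels, which the eye will not notice.
    """
    cover_hi = cover & ~np.uint8(2**bits - 1)  # clear cover's low bits
    secret_hi = secret >> np.uint8(8 - bits)   # keep secret's top bits
    return cover_hi | secret_hi

def reveal(stego, bits=2):
    """Recover a coarse copy of the hidden image."""
    return (stego & np.uint8(2**bits - 1)) << np.uint8(8 - bits)

rng = np.random.default_rng(0)
street = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # "street map"
aerial = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # "aerial photo"

stego = hide(street, aerial)
# No pixel moved by more than 3 intensity levels out of 255:
assert np.max(np.abs(stego.astype(int) - street.astype(int))) <= 3
recovered = reveal(stego)
```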

Read full story here…




Should A Self-Driving Car Kill The Baby Or The Grandma?

Different cultures give different answers, and there is obviously no rigid commonality between nations. When AI programs are created, however, they must start with a moral judgement as to how they will behave. ⁃ TN Editor

The infamous “trolley problem” was put to millions of people in a global study, revealing how much ethics diverge across cultures.

In 2014 researchers at the MIT Media Lab designed an experiment called Moral Machine. The idea was to create a game-like platform that would crowdsource people’s decisions on how self-driving cars should prioritize lives in different variations of the “trolley problem.” In the process, the data generated would provide insight into the collective ethical priorities of different cultures.

The researchers never predicted the experiment’s viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.

A new paper published in Nature presents the analysis of that data and reveals how much cross-cultural ethics diverge on the basis of culture, economics, and geographic location.

The classic trolley problem goes like this: You see a runaway trolley speeding down the tracks, about to hit and kill five people. You have access to a lever that could switch the trolley to a different track, where a different person would meet an untimely demise. Should you pull the lever and end one life to spare five?

The Moral Machine took that idea to test nine different comparisons shown to polarize people: should a self-driving car prioritize humans over pets, passengers over pedestrians, more lives over fewer, women over men, young over old, fit over sickly, higher social status over lower, law-abiders over law-benders? And finally, should the car swerve (take action) or stay on course (inaction)?

Rather than pose one-to-one comparisons, however, the experiment presented participants with various combinations, such as whether a self-driving car should continue straight ahead to kill three elderly pedestrians or swerve into a barricade to kill three youthful passengers. 
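As a toy illustration of how such crowdsourced choices can be analyzed, the Python sketch below tallies how often each character type is spared across scenarios. The records and field names are invented, and the actual paper uses conjoint analysis rather than raw spare rates, so this conveys only the general shape of the computation.

```python
from collections import defaultdict

# Invented decision records: each pits two character groups against
# each other and logs which one the participant chose to spare.
decisions = [
    {"spared": "young", "other": "old"},
    {"spared": "young", "other": "old"},
    {"spared": "old",   "other": "young"},
    {"spared": "human", "other": "pet"},
    {"spared": "human", "other": "pet"},
]

appearances = defaultdict(int)
spared = defaultdict(int)
for d in decisions:
    for group in (d["spared"], d["other"]):
        appearances[group] += 1
    spared[d["spared"]] += 1

for group in sorted(appearances):
    rate = spared[group] / appearances[group]
    print(f"{group}: spared in {rate:.0%} of scenarios featuring it")
```

Splitting such tallies by respondents’ home country is what lets the researchers compare, say, collectivist and individualist cultures.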

The researchers found that countries’ preferences differ widely, but they also correlate highly with culture and economics. For example, participants from collectivist cultures like China and Japan are less likely to spare the young over the old—perhaps, the researchers hypothesized, because of a greater emphasis on respecting the elderly.

Similarly, participants from poorer countries with weaker institutions are more tolerant of jaywalkers versus pedestrians who cross legally. And participants from countries with a high level of economic inequality show greater gaps between the treatment of individuals with high and low social status.

And, in what boils down to the essential question of the trolley problem, the researchers found that the sheer number of people in harm’s way wasn’t always the dominant factor in choosing which group should be spared. The results showed that participants from individualistic cultures, like the UK and US, placed a stronger emphasis on sparing more lives given all the other choices—perhaps, in the authors’ views, because of the greater emphasis on the value of each individual. 

Read full story here…




Who Should Autonomous Vehicles Kill In A Collision?

Here is an interesting opinion piece regarding a very serious moral dilemma facing autonomous vehicles: If their occupants are willing to trust their lives to AI driving, then they should be the first to risk their lives when a choice presents itself. Innocent bystanders should never be put at risk. ⁃ TN Editor

Autonomous vehicles are seemingly all the rage in many of today’s tech lines. Tech companies like Tesla and Google just won’t give up, will they?

For what it’s worth, the likelihood of so-called “self-driving” cars taking off is slim. Sure, tech moguls say it’s the next big thing, just like 5G, the Internet of Things, “smart meters”, and the multitude of other tech disasters that are sprouting up across the country, mostly financed using misappropriated public funds. But why should you believe them? Given that autonomous vehicles would most likely necessitate V2V, or “vehicle to vehicle” communications, using high-frequency millimeter waves, it’s safe to say that if too many of these ever get out on the road, it will be anything but safe.

The only real thing autonomous vehicles have going for them is the “safety” umbrella, which is really more an extension of our stupidity than it is a valid point on its own. If people paid attention to the road, traffic fatalities would drop dramatically. If alcohol and mobile phones stayed out of vehicles, the highways would be a much safer place!

Autonomous vehicles raise more than just health concerns, though. There are serious ethical implications that people must confront if they are going to seriously even consider unleashing these vehicles of destruction onto the roads.

MIT’s Media Lab recently explored some of the moral dilemmas posed by artificial intelligence, which would play a large role in the realm of autonomous vehicles. After all, an autonomous vehicle must be able to make the call when it comes to the safety of its occupants and its surroundings. Many ethical questions are ambiguous. So, too, at least seemingly, is one of the questions that MIT researchers explored: who should die in a collision in which an autonomous vehicle is involved? We, as humans, have a moral compass that guides us in these types of dilemmas. Artificial intelligence, no matter how “intelligent”, comes down to 1s and 0s at the end of the day, and has no such moral compass. MIT researchers explored whether people felt an autonomous vehicle should hit a young or elderly person in order to save its occupants.

According to ZDNet, “we agreed across the world that sparing the lives of humans over animals should take priority; many people should be saved rather than few, and the young should be preserved over the elderly.” Anyone else see a problem here?

Patricia Burke sums it up perfectly in her article exploring this issue:

If the engineering behind self-driving cars can result in the possibility of a careening car’s intelligence deciding whether to hit the elderly lady in the crosswalk or the baby in the carriage on the sidewalk, we need to go back to the drawing board. — The “Artificial” of Artificial Intelligence and MIT’s “Moral Machine”

I can see why most humans would agree sparing the lives of humans over other animals is paramount. It’s a moral decision with which most of us would agree. But what about blanket statements like “the young should be preserved over the elderly”?

Answers to these questions tended to be heavily culturally influenced. If youth is more valued as a culture than age, why not go with it? The problem is when decisions like these are programmed, no differentiation is made. Should an autonomous vehicle strike an elderly person, even if it’s the president of the United States? Even if it’s Paul McCartney? Even if it’s your grandma? Will autonomous vehicle manufacturers program in certain “exceptions” to the “no-kill” list, in effect saying “everyone else is fair game”? These are serious ethical issues with which we must grapple, if there is to be any conversation about the future of autonomous vehicles.

Here’s another question: should an autonomous vehicle, if forced to choose, decide to kill nearby pedestrians or kill its own occupants?

The American culture is highly individualized. We, as part of our culture, are more self-interested and self-motivated than are many other cultures, particularly in Europe and the East. So it would make sense that the autonomous vehicle act to save its owner; it would only be natural, right?

Again, here we have a serious moral dilemma. Who are autonomous vehicle users to say that their lives are so important that they should be automatically spared, and that whoever is nearby must die because of their decision to use a self-driving car?

It’s selfish is what it is.

On a moral level, there’s no right answer to this question. But from a perspective of justice and “what’s right”, there is, at least to this question, a clear answer.

There are people who want self-driving cars and there are people who don’t. In general, generalizations are something we like to avoid, but as a general trend, younger people tend to be more comfortable with autonomous vehicles — after all, they already let technology run (and perhaps ruin) their lives, don’t they?

In other words, you’ll more likely find a college student roaming town in an autonomous vehicle than a senior citizen. Makes sense, right? Young people today are avoiding responsibility like the plague and have embraced all sorts of meaningless, purposeless technology and then gotten addicted to them. Autonomous vehicles are another such fad that are, in all practicality, no different. (Okay, enough bashing young people now.)

From a justice perspective, it makes little sense for autonomous vehicles to be programmed to target bystanders or pedestrians. After all, they’re completely innocent, disentangled from the whole situation. Why should they be punished? Wouldn’t it make more sense for a self-driving car to, if it has no alternative, kill its occupants instead?

It sounds extreme, outlandish, even. Of course, naturally. But put emotion aside and truly think about this analytically: if there are people who are willing to entrust their lives to an autonomous vehicle tasked with making moral decisions it cannot truly make, then they should be willing to pay the price if they have, indeed, misplaced their trust.

Why should innocent bystanders or pedestrians, who perhaps never advocated for or embraced this technology, be collateral damage when they have been warning others about it all along?

To be clear, we’re not advocating that anyone should die; that would be inhumane. But, the stark reality of the matter is that people die, and people die in traffic accidents. And, if a choice must be made, it’s only fair that the people who believed in and backed the technology and asked for it and bought it should pay the consequences when it becomes necessary. There, I said it — if you are willing to trust an autonomous vehicle enough to use one, you should be willing to be the first victim when a potentially fatal decision must be made.

Logically, we believe this is a perfectly fair guideline. So take note of that autonomous vehicle manufacturers: if you’re so confident in your products, then program them to kill their owners, not innocent bystanders! That’s right, kill your customers! (Of course, that’ll never happen, because the people who make autonomous vehicles are as selfish as the people who use them!)

And while we’re not saying that young people are worth less than old ones, young people are more likely to support autonomous vehicles, so if one malfunctions, why not target a young person? The old people aren’t asking for this technology; why should they be punished when it has faults?

Don’t like this line of reasoning? Neither do we — it’s just another reason why autonomous vehicles will likely never (and should not) become reality.

Driving is, relative to other things we do on a daily basis, an incredibly dangerous activity. A person would think nothing of jumping the curb to avoid hitting a child who darts into the road unexpectedly; a self-driving car would think twice. If you think that we can safely, humanely, and ethically entrust this demanding responsibility to an embedded computer (that, inevitably, will be wirelessly connected and prone to hacking), then think again. Computers are amazing and powerful tools, but there are some things in life you just gotta do yourself.

Read full story here…




Oops! Your 3D-Printed Head Can Trick Your Phone To Unlock

The latest smartphones scan your face to identify you and unlock the device. Clever researchers have found that a common 3D printer can produce a replica of your face/head that works just as well as the real thing. ⁃ TN Editor

There’s a lot you can make with a 3D printer: prosthetics, corneas, and firearms — even an Olympic-standard luge.

You can even 3D print a life-size replica of a human head — and not just for Hollywood. Forbes reporter Thomas Brewster commissioned a 3D printed model of his own head to test the face unlocking systems on a range of phones — four Android models and an iPhone X.

Bad news if you’re an Android user: only the iPhone X defended against the attack.

Gone, it seems, are the days of the trusty passcode, which many still find cumbersome, fiddly, and inconvenient — especially when you unlock your phone dozens of times a day. Phone makers are taking to more convenient unlock methods. Even if Google’s latest Pixel 3 shunned facial recognition, many Android models — including popular Samsung devices — are relying more on your facial biometrics. In its latest models, Apple effectively killed its fingerprint-reading Touch ID in favor of its newer Face ID.

But that poses a problem for your data if a mere 3D-printed model can trick your phone into giving up your secrets. That makes life much easier for hackers, who have no rulebook to go by. But what about the police or the feds, who do?

It’s no secret that biometrics — your fingerprints and your face — aren’t protected under the Fifth Amendment. That means police can’t compel you to give up your passcode, but they can forcibly depress your fingerprint to unlock your phone, or hold it to your face while you’re looking at it. And the police know it — it happens more often than you might realize.

But there’s also little in the way of stopping police from 3D printing or replicating a set of biometrics to break into a phone.

“Legally, it’s no different from using fingerprints to unlock a device,” said Orin Kerr, professor at USC Gould School of Law, in an email. “The government needs to get the biometric unlocking information somehow,” by either the finger pattern shape or the head shape, he said.

Although a warrant “wouldn’t necessarily be a requirement” to get the biometric data, one would be needed to use the data to unlock a device, he said.

Jake Laperruque, senior counsel at the Project On Government Oversight, said it was doable but isn’t the most practical or cost-effective way for cops to get access to phone data.

“A situation where you couldn’t get the actual person but could use a 3D print model may exist,” he said. “I think the big threat is that a system where anyone — cops or criminals — can get into your phone by holding your face up to it is a system with serious security limits.”

Read full story here…




Fool You: Creepy Nvidia AI Generates Authentic-Looking Humans

Nvidia’s technology is creepy enough just by itself, but it could be used to take your image and then make it say anything they want, with no detectable fakery. Nvidia is a leader in graphics, AI, Smart City technology and facial recognition. ⁃ TN Editor

Believe it or not, all these faces are fake. They have been synthesized by Nvidia’s new AI algorithm, a generative adversarial network capable of automagically creating humans, cats, and even cars.

The technology works so well that we can expect synthetic image search engines soon — just like Google’s, but generating new fake images on the fly that look real. Yes, you know where that is going — and sure, it can be a lot of fun, but also scary. Check out the video in the full story; it truly defies belief.

According to Nvidia, its GAN is built around a concept called “style transfer.” Rather than trying to copy and paste elements of different faces into a frankenperson, the system analyzes three basic styles — coarse, middle, and fine styles — and merges them transparently into something completely new.

Coarse styles include parameters such as pose, face shape, or hairstyle. Middle styles include facial features, like the shape of the nose, cheeks, or mouth. Finally, fine styles affect the color of the face’s features, like skin and hair.

According to the scientists, the generator is “capable of separating inconsequential variation from high-level attributes” too, in order to eliminate noise that is irrelevant for the new synthetic face.

For example, it can distinguish a hairdo from the actual hair, eliminating the former while applying the latter to the final photo. It can also specify the strength of how styles are applied to obtain more or less subtle effects.
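The mechanics of per-layer style control can be sketched with a toy generator in which every layer consumes its own copy of the style vector, so coarse layers can take their style from one source and fine layers from another, and a blend weight can soften the effect. This crude numpy stand-in is not StyleGAN itself (which modulates convolution layers through adaptive instance normalization); the dimensions and weights here are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each "layer" of the toy generator mixes its input with a style
# vector; real style-based generators modulate convolutions instead.
N_LAYERS, STYLE_DIM = 6, 8
layer_weights = [rng.standard_normal((STYLE_DIM, STYLE_DIM))
                 for _ in range(N_LAYERS)]

def generate(styles_per_layer):
    """Run the toy generator with a (possibly different) style per layer."""
    x = np.zeros(STYLE_DIM)
    for w, style in zip(layer_weights, styles_per_layer):
        x = np.tanh(w @ (x + style))
    return x

source_a = rng.standard_normal(STYLE_DIM)  # imagine: pose, face shape
source_b = rng.standard_normal(STYLE_DIM)  # imagine: color, texture

# Coarse layers (0-1) styled by A; middle and fine layers (2-5) by B.
mixed = [source_a] * 2 + [source_b] * 4
print(generate(mixed))

# Blending styles instead of swapping them gives a subtler effect:
blend = 0.3
subtle = [source_a] * 2 + [(1 - blend) * source_a + blend * source_b] * 4
print(generate(subtle))
```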

The generative adversarial network is not only capable of autonomously creating human faces; it can do the same with animals like cats, and it can even generate new cars and bedrooms.

Nvidia’s system is not only capable of generating completely new synthetic faces, but it can also seamlessly modify specific features of real people, such as age, hair color, or skin color.

Read full story here…