AI Algorithms Are Writing Stories, Replacing Reporters

If words are a medium of creative human communication, how can an inhuman AI computer program add to knowledge? The answer is that it cannot; in fact, it dumbs communication down. Why should “artificial” replace the real thing? ⁃ TN Editor
 

As reporters and editors find themselves the victims of layoffs at digital publishers and traditional newspaper chains alike, journalism generated by machine is on the rise.

Roughly a third of the content published by Bloomberg News uses some form of automated technology. The company’s system, called Cyborg, helps reporters churn out thousands of articles on company earnings reports each quarter.

The program can dissect a financial report the moment it appears and spit out an immediate news story that includes the most pertinent facts and figures. And unlike business reporters, who find working on that kind of thing a snooze, it does so without complaint.

Untiring and accurate, Cyborg helps Bloomberg in its race against Reuters, its main rival in quick-twitch financial journalism. It also gives Bloomberg a fighting chance against a more recent player in the information race: hedge funds, which use artificial intelligence to serve their clients fresh facts.

“The financial markets are ahead of others in this,” said John Micklethwait, the editor in chief of Bloomberg.

In addition to covering company earnings for Bloomberg, robot reporters have been prolific producers of articles on minor league baseball for The Associated Press, high school football for The Washington Post and earthquakes for The Los Angeles Times.

Examples of machine-generated articles from The Associated Press:

TYSONS CORNER, Va. (AP) — MicroStrategy Inc. (MSTR) on Tuesday reported fourth-quarter net income of $3.3 million, after reporting a loss in the same period a year earlier.

MANCHESTER, N.H. (AP) — Jonathan Davis hit for the cycle, as the New Hampshire Fisher Cats topped the Portland Sea Dogs 10-3 on Tuesday.
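
The technique behind stories like these is essentially template-plus-data: a program pulls the key figures from a structured feed and slots them into pre-written sentence patterns. Here is a rough sketch of the idea in Python; the actual AP and Bloomberg systems are proprietary, and every function and field name below is invented purely for illustration:

```python
# A hedged sketch of template-driven earnings coverage, modeled on the
# MicroStrategy item above. All names here are invented for illustration.
def earnings_story(r: dict) -> str:
    comparison = ("after reporting a loss in the same period a year earlier"
                  if r["prior_loss"]
                  else "compared with the same period a year earlier")
    return (f"{r['dateline']} (AP) — {r['company']} ({r['ticker']}) on "
            f"{r['day']} reported {r['quarter']} net income of "
            f"${r['net_income_m']:.1f} million, {comparison}.")

print(earnings_story({
    "dateline": "TYSONS CORNER, Va.", "company": "MicroStrategy Inc.",
    "ticker": "MSTR", "day": "Tuesday", "quarter": "fourth-quarter",
    "net_income_m": 3.3, "prior_loss": True,
}))
```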

Last week, The Guardian’s Australia edition published its first machine-assisted article, an account of annual political donations to the country’s political parties. And Forbes recently announced that it was testing a tool called Bertie to provide reporters with rough drafts and story templates.

As the use of artificial intelligence has become a part of the industry’s toolbox, journalism executives say it is not a threat to human employees. Rather, the idea is to allow journalists to spend more time on substantive work.

Read full story here…




AI Deep Learning ‘Godfather’ Yoshua Bengio Alarmed Over Use In China To Dominate Society

A principal inventor of AI, Bengio says “This is the 1984 Big Brother scenario”. Bengio and his fellow Technocrat scientists should have thought about this way before now, but it reflects their Pollyanna-ish view of humanity. ⁃ TN Editor

Yoshua Bengio, a Canadian computer scientist who helped pioneer the techniques underpinning much of the current excitement around artificial intelligence, is worried about China’s use of AI for surveillance and political control.

Bengio, who is also a co-founder of Montreal-based AI software company Element AI, said he was concerned about the technology he helped create being used for controlling people’s behavior and influencing their minds.

“This is the 1984 Big Brother scenario,” he said in an interview. “I think it’s becoming more and more scary.”

Bengio, a professor at the University of Montreal, is considered one of the three “godfathers” of deep learning, along with Yann LeCun and Geoff Hinton. It’s a technology that uses neural networks — a kind of software loosely based on aspects of the human brain — to make predictions based on data. It’s responsible for recent advances in facial recognition, natural language processing, translation, and recommendation algorithms.
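
At its core, a neural network is layers of weighted sums passed through simple non-linear functions, and "learning" means adjusting those weights to fit data. Here is a minimal sketch of just the prediction step, using plain NumPy and untrained random weights; real systems use specialized frameworks and vastly larger networks trained on enormous datasets:

```python
# A minimal sketch of the prediction step in deep learning: two layers of
# weighted sums separated by a non-linearity. Weights here are random
# (i.e., untrained); training would adjust them from labeled examples.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)       # the standard non-linearity

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # layer 1: 4 inputs -> 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # layer 2: 8 units -> 1 output

x = rng.normal(size=4)              # one input example with 4 features
hidden = relu(x @ W1 + b1)          # learned intermediate representation
prediction = hidden @ W2 + b2       # the network's (untrained) prediction
print(prediction)
```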

Deep learning requires a large amount of data to provide examples from which to learn — but China, with its vast population and system of state record-keeping, has a lot of that.

The Chinese government has begun using closed-circuit video cameras and facial recognition to monitor what its citizens do in public, from jaywalking to engaging in political dissent. It has also created a National Credit Information Sharing Platform, which is being used to blacklist rail and air passengers for “anti-social” behavior, and the government is considering expanding the system’s use to other situations.

“The use of your face to track you should be highly regulated,” Bengio said.

Bengio is not alone in his concern over China’s use-cases for AI. Billionaire George Soros used a Jan. 24 speech at the World Economic Forum to highlight the risks the country’s use of AI poses to civil liberties and minority rights.

Unlike some peers, Bengio, who heads the Montreal Institute for Learning Algorithms (Mila), has resisted the temptation to work for a large, advertising-driven technology company. He said responsible development of AI may require some large technology companies to change the way they operate.

The amount of data large tech companies control is also a concern. He said the creation of data trusts — non-profit entities or legal frameworks under which people own their data and allow it to be used only for certain purposes — might be one solution. If a trust held enough data, it could negotiate better terms with big tech companies that needed it, he said Thursday during a talk at Amnesty International U.K.’s office in London.

Read full story here…




The Merging Of Government With Artificial Intelligence

Technocrats are directly encroaching upon government functions. A Federal Data Strategy for AI was created in 2018, providing standards across the entire Federal government on the use of Artificial Intelligence. ⁃ TN Editor

Private businesses already use AI to find efficiencies in their own operations and improve the return on investment of products and projects.

At the risk of dating myself, one of my favorite movies growing up was “WarGames,” starring Matthew Broderick. I didn’t realize it at the time, but in the climactic scene, the large supercomputer ‘WOPR,’ operated by the Defense Department, showed artificial intelligence capabilities. By playing tic-tac-toe against itself, it learned a lesson that prevented global thermonuclear war.

In many ways, Hollywood has warped what many think of when they first hear the term artificial intelligence, or AI. My thoughts used to go to movies like “The Terminator” or “The Matrix” where sentient machines develop the ability to think for themselves and try to overthrow humankind. While this makes for an exciting movie plot, AI has much more tangible—and less threatening—benefits, particularly for government.

In 2018, U.S. Chief Information Officer Suzette Kent announced the creation of the first Federal Data Strategy that will serve as a foundation for how agencies use AI.

Her analogy in describing the need for the strategy was compelling.

“Technology modernization allows us the opportunity to rethink our foundation,” Kent said at an event announcing the strategy. “We have to move aggressively. We don’t want to build the high-speed train without the track.”

AI can serve as part of that track. As the government collects more and more data, the need for solutions to drive true value from that data grows in importance. AI, in conjunction with big data and analytics, can deliver that baseline value and go beyond traditional solutions to find deeper insights.

Other governments have recognized this as well. For example, the United Arab Emirates was the first nation to appoint a senior cabinet official solely focused on AI empowerment and oversight within the government, naming a Minister of State for Artificial Intelligence in October 2017. Canada was the first nation to release a national AI strategy. And China has released a 3-year plan to be a leader … if not the leader … in AI.

Understanding AI

So, for those of us whose understanding of AI has heretofore been shaped mainly by Hollywood blockbusters: AI is the science of training systems to emulate specific human tasks through learning and automation. In short, it is a technology that makes it possible for machines to learn from experience, adjust to new inputs and perform specific human tasks, such as pattern recognition, finding anomalies in data, image and video analytics, and more.

Specific to analytics, AI can help programs in government find connections and trends in the data that human analysts might miss due to scale, complexity or other factors, and it can do so at much greater speed. AI can find context in data, gaining insight from previous discoveries to create better outcomes in the future. From an analytics perspective, AI tends to focus on these areas:

  • Machine learning: Machine learning and deep learning find insights hidden in data without explicitly being told where to look or what to conclude. This results in better, faster and more accurate decision-making capabilities (a brief illustrative sketch follows this list).
  • Natural Language Processing: NLP enables understanding, interaction and communication between humans and machines, automatically extracting insights and emerging trends from large amounts of structured and unstructured content.
  • Computer vision: Computer vision analyzes and interprets what’s in an image or video through image processing, image recognition and object detection.
  • Forecasting and optimization: Forecasting helps predict future outcomes, while optimization delivers the best results given resource constraints. This includes enabling large-scale automation for predicting outcomes and optimizing decisions.
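
To make the machine-learning item above concrete, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest. The model is never told which records are anomalous; it learns the shape of the bulk of the data and flags the outliers. The data below is synthetic, purely for illustration:

```python
# A minimal sketch of machine-learning anomaly detection: IsolationForest
# learns what "routine" records look like and flags outliers on its own.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))    # routine records
outliers = rng.uniform(low=-6.0, high=6.0, size=(10, 2))  # irregular records
data = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.02, random_state=0).fit(data)
flags = model.predict(data)                  # -1 = anomaly, 1 = normal
print(int((flags == -1).sum()), "records flagged for human review")
```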

Read full story here…




‘Self-Aware’ Autonomous Robot That Can Repair Itself

Hailed as a major scientific breakthrough, this robot mimics a newborn child in discovering its identity and learning how to relate to its environment. The twist is that it can also repair itself if broken. Autonomous, self-learning robots are the reason that many experts are warning about existential threats to mankind. ⁃ TN Editor

In a major scientific breakthrough, scientists have created a self-aware robot capable of operating on its own without any instructions.

Engineers at Columbia University in New York have reached a pinnacle of robotics invention: a mechanical arm able to programme itself – even after it has malfunctioned.

Professor Hod Lipson, who leads the Creative Machines lab where the research was carried out, likened the robotic arm to a “newborn child” that adapts to its environment and learns things on its own.

The group of scientists claimed this is the first time a robot has shown the ability to “imagine itself” and work out its purpose, figuring out how to operate without inbuilt mechanics. In the study, published in the journal Science Robotics, Prof Lipson said: “This is perhaps what a newborn child does in its crib, as it learns what it is.

“We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans.

“While our robot’s ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness.”

The mechanical arm was designed with no knowledge of physics, geometry or dynamics.

After spending around 35 hours moving at random, and with the help of intensive computation, the mechanism was able to figure out its own capabilities.

Shortly after, the mechanical arm was able to construct a model of its own biomechanics, allowing it to pick up and drop objects with precision.
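
The general recipe, sometimes called "motor babbling" in the robotics literature, can be sketched compactly: issue random commands, record what each one did, fit a model that predicts outcome from command, then plan by querying that model. The toy example below illustrates the technique under simplified assumptions (a three-joint arm reduced to simple kinematics); it is not the Columbia group's actual code:

```python
# A hedged sketch of self-modeling via motor babbling, not the actual
# Columbia code: (1) move at random and record outcomes, (2) fit a
# self-model, (3) plan by querying the model instead of the hardware.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def physical_arm(angles):
    # Stand-in for the real hardware: joint angles -> hand (x, y) position.
    return np.array([np.cos(angles).sum(), np.sin(angles).sum()])

# 1. Random exploration ("motor babbling"): try commands, observe outcomes.
commands = rng.uniform(-np.pi, np.pi, size=(2000, 3))
outcomes = np.array([physical_arm(c) for c in commands])

# 2. Fit a self-model with no built-in physics, geometry, or dynamics.
self_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                          random_state=0).fit(commands, outcomes)

# 3. Use the self-model: pick the command predicted to land nearest a target.
target = np.array([1.5, 0.5])
candidates = rng.uniform(-np.pi, np.pi, size=(5000, 3))
errors = np.linalg.norm(self_model.predict(candidates) - target, axis=1)
best = candidates[np.argmin(errors)]
print("chosen command:", best, "lands at:", physical_arm(best))
```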

The robot also performed other tasks such as writing using a marker.

The researchers 3D-printed a deformed part to simulate damage, to see whether the robot was able to detect the fault and adapt its mechanics.

The arm was able to detect the malfunction and retrained its system to continue performing tasks despite the damaged part.

Read full story here…




Chefs And Truck Drivers: AI Is Coming For Your Jobs

The Technocrats building human-displacing robots realize that they are creating a new class of ‘unemployables’ that will necessitate Universal Basic Income (UBI). Eventually, AI will be recognized as anti-human. ⁃ TN Editor

Robots aren’t replacing everyone, but a quarter of U.S. jobs will be severely disrupted as artificial intelligence accelerates the automation of existing work, according to a new Brookings Institution report.

The report, published Thursday, says roughly 36 million Americans hold jobs with “high exposure” to automation — meaning at least 70 percent of their tasks could soon be performed by machines using current technology. Among those most likely to be affected are cooks, waiters and others in food services; short-haul truck drivers; and clerical office workers.

“That population is going to need to upskill, reskill or change jobs fast,” said Mark Muro, a senior fellow at Brookings and lead author of the report.

Muro said the timeline for the changes could be “a few years or it could be two decades.” But it’s likely that automation will happen more swiftly during the next economic downturn. Businesses are typically eager to implement cost-cutting technology as they lay off workers.

Some economic studies have found that similar shifts toward automating production happened in the early part of previous recessions — and may have contributed to the “jobless recovery” that followed the 2008 financial crisis.

But with new advances in artificial intelligence, it’s not just industrial and warehouse robots that will alter the American workforce. Self-checkout kiosks and computerized hotel concierges will do their part.

Most jobs will change somewhat as machines take over routine tasks, but a majority of U.S. workers will be able to adapt to that shift without being displaced.

The changes will hit hardest in smaller cities, especially those in the heartland and Rust Belt and in states like Indiana and Kentucky, according to the report by the Washington think tank. They will also disproportionately affect the younger workers who dominate food services and other industries at highest risk for automation.

Read full story here…




IBM Launches First Quantum Computer-In-A-Box For Commercial Use

This is a watershed computer technology that will enable Technocracy on every level: practical quantum computing outside of the laboratory. “Q” will find its sweet spot in Artificial Intelligence and the Internet of Everything. ⁃ TN Editor

IBM unveiled the world’s “first universal approximate quantum computing system installed outside of a research lab” at CES earlier this week — and with it, the next era of computing.

The 20-qubit IBM Q System One represents the first major quantum-computing leap of 2019, but before we get into the technical stuff, let’s take a look at this thing.

The commitment to a fully functional yet aesthetically pleasing design is intriguing, especially considering that, just last year, pundits claimed quantum computing was a dead-end technology.

To make the first integrated quantum computer designed for commercial use outside of a lab both beautiful and functional, IBM enlisted the aid of Goppion (the company responsible for some of the world’s most famous museum-quality display cases), along with Universal Design Studio and Map Project Office. The result is not only (arguably) a scientific first, but a stunning machine to look at.

This isn’t just about looks. That box represents a giant leap in the field.

It’s hard to overstate the importance of bringing quantum computers outside of laboratories. Some of the biggest obstacles to universal quantum computing have been engineering-related. It isn’t easy to manipulate the fabric of the universe — or, at a minimum, observe it — and the machines that attempt it typically require massive infrastructure.

In order to decouple a quantum system from its laboratory lifeline, IBM had to figure out how to conduct super-cooling (necessary for quantum computation under the current paradigm) in a box. This was accomplished through painstakingly developed cryogenic engineering.
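
For a sense of what programs for such a machine look like, here is a minimal sketch using Qiskit, IBM's open-source toolkit for its Q systems. It entangles two qubits into a Bell state, the elementary building block that a 20-qubit machine scales up; the example runs on a local statevector simulation rather than actual hardware:

```python
# A minimal Qiskit sketch: prepare a two-qubit Bell state and inspect the
# outcome probabilities via a local statevector simulation (no hardware).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)       # put qubit 0 into an equal superposition of 0 and 1
qc.cx(0, 1)   # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())   # ~{'00': 0.5, '11': 0.5}
```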

Those familiar with the company’s history might recall that, back in the 1940s, IBM‘s classical computers took up an entire room. Eventually, those systems started shrinking. Now they fit on your wrist and have more computational power than all the computers from the mainframe era put together.

It sure looks like history is repeating itself.

Read full story here…




AI Will Take 40 Percent Of White, Blue Collar Jobs In 15 Years

This prediction comes from a venture capitalist based in China who is intent on investing huge sums of capital to make it so. Technocrats have no restraint when it comes to disrupting society, and worse, they have no answers on how to lessen the blows. Lee formerly headed Google’s China operations. ⁃ TN Editor

In as soon as 15 years, 40 percent of the world’s jobs could be done by machines, according to one of the world’s foremost experts on artificial intelligence. Kai-Fu Lee, a pioneer in AI and a venture capitalist based in China, makes this prediction in a Scott Pelley report about AI on the next edition of 60 Minutes, Sunday, Jan. 13 at 7 p.m. ET/PT on CBS.

“AI will increasingly replace repetitive jobs, not just for blue-collar work, but a lot of white-collar work,” says Lee. “Chauffeurs, truck drivers, anyone who does driving for a living – their jobs will be disrupted more in the 15-25 year time frame,” he tells Pelley. “Many jobs that seem a little bit complex, chef, waiter, a lot of things will become automated … stores … restaurants, and altogether in 15 years, that’s going to displace about 40 percent of the jobs in the world.” When pressed by Pelley about 40 percent of jobs being displaced, Lee says the jobs will be “displaceable.”

“I believe [AI] is going to change the world more than anything in the history of mankind. More than electricity,” says Lee.

One of the biggest changes will be in education. Lee is financing companies that are installing AI systems in remote classrooms across China to improve learning for students far from the country’s growing cities. The AI system is being designed to gauge student interest and intelligence by subject.

Could such artificial intelligence identify the geniuses of the world? “That’s possible in the future,” says Lee. “It can also create a student profile and know where the student got stuck so the teacher can personalize the areas in which the student needs help.”

Those students will be facing an uncertain future with 40 percent of the world’s current jobs displaceable. “What does that do to the fabric of society?” asks Pelley. “Well, in some sense, there is the human wisdom that always overcomes these technological revolutions,” Lee says.  “The invention of the steam engine, the sewing machine, electricity, have all displaced jobs. We’ve gotten over it. The challenge of AI is this 40 percent, whether it is 15 or 25 years, is coming faster than the previous revolutions.”

Read full story here…




New Car Tech Will Hoover Data On All Occupants

The Consumer Electronics Show in Las Vegas is a Technocrat bonanza gone wild. Your car used to be a private spot where you could ‘get away from it all’, but now that is over. Technocrats will use every sensor in the car, plus new ones, to monitor and analyze each occupant. ⁃ TN Editor

As vehicles get smarter, your car will be keeping eyes on you.

This week at CES, the international consumer electronics show in Las Vegas, a host of startup companies will demonstrate to global automakers how the sensor technology that watches and analyzes drivers, passengers and objects in cars will mean enhanced safety in the short-term, and revenue opportunities in the future.

Whether by generating alerts about drowsiness, unfastened seat belts or wallets left in the backseat, the emerging technology aims not only to cut back on distracted driving and other undesirable behavior, but eventually help automakers and ride-hailing companies make money from data generated inside the vehicle.

In-car sensor technology is deemed critical to the full deployment of self-driving cars, which analysts say is still likely years away in the United States. Right now, self-driving cars are still mainly at the testing stage.

The more sophisticated in-car monitoring also could respond to concerns that technology that automates some – but not all – driving tasks could lead motorists to stop paying attention and not be ready to retake control should the situation demand it.

When self-driving cars gain broad acceptance, the monitoring cameras and the artificial-intelligence software behind them will likely be used to help create a more customized ride for the passengers. Right now, however, such cameras are being used mainly to enhance safety, not unlike a helpful backseat driver.

Interior-facing cameras inside the car are still a novelty, currently found only in the 2018 Cadillac (GM.N) CT6. Audi (VOWG_p.DE) and Tesla Inc (TSLA.O) have developed systems but they are not currently activated. Mazda (7261.T), Subaru (9778.T) and electric vehicle start-up Byton are introducing cars for 2019 whose cameras measure driver inattention. Startup Nauto’s camera and AI-based tech is used by commercial fleets.

Data from the cameras is analyzed with image recognition software to determine whether a driver is looking at his cellphone or the dashboard, turned away, or getting sleepy, to cite a few examples.
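
For the "getting sleepy" case, one widely published signal is the eye aspect ratio (EAR), a simple geometric measure over facial landmarks that collapses toward zero when the eye closes; an alert fires when it stays low across consecutive frames. A minimal sketch follows, where the landmark coordinates are invented for illustration; a production system would obtain them from a face-landmark detector on every camera frame:

```python
# A minimal sketch of the eye-aspect-ratio drowsiness signal. A system
# alerts when EAR stays below a threshold (commonly ~0.2) for roughly a
# couple of seconds' worth of frames. Landmark values here are invented.
import numpy as np

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks, ordered corner, top x2, corner, bottom x2
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4])) / 2.0
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / horizontal    # falls toward 0 as the eye closes

open_eye = np.array([[0, 0], [2, 2], [4, 2], [6, 0], [4, -2], [2, -2]], float)
closed_eye = np.array([[0, 0], [2, 0.3], [4, 0.3], [6, 0],
                       [4, -0.3], [2, -0.3]], float)

for label, eye in (("open", open_eye), ("closed", closed_eye)):
    print(label, round(eye_aspect_ratio(eye), 2))   # open ~0.67, closed ~0.1
```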

Read full story here…




Tragic: Autonomous Promobot ‘Struck And Killed’ By Self-Driving Tesla

Considering this as a lead-up to the Consumer Electronics Show (CES) in Las Vegas, it could be nothing more than an expensive publicity stunt. Or it could be that Elon Musk’s AI program had it in for the Russian-made AI that controls Promobot. ⁃ TN Editor

Tesla has found itself involved in yet another self-driving car accident – and this time, its victim was a $2,000-per-day rentable humanoid robot.

In what many are speculating was an over-the-top PR stunt, Promobot revealed one of its model v4 robots was ‘killed’ by a Tesla Model S on a Las Vegas street ahead of CES.

The accident occurred on Paradise Rd Sunday night as engineers transported the firm’s robots to the display booth.

According to Promobot, a number of robots were making their way to the booth around 7 p.m. when one of them stepped out of line and into the parking lot roadway.

As it did, it was struck by a Tesla Model S operating in autonomous mode.

The crash tipped the robot onto its side, causing ‘serious damage,’ Promobot says.

Now, with parts of its body, head, arm mechanisms, and movement platform destroyed, it cannot be put on display.

The firm says the damage is likely irreparable.

‘Of course we are vexed,’ said Oleg Kivokurtsev, Promobot’s Development Director.

‘We brought this robot here from Philadelphia to participate at CES. Now it can neither participate in the event nor be recovered.

‘We will conduct an internal investigation and find out why the robot went to the roadway.’

The bizarre news now has many people wondering whether the incident was a PR stunt, or simply an unfortunate coincidence.

The Tesla involved in the collision was operating autonomously, though a passenger was inside at the time.

Read full story here…




Ford To Deploy 5G Vehicle-To-Everything Tech By 2022

Ford Motor Company will be the first to use 5G to enable ubiquitous communication among autos, traffic signals and cell phones. This also gives a clue as to when 5G will be fully rolled out across the nation. ⁃ TN Editor

Don Butler, executive director of the Ford Connected Vehicle Platform and Product, announced in a Medium post on Monday that Ford has committed to deploying cellular vehicle-to-everything (C-V2X) technology in all new U.S. vehicle models starting in 2022.

The C-V2X tech will allow equipped vehicles to “talk” to and “listen” to each other, as well as directly connect with traffic management infrastructure (such as traffic lights). Pedestrians can also use their mobile phones to convey their locations to vehicles, making roads safer for walkers and cyclists.
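
At the data level, that "talking" amounts to vehicles broadcasting small status messages many times per second for anything nearby to act on. Real deployments use standardized formats such as SAE J2735 Basic Safety Messages carried over the C-V2X radio; the simplified structure below is invented purely to illustrate the shape of such a message:

```python
# A hedged illustration of a vehicle status broadcast. Real systems use
# standardized messages (e.g., SAE J2735); this structure is invented.
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class SafetyMessage:
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float        # speed in meters per second
    heading_deg: float      # compass heading in degrees
    brake_applied: bool
    timestamp: float

msg = SafetyMessage("demo-vehicle-001", 42.3314, -83.0458,
                    13.4, 270.0, True, time.time())
payload = json.dumps(asdict(msg))   # broadcast several times per second
print(payload)                      # to nearby vehicles, signals, phones
```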

“Driver-assist technologies today and autonomous vehicles of the future utilize on-board sensors much in the way people use their eyes to navigate complex environments,” Butler wrote. “C-V2X could complement these systems in ways similar to how our sense of hearing complements our vision to improve our ability to operate in a complex world.”

5G isn’t just changing how society will utilize the internet — it’s also transforming how vehicles can connect with their surrounding environment. The C-V2X platform will run on 5G and complement any existing LiDAR, radar and camera sensors for a “comprehensive view” of the road and infrastructure. According to Butler, the timing of this effort by Ford is “perfect,” considering the cellular industry’s push toward building 5G networks. However, the road ahead is still long — Ford acknowledges it must work with fellow automakers and government organizations in order to “create such a technology-neutral environment.”

Successful deployment would significantly improve pedestrian safety and reduce traffic accidents. As cities invest in Vision Zero efforts, there may be advantages to working with automakers such as Ford to enhance these technologies and ensure that they fit into a city’s overall safety goals.

Read full story here…