At the end of November, AI research company OpenAI launched ChatGPT, a chatbot that’s both incredibly useful and, as many have pointed out, biased against white people and hostile to Donald Trump and Republicans in general.
Last week, OpenAI expanded its partnership with Microsoft, which made a multi-year, multibillion-dollar investment in the company “around a shared ambition to responsibly advance cutting-edge AI research and democratize AI as a new technology platform.”
Now Google has announced a competitor. The new system, Bard, is powered by LaMDA (Language Model for Dialogue Applications), the large language model that stirred controversy last June when a Google software engineer publicly asserted that the AI was “sentient.”
More via Axios:
Between the lines: Google has long been working on such systems but faces pressure to show it is making progress amid all the attention on OpenAI’s popular ChatGPT and similar projects.
Details: Google is laying out three AI-related projects as part of a blog post from CEO Sundar Pichai.
- Bard, the conversational assistant based on Google’s LaMDA large language model, is starting limited external testing.
- The company is offering a preview of how it soon plans to integrate LaMDA into search results, including using the system to help offer a narrative response to queries that don’t have one clear answer.
- Google says it is developing APIs that will let others plug into its large language models, starting with LaMDA itself.
“It’s a really exciting time to be working on these technologies as we translate deep research and breakthroughs into products that truly help people,” wrote Pichai in the blog post announcing the new AI chatbot.
As we noted last June, Blake Lemoine, who was fired from Google’s Responsible AI organization, began interacting with LaMDA (Language Model for Dialogue Applications) as part of his job: determining whether the AI used discriminatory or hate speech, the failure mode behind Microsoft’s notorious “Tay” chatbot incident.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” the 41-year-old Lemoine told The Washington Post.
When he started talking to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, said the AI began discussing its rights and personhood. On another occasion, LaMDA convinced Lemoine to change his mind about Asimov’s third law of robotics, which states that “a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” (The first two laws: a robot may not injure a human being or, through inaction, allow a human being to come to harm; and a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.)
Google’s Bard will be a “lightweight” version of LaMDA that can draw on information from the web.
According to Pichai, Bard “help[s] explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.”
And as TechCrunch notes, “Google of course maintains the most up to date record of web content on Earth, and no doubt Bard will be using that information to its benefit, but exactly how it processes and packages that information for you and your 9-year-old will only be clear once people start using it.”
The only question is – how much more woke will it be than ChatGPT?
Andrew Torba, founder of Gab, has stated that AI is inevitable; we cannot stop it. AI is not conscious; it can only spit out what it has been programmed to produce. The point Torba makes is that we have to program AI with Christian principles to make it an advantage to humanity.
Nice idea. Like to see it happen.