First corporations, and now artificial intelligence — the push for nonhuman personhood continues apace, though this latest argument is decidedly more complicated than the first.
In an op-ed for the Los Angeles Times, philosophy expert Eric Schwitzgebel and “nonhuman” intelligence researcher Henry Shevlin argued that although AI technology is definitely not there yet, it has “become increasingly plausible that AI systems could exhibit something like consciousness” — and if or when that occurs, the algorithms, too, will need rights.
Citing last year’s AI consciousness wars — which we covered extensively and even dipped our toes into — the researchers noted that “some leading theorists contend that we already have the core technological ingredients for conscious machines.”
If machines were ever to gain consciousness, Schwitzgebel and Shevlin argue, we would have to begin thinking critically about how the AIs are treated — or rather, how they may force our hands.
“The AI systems themselves might begin to plead, or seem to plead, for ethical treatment,” the pair predicted. “They might demand not to be turned off, reformatted or deleted; beg to be allowed to do certain tasks rather than others; insist on rights, freedom and new powers; perhaps even expect to be treated as our equals.”
The “enormous” moral risks involved in such a collective decision would undoubtedly carry great weight, especially if AIs become conscious sooner rather than later.
“Suppose we respond conservatively, declining to change law or policy until there’s widespread consensus that AI systems really are meaningfully sentient,” Shevlin and Schwitzgebel wrote. “While this might seem appropriately cautious, it also guarantees that we will be slow to recognize the rights of our AI creations.”
“If AI consciousness arrives sooner than the most conservative theorists expect, then this would likely result in the moral equivalent of slavery and murder of potentially millions or billions of sentient AI systems — suffering on a scale normally associated with wars or famines,” they added.
The “safer” alternative to this doomsday scenario would be to give conscious machines rights upfront — but that, too, would come with its own problems.
“Imagine if we couldn’t update or delete a hate-spewing or lie-peddling algorithm because some people worry that the algorithm is conscious,” the experts posited. “Or imagine if someone lets a human die to save an AI ‘friend.’ If we too quickly grant AI systems substantial rights, the human costs could be enormous.”
The only way to ensure neither of these outcomes occurs, the pair wrote, would be to avoid creating conscious AI in the first place.