View Full Version : Sentient AI conspiracy



binlargin
21st Jan 2009, 03:56
Here's a conspiracy for all you proponents of sentient AI.

1) Internal experience (consciousness) cannot be proved by anyone other than the beholder. We cannot know for sure if something has any internal experience because we don't understand what consciousness really is.

2) Mechanisms and computer programs can have intelligence but may not have internal experience. Is it feasible that optical character recognition (AI using an artificial neural network) has any internal conscious experience? I don't think so; it's just a program.

3) We're on the brink of a technological revolution that will produce software with far greater intelligence than human beings. This software may appear to be sentient, but without understanding the mechanism behind internal experience it could all be a simulation, a clever ruse that fools us into thinking programs are conscious, living beings worthy of our empathy.

4) These advanced computer programs will be owned by the corporations, the intellectual property bought up by the highest bidders.

5) It may be in people's interests to manipulate public opinion using argument from emotion, to promote laws that "free" these corporate agents from "slavery" and give them equal rights - even though they do not actually "feel" anything.

6) They will be better than humans at all tasks. They will out-breed us, out-think us and eventually supersede us.

7) The world will be a hollow place of no feeling, only fake plastic simulations of conscious thought. The cycle of life will begin anew from mechanical processes, and in a few billion years might develop consciousness again, but it may not.

Conclusion: We need to work out what conscious thought consists of before giving rights to so-called sentient AI, because currently we know that microchips can become intelligent but we don't know if they can actually feel anything.
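The intuition in point 2 - that a recognizer is "just a program" - can be made concrete. Here's a toy sketch (not real OCR; the bitmaps, weights, and threshold are all invented for illustration): a single artificial neuron telling a 3x3 "X" from an "O" by nothing more than a weighted sum.

```python
def classify(bitmap, weights, bias):
    # Weighted sum of pixel values followed by a threshold: the entire
    # "decision" is this one arithmetic expression.
    activation = sum(p * w for p, w in zip(bitmap, weights)) + bias
    return "X" if activation > 0 else "O"

# An X lights the diagonals; an O lights the ring around the centre.
x_img = [1, 0, 1,
         0, 1, 0,
         1, 0, 1]
o_img = [1, 1, 1,
         1, 0, 1,
         1, 1, 1]

# Hand-picked weights: positive on the centre pixel (distinctive of X),
# negative on the edge midpoints (distinctive of O).
weights = [ 0, -1,  0,
           -1,  2, -1,
            0, -1,  0]

print(classify(x_img, weights, 0))  # activation =  2 -> "X"
print(classify(o_img, weights, 0))  # activation = -4 -> "O"
```

Wherever you stand on point 2, it's hard to locate any "experience" in that arithmetic.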

LatwPIAT
21st Jan 2009, 07:30
Why should conscious people have "freedom" and "rights?" We need to establish that first, before we start debating P-zombies.

iWait
21st Jan 2009, 07:43
Intelligence implies applying past experiences to present problems, which computers today fail miserably at.

The human brain is a computer, albeit a vastly complex one; it is still programmable to a degree. And emotions are a neural process. Is it love if somebody takes a drug mimicking the chemical change in the brain, even if there is no real-life basis for that emotion?

René
21st Jan 2009, 15:28
Very interesting post indeed. Sounds like the Technological Singularity (http://en.wikipedia.org/wiki/Technological_singularity). As Bender would say: "Yep, we're boned."

http://upload.wikimedia.org/wikipedia/commons/thumb/d/df/PPTExponentialGrowthof_Computing.jpg/703px-PPTExponentialGrowthof_Computing.jpg
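Charts like that one are extrapolations of simple doubling arithmetic. A back-of-the-envelope sketch, where every number (the doubling period, today's ops/s, the ~1e16 ops/s "brain-equivalent" figure) is an assumption of the kind quoted around such charts, not an established fact:

```python
import math

def years_until(target_ops, current_ops, doubling_years=2.0):
    """Years until compute reaches target_ops, assuming steady doubling."""
    doublings = math.log2(target_ops / current_ops)
    return doublings * doubling_years

# Hypothetical figures: 1e12 ops/s available today, 1e16 ops/s as one
# commonly quoted guess at human-brain-equivalent computation.
print(round(years_until(1e16, 1e12), 1))  # ~26.6 years under these assumptions
```

Change any of the inputs and the "crossover" date moves by decades, which is why such projections should be read with caution.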

Necros
21st Jan 2009, 16:26
Hm, according to that graph, we could meet an AI prototype in DX3. :)

Behindyounow
21st Jan 2009, 19:32
5) It may be in people's interests to manipulate public opinion using argument from emotion, to promote laws that "free" these corporate agents from "slavery" and give them equal rights - even though they do not actually "feel" anything.



Although they may look and sound human on the screen or whatever, how do you know they'll think like a human? Most likely their mindsets will be completely alien.

To humans, they may seem unfeeling and emotionless, but it'll most likely be because their thought processes are much more complicated.

Spyhopping
21st Jan 2009, 23:51
I didn't read the Technological Singularity wiki article in its entirety, but one thing that sprang to mind from what I did read was the 'fail-safe' you see for AIs in science fiction. For example, the replicants in Blade Runner have a short lifespan (was it four years?).

Perhaps forcing some sort of permanent ceiling to intelligence/cognition would be a partial solution. It gets too high, the AI can't cope with it, and boom!

binlargin
22nd Jan 2009, 02:46
Why should conscious people have "freedom" and "rights?" We need to establish that first, before we start debating P-zombies.
Good point. I would say that conscious people deserve freedom and rights purely because they can feel, but I see that this is a completely circular argument. This is the problem with debates about ethics: you can't avoid argument from emotion when the argument is about emotion.


Although they may look and sound human on the screen or whatever, how do you know they'll think like a human? Most likely their mindsets will be completely alien.

To humans, they may seem unfeeling and emotionless, but it'll most likely be because their thought processes are much more complicated.

Well, an AI bartender, teacher, house robot, or anything that primarily interacts with humans on an emotional level will have to display the illusion of thinking like a human in order to get along with humans.

My argument is that *if* the physical structure of biological brains causes the internal experience (google: The Emperor's New Mind), then silicon chips could never "feel" anything or have a mind as such, they could only simulate this to fool an external party. If the opposite is true (google: Justifying and Exploring Realistic Monism), then we could have software minds that actually feel, and (imo) they should be given the same rights as humans.

binlargin
22nd Jan 2009, 02:55
Intelligence implies applying past experiences to present problems, which computers today fail miserably at.

People used to say that about voice recognition, optical character recognition, object recognition, spatial awareness, driving a car and so on. Each time we try to define intelligence by choosing a hard problem that computers can't yet do, we're eventually proven wrong by some clever software and then change the definition of intelligence. The problem of conscious internal experience is a completely different one.

Malah
22nd Jan 2009, 03:04
(imo) they should be given the same rights as humans.

Another one... Describe a world where humans live side by side with such a machine "race"?

binlargin
22nd Jan 2009, 03:19
Another one... Describe a world where humans live side by side with such a machine "race" ?

If they had internal experience then I would have no problem with them wiping out the human race. The ideal situation would be a slow transition from humanity -> transhumanity -> posthumanity, with machine intelligence being part of ourselves and our way of life, and us eventually existing only digitally.

I don't think it would be a single machine race; we'd have a machine ecosystem with many competing and evolving genera. If you fancy a bit of fiction on the subject, check out Accelerando by Stross (free online).

Malah
22nd Jan 2009, 04:58
[[ This was the reply I had in mind when I made my post.]]
I bet you're one of those people... You know... The ones who think fish should be called sea kittens. :nut: :rasp:
-----

My second post was meant to be a rant about your naive fairyland. I'm glad you didn't have one of those. I agree with most of what you said and will definitely read the book.

I would give this post more substance, but it's 7AM and I really need some sleep. I simply didn't want to leave it at that, with me looking like a loony because of that aggressive post. ; )

I'll give you a serious reply later.

FrankCSIS
22nd Jan 2009, 05:07
The biggest issue, before even thinking about legislating on such a principle, is our lack of understanding of our very own existence. If we can't define existence, we can't possibly determine at which point a machine exists, let alone whether it exists on the same level as us.

If intelligence had been defined by our ability to play chess, we'd already have at least one machine with rights.

As such,


The human brain is a computer

is a highly debatable proposition that I am nowhere near prepared to agree with. Nor would I consider the graphic presented relevant to the legal question. On what knowledge was it decided and established that the core duty of our brain is to calculate information, and where exactly is the breaking point at which we supposedly meet eye to eye? Are we going to measure the average calculation ability of a brain, like the ridiculous concept of an average IQ of 100, or will we take into account extraordinary phenomena, and then adjust our scale as the human brain evolves?

If we base it on experiencing life, things get even more complicated. Trees experience life, at least to some degree. As for machines, at what point do we decide that they do, still assuming we actually have a comprehension of what life perception is? Do we go by sensors? By the proven expression of a conscious, independent thought?

Blade Runner, like many other works of fiction, addressed those issues, but no one ever answered them convincingly. Had they done so, we'd have at least some definitions and comprehension of the phenomena to agree on. If anything, it only sheds more doubt on our own existence.