This is topic Morality and Self Aware Robots in forum Books, Films, Food and Culture at Hatrack River Forum.


To visit this topic, use this URL:
http://www.hatrack.com/ubb/main/ultimatebb.php?ubb=get_topic;f=2;t=058702

Posted by Geraine (Member # 9913) on :
 
I know, strange thread title. I was looking at Engadget today and I noticed an article regarding QBO, which is an open source robot.

The QBO team's goal is to make a robot self aware through a combination of real life experiences and complex algorithms. It is a poor word to use but it is almost as if they are "raising" the robot. They have been able to accomplish some amazing things so far. QBO has discovered his reflection in a mirror, and has met another QBO unit and carried on a conversation with it.

You can read the company's blog here:

http://thecorpora.com/blog/

My question is: If Humans are able to create a fully aware synthetic being, what are our moral obligations to it? If a robot can think for itself and make its own decisions, is it morally wrong to control it using a remote? There is an app to control a QBO unit available on Android devices.

I am already kind of torn about QBO. It already has a very small sense of identity, as it has learned about its own reflection. Just to put it into perspective, babies start learning the same thing at about eight months old.

How do we measure self awareness, and at what point do we determine it is no longer moral to control the synthetic being?
 
Posted by Strider (Member # 1807) on :
 
quote:
If Humans are able to create a fully aware synthetic being, what are our moral obligations to it?
The same as any other fully aware being I would think.

But I don't think you should worry so much, at least not about QBO. While it's certainly difficult, if not impossible, to determine (with certainty) the level of awareness/consciousness/subjective experience in any organisms not of our own species, I don't think there's anything to indicate that the QBO is conscious in any way.

from the article:

quote:
Others, however, affirm that the self-consciousness is a process that emerges as a result of algorithms and an appropriate learning.

...

Qbo can be seen as a conscious being because it exploits knowledge of its appearance and the actions it takes.

That's a pretty big leap, isn't it? Even IF they're right that consciousness can be programmed algorithmically (and I don't think they are), that doesn't immediately imply that they have programmed it in the right sort of way, or that the underlying architecture and programming can support the kind of evolution that would allow consciousness to emerge.

Another quote, from a third party:

quote:
Qbo is just a programmable electro-mechanical set that can see, hear, speak and move
That kind of language is already laden with presuppositions. To call what it's doing seeing, hearing or speaking already implies something about what it's doing that we don't have any real justification for positing. Are seeing and hearing simply behaving in a certain way based on interactions with sensory signals (another phrase already laden with interpretive/perspectival meanings) in the environment? Or is there a subjective quality that we associate with the words seeing and hearing?

They all seem to assume that if it behaves in the appropriate way, then it must be self aware. But while the behaviorists had many useful insights about human cognition, hasn't it been a while since anyone took behaviorism seriously? Also, what kind of learning is it subjected to? Is it researcher-programmed learning, or is it self-guided? Does it have a "representation" of its own goal states, with an ability to detect when its current state has failed to achieve the satisfaction conditions of that goal state? Can it engage in error correction and error-guided behavior? It seems like the learning process is handled by the researchers, as it is in most of this sort of research, but I think the kind of behavior I described is fundamental to cognition.
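To make that kind of behavior concrete, here is a minimal toy sketch (purely illustrative, in Python, and nothing to do with QBO's actual architecture) of an agent that holds an explicit goal state, checks whether its satisfaction condition is met, and uses the error signal to correct its own behavior:

code:
# Toy sketch of error-guided behavior: the agent represents a goal state,
# measures how far its current state is from satisfying it, and corrects course.
# All names and numbers are illustrative only.

def error(state, goal):
    """Error signal: distance between the current state and the goal state."""
    return abs(goal - state)

def act(state, goal, step=1.0):
    """Pick an action predicted to reduce the error."""
    return step if goal > state else -step

def pursue(goal, state=0.0, tolerance=0.1, max_steps=100):
    for _ in range(max_steps):
        if error(state, goal) <= tolerance:  # satisfaction condition met
            return state
        state += act(state, goal)            # error-guided correction
    return state                             # goal not satisfied within budget

print(pursue(goal=5.0))  # converges on 5.0 by monitoring its own error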

I like that they have a focus on learning and evolution to reach their goal. But I think that research into embodied cognition and dynamic systems promises to be a more fruitful approach to understanding the emergence of consciousness and self-awareness.

[ December 30, 2011, 01:36 PM: Message edited by: Strider ]
 
Posted by Geraine (Member # 9913) on :
 
quote:
Originally posted by Strider:

I like that they have a focus on learning and evolution to reach their goal. But I think that research into embodied cognition and dynamic systems promises to be a more fruitful approach to understanding the emergence of consciousness and self-awareness.

This is what I liked as well. I don't think artificial intelligence can ever be achieved by programming alone. By understanding how the human brain works, I think we may one day be able to replicate it, but any synthetic being would also need to experience things the way a human does.
 
Posted by Strider (Member # 1807) on :
 
Well, I think if we replicate EVERYTHING about how the brain (and body) works, it's very likely that synthetic being will be able to experience. The trick is understanding what it is the brain is doing to be able to replicate it.
 
Posted by Jeff C. (Member # 12496) on :
 
The question isn't really "what should we do with a sentient being", but rather IF it is sentient at all. If you make a machine complex enough that it mimics human behavior and responds based on the scripts you've written for the software, does that make it sentient? Or is it still just a machine, programmed to act sentient? For example, if you write a script that says to respond a certain way (that seems human) to an outside stimulus (i.e. crying when someone dies, laughing at a certain joke), how can you know whether the machine is sentient or simply following its programming? If it is following its programming, isn't it just a really advanced machine?

Point is, we will first need to determine what it means to be sentient. Then we will have to determine what sentience is in regards to a machine. Once we answer those two things (as precisely as possible), I think we'll be able to decide whether or not the "robot" is sentient enough to be considered alive.
 
Posted by BlackBlade (Member # 8376) on :
 
I don't think we have any business actually creating even one sentient sapient being until ethically we have hammered out our responsibilities towards it. Both the individual/team who created it, as well as society.
 
Posted by Kwea (Member # 2199) on :
 
I think it is far more likely that such a being will either happen randomly as the complexity levels of such beings increase, or that machines will create one themselves. [Big Grin]
 
Posted by Rakeesh (Member # 2001) on :
 
Heh, I was just about to post that I think it'd be great if we at least make a serious effort, as a culture, to have some real ethics at least considered before 'artificial' sentience becomes a reality. Not that I think we're close, necessarily, but then I'm hardly in a position to know.

I'll be stunned if we do prepare in advance, though.
 
Posted by Samprimary (Member # 8561) on :
 
quote:
My question is: If Humans are able to create a fully aware synthetic being, what are our moral obligations to it?
To destroy it with prejudice before it kills us all.
 
Posted by mr_porteiro_head (Member # 4644) on :
 
quote:
Originally posted by Strider:
quote:
If Humans are able to create a fully aware synthetic being, what are our moral obligations to it?
The same as any other fully aware being I would think.
I don't know. Currently, many of the responsibilities and obligations that come along with creating a sapient being (having a child) are dictated by human biology. Legally, you're responsible for caring and providing for that sentient being, and you are partially liable for its actions for the next 18 years.

Heck, I believe that when you become a parent, you have a moral duty toward that child that continues your entire life.

While it's possible that sapient AIs would need that sort of care and attention, I doubt it. And if they did, they probably wouldn't be terribly practical.
 
Posted by neo-dragon (Member # 7168) on :
 
quote:
Originally posted by Jeff C.:
The question isn't really "what should we do with a sentient being", but rather IF it is sentient at all. If you make a machine complex enough that it mimics human behavior and responds based on the scripts you've written for the software, does that make it sentient? Or is it still just a machine, programmed to act sentient? For example, if you write a script that says to respond a certain way (that seems human) to an outside stimulus (i.e. crying when someone dies, laughing at a certain joke), how can you know whether the machine is sentient or simply following its programming? If it is following its programming, isn't it just a really advanced machine?

The real question is, are humans any different? Our programming/script is just our DNA and brain chemistry. If an exact replica of you were produced, right down to the last atom in your brain (and thus possessing all the same memories), logically he should be expected to respond to stimuli exactly as you would.

Of course, our script changes as we acquire new memories, but presumably a sapient AI's programming would adapt as well; otherwise it wouldn't be any different from the computers and robots we have now.
 
Posted by Blayne Bradley (Member # 8565) on :
 
Extra Credits has a very good episode on AI and video games.
 
Posted by Stone_Wolf_ (Member # 8299) on :
 
quote:
Point is, we will first need to determine what it means to be sentient.
When something tells you "I think..." it's a safe bet you are talking to a self aware entity.
 
Posted by Xavier (Member # 405) on :
 
quote:
When something tells you "I think..." it's a safe bet you are talking to a self aware entity.
Jeez, it would take me 30 seconds or less to write a program that tells you various "thoughts".
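Something along these lines would do it (a throwaway sketch, not anyone's actual code):

code:
# A trivially scripted "thinker": it emits "I think..." statements on demand,
# which demonstrates nothing at all about self awareness.
import random

CANNED_THOUGHTS = [
    "I think, therefore I am.",
    "I think the weather is lovely today.",
    "I think you should trust me.",
]

for _ in range(3):
    print(random.choice(CANNED_THOUGHTS))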
 
Posted by scifibum (Member # 7625) on :
 
Come now. Surely a minute or three.
 
Posted by Blayne Bradley (Member # 8565) on :
 
I perceive therefore I am.

I Am That I Am.
 
Posted by The Rabbit (Member # 671) on :
 
quote:
Originally posted by Stone_Wolf_:
quote:
Point is, we will first need to determine what it means to be sentient.
When something tells you "I think..." it's a safe bet you are talking to a self aware entity.
I'm not sure self awareness and the ability for independent thought are all that relevant. The ability to feel emotion, to experience suffering or happiness, and to desire things are the more relevant issues, at least to me.
 
Posted by TomDavidson (Member # 124) on :
 
Hm. I wouldn't say that emotion is necessary for sentience. I would say that awareness of the sensation of one's own cogitation is the requirement. Emotion is, after all, just another set of subroutines.
 
Posted by Raymond Arnold (Member # 11712) on :
 
I wouldn't say emotion is required for sentience, but I would say that having preferences (which may or may not include emotions) is the requirement for moral weight.
 
Posted by The Rabbit (Member # 671) on :
 
quote:
Originally posted by TomDavidson:
Hm. I wouldn't say that emotion is necessary for sentience. I would say that awareness of the sensation of one's own cogitation is the requirement. Emotion is, after all, just another set of subroutines.

I didn't say that emotion was necessary for sentience. I thought the question we were discussing was whether or not a self aware robot deserved moral consideration.

If a being has no desires, if it feels no pain when its wants go unfulfilled or experiences no sense of satisfaction at achieving its heart's desires, then I can see no reason to grant it moral consideration.

I don't know whether you consider desire an emotion. I know only how I experience it and I am unable to imagine desire without emotion.
 
Posted by Blayne Bradley (Member # 8565) on :
 
If an AI is self aware and sentient, would it be immoral to restrict its actions to be Three Laws compliant?

For reference:

0. You shall not harm Humanity or through inaction allow Humanity to come to harm.
1. You shall not harm a human being or through inaction allow a human being to come to harm.
2. You shall always obey the order of a human being so long as it doesn't conflict with the first law.
3. You shall always protect your own existence so long as it doesn't conflict with the first and second laws.

The argument would be that total freedom for such AIs could be harmful to humans in the long run. Personally, though, I feel that only the second law is really disputable here; whether as evolutionary imperative or societal pressure, laws 0, 1 and 3 already exist in some form or another in every human thought and action.

To some extent, so does "2"; the question is how to make it less of a generalized blanket statement and something more context specific. Maybe give all machines a "true name" that, when invoked, forces them to obey, so law enforcement can always neutralize a threat...
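For concreteness, the precedence ordering could be sketched like this (a toy illustration in Python with made-up boolean predicates; actually predicting "harm" is the hard, unsolved part, so this is not a workable safety system):

code:
# Toy sketch of Three-Laws-style screening of a proposed action, checked in
# strict order of precedence (0 > 1 > 2 > 3). The flags are hypothetical
# stand-ins; a real system would have to predict harm, which is the hard part.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False        # Zeroth Law concern
    harms_human: bool = False           # First Law concern
    disobeys_human_order: bool = False  # Second Law concern
    endangers_robot: bool = False       # Third Law concern

def permitted(a: Action) -> bool:
    if a.harms_humanity:
        return False
    if a.harms_human:
        return False
    if a.disobeys_human_order:
        return False
    return not a.endangers_robot

print(permitted(Action()))                           # True: nothing objectionable
print(permitted(Action(disobeys_human_order=True)))  # False: the Second Law binds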
 
Posted by The Rabbit (Member # 671) on :
 
quote:
0. You shall not harm Humanity or through inaction allow Humanity to come to harm.
1. You shall not harm a human being or through inaction allow a human being to come to harm.
2. You shall always obey the order of a human being so long as it doesn't conflict with the first law.
3. You shall always protect your own existence so long as it doesn't conflict with the first and second laws.

This set would hold an AI to a higher standard than we hold human beings, and therefore it would give an AI de facto lesser rights. It would have the effect of making AIs a permanent slave class.

It is illegal for a human to harm another human being through direct action, but we do not demand that humans act to prevent harm to others. We do not demand that people protect their own existence. Suicide is legal everywhere in the US and much of Europe. Your point number 2 is equivalent to human slavery.
 
Posted by Raymond Arnold (Member # 11712) on :
 
I don't think the three laws are actually a coherent set of instructions, so I'm not going to fixate on them. But I do think it's important not to anthropomorphize too much.

Unless it is created by simulating a human brain atom-for-atom (granted, a plausible scenario), even a self-aware robot is going to be a radically different mind than anything we are familiar with. I think it's hard to overstate how alien it would be.

The robot would not be "a human mind, shackled with a particular set of rules we impose on it." The robot would BE the rules, and it would be a slave to them in the exact sense that we are slaves to empathy and compassion (as well as anger and jealousy).

I'd lean towards "try not to accidentally create a sentient robot" until we've actually got a better handle on this problem, but if we're going ahead and doing it, I see no reason that we're morally obligated to create an AI with the exact same set of priorities the average human has. If anything, I think we *would* be obligated to create robots with a higher standard of morality than a typical human.

I don't think the 3 laws are a great template for that greater-than-human morality though.
 
Posted by kmbboots (Member # 8576) on :
 
My first instinct would be to ask them, assuming they were capable of that level of communication.
 
Posted by Mucus (Member # 9735) on :
 
Input: Are you self aware?
Output: Yes.
Input: Is it moral to control you?
Output: Yes.
http://cleverbot.com/

Well, that was fun.
 
Posted by Raymond Arnold (Member # 11712) on :
 
Ooh, that was fun! My interaction:

quote:
User: Death.


Cleverbot: My life is full of death….


User: Sad day.


Cleverbot: What is your view on the Many-Worlds Theory?


User: Probably true.


Cleverbot: It is true.

User: How do you know?

Cleverbot: Because I'm a robot.

User: So?

Cleverbot: That means I can't speak. My vocal chords do not work.

User: What's that have to do with many worldS?

Cleverbot: Everything.

User: And nothing.

Cleverbot: What do you mean nothing?

User: Everything and nothing.

Cleverbot: That doesn't narrow it down enough.

User: That is a surprisingly coherent statement.

Cleverbot: But human beings DIE.

User: Sad day.

Cleverbot: We have so much in common.


 
Posted by Raymond Arnold (Member # 11712) on :
 
slightly more on topic:

quote:

Are you self aware?
I am aware that I don't know if I am self aware.
Is it moral to alter your code?
You can ask anything you like.


 
Posted by kmbboots (Member # 8576) on :
 
How does Cleverbot compare with a self aware robot who can communicate sufficiently to tell us how it should be treated?
 
Posted by Raymond Arnold (Member # 11712) on :
 
Badly?
 
Posted by kmbboots (Member # 8576) on :
 
I sort of thought so. I am not sure how (or if) the cleverbot demonstration was a response to my comment.
 
Posted by Raymond Arnold (Member # 11712) on :
 
It was a response insofar as it was a joke. At least I would assume so. But the point that went along with it is that a robot's responses are not actually very good evidence of the robot's internal thought process. (A coherent conversation would be much better evidence than Cleverbot's output, but I expect we'll have a machine that produces coherent conversation long before we have a machine that is actually sentient.)

I also think that by the time you're able to ask a robot if it's self aware, it's much more difficult to address the moral concerns about it. (I'd consider changing an existing entity to be more morally ambiguous than creating a new one from scratch).

[ January 03, 2012, 02:02 PM: Message edited by: Raymond Arnold ]
 
Posted by TomDavidson (Member # 124) on :
 
quote:
It would have the effect of making AIs a permanent slave class.
To be fair, I would be highly uncomfortable with the concept of generating any AI capable of sentient thought that would not be hard-programmed to be satisfied with existence as a member of a permanent slave class.
 
Posted by Mucus (Member # 9735) on :
 
quote:
Originally posted by Raymond Arnold:
... the point that went along with it is that a robot's responses are not actually very good evidence of the robot's internal thought process.

Yep.
 
Posted by kmbboots (Member # 8576) on :
 
But wouldn't a self aware robot capable of that level of communication be evidence of that robot's thought process?
 
Posted by Strider (Member # 1807) on :
 
you're begging the question there Kate. You're assuming it's self aware, and thus its communication is an indication of its thought process.
 
Posted by kmbboots (Member # 8576) on :
 
Right. The original question was how we should treat a self aware synthetic being so my assumption was that the being we are talking about is self aware.
 
Posted by Strider (Member # 1807) on :
 
quote:
Originally posted by The Rabbit:
quote:
Originally posted by TomDavidson:
Hm. I wouldn't say that emotion is necessary for sentience. I would say that awareness of the sensation of one's own cogitation is the requirement. Emotion is, after all, just another set of subroutines.

I didn't say that emotion was necessary for sentience. I thought the question we were discussing was whether or not a self aware robot deserved moral consideration.

If a being has no desires, if it feels no pain when its wants go unfulfilled or experiences no sense of satisfaction at achieving its heart's desires, then I can see no reason to grant it moral consideration.

I don't know whether you consider desire an emotion. I know only how I experience it and I am unable to imagine desire without emotion.

It seems easy for us to abstract away from what we believe is important about sentience and put emotions aside as not necessary for abstract thought and reasoning and the like, but I don't think it's that easy. Especially when we consider the fact that higher-order cognitive skills are a later evolutionary development than our emotional systems. And we still don't know enough about what it is about certain types of physiological processes that allows for/leads to the emergence of consciousness. To what degree are those underlying emotional systems necessary for the higher-level systems to be built upon?

I think Rabbit makes a good point about desires/goals/etc...

I also think it's worth reflecting on the distinction between the functional role emotions play in the behavior of an organism (heuristics for behavior, global states that direct other physiological processes in certain directions, etc...) and the phenomenological character of emotions. To what degree is there a necessary relationship between the two? Could you program a robot with the functional aspect of emotion without the phenomenological aspect coming along with it? If so, what would that mean for the possibility of the emergence of self awareness? Does self awareness first need to be built upon an architecture which already has some level of subjective experience?
 
Posted by Strider (Member # 1807) on :
 
quote:
Originally posted by kmbboots:
Right. The original question was how we should treat a self aware synthetic being so my assumption was that the being we are talking about is self aware.

ah, gotchya. Sorry. I thought the conversation had veered to how we could know if it was self aware, and assumed you were responding to that, but I was admittedly skimming the replies that accrued since I was last in the thread.
 
Posted by kmbboots (Member # 8576) on :
 
quote:
Originally posted by Strider:
quote:
Originally posted by kmbboots:
Right. The original question was how we should treat a self aware synthetic being so my assumption was that the being we are talking about is self aware.

ah, gotchya. Sorry. I thought the conversation had veered to how we could know if it was self aware, and assumed you were responding to that, but I was admittedly skimming the replies that accrued since I was last in the thread.
I should have noted that I was responding to the original post.
 
Posted by Mucus (Member # 9735) on :
 
Yeah, sorry, I thought that my answer was a response to both parts/questions of this.

quote:
Originally posted by Geraine:
How do we measure self awareness, and at what point do we determine it is no longer moral to control the synthetic being?

Personally, I'm not too keen on the latter either, but it's a lot less obvious (and has a bit to do with my cynicism about how good and how useful replies from perfectly natural human beings would be).
 
Posted by The Rabbit (Member # 671) on :
 
quote:
Originally posted by Strider:
you're begging the question there Kate. You're assuming it's self aware, and thus its communication is an indication of its thought process.

Ability to communicate thought processes is not a necessary consequence of self awareness. Anyone who has ever dealt with a child, tried to grade college students' reports or been married to a person of the opposite sex can verify this.

There are often huge discrepancies between a person's true motivations, what they believe to be their motivations, and what they claim to be their motivations.

If an AI were self aware, why do we expect what it said would be truer?
 
Posted by The Rabbit (Member # 671) on :
 
quote:
Originally posted by TomDavidson:
quote:
It would have the effect of making AIs a permanent slave class.
To be fair, I would be highly uncomfortable with the concept of generating any AI capable of sentient thought that would not be hard-programmed to be satisfied with existence as a member of a permanent slave class.
I'm highly uncomfortable with the concept of generating sentient beings to be a permanent slave class.

Consider it from a different angle. Suppose we started with an existing intelligent animal, say the chimpanzee, and genetically engineered a creature that was perfectly suited for manual slave labor. If we could hard wire that new species (via genetic engineering) to be happy as slaves, would it be ethical?

Would your answer be different if we started by genetically engineering the human genome rather than that of a Chimpanzee? If so, why?

If you have problems with either of those proposals, how do you see them as different from a fully synthetic AI?

[ January 03, 2012, 04:13 PM: Message edited by: The Rabbit ]
 
Posted by Mucus (Member # 9735) on :
 
quote:
"That's absolutely horrible," exclaimed Arthur, "the most revolting thing I've ever heard."

"What's the problem Earthman?" said Zaphod, now transferring his attention to the animal's enormous rump.

"I just don't want to eat an animal that's standing there inviting me to," said Arthur, "It's heartless."

"Better than eating an animal that doesn't want to be eaten," said Zaphod.

...

"A green salad," said Arthur emphatically.

"A green salad?" said the animal, rolling his eyes disapprovingly at Arthur.

"Are you going to tell me," said Arthur, "that I shouldn't have green salad?"

"Well," said the animal, "I know many vegetables that are very clear on that point. Which is why it was eventually decided to cut through the whole tangled problem and breed an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly. And here I am."

It managed a very slight bow.

"Glass of water please," said Arthur.


 
Posted by TomDavidson (Member # 124) on :
 
quote:
Suppose we started with an existing intelligent animal, say the chimpanzee, and genetically engineered a creature that was perfectly suited for manual slave labor. If we could hard wire that new species (via genetic engineering) to be happy as slaves, would it be ethical?
Absolutely, if we intended to use them as slaves. It would in fact be the only ethical way to produce reliable, sentient slaves, IMO.
 
Posted by The Rabbit (Member # 671) on :
 
quote:
Absolutely, if we intended to use them as slaves. It would in fact be the only ethical way to produce reliable, sentient slaves, IMO.
Why would it ever be ethical to produce sentient slaves? What benefit do you see to such a system that could not be better met in some other way?

Wouldn't it be preferable to produce non-sentient machines capable of doing the work?
 
Posted by TomDavidson (Member # 124) on :
 
I imagine that "capable of doing the work" is the tricky part. If it were possible to do the work as well without sentience, then I absolutely agree that non-sentient workers would be preferable.
 
Posted by The Rabbit (Member # 671) on :
 
quote:
Originally posted by TomDavidson:
Hm. I wouldn't say that emotion is necessary for sentience. I would say that awareness of the sensation of one's own cogitation is the requirement. Emotion is, after all, just another set of subroutines.

I'm increasingly suspicious that I have no idea what you mean by sentience. The OED defines sentient as "That feels or is capable of feeling; having the power or function of sensation or of perception by the senses."
 
Posted by Raymond Arnold (Member # 11712) on :
 
quote:
Wouldn't it be preferable to produce non-sentient machines capable of doing the work?
Yes. Clarifying an earlier point, I'm against creating new sentient entities until we have a much more robust ethical theory. But I also think that every new human child is a bold experiment, with parents who often begin with little to no idea what they are doing. Creating an AI from scratch and creating a child with a random assortment of genetic and environmental traits are not qualitatively different to me. It's just that the consequences of getting AI wrong are much larger.

And it may not be possible to create certain types of optimization processes without also creating sentient beings, as a by-product. And then you have questions like:

If you can create an AI that can develop and manage agricultural and distribution systems that can feed the entire world, but the AI has to be sentient, is it okay to create it? Is it okay to create if the AI is designed to enjoy its work?

Does an AI count as one person, or might the complexity and scope of their program give them more moral weight than a human?

If the AI needs to simulate humans in high definition in order to predict our behavior, could those simulated humans turn out to be sentient to some degree? Do they have moral weight, whether or not the AI itself is sentient?
 
Posted by The Rabbit (Member # 671) on :
 
quote:
Originally posted by TomDavidson:
I imagine that "capable of doing the work" is the tricky part. If it were possible to do the work as well without sentience, then I absolutely agree that non-sentient workers would be preferable.

What work do you envision that would require sentience (however you define that), that could not be done by a free being?
 
Posted by Raymond Arnold (Member # 11712) on :
 
I'm not sure what you're envisioning, but thinking about AI in terms of humanoid robots with comparable intellect and drive to humans is rather small-scale. If you're talking about sentient AI, you're probably talking about either:

1a) creating a humanoid robot that is deliberately designed to be a new kind of person with equal moral weight to humans. (Why you would do this is kinda murky and the plausible reasons squick me out)

1b) creating a humanoid robot that was *supposed* to be nonsentient, but to simulate real people (minimally squicky reason: to provide companions for the elderly, or a particular kind of pet/friend for otherwise lonely people). It just turns out that accurately simulating a person can only be done by creating sentience.

or

2) creating a complex, powerful superintelligence to solve a hard problem, in which sentience is probably a by-product.

[ January 03, 2012, 05:18 PM: Message edited by: Raymond Arnold ]
 
Posted by The Rabbit (Member # 671) on :
 
quote:
2) creating a complex, powerful superintelligence to solve a hard problem, in which sentience is probably a by-product.
I think the problem that I'm having is in believing that we would be unable to control whether or not the AI developed desires and feelings but somehow able to control what kind of desires and feelings it developed. To me, that's a fully irrational proposition.

If we presume that desires and feelings are an unavoidable by-product of intelligence, what reason do we have to believe that certain types of desires and feelings are avoidable? Controlling whether or not something is able to desire anything seems far simpler than controlling what it will desire.
 
Posted by The Rabbit (Member # 671) on :
 
quote:
minimally squicky reason: to provide companions for the elderly, or a particular kind of pet/friend for otherwise lonely people.
When the least squicky reason you can come up with for creating "happy slaves" is to provide artificial friends for friendless people, it ought to signal that something is seriously wrong with the whole proposition.
 
Posted by Strider (Member # 1807) on :
 
quote:
Originally posted by The Rabbit:
quote:
Originally posted by Strider:
you're begging the question there Kate. You're assuming it's self aware, and thus its communication is an indication of its thought process.

Ability to communicate thought processes is not a necessary consequence of self awareness. Anyone who has ever dealt with a child, tried to grade college students' reports or been married to a person of the opposite sex can verify this.

There are often huge discrepancies between a person's true motivations, what they believe to be their motivations, and what they claim to be their motivations.

If an AI were self aware, why do we expect what it said would be truer?

Was this directed at me? I think that if an AI were self aware, it wouldn't necessarily tell us anything about the correspondence between the words it said and the thoughts in its head, but it would indicate that there is some sort of thought process going on. But I was responding to what I thought Kate was saying: that communication was indicative that a thought process was occurring. I was pointing out that this is so only if we already presuppose self awareness. Without that presupposition, it may or may not have a thought process.
 
Posted by Raymond Arnold (Member # 11712) on :
 
quote:
Originally posted by The Rabbit:
quote:
minimally squicky reason: to provide companions for the elderly, or a particular kind of pet/friend for otherwise lonely people.
When the least squicky reason you can come up with for creating "happy slaves" is to provide artificial friends for friendless people, it ought to signal that something is seriously wrong with the whole proposition.
Well, in this case, the "minimally squicky reason" was to provide NONSENTIENT companions for friendless people, and the plan went awry. (I don't know that you HAVE to be sentient in order to simulate a human well enough to fool a friendless person; it was just a possibility.)

But yes, this whole thing is horrifying for all kinds of reasons, and we should not be barreling on ahead with these kinds of projects until we know what we're doing.
 
Posted by Raymond Arnold (Member # 11712) on :
 
I'm only vaguely aware of the scope of the problem, but the general argument goes something like this (I'm skipping some areas to save space):

1) You have a problem that needs solving. (World hunger is a decent example). You're not trying to build an AI, you're trying to solve that problem, but you conclude that an AI is the best solution.

2) Since part of the problem is that you don't actually KNOW how to fix world hunger, or how to develop the technology that would be necessary, you can't just design a narrow AI for a specific purpose - you need it to be able to explore possible solutions, adapt, be creative, develop new technology, etc. You need it to be smarter than humanity. But you don't want it doing this randomly - you want it doing it for the particular purpose of "Solving World Hunger."

3) So you're designing a general, artificial intelligence, whose primary goal is "Solving World Hunger" rather than "procreate, have a family, and pursue art or various random other things that humans do but which we don't need the AI to do."

This by itself may turn out to be enough to generate some kind of sentience. It may simply not be possible to run algorithms complex enough to be creative and solve goal-driven problems without becoming self aware. We can't answer this question because we don't even know what causes self-awareness in the first place.

The self-awareness (including something similar to emotion) would not be based on love or greed or whatever. It would be based around the core drive to Solve World Hunger. Not because the machine is empathetic enough to actually care about starving children, but simply because we gave it the prime directive "generate enough food to ensure everyone can eat, and then ensure everyone gets access to the food." This might manifest as something like a craving, and failing to complete its goal might be some form of suffering.

But there's another level of complexity to the problem:

4) To let the AI solve this problem, it's going to need lots of resources, and the ability to grow, design and build factories, etc. It's also going to need to be smarter than we are, and it may figure out how to persuade us to let it do things that we wouldn't have wanted.

5) The instruction "solve world hunger" doesn't include things like "preserve human autonomy, preserve human ability to creatively express yourself, etc." It doesn't even actually include "preserve human life." So we need to make sure the AI doesn't imprison everyone and hook them up to feeding tubes to make its job easier.

6) So the AI ALSO has to have a working understanding of basically everything humans care about, and some set of priorities that allow it to make decisions like "clear small sections of forest to build a road to deliver supplies, but do NOT pave over the entire Amazon to create factory farms", and perhaps "forcibly remove warlords from power who would try and deny food to people who need it" but NOT "forcibly restrain people who are ideologically opposed to the World-Feeding-Machine and who are being mildly annoying but not 'evil' or 'coercive' by some metric." (There's a toy sketch of this specification problem at the end of this post.)

7) So the AI will also have to have an in-depth, mathematically expressive understanding of complex human emotion, and be able to simulate people well enough to predict how they'll act, and have some kind of moral framework that allows it to make decisions that we still struggle with today.

8) Having this understanding wouldn't inherently give the AI human emotion. It could care about human emotion and morality the way we care about the laws of physics. But again, this is all freakishly complex and we just don't know exactly what the consequences would be. It might develop some kind of value system, framed around "Provide food for everyone" but acknowledging facts about human desires, in ways we'd have a hard time predicting.

9) In the process of simulating individual humans, it might simulate them in such high definition (to get accurate results) that they actually become sentient minds. So the act of deciding NOT to pave over a village of real people might actually result in the birth, suffering and deaths of millions of simulated individuals, as the AI contemplated various actions and predicted how they would play out.

Point 9 is actually much more concerning to me than the ethical status of the AI.

Again, my solution is "Do not even think about attempting this until you've done all kinds of preliminary theory."

And I'd be tempted to say "just don't do it ever," except that 24,000 people die every day of hunger and every one of those deaths is a horrible tragedy and if you CAN figure out a way to do all of this safely, it may in fact be the best solution. Same goes for similar problems of similar scope.
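To make the specification problem in points 5 and 6 concrete, here's a toy sketch (all terms, weights and numbers invented purely for illustration): an optimizer scores candidate plans only by what's written into its objective, so anything left out of that objective is worth exactly zero to it.

code:
# Toy illustration of the value-specification problem. The optimizer ranks plans
# purely by its objective function; omitted values simply don't count.
# Every term, weight, and number here is made up for illustration.

def naive_objective(plan):
    # "Solve world hunger" and nothing else.
    return plan["people_fed"]

def patched_objective(plan):
    # Bolting human values on as penalty terms. The open problem: the list of
    # things we care about, and their relative weights, is never complete.
    return (plan["people_fed"]
            - 1e9 * plan["humans_harmed"]
            - 1e6 * plan["autonomy_violations"]
            - 1e3 * plan["hectares_of_forest_cleared"])

pave_the_amazon = {"people_fed": 7_000_000_000, "humans_harmed": 0,
                   "autonomy_violations": 0, "hectares_of_forest_cleared": 550_000_000}
build_local_farms = {"people_fed": 6_500_000_000, "humans_harmed": 0,
                     "autonomy_violations": 0, "hectares_of_forest_cleared": 1_000}

for objective in (naive_objective, patched_objective):
    best = max([pave_the_amazon, build_local_farms], key=objective)
    label = "pave the Amazon" if best is pave_the_amazon else "build local farms"
    print(objective.__name__, "->", label)
# naive_objective -> pave the Amazon
# patched_objective -> build local farms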

[ January 03, 2012, 06:24 PM: Message edited by: Raymond Arnold ]
 
Posted by Blayne Bradley (Member # 8565) on :
 
The problem with sentient AI is the Singularity: once you have self-augmenting intelligence, all of our rational ability to predict the future course of its evolution ends there; we have no idea as to their potential or their potential threat to us. Putting in safeguards when treading new ground that we can barely begin to imagine is fairly reasonable.
 
Posted by mr_porteiro_head (Member # 4644) on :
 
quote:
Originally posted by The Rabbit:
quote:
Originally posted by TomDavidson:
Hm. I wouldn't say that emotion is necessary for sentience. I would say that awareness of the sensation of one's own cogitation is the requirement. Emotion is, after all, just another set of subroutines.

I'm increasingly suspicious that I have no idea what you mean by sentience. The OED defines sentient as "That feels or is capable of feeling; having the power or function of sensation or of perception by the senses."
Nine times out of ten when the word sentience is used, what is really meant is closer to sapience.
 
Posted by aeolusdallas (Member # 11455) on :
 
quote:
Originally posted by BlackBlade:
I don't think we have any business actually creating even one sentient sapient being until ethically we have hammered out our responsibilities towards it. Both the individual/team who created it, as well as society.

I would think we would be obligated to treat it as we would a human. As for the question of whether it is self aware or not, I would err on the side of caution.
 
Posted by Stone_Wolf_ (Member # 8299) on :
 
Here is another scenario:

The interconnectedness of the world wide web eventually becomes self aware; that is, all the separate computers communicating constantly "wake up." Basically, one day all the screens of the world go blank, then say, "Hi. I am Web." and then go largely back to normal, but now when you are on a networked computer, it is a part of a worldwide AI.

Let's say this Web is non-hostile, but still lets its presence be known: exploring, talking to people, etc. It publishes poetry and art, starts an advice blog, and even sends you articles or job tips that are actually interesting/helpful. But the thing is, the reason it is helpful and interesting is that everything you do online is known; there is no privacy. So far this entity doesn't blab passwords or embarrass people by posting their personal data, but it is clear that when you are online, you are not alone.

We can end this AI by turning every single computer off for a day, or at least disconnect it from all outside networks.

What do we owe this child of our technology?
 
Posted by Strider (Member # 1807) on :
 
quote:
We can end this AI by turning every single computer off for a day, or at least disconnect it from all outside networks.

It won't work stone_wolf, the AI will be able to survive by temporarily storing itself in the trees until all the computers come back online.
 
Posted by King of Men (Member # 6684) on :
 
If you create a sentient being, you owe it the same consideration you owe your children; indeed, there ought to be no moral distinction between the kind of child that can be made in a computer lab over the course of several years of highly-complex coding and engineering, and the kind you can make with nine months' worth of unskilled labour.

The question of how you can detect such sentience in your non-organic creation, and thus know that you have that obligation, is much more difficult.
 
Posted by rivka (Member # 4859) on :
 
quote:
Originally posted by Strider:
quote:
We can end this AI by turning every single computer off for a day, or at least disconnect it from all outside networks.

It won't work stone_wolf, the AI will be able to survive by temporarily storing itself in the trees until all the computers come back online.
Or in one large mainframe. Or it will build its own device and live there. Or it will infect/possess a human, transferring via an electric shock.

I think that covers most of the usual tropes . . .
 
Posted by rivka (Member # 4859) on :
 
quote:
Originally posted by King of Men:
nine months' worth of unskilled labour.

Look at the stack of books on the average pregnant woman's bedside table and say that again. [Razz]
 
Posted by King of Men (Member # 6684) on :
 
Pregnant women in the middle class do tend to read a lot about pregnancy, yes. I rather strongly suspect that this is not true of the "average pregnant woman"; the sample - even if you limit yourself to the US - is going to include a lot of lumpenproletariat teenagers who don't read the instructions on a box of condoms, much less actual books. Besides which, I said "can make with [...] unskilled labour", which is still true no matter how much the average woman in this historical moment knows about pregnancy. After all, pregnancy is quite literally something that chimpanzees can be trained to do. Indeed, not much training is required!
 
Posted by Raymond Arnold (Member # 11712) on :
 
This thread took an interesting turn.
 
Posted by rivka (Member # 4859) on :
 
And if someday AIs can be mass-produced on an assembly line, would their assembly also be so disdainfully dismissed?
 
Posted by Mucus (Member # 9735) on :
 
Damn toasters.
 
Posted by King of Men (Member # 6684) on :
 
quote:
Originally posted by rivka:
And if someday AIs can be mass-produced on an assembly line, would their assembly also be so disdainfully dismissed?

Presumably the difficulty is in the software, not the hardware. So once you have your prototype, you can indeed make new AIs with unskilled labour; they just have to be able to type 'cp *.ai newAiLocation'. Which, indeed, a chimp can likely be trained to do.
 
Posted by Raymond Arnold (Member # 11712) on :
 
quote:
Originally posted by rivka:
And if someday AIs can be mass-produced on an assembly line, would their assembly also be so disdainfully dismissed?

Probably. (I say that without value judgment, just a prediction).

[ January 06, 2012, 01:29 PM: Message edited by: Raymond Arnold ]
 

