AI
Posted:
Tue Dec 04, 2007 9:32 am
by Westfall
There are several issues that are fascinating to me, notably the rate of technological advancement. A person born in 1000 AD experienced very little technological change. So far, I've seen the world change so rapidly that the world I was born in would seem foreign & awkward to me now. What will the world look like in 2015? Will it even be recognizable?
We are getting quite close to developing a genuine AI. How will this impact society? What about the many questions AI brings up?
There are many ethical problems associated with working to create intelligent creatures.
AI rights: if an AI is comparable in intelligence to humans, then should it have comparable moral status?
Would it be wrong to engineer robots that want to perform tasks unpleasant to humans?
Would a technological singularity be a good result or a bad one? If bad, what safeguards can be put in place, and how effective could any such safeguards be?
Could a computer simulate an animal or human brain in a way that the simulation should receive the same animal rights or human rights as the actual creature?
Under what preconditions could such a simulation be allowed to happen at all?
Posted:
Tue Dec 04, 2007 6:15 pm
by Vector
I agree this is an interesting thought experiment, but I have to disagree that we are "getting quite close to developing a genuine AI."
We were "close" back in the late '70s; all we needed was a teeny bit more computational horsepower.
I really enjoyed the book "What Computers Still Can't Do" by Hubert Dreyfus. I highly recommend it.
Posted:
Tue Dec 04, 2007 7:57 pm
by Smirks
AI is advancing...it's currently at a point where small robots can "learn" things...how to get out of a maze, where to find the "red cube"...etc etc.
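Roughly what that kind of trial-and-error maze "learning" can look like, as a toy sketch (this assumes tabular Q-learning on a made-up six-cell corridor; the rewards and parameters are purely illustrative, not from any real robot):

Code:
import random

N = 6                  # corridor cells 0..5; the exit is cell 5
ACTIONS = [-1, +1]     # step left or step right
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma = 0.5, 0.9

for episode in range(300):
    state = 0
    while state != N - 1:
        a = random.randrange(2)                    # explore by acting randomly
        nxt = min(max(state + ACTIONS[a], 0), N - 1)
        reward = 1.0 if nxt == N - 1 else 0.0      # reward only at the exit
        # standard Q-learning update toward reward + discounted future value
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# after training, the learned values point straight at the exit
print(["right" if Q[s][1] > Q[s][0] else "left" for s in range(N - 1)])

Nothing in there knows what a maze is; it just keeps a table of which move has paid off from each position.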
Differentiating between right and wrong...well, that's a long way out. These types of things would evidently start out hard-coded in, and there are so many situations where there are gray areas between right and wrong...it's farther away than most think.
Posted:
Wed Dec 05, 2007 1:15 pm
by Westfall
Smirks wrote:Differentiating between right and wrong...well, that's a long way out. These types of things would evidently start out hard-coded in, and there are so many situations where there are gray areas between right and wrong...it's farther away than most think.
Personally I don't believe there is such a thing as "right and wrong", at least not in an objective sense. Morality is an illusion.
Posted:
Wed Dec 05, 2007 4:21 pm
by Vector
As long as we're off topic: there does exist right and wrong, but only with respect to values. Values are entirely personal and subjective. If something supports your values, then it is right; if it opposes your values, then it is wrong. The laws of cause and effect are objective, and whether an action supports or opposes your values is a matter of objective reality (though it may not be obvious which it is).
Any claim of good or bad or right or wrong presumes a value and contains a claim about reality.
Lots of people will try to tell you (or imply) that there is only one "true" set of values. I don't believe them.
Re: AI
Posted:
Wed Dec 05, 2007 8:51 pm
by Check_Mate
Westfall wrote:We are getting quite close to developing a genuine AI.
I'm sure somewhere in the world Sarah Connor is fighting really hard to keep that from happening.
(for those of you saying wtf, Sarah Connor is the chick from Terminator)
Posted:
Wed Dec 05, 2007 10:06 pm
by Bull Run
Westfall wrote:Morality is an illusion.
Agreed. Morality is defined by the culture in which you live. You've got the whole nature vs. nurture issue here, but the reality is that mores are created and ingrained in us (i.e., morality); therefore, morality is easily defined by the culture in which you are taught to live.
Posted:
Wed Dec 05, 2007 10:33 pm
by 101998
We are not even close. Even with some of the advanced neural networks used for diagnostic medicine, AI has a very limited ability to make decisions. It can "learn": if it computes 2+2=3 and you say no, 2+2=4, then it can take that feedback and learn that 200+200=400. Obviously that is a simplistic example, but at the same time you can plug crazy ideas into that formula.
The only real fields where AI is helpful are medical diagnostics/technology and maybe auto/aircraft design testing, because it can compare large amounts of data very, very quickly and give solutions to problems. As for robots doing manual labor, why would they need to learn or have intelligence? They are there to do a specific task; putting super-advanced processing in them would be a waste of money.
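Roughly what that kind of error-correction "learning" can look like, as a toy sketch (this assumes a single linear unit trained with a made-up learning rate on made-up small sums, purely for illustration):

Code:
import random

# one weight per input; the "right" solution is w1 = w2 = 1 (plain addition)
w = [random.uniform(0.0, 0.5), random.uniform(0.0, 0.5)]
learning_rate = 0.005

# training feedback: small sums with the correct answers supplied
examples = [((a, b), a + b) for a in range(1, 10) for b in range(1, 10)]

for _ in range(200):                       # a few passes over the examples
    for (a, b), target in examples:
        prediction = w[0] * a + w[1] * b   # the unit's guess
        error = target - prediction        # "no, 2 + 2 = 4" style correction
        w[0] += learning_rate * error * a  # nudge the weights toward the answer
        w[1] += learning_rate * error * b

# after training it generalizes to sums it never saw
print(round(w[0] * 200 + w[1] * 200))      # prints 400 (or very close)

The corrections on small sums are enough for the weights to settle near 1 and 1, so the same formula also gives 400 for 200+200, without the unit understanding anything about addition.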
Posted:
Wed Dec 05, 2007 11:37 pm
by zine
this brings up a question: can emotion be learned, or is it a totally separate entity from thought?
*waits to buy his ai slave robot*
Posted:
Thu Dec 06, 2007 4:15 am
by Finesse
zine wrote:this brings up a question: can emotion be learned, or is it a totally separate entity from thought?
*waits to buy his ai slave robot*
According to AI... separate... according to humans... learned.
However, a computer can rationalize a pattern, but it can't rely solely on it.
If that makes sense.