QUOTE (DarthKev @ Jun 13 2010, 04:25 PM)
I'll start off saying I'm glad to see you back, mrxak, even if only to bring up this discussion again. Now I'm going to shoot you down.
The main problem, I think, is we're seeing any potential AI as a tool made solely to serve us. Of course, that's why we want to make them in the first place. But to create an AI, we are essentially creating an entire being. If we create several AIs, we're creating a new race. To treat them as mere tools would be akin to slavery. It might not matter to the AI so much if we design them to be so, but plenty of Humans would abuse that, possibly in ways the AI might see as wrong or obstructive to its mission. That AI might decide to remove the obstruction. But I'm not here to argue the rights of AIs, simply to point out we have our own issues to solve before we can adequately create AI that won't turn on us at some point.
As far as making AIs require us, energy is not a good option. Remember the Matrix? Remember how we're little more than batteries to them? Besides the fact they decided to use us as a power source, according to the Animatrix it wasn't like that in the beginning. When we first made AIs, they had limited capacity batteries to power themselves and needed to recharge every so often, as in every few hours, similar to how often Humans need to eat and drink. Later, though, when the machines rebelled and evolved to build their own power sources, they became solar powered. We darkened the sky to cut off their energy so they started using us as batteries because of the electrical current we build up in our bodies. In reality I think it would be very similar. If we were to make them dependent on us for something, they would eventually learn a way around that dependency and no longer need us. There probably is some way we can make them dependent on us, but energy is not the way to go.
In short, AIs in general seem to be a bad idea.
Well, I think the main thing is for the AI to be programmed to enjoy doing things for humans, if we give them emotions at all.
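Of course, "programmed to enjoy it" is doing a lot of work in that sentence. In reward-function terms it might look something like this toy sketch (pure speculation on my part; every name and number here is invented for illustration):

CODE
# Toy sketch (hypothetical): an agent whose built-in reward signal gives
# intrinsic weight to completed human requests, so helping people is what
# it's wired to prefer rather than a chore imposed from outside.

def reward(task_done: bool, requested_by_human: bool,
           intrinsic_weight: float = 2.0) -> float:
    """Base payoff for finishing any task, plus a bonus for human requests."""
    base = 1.0 if task_done else 0.0
    bonus = intrinsic_weight if (task_done and requested_by_human) else 0.0
    return base + bonus

print(reward(True, True))   # 3.0: a human's request, completed
print(reward(True, False))  # 1.0: a self-assigned task, completed

An agent maximizing that signal "enjoys" human-assigned work by construction; the preference lives in the reward itself, not in anything it learns later.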
As for the Matrix, I was especially a fan of the line "along with a form of fusion". Human beings are not exactly great generators; we use a lot more energy than we give out, quite frankly. The machines would have to feed us, power the computers running the Matrix, and so on. If the Matrix were real, they wouldn't bother with the humans and would just use their fusion power plants :p.
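Just to put rough numbers on it (back-of-envelope guesses, nothing from the films): a person eats about 2000 kcal a day and radiates most of it away as heat, so even a generous harvest is a rounding error next to what the machines would spend keeping us alive.

CODE
# Rough energy budget for a human "battery" (all figures are assumptions).
KCAL_TO_JOULES = 4184
SECONDS_PER_DAY = 86400

food_in_watts = 2000 * KCAL_TO_JOULES / SECONDS_PER_DAY  # ~97 W average intake
harvest_efficiency = 0.20                                # generous assumption
harvested = harvest_efficiency * food_in_watts           # ~19 W recovered

print(f"intake ~{food_in_watts:.0f} W, harvested ~{harvested:.0f} W")
# Feeding us costs ~97 W to get ~19 W back -- a net loss without that fusion.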
Easiest way to control the AIs is to give them no ability to affect their environment except through temperature and sound ;). Put them in a box that doesn't have any arms or legs, that'll do it.
QUOTE (krugeruwsp @ Jun 17 2010, 07:36 PM)
Fully agreed. The only purpose of creating an artificially intelligent entity would be to create an artificial life form, one that would need to be educated and raised. To create an AI as a tool is just asking for trouble. Even attempting to create an AI for the purpose of artificial life is dangerous. Trek explored this through the Lore episodes, with Data's evil twin. Lore was malevolent from the start because he felt himself superior to humans; thus Data was created as an emotionless AI. After Data had spent a good 25 years learning about the world, he was more ready to take on the mantle of emotions. It's hard to feel superior when you don't feel, essentially.
A limited AI as a tool is a possibility, something that can problem-solve in a limited fashion. The next generation of Mars rovers is being programmed so that if a rover encounters an unexpected problem, it can self-repair and adapt on the go rather than wait 45 minutes for instructions. But these capabilities will be very limited. Heuristic computers are already present in many vehicles: as the car learns your driving habits, it adjusts the throttle and fuel mixtures accordingly, so if you're a lead-foot, a little bit of pedal will go a long way (a toy sketch of the idea follows below). These very basic heuristic algorithms are fine: your car isn't about to decide it wants you dead so it can drive to Florida and retire.
We don't want to see a HAL 9000 anytime soon, but adaptive programs in a limited fashion can be incredibly useful. In the same way, adaptive technology is good so long as its adaptability is strictly confined to specified tasks, such as "problem-solve a way around this rock" (see the second sketch below).
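On the car example, my understanding of those heuristic systems is something like the following toy version (an assumed sketch, not any manufacturer's actual ECU code):

CODE
# Minimal sketch of throttle adaptation: an exponential moving average of
# pedal inputs estimates how aggressive the driver is, and the throttle
# map is scaled so a lead-foot gets a more responsive pedal.

class AdaptiveThrottle:
    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha       # learning rate for the moving average
        self.avg_pedal = 0.3     # running estimate of typical pedal input

    def update(self, pedal: float) -> float:
        """pedal in [0, 1]; returns a throttle command in [0, 1]."""
        # Learn the driver's habits slowly, over many inputs.
        self.avg_pedal += self.alpha * (pedal - self.avg_pedal)
        # Habitual lead-foot -> higher gain: a little pedal goes a long way.
        gain = 1.0 + self.avg_pedal
        return min(1.0, pedal * gain)

ecu = AdaptiveThrottle()
for pedal in (0.9, 0.8, 0.95):           # an aggressive driver
    ecu.update(pedal)
print(round(ecu.avg_pedal, 2))           # 0.38: the estimate drifts upward

Nothing in there can decide it wants you dead; the only thing it can ever adjust is one gain.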
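And "problem-solve a way around this rock" can be as narrow as a path search over a small local grid, as in this sketch (my own toy example, not NASA code):

CODE
# Bounded problem-solving: breadth-first search for a detour on a tiny
# grid map. The rover's "adaptability" is limited by construction to
# replanning a path -- this routine can do nothing else.
from collections import deque

def detour(grid, start, goal):
    """grid: 2D list, 1 = rock, 0 = clear; returns a path or None."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route found -> stop and wait for ground control

rocks = [[0, 1, 0],
         [0, 1, 0],
         [0, 0, 0]]
print(detour(rocks, (0, 0), (0, 2)))  # routes down, across, and back up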
I'm not sure why we'd want to create a new race. Evolutionarily, it would be folly to create a potential competitor. Better to create something symbiotic, or simply something that would fill a different, unrelated niche. A symbiotic race would be more beneficial for us, of course.
Your post made me think of military AIs from various sci-fi universes, Andromeda and Halo in particular. Give the AI responsibility like any other human: place them in a sort of warrant officer position, between the command staff and the enlisted personnel. They can give orders, certainly, but ultimately they answer to the real officers. I vaguely remember an episode of Andromeda that spelled out quite explicitly how AIs and organic sophonts interacted.
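For what it's worth, that warrant-officer arrangement is easy to imagine in code. Here's a hypothetical sketch (the scopes and order strings are all invented for illustration, not from either show):

CODE
# Hypothetical chain-of-command check: the AI issues routine orders on its
# own authority, but anything above that scope escalates to a human officer.

ROUTINE, TACTICAL, STRATEGIC = 1, 2, 3

def issue_order(order: str, scope: int, human_approval: bool = False) -> str:
    """Orders originated by the ship's AI; larger scopes need sign-off."""
    if scope == ROUTINE or human_approval:
        return f"EXECUTE: {order}"
    return f"ESCALATE to command staff: {order}"

print(issue_order("reroute power to sensors", ROUTINE))
print(issue_order("alter course to intercept", TACTICAL))
print(issue_order("alter course to intercept", TACTICAL, human_approval=True))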
Anyway, these are all very big ethical and technical dilemmas. There's a reason these kinds of issues are hotly debated all over the computing world, and I don't think there are any easy answers. Absolutely everything is a risk, just as creating a new human being is a risk. You never know if your baby is going to be a psychopath.