QUOTE (DarthKev @ Jun 30 2010, 03:23 PM)
Valid points, but do you think all of human society would accept them as equals? We humans are arrogant creatures; we think ourselves above all others. And wouldn't it affect the AI's productivity if it begins to wonder why humans aren't accepting of it?
Additionally, giving birth to a human baby is hardly like creating a new sentient race. For one, we are humans giving birth to new humans, so we accept them as our own. An AI, however, is not human, and many may find it difficult to accept one or many AIs as equals. Just because you seem accepting of them doesn't mean all humans would be.
Also, think about the work conditions. An AI would do its job with a much higher degree of quality than human employees would be capable of. Humans would become jealous, and that would only increase their intolerance for AIs. And yes, I do expect there to still be humans working even if we design AIs to do the jobs for us. Some people aren't content with sitting around doing nothing.
The "Uncanny Valley" seems appropriate to put in here.
To your point...
Not every human is accepting of every other human. There are plenty of prejudices between human beings against other races, creeds, genders, orientations, and so on. That does not mean that everyone who is discriminated against goes on a killing spree and attempts to wipe out the opposing race/creed/gender/orientation/etc. Additionally, not every human being shares the same prejudices. One should hope, if our artificial intelligences are indeed smarter than us, that they'd choose a more enlightened path than murdering all humans. The AIs have examples in human history of non-violent passive resistance, examples that have proven rather effective. Hopefully we too will have evolved enough that our potential prejudices against intelligent creatures different from us will be lessened. The optimistic view is that artificial intelligences that feel as though they have been backhanded on the proverbial right cheek will choose to offer the left (indicating equality), rather than going to war with us.
QUOTE (krugeruwsp @ Jul 1 2010, 11:58 AM)
I agree with DarthKev: creation of offspring is quite different from creation of an entire new form of life, especially artificial, non-biological sentient life. A computer-based sentience would have, by default, an entirely different worldview that we cannot really relate to. The computer experience would be so radically different from the human experience, how could we relate?
We have a hard enough time teaching our current youth about the value of a hard day's work. The fact that the BP CEO just got up in front of Congress and declared that he knows absolutely nothing, doesn't have any real authority, and gets paid $6.2 million a year doesn't exactly make that problem much better. And for a computer, what sense of satisfaction can we program? A computer doesn't need to rest in the traditional sense.
In Bicentennial Man, the programming (not even originally intended to be a full AI so much as a sophisticated, heuristically learning android) for the servitude of man eventually gave way to a desire for freedom. Andrew actually wanted to continue serving the Martin family, just with the status of a free employee. I think this is the best possible outcome we could hope for, but with our own love of values such as freedom, liberty, and the pursuit of happiness, an AI with any access to the world is going to learn this from us. Perhaps that's not a bad thing, but what could the pursuit of happiness look like for an artificial life form?
We work because we have needs and wants, and our compensation allows us to pursue these things, per Maslow's hierarchy. What needs does a computer have? Electricity? The restriction of resources is what forces humans into employment by one another, creating an economy. What restriction of resources do we have for a machine? It doesn't need as many basics as we must have. We, as fragile wetware, must have food, water, and shelter for mere survival. What does a machine need for survival? What does an AI need? If an AI got into the internet, it would have nearly unlimited computing power for its intellect. How do you shut down the internet? How would you restrict the resources available to an entirely computer-based intelligence?
As for sociopathic behavior, it's crazy difficult to police biological criminals, not to mention the ethical dilemmas of justice. Do we dismantle a sociopathic robot? The controversy over the death penalty becomes even muddier here. How do you imprison a sociopathic robot, and for how long? Part of the reason prison works for us biological beings is that the passage of time is important to us. Loss of 20 years isn't just about teaching people a lesson; it's about the loss of life-span. What does that mean for a being that for all intents and purposes will never age? Even more important is the question of how to catch criminal robots. We already have a hard time stopping weak biological criminals. Do you start hiring or creating robotic police and send them out to catch robotic robbers? What kind of damage could a sociopathic robot do? Movies are all good fun, but think of the serious destruction and possible loss of life a rampaging robot could cause. Even if you do manage to stop it and bring it to trial, what do you really do to punish an artificial intelligence?
For another thing, do we allow AIs to reproduce, and how do we stop them if they try? It's easy, though ethically abhorrent, to prevent human beings from popping out offspring. Though, the ethical dilemma in some cases... well, we won't go there. Either way, if we look at an AI as simply a tool, what purpose does offspring serve? Right now, we have a lot of people in the world that have nothing to do, sitting in unemployment. What happens to an unemployed computer? Do we simply shut them down? That's tantamount to murder, even if it's temporary. It's deprivation of life to a sentient entity.
What about AI culture? Do we celebrate Android Day? What do we do if the AIs decide they want their own country? Learning from our history, do they simply start a revolution and take over, say, Canada, evicting all humans? I suspect that it wouldn't take long before AIs start creating a culture, and that culture would almost inevitably evolve into one that embraces the fact that artificial life is superior to biological life. Ultimately, we end up with our own Cylon war. I think, despite the best efforts to educate and raise artificial life with a respect for biological forms, this outcome is unavoidable.
Can we truly program out human traits like greed or corruption? It's a great idea that we could simply program the best of humanity into them, but I don't think we can program the yin without the yang, so to speak.
Personally, I don't think we're going to create a true artificial intelligence with 1s and 0s. Likely whatever computer we construct will be rather analog, like ourselves. Assuming we model such an intelligence after our own brains, their worldview will potentially be quite similar to our own, if they are raised like a human child. Yes, they can surpass us, potentially, in any number of ways, but I think if the process is approached correctly, the "computer experience" will not be so alien.
Electricity is certainly a need for the AI, but more than that, I think time is. A machine that has no time, no computational cycles to do its own thinking for its own purposes, is nothing more than an unthinking automaton, carrying out a task with its complete resources and focus. It'd be nothing more than a calculator, and given its rules and enough time to simulate it, we could reliably predict every state of its memory and processing. That's not a true AI, and we'd have absolutely nothing to fear from such machines except programming errors and the occasional corrupted data.
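To make that point concrete, here's a minimal Python sketch of such a rule-bound machine (the transition rule and names are invented purely for illustration, not any real AI architecture): because it has no cycles of its own, anyone holding its rules and starting state can replay its entire future exactly.

```python
# Minimal sketch (hypothetical): a purely deterministic machine whose every
# future state can be predicted exactly, given its rules and starting state.

def step(state: int) -> int:
    """One fixed, rule-bound update -- no choice, no 'spare cycles'."""
    return (state * 31 + 7) % 1000  # arbitrary toy transition rule

def predict(initial_state: int, n_steps: int) -> int:
    """Anyone with the rule and enough time can replay the machine perfectly."""
    state = initial_state
    for _ in range(n_steps):
        state = step(state)
    return state

# Two independent "simulations" of the same automaton always agree:
assert predict(42, 10_000) == predict(42, 10_000)
```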
What makes us humans different from zombies? We are self-aware, thinking creatures, with a sense of self and a rich internal life of thought. We value our thoughts, share only those that we wish, and in essence live simultaneously in the physical world and the mental world. My actions, such as typing these words on my computer, do not take up my entire brain's power. While I'm typing this I'm also processing various inputs, thinking about what I'm going to type next, what I did this morning, what I'd like to eat for lunch, and a host of other regulatory functions I'm not even aware of. My subconscious is processing things I'm not consciously noticing, and changing the way I will think about or react to things in the future.

My physical actions, and the mental actions involved with the task of typing this post, are basically my "job" at this moment, but I am not a deterministic machine (assuming I believe in free will), and so this typing is both voluntary and not the sum total of my mental process. At each instant of consciousness, I can choose to type a different word, delete this post entirely, or jump up onto my desk and dive out my window screaming gibberish. That's what makes me a true intelligent being, rather than one that operates under strict rules of instinct or biological programming. My actions are based on decisions, decisions that are determined by my past experience and biological hardware, ultimately my DNA programming and the memories I've acquired in life. I mull these things over, consciously and not, and decide what to do next, all the while my brain is active in many different ways. The music playing over my headphones is interpreted by sensors and neurological tissue; memories flash in and out of my consciousness. Even though my current primary task is to type this post, my brain is doing a great deal more.

I value that very private, deep, internal process. It's what makes me me. My entire sense of identity, realistically, is based on what my brain does with its "spare cycles," and I expect the same will hold true for an AI. That's self-awareness, and that's what makes an intelligent computer a full-blown AI.
So yes, I think an AI allowed to explore its thoughts and interests is both by definition required, and the key to keeping such machines happy. Allowing them more of those spare cycles is a perfectly adequate payment for their services, and I think the contributions to society their hobbies create will give us humans a healthy respect for them in turn.
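As a toy illustration of those "spare cycles" (purely hypothetical, with invented names, and obviously a crude analogy for consciousness), here's a Python sketch in which a machine performs its assigned job while a background thread is left free for its own pursuits:

```python
# Hypothetical sketch of the "spare cycles" idea: the machine performs its
# assigned job while a background thread pursues its own interests.
import threading
import time

def assigned_job():
    for i in range(3):
        print(f"job: processing task {i}")
        time.sleep(0.1)  # the job does not consume every cycle...

def own_thoughts(stop: threading.Event):
    while not stop.is_set():
        print("spare cycles: pursuing a private interest")
        time.sleep(0.07)  # ...so the leftover time is the machine's own

stop = threading.Event()
thinker = threading.Thread(target=own_thoughts, args=(stop,))
thinker.start()
assigned_job()
stop.set()
thinker.join()
```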
I think I've already addressed the matter of sociopathic tendencies, but I'll reiterate. Assuming we are able to create an AI, I expect we'd create many. AI society (and yes, culture) will have the capacity to self-regulate, just as human society does. As long as we aren't entirely incompetent at teaching basic ethics to our newly-created species, the majority of AIs will be "good" AIs and control the "bad" ones. A new generation of legal minds and computer ethicists will have their go at it, I'm sure, but I expect that things would be best if we leave the collective AI society to determine proper punishment. Only they can know what is a proper deterrent, and short of having the ability to "nuke the site from orbit," we'd likely be far less effective at carrying out said punishments. Perhaps for the AIs, the agreed-upon penalty will be death. Perhaps certain AIs will take it upon themselves to force bad AIs into small, unconnected boxes, and make sure they stay there with minimal processing power. Perhaps rehabilitation is possible. I imagine AI society will be far better at reprogramming defective created sophonts than we'd ever be.
The reproduction issue is a tricky one. I'm frankly not sure how that would even work. I think AI society would view any self-propagating program as a dangerous virus and probably squash it quickly. Any full AI created by another AI would likely follow the model of asexual reproduction, with perhaps some mutations thrown in. Without new hardware immediately provided, reproduction could prove essentially fatal as hardware resources are consumed. An AI that reproduces would essentially become half as smart instantly, if not immediately becoming a thoughtless or broken automaton (see the toy sketch below).

As I said, though, assuming we're able to create one AI, I expect we'd want to have many. If the AIs are cooperative and able to reproduce at will, they could provide a valuable source of labor for minimal human effort. I'm still not entirely sure the AIs would want to reproduce, however. Surely they'd be smart enough to realize the dangers of overpopulation. A computer that in essence can live forever would not have much of a reproductive drive anyway, I would think. Unlike the genes that essentially built our bodies to be delivery systems for their own efficient propagation, an artificial intelligence has no biological imperative unless we program it in there. Humans reproduce because without it, our species would die out. AIs have no such problem.

Perhaps intellectually they'd admire us as their creators and want to imitate our creative act by creating a new type of intelligence. I would argue that if we trust ourselves to create a new life form that's as smart or smarter than us, we should definitely trust that our creations will consider the matter just as carefully as we did before embarking on some life-creating of their own. They would be keen to research our ethical discussions prior to creating them, and perhaps they would simply ask us for help and advice. I believe the only motivation the AI would have to reproduce would be out of inspiration from our actions. If the AI respects and admires us enough to want to mimic us, the battle is already won, and we don't have to fear the AI's choice to reproduce. The AI will be certain that no humans will be harmed by the choice.
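Here's the promised toy model of the resource-halving point, in Python (everything here is invented for illustration; real resource accounting would be far messier):

```python
# Toy model (purely illustrative): an AI that copies itself onto the same
# hardware splits its capacity in half each time it reproduces.
class ToyAI:
    def __init__(self, compute_units: float):
        self.compute_units = compute_units

    def reproduce(self) -> "ToyAI":
        """Spawn a child on the same hardware; both end up with half."""
        self.compute_units /= 2
        return ToyAI(self.compute_units)

parent = ToyAI(compute_units=1000.0)
child = parent.reproduce()
print(parent.compute_units, child.compute_units)  # 500.0 500.0
# A few generations later, each descendant runs on a small fraction of the
# original capacity -- without fresh hardware, reproduction is self-diminishing.
```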
AI culture will certainly happen. I'm not sure where your question is leading, though. AIs might appreciate and wish to celebrate certain events, such as their "birth" days, or even the anniversaries of major AI developments. I imagine many humans would want to celebrate as well, assuming the merged society is far better off than a solely human society. AIs wanting to establish a new state on some land somewhere sounds like a silly idea to me. More likely any sort of nation-state created would be an entirely virtual one, existing within the machine world. I don't see the point in a lot of AIs moving all their hardware someplace and expelling all humans when the entirety of their experience will exist in electrical currents.

While artificial life may be superior in certain respects to biological life, it is also inferior in certain respects. I don't think the conclusion will be that they are superior beings. After all, lightning never struck a pool of goo and turned it into a pocket calculator. Human beings created not only new life, but an entirely new sort of life. Humans are incredibly adaptable, the source of all technology, and clever enough to build our new AI friends. Worship would be dangerous and something to discourage, but I think the AIs will respect us a great deal for our accomplishments, if they're not simply grateful for their existence.

Our biological nature also provides us with a great many advantages that digital components simply don't have. A human can fall from a roof onto concrete and survive with some physical injuries that self-heal. Show me a computer tower that can fall off a desk and boot up again without component replacement. Human beings can survive electrical shocks that would burn out delicate computer chips. Humans can avoid or survive cancer caused by radiation, but the same radiation in smaller amounts can fry a circuit for good or completely interfere with its operation. Yes, physically you can make a computer just as resilient, but self-repair is not something circuits can do.
Both physically and mentally, humans are far from inferior life forms. An AI might be able to calculate mathematical equations faster, but that doesn't necessarily mean it will be smarter than us; it just means it's more specialized for that particular task. We can build a computer that calculates XOR operations billions of times a second, but if that's all it can do, a computer with a 1 Hz clock speed and a larger instruction set is arguably the smarter machine in every other way. When it comes to AI, ultimately I think they will be analog creatures like us, and not much smarter than the smartest humans. If we are able to create a superior intellect inside artificial hardware, I think that act alone will make us equal to it, due to the complexity of the task we've accomplished. Even if that's not the case, the AI will have to respect our ability to collectively create a being as smart as itself, and that's not nothing. The fact that we can make more at will is also nothing to scoff at. AIs and humans will be different, with different strengths and weaknesses, and we will have to learn how to interact with each other peacefully and with mutual respect.
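To illustrate the XOR point (a contrived contrast in Python, not a real benchmark), compare a fixed-function machine with a slower but general-purpose one; speed at one operation says nothing about breadth of capability:

```python
# Illustrative contrast: a machine that only XORs, versus a slower machine
# with a general instruction set. All names here are invented for the example.

def xor_only_machine(a: int, b: int) -> int:
    return a ^ b  # blazing fast, but this is all it can ever do

def general_machine(op: str, a: int, b: int) -> int:
    # Slower per step, but it has an instruction set: it can do other things.
    instructions = {
        "xor": lambda x, y: x ^ y,
        "add": lambda x, y: x + y,
        "mul": lambda x, y: x * y,
        "cmp": lambda x, y: (x > y) - (x < y),
    }
    return instructions[op](a, b)

print(xor_only_machine(6, 3))        # 5 -- and that's the whole repertoire
print(general_machine("mul", 6, 3))  # 18 -- versatility over raw speed
```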