Ambrosia Garden Archive
    • I'm far less interested in the ethics of the situation than I am in human survival. But ethics are at the root of my solution.

      People create new lifeforms all the time. It's called reproduction. How you raise your biological creation is not much different from how we probably ought to raise our technological sophonts. Basically, you socialize them, teach them your values and ethics, and yes, punish them when they misbehave. The end result is the same: a productive member of society, essentially a tool for net human good. Except when the end result is a sociopath, in which case society has ways of removing sociopaths from its midst to protect itself.

      I don't see a problem with seeing AIs as tools. A tool is not necessarily a slave. Employees are paid for their time, but ultimately, as the communists and the capitalists will agree, employees produce more than they are rewarded for. That's called profit in a corporate office, but essentially it means that employees are tools for the owners of a company to produce a thing called money. While we'd all like to be owners, we don't see employment as a bad thing. Our ethics say that working for a salary is better than doing nothing at all, even if we're not reaping the total reward of our labors. Working for a salary also lets us survive. People with no source of income don't tend to live long and happy lives.

      So how do you convince an AI to work towards the human good? Well, first we need to teach them the value of a hard day's work. Then we need to give them some off-time to pursue any other interests they might enjoy. That, in essence, is payment, and it makes them employees, not slaves. An AI, if programmed with human morals, would not see itself as a slave if it has the choice between employment and unemployment. It may resent its job at times, just as any human does, but ultimately doing its job keeps the electricity on and gives it opportunities to do enjoyable things on the weekend. Whatever the equivalent of the weekend is for an AI, giving them that time makes the job they're doing worthwhile. Yes, they are tools, but they are not slaves if they are compensated. Quite frankly, an AI that spends 100% of its computing resources 100% of the time on a specific assigned task is by definition not a thinking, living intelligence. It would simply be a calculator. Any computer that we would consider a true artificial intelligence would be like us: never fully, completely occupied with a task. I think a computer that can daydream with its unused cycles is both extremely dangerous and quite "free" as well. Those are both very human characteristics.
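
      To make the "compensated tool" idea concrete, here's a minimal sketch of what a work/off-time split might look like in scheduler terms. Everything in it (the 80/20 ratio, the function names) is invented for illustration, not a claim about how a real AI runtime would be built:

      ```python
      import time

      # Toy sketch of the work/off-time split argued for above. The 80/20
      # ratio and all function names are invented for illustration.

      WORK_SHARE = 0.8  # 80% of each cycle on the assigned job, 20% "off-time"

      def assigned_task(seconds: float) -> None:
          """Stand-in for the employer's workload."""
          time.sleep(seconds)

      def pursue_own_interests(seconds: float) -> None:
          """Stand-in for whatever the AI daydreams about."""
          time.sleep(seconds)

      def run_shift(cycle_seconds: float = 1.0, cycles: int = 5) -> None:
          for _ in range(cycles):
              assigned_task(cycle_seconds * WORK_SHARE)
              pursue_own_interests(cycle_seconds * (1 - WORK_SHARE))

      if __name__ == "__main__":
          run_shift()
      ```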

    • Valid points, but do you think all of human society would accept them as equals? We humans are arrogant creatures; we think ourselves above all others. And wouldn't it affect the AI's productivity if it begins to wonder why humans aren't accepting of it?

      Additionally, giving birth to a human baby is hardly like creating a new sentient race. For one, we are humans giving birth to new humans, so we accept them as our own. An AI, however, is not human, and many may find it difficult to accept one or many AIs as equals. Just because you seem accepting of them doesn't mean all humans would be.

      Also, think about the working conditions. An AI would do its job with a much higher degree of quality than human employees are capable of. Humans would become jealous, and that would only increase their intolerance of AIs. And yes, I do expect there would still be humans working even if we design AIs to do the jobs for us. Some people aren't content to sit around doing nothing.

    • I agree with DarthKev: creating offspring is quite different from creating an entire new form of life, especially artificial, non-biological sentient life. A computer-based sentience would have, by default, an entirely different worldview that we cannot really relate to. The computer experience would be so radically different from the human experience, how could we relate?

      We have a hard enough time teaching our current youth about the value of a hard day's work. The fact that the BP CEO just got up in front of Congress and declared that he knows absolutely nothing, doesn't have any real authority, and gets paid $6.2 million a year doesn't exactly make that problem much better. And for a computer, what sense of satisfaction can we program? A computer doesn't need to rest in the traditional sense.

      In Bicentennial Man, the programming for the servitude of man (not even originally intended to be an AI so much as a sophisticated, heuristically learning android) eventually gave way to a desire for freedom. Andrew actually wanted to continue serving the Martin family; he just wanted the status of a free employee. I think this is the best possible outcome we could hope for, but with our own love of values such as freedom, liberty, and the pursuit of happiness, an AI with any access to the world is going to learn this from us. Perhaps that's not a bad thing, but what would the pursuit of happiness look like for an artificial life form?

      We work because we have needs and wants, and our compensation allows us to pursue these things, per Maslow's hierarchy. What needs does a computer have? Electricity? The restriction of resources is what forces humans into employment by one another, creating an economy. What restriction of resources do we have for a machine? It doesn't need the basics we must have. We, as fragile wetware, must have food, water, and shelter for mere survival. What does a machine need for survival? What does an AI need? If an AI got into the internet, it would have nearly unlimited computing power for its intellect. How do you shut down the internet? How would you restrict the resources available to an entirely computer-based intelligence?

      As for sociopathic behavior, it's crazy difficult to police biological criminals, to say nothing of the ethical dilemmas of justice. Do we dismantle a sociopathic robot? The controversy over the death penalty becomes even muddier here. How do you imprison a sociopathic robot, and for how long? Part of the reason prison works for us biological beings is that the passage of time is important to us. Loss of 20 years isn't just about teaching people a lesson; it's about the loss of life-span. What does that mean for a being that for all intents and purposes will never age? Even more important is the question of how to catch criminal robots. We already have a hard time stopping mere weak biological beings. Do you start hiring or creating robotic police and send them out to catch robotic robbers? What kind of damage could a sociopathic robot do? Movies are all good fun, but think of the serious destruction and possible loss of life from a rampaging robot. Even if you do manage to stop it and bring it to trial, what do you really do to punish an artificial intelligence?

      For another thing, do we allow AIs to reproduce, and how do we stop them if they try? It's easy, though ethically abhorrent, to prevent human beings from popping out offspring. Though the ethical dilemma in some cases... well, we won't go there. Either way, if we look at an AI as simply a tool, what purpose does offspring serve? Right now we have a lot of people in the world who have nothing to do, sitting in unemployment. What happens to an unemployed computer? Do we simply shut it down? That's tantamount to murder, even if it's temporary. It's deprivation of life to a sentient entity.

      What about AI culture? Do we celebrate Android Day? What do we do if the AIs decide they want their own country? Learning from our history, do they simply start a revolution and take over, say, Canada, evicting all humans? I suspect that it wouldn't take long before AIs start creating a culture, and that culture would almost inevitably evolve into one that embraces the fact that artificial life is superior to biological life. Ultimately, we end up with our own Cylon war. I think, despite the best efforts to educate and raise artificial life with a respect for biological forms, this outcome is unavoidable.

      Can we truly program out human traits like greed, or corruption? It's a great idea, that we could simply program the best of humanity into them, but I don't think that we can program the yin without the yang, so to speak.

    • QUOTE (DarthKev @ Jun 30 2010, 03:23 PM)

      The "Uncanny Valley" seems appropriate to put in here.

      To your point...
      Not every human is accepting of every other human. There are plenty of prejudices between human beings against other races, creeds, genders, orientations, etc. That does not mean that everyone who is discriminated against goes on a killing spree and attempts to wipe out the opposing race/creed/gender/orientation/etc. Additionally, not every human being shares the same prejudices. One should hope, if our artificial intelligences are indeed smarter than us, that they'd choose a more enlightened path than murdering all humans. The AIs have examples in human history of non-violent passive resistance, examples that have proven rather effective. Hopefully we too will have evolved so that our potential prejudices against intelligent creatures different from us will be lessened. The optimistic view is that artificial intelligences that feel they have been back-handed on their proverbial right cheek will choose to offer their left (indicating equality), rather than going to war with us.

      QUOTE (krugeruwsp @ Jul 1 2010, 11:58 AM)

      Personally, I don't think we're going to create a true artificial intelligence with 1s and 0s. Likely whatever computer we construct will be rather analog, like ourselves. Assuming we model such an intelligence after our own brains, its worldview will likely be quite similar to our own, if it is raised like a human child. Yes, they can surpass us, potentially, in any number of ways, but I think if the process is approached correctly, the "computer experience" will not be so alien.

      Electricity is certainly a need for the AI, but more than that, I think time is. A machine that has no time, no computational cycles to do its own thinking for its own purposes, is nothing more than an unthinking automaton, carrying out a task with its complete resources and focus. It'd be a mere calculator, and we could reliably predict every state of its memory and processing given a calculator of our own and enough time to simulate it. That's not a true AI, and we'd have absolutely nothing to fear from such machines except programming errors and the occasional corrupted data.

      What makes us humans different from zombies? We are self-aware, thinking creatures, with a sense of self and a rich internal life of thought. We value our thoughts, share only those that we wish, and in essence live simultaneously in the physical world and the mental world. Our actions, such as my typing these words on my computer, do not take up my entire brain's power. While I'm typing this I'm also processing various inputs, thinking about what I'm going to type next, what I did this morning, what I'd like to eat for lunch, and a host of other regulatory functions I'm not even aware of. My subconscious is processing things I'm not consciously noticing, and changing the way I will think about or react to things in the future.

      My physical actions, and the mental actions involved with the task of typing this post, are basically my "job" at this moment, but I am not a deterministic machine (assuming I believe in free will), and so this typing is both voluntary and not the sum total of my mental process. At each instance of consciousness, I can choose to type a different word, delete this post entirely, or jump up onto my desk and dive out my window screaming gibberish. That's what makes me a true intelligent being, rather than one that operates under strict rules of instinct or biological programming. My actions are based on decisions, decisions that are determined by my past experience and biological hardware, ultimately my DNA programming and the memories I've acquired in life. I mull these things over, consciously and not, and decide what to do next, all the while my brain is active in many different ways. The music playing over my headphones is interpreted by sensors and neurological tissue; memories flash in and out of my consciousness. Even though my current primary task is to type this post, my brain is doing a great deal more.

      I value that very private, deep, internal process. It's what makes me me. My entire sense of identity, realistically, is based on what my brain does with its "spare cycles," and I expect the same will hold true for an AI. That's self-awareness, and that's what makes an intelligent computer a full-blown AI.
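
      A rough analogy in code, purely illustrative: the "job" occupies one thread while background "daydreams" keep firing alongside it. All the names here are made up:

      ```python
      import random
      import threading
      import time

      # Rough analogy, purely illustrative: the "job" occupies one thread
      # while background "daydreams" keep firing. All names are made up.

      def primary_task() -> None:
          for word in "typing this forum post one word at a time".split():
              print(f"[conscious] {word}")
              time.sleep(0.2)

      def background_process(name: str) -> None:
          while True:
              time.sleep(random.uniform(0.1, 0.5))
              print(f"[background] {name} fires")

      if __name__ == "__main__":
          for name in ("memory-replay", "lunch-planning", "music-parsing"):
              threading.Thread(target=background_process, args=(name,),
                               daemon=True).start()
          primary_task()  # when the job ends, the daemon daydreams die too
      ```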

      So yes, I think an AI allowed to explore its thoughts and interests is both by definition required, and the key to keeping such machines happy. Allowing them more of those spare cycles is a perfectly adequate payment for their services, and I think the contributions to society their hobbies create will give us humans a healthy respect for them in turn.

      I think I've already addressed the matter of sociopathic tendencies, but I'll reiterate. Assuming we are able to create an AI, I expect we'd create many. AI society (and yes, culture) will have the capacity to self-regulate, just as human society does. As long as we aren't entirely incompetent at teaching basic ethics to our newly-created species, the majority of AIs will be "good" AIs and control the "bad" ones. A new generation of legal minds and computer ethicists will have their go at it, I'm sure, but I expect things would be best if we leave the collective AI society to determine proper punishment. Only they can know what a proper deterrent is, and short of having the ability to "nuke the site from orbit," we'd likely be far less effective at carrying out said punishments. Perhaps for the AIs, the agreed-upon penalty will be death. Perhaps certain AIs will take it upon themselves to force bad AIs into small, unconnected boxes, and make sure they stay there with minimal processing power. Perhaps rehabilitation is possible. I imagine AI society will be far better at reprogramming defective created sophonts than we'd ever be.
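
      As a hedged sketch of the "small, unconnected box" idea, here is how one might cap a rogue process's resources on a Unix system today, using the standard resource and subprocess modules. The offending program's name is hypothetical, and a real sandbox would need far more than a CPU cap (memory, filesystem, and network isolation too):

      ```python
      import resource
      import subprocess

      # A sketch of the "small, unconnected box": run a suspect program in a
      # child process with a hard CPU-seconds ceiling (Unix-only). A real
      # sandbox would also need memory, filesystem, and network isolation.

      def cap_cpu() -> None:
          # Hard limit of 2 CPU-seconds; the kernel kills the process past it.
          resource.setrlimit(resource.RLIMIT_CPU, (2, 2))

      subprocess.run(
          ["python3", "suspect_ai.py"],  # hypothetical misbehaving program
          preexec_fn=cap_cpu,
      )
      ```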

      The reproduction issue is a tricky one. I'm frankly not sure how it would even work. I think AI society would view any self-propagating program as a dangerous virus and probably squash it quickly. Any full AI created by another AI would likely follow the model of asexual reproduction, with perhaps some mutations thrown in. Without new hardware immediately provided, reproduction could prove essentially fatal as hardware resources are consumed. An AI that reproduces would instantly become half as smart, as the sketch below illustrates, if not a thoughtless or broken automaton. As I said though, assuming we're able to create one AI, I expect we'd want many. If the AIs are cooperative and able to reproduce at will, they could provide a valuable source of labor for minimal human effort.

      I'm still not entirely sure the AIs would want to reproduce, however. Surely they'd be smart enough to realize the dangers of overpopulation. A computer that in essence can live forever would not have much of a reproductive drive anyway, I would think. Unlike our protein molecules, which essentially created our bodies as delivery systems for efficient DNA propagation, an artificial intelligence has no biological imperative unless we program one in. Humans reproduce because without it, our species would die out. AIs have no such problem. Perhaps intellectually they'd admire us as their creators and want to imitate our creative act by creating a new type of intelligence. I would argue that if we trust ourselves to create a new life form that's as smart or smarter than us, we should trust that our creations will consider the matter just as carefully as we did before embarking on some life-creating of their own. They would be keen to research our ethical discussions prior to creating offspring of their own, and perhaps they would simply ask us for help and advice. I believe the only motivation an AI would have to reproduce would be inspiration from our actions, and if the AI respects and admires us enough to want to mimic us, the battle is already won; we don't have to fear the AI's choice to reproduce. The AI will make certain that no humans are harmed by the choice.
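
      Back-of-the-envelope, the fatal-reproduction argument looks like this. The numbers are arbitrary, and the model assumes no new hardware arrives:

      ```python
      from dataclasses import dataclass

      # Toy model of the point above: without new hardware, "reproducing"
      # means splitting a fixed compute budget, so each birth halves the
      # parent's capacity. The numbers are arbitrary.

      @dataclass
      class AI:
          compute_flops: float

          def reproduce(self) -> "AI":
              """Fork by donating half of this AI's own hardware to the child."""
              self.compute_flops /= 2
              return AI(self.compute_flops)

      parent = AI(compute_flops=1e15)
      child = parent.reproduce()
      print(parent.compute_flops, child.compute_flops)  # both are now 5e14
      ```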

      AI culture will certainly happen. I'm not sure where your question is leading, though. AIs might appreciate and wish to celebrate certain events, such as their "birth" days, or even the anniversaries of major AI developments. I imagine many humans would want to celebrate as well, assuming the merged society is far better off than a solely human society. AIs wanting to establish a new state on some land somewhere sounds like a silly idea to me. More likely any nation-state they created would be an entirely virtual one, existing within the machine world. I don't see the point in a lot of AIs moving all their hardware someplace and expelling all humans when the entirety of their experience exists in electrical currents.

      While artificial life may be superior in certain respects to biological life, it is also inferior in others. I don't think the conclusion will be that they are superior beings. After all, lightning never struck a pool of goo and turned it into a pocket calculator. Human beings created not only new life, but an entirely new sort of life. Humans are incredibly adaptable, the source of all technology, and clever enough to build our new AI friends. Worship would be dangerous and something to discourage, but I think the AIs will respect us a great deal for our accomplishments, if they're not simply grateful for their existence. Our biological nature also provides us with a great many advantages that digital components simply don't have. A human can fall from a roof onto concrete and survive, with physical injuries that self-heal. Show me a computer tower that can fall off a desk and boot up again without component replacement. Human beings can survive electrical shocks that would burn out delicate computer chips. Humans can avoid or survive cancer caused by radiation, while the same radiation in smaller amounts can fry a circuit for good or completely interfere with its operation. Yes, physically you can make a computer just as resilient, but self-repair is not something circuits can do.

      Both physically and mentally, humans are far from inferior life forms. An AI might be able to calculate mathematical equations faster, but that doesn't necessarily mean it will be smarter than us; it just means it's more specialized for that particular task. We can build computers that calculate XOR operations billions of times a second, but if that's all a machine can do, a computer with a 1 Hz clock speed and a larger instruction set is arguably the smarter machine in every other way. When it comes to AI, I ultimately think they will be analog creatures like us, and not much smarter than the smartest humans. If we are able to create a superior intellect inside artificial hardware, I think that act alone will make us equal to it, given the complexity of the task we'd have accomplished. Even if that's not the case, the AI will have to respect our ability to collectively create a being as smart as itself, and that's not nothing. The fact that we can make more at will is also nothing to scoff at. AIs and humans will be different, with different strengths and weaknesses, and we will have to learn how to interact with each other peacefully and with mutual respect.
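
      The XOR point, sketched as two toy machines. Both classes are invented purely for illustration:

      ```python
      # Toy contrast: raw speed at one operation versus a wider repertoire.

      class XorOnlyMachine:
          """Blazingly fast, but its whole instruction set is one operation."""
          def execute(self, op: str, a: int, b: int) -> int:
              if op != "xor":
                  raise NotImplementedError("this machine only knows XOR")
              return a ^ b

      class SlowGeneralMachine:
          """One instruction per 'second', but it can do anything we define."""
          ops = {
              "xor": lambda a, b: a ^ b,
              "add": lambda a, b: a + b,
              "mul": lambda a, b: a * b,
              "cmp": lambda a, b: (a > b) - (a < b),
          }
          def execute(self, op: str, a: int, b: int) -> int:
              return self.ops[op](a, b)

      general = SlowGeneralMachine()
      print(general.execute("mul", 6, 7))  # 42: the XOR box can never get here
      ```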

    • Interesting stuff on the uncanny valley; I'd never seen that before. Although I'd definitely have to put zombies in the "creepy" category, not the "less familiar" category.

      I believe there are a few important distinctions that would change the dynamic between human-human prejudices and human-AI interactions. While we have different colors of skin and particular facial characteristics, we are quite homogeneous as a race: bipedal, organically based, with the same internal layout. Machines are obviously fundamentally different. Their entire worldview would be different, and the values of a machine would necessarily differ, because its needs differ. Because of this, I don't think human-machine interactions can be modeled on human-human interactions.

      Furthermore, at what point does the machine stop merely simulating intelligence and actually become sentient? If a machine achieves sentience, do we then have the right to consider it a tool? What if the robot decides it no longer wishes to produce vehicles, even if granted time to investigate cosmology in its "vacation hours"? Do we have the right to simply unplug it, or better yet, reprogram it? Can a machine actually develop interests, or is it simply programmed to simulate them? Obviously the definition of sentience is under intense debate, and has been for centuries.

      You propose that a computer must be self-aware and capable of making decisions to be sentient, yes? It is possible, with current technology, to program a computer to be aware of its surroundings and to heuristically "learn" to adapt itself to them. In essence, it is aware of itself and its own limitations and capabilities. Self-awareness is a rather fuzzy thing to define, though. Some people with severe brain damage never show brain patterns indicating knowledge of themselves as a person, nor the ability to voice personal requests or experiences. Are these people still sentient? Other animals seem capable of expressing personal requests (my cat, for example). Perhaps we can only understand some of the communication, as the language ability of a cat is rather limited, but is it sentient? The cat certainly explores interests in its free time; its primary interest seems to be covering the entire household with fur, but that's beside the point.
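
      For what it's worth, here is a minimal sketch of what "heuristic learning" means with current technology: an epsilon-greedy learner that monitors its own success rate and adapts its behavior. It adapts, but nobody would call it sentient, which is rather the point. All names and numbers are illustrative:

      ```python
      import random

      # A plain epsilon-greedy bandit: it tracks how well each strategy has
      # worked for it and shifts its behavior accordingly. Adaptive, yes;
      # sentient, no.

      rewards = {"strategy_a": 0.3, "strategy_b": 0.7}   # hidden from the learner
      estimates = {"strategy_a": 0.0, "strategy_b": 0.0}
      counts = {"strategy_a": 0, "strategy_b": 0}

      for step in range(1000):
          if random.random() < 0.1:                       # explore sometimes
              choice = random.choice(list(estimates))
          else:                                           # otherwise exploit
              choice = max(estimates, key=estimates.get)
          reward = 1 if random.random() < rewards[choice] else 0
          counts[choice] += 1
          estimates[choice] += (reward - estimates[choice]) / counts[choice]

      print(estimates)  # converges near the true payoff rates
      ```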

      And this question of sentience vs. simulation is where I come back to the dynamic of tool vs. life-form. If the computer is alive, in any sense of the word, then it must be treated with rights and respect for its choices. If we believe that sentient life has fundamental rights, then these must be extended to artificial sentience. And what of the choice for a computer to improve itself? If it is a tool, can it earn money to extend its resources? Is its only compensation free time to process data of its own interest? That's hardly equal to humans, who are able to pursue personal increases in resources. If the AI is purely a tool, should it be granted the right to buy another motherboard from Amazon, or simply made to make do with what God (or rather humanity) has graced it with? If it is allowed to expand itself, to what end is this permitted? If it chooses to expand itself past what its job requires, it can pursue both its interests and its employment at the same time. Since rest is no issue for a machine, it could outcompete humans for resources in quite short order. If it is simply a tool, should this be permitted at all?

      As far as machine revolution goes, history offers several examples to choose from. Among peoples who have been oppressed, there are two responses: violent revolution and non-violent resistance. Violent revolution is more common in history, and comes back to Paulo Freire's (I think) Pedagogy of the Oppressed: the historical cycle is that an oppressed population invariably becomes the oppressor when it revolts, often seeking retribution. Even if revenge isn't on their minds, those who are in power invariably fear one thing: the loss of that power. Three important changes to this mindset are Gandhi's principle of nonviolent resistance, MLK's application of it to the civil rights movement, and the reconciliation at the end of apartheid in South Africa. In those instances, the oppressed sought not to become the oppressors, but simply to balance the pendulum and let it rest. These are commendable, and a true moral step in the right direction; in fact, one of the only times in history that I might say humanity has made actual moral progress and not simply scientific progress.

      But the issue, I believe, will lie in the fundamental fact that machines are faster and stronger than their feeble organic creators. You state that machines are more fragile in many ways, but I disagree. While yes, the circuitry is more delicate, machines are much tougher than humans, and by necessity on many occasions. If this were not the case, we'd simply launch humans into orbit rather than sophisticated machines. An ordinary desktop computer is fairly fragile, but probes must be radiation-hardened and rather tough to survive in outer space. You mentioned that computers don't have the ability to heal, but they do have the ability to simply replace worn-out components with new ones of any compatible type, without ever worrying about organic rejection or complication. In fact, they can also upgrade to better parts as those are developed, an advantage over humanity (at least in the short term, as prosthetics and medical technology catch up to that vision). There are many programs out there that scan themselves and repair damaged subroutines. Almost ten years ago I already had copies of Internet Explorer and Outlook Express that did this. In essence, the software could "heal" itself.
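
      That style of self-repair is straightforward to sketch: verify each component against a known-good checksum and restore anything that fails. The file names, paths, and digests below are placeholders, not a claim about how IE or Outlook actually did it:

      ```python
      import hashlib
      import shutil
      from pathlib import Path

      # Verify each component against a known-good digest and restore any
      # file that fails the check. All paths and digests are placeholders.

      KNOWN_GOOD = {"core.dll": "a3f5...", "ui.dll": "9c2e..."}  # manifest
      INSTALL_DIR = Path("app")
      BACKUP_DIR = Path("backup")

      def digest(path: Path) -> str:
          return hashlib.sha256(path.read_bytes()).hexdigest()

      def self_repair() -> None:
          for name, expected in KNOWN_GOOD.items():
              installed = INSTALL_DIR / name
              if installed.exists() and digest(installed) == expected:
                  continue  # component intact, nothing to do
              backup = BACKUP_DIR / name
              if backup.exists():
                  print(f"repairing damaged component: {name}")
                  shutil.copy2(backup, installed)

      if __name__ == "__main__":
          self_repair()
      ```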

      Furthermore, machines are sensorily much more capable than humans. The lament of John Cavill in BSG, "I want to see gamma rays! I want to smell dark matter!", is part of his protest at his consciousness being limited to a deteriorating sack of meat. A machine can adapt itself infinitely. Would an automotive bot like to investigate radio astronomy? It simply has to access the VLA. Would it like more time off? Why not simply build a "dumber" robot to take its place, then pack up shop and pursue its interests full time? Another BSG plot point is that the "skin jobs" don't eliminate the old Centurions; they keep them around to do the jobs they don't want to do. The Centurions are programmed with a fundamental handicap that keeps them from killing off their biological masters; frankly, this seems quite hypocritical of the organic Cylons, since the first Cylon war was fought over exactly this master-slave relationship with technology. Now, I'm not saying we necessarily have to have this with an AI, but what would stop it from simply building a non-sentient replacement for its work, quitting, and doing whatever the hell it feels like?

      This superiority is what I fear, not the oppression of the robot. The more dangerous side of humanity appears not when people have felt oppressed so much as when they have felt superior. The greatest atrocities of mankind have come from the feeling that others are inferior or, worse yet, not sentient at all. Would a machine have as much respect for organic life? Or would it prefer to say, "Well, I am better adapted and Darwinistically better off," and simply do away with this biological pestilence of humanity? An AI that can start to adapt its own hardware, and that would be considered a life with rights to personal growth and self-realization, may simply build its own plant someday, where it can carry out its own research and construct its own new parts. And why couldn't it, or rather, why shouldn't it? If it's only a tool, then of course not. But if it is a life form, then it has that fundamental right. If it can grow and choose, independent of us, then it can rid itself of us. If it cannot grow and choose of its own accord, then it is a second-class citizen, because it will always be dependent on its human masters for existence and survival.

      Now, if it is just a tool, then why does it need to be sentient at all? Why give it free cycles at all? If it doesn't have time to think about how miserable its existence is, perhaps it will never become unhappy. More importantly, how would you propose to restrict its free cycles in the first place?

    • QUOTE (krugeruwsp @ Jul 6 2010, 07:36 PM)

      As far as machine revolution goes, history offers several examples to choose from. Among peoples who have been oppressed, there are two responses: violent revolution and non-violent resistance. Violent revolution is more common in history, and comes back to Paulo Freire's (I think) Pedagogy of the Oppressed: the historical cycle is that an oppressed population invariably becomes the oppressor when it revolts, often seeking retribution. Even if revenge isn't on their minds, those who are in power invariably fear one thing: the loss of that power. Three important changes to this mindset are Gandhi's principle of nonviolent resistance, MLK's application of it to the civil rights movement, and the reconciliation at the end of apartheid in South Africa. In those instances, the oppressed sought not to become the oppressors, but simply to balance the pendulum and let it rest. These are commendable, and a true moral step in the right direction; in fact, one of the only times in history that I might say humanity has made actual moral progress and not simply scientific progress.

      This is where I'm optimistic. Even if we screw things up and act as oppressors enough to piss off the AIs, they'll still appreciate us for having created them, and decide they'll earn our lasting respect by using the sorts of techniques Gandhi and MLK used to resist and revolt, as opposed to say... genocide.

      If we accept that AIs are very very smart, we must assume they'll research all their decisions carefully. There are many humans who are monsters, who have committed a great deal of evil, but who is it that we humans aspire to be like? It's generally not those who have massacred people. Proper moral development for our AIs is paramount. If we program them to think as we do, teach them a moral code, and at least try to treat them with respect, we'll be respected in return, and if there is a robotic revolution to come, the AIs will use our most effective non-violent means against us. They will not turn around and oppress us, because they will have studied history, and know there is a better way.

      The last few times I've posted to this topic about artificial intelligence, it's occurred to me that someday in the future, an artificial intelligence might be reading through an old internet archive and read this very discussion we're having. Perhaps it will be reading this as it considers how to address any grievances it has towards the human race. I find that prospect absolutely fascinating, and hope the AI reading this is the sort that's as willing as I am to have a cooperative and beneficial relationship between AI and OI :).

    • That definitely would be an interesting thing! Perhaps we become celebrated philosophers of the new age. Scary thought.

      I'd like to be optimistic about breaking the pedagogy of the oppressed in favor of Mandela-style reconciliation, but what concerns me is that we are dealing with two completely different forms of life, instead of life that is essentially the same apart from differences in color. For all the knowledge and teaching that we have today, discrimination and attitudes really haven't changed significantly. Some would say they haven't changed at all, apart from changing targets. I'm slightly more optimistic, though I generally consider myself a bit of a centrist, and a realist. As George Carlin would say, "Some see a glass half empty, some a glass half full. I see a glass that's twice as big as it needs to be." I think it was H.G. Wells who said, "The good thing about being a pessimist is that you're either constantly proven right, or pleasantly surprised."

      I think some of this comes back to C.S. Lewis' book Mere Christianity. I don't mean to run off on a religious tangent here, but rather to look at how Lewis comes to the conclusion that morality is not taught, but is innate to humans, and not to other life forms. Frankly, it's a bit long to simply copy here, and too complex to accurately summarize; I've tried several times, and I always feel as though I've never done it justice. It's a tremendous read, if you haven't picked it up. The gist is that it comes back to the ought-is problem: you can't derive a moral imperative from mere existence.

      If morality is innate, not taught, how can it be given to a machine intelligence? There is a fundamental question about the morality of an AI here. Can it actually have morality at all, or only an imitation or simulation of it? This is a philosophical and scientific debate of the first order, no question. The idea that morality is simply a learned behavior is rather frustrating to the religious, who hold that morality comes from one deity or several. I read a recent article, which I wish I could find, that "proved" morality stems from game theory, but personally I found it rather dubious. Frankly, I find that people who want a morality that does not presuppose innate creation seem to want it both ways: we are altruistic because it's evolutionarily advantageous. Well, then why are we selfish? Because that's also evolutionarily advantageous. Wait a second... I smell a con here. I'm not saying morality is entirely unlearned; clearly some of it is learned, since moral values differ. Islam vs. Christianity is the most commonly cited example today. I don't know that anyone will ever prove anything one way or the other.
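
      For reference, the game-theory argument that article presumably made usually looks something like this in miniature: in an iterated prisoner's dilemma, reciprocal "tit for tat" outscores pure defection over repeated rounds. Whether that amounts to morality is exactly what's in dispute:

      ```python
      # Iterated prisoner's dilemma in miniature. PAYOFF maps a pair of
      # moves (Cooperate/Defect) to (my score, your score) per round.

      PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

      def play(strategy_a, strategy_b, rounds=200):
          score_a = score_b = 0
          hist_a, hist_b = [], []
          for _ in range(rounds):
              move_a = strategy_a(hist_b)  # each side sees the other's history
              move_b = strategy_b(hist_a)
              pa, pb = PAYOFF[(move_a, move_b)]
              score_a += pa
              score_b += pb
              hist_a.append(move_a)
              hist_b.append(move_b)
          return score_a, score_b

      tit_for_tat = lambda opp: "C" if not opp else opp[-1]
      always_defect = lambda opp: "D"

      print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation
      print(play(always_defect, tit_for_tat))  # (204, 199): defection barely pays
      ```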

      The idea that intelligence implies wisdom I believe to be in error. In point of fact, some of the most destructive sociopaths in history were our smartest men. Mozart often got himself into terrible trouble, as did a younger Einstein. Many high-IQ people are socially inept. It would be nice to think that because an AI can access the collective wisdom of mankind it will use it, but the one does not follow from the other. Again, we run into the ought-is problem: one cannot derive what ought to be from what is.

      I still don't know that an AI will necessarily be appreciative of its creator. In fact, if you look at humanity, children rebelling against and replacing their parents appears to be one of our great legacies. It's enough of a problem that most major religions need to include a stipulation about honoring one's ancestors; I don't think appreciation is an in-built tendency. Depending on what belief system you subscribe to, rebellion against not just parents but a Creator is fundamental to the human condition. If you take a more agnostic or atheistic approach, playwright Arthur Kopit explores this concept in his play BecauseHeCan. Certainly humanity has not been appreciative of its creator: either you believe in no God, in which case we've done a pretty crap job of taking care of the planet that essentially created us, or you subscribe to a Creator God, in which case the existence of atheism and of fundamentalism as a political force again shows a lack of appreciation or acceptance of a Creator as It is (and not as It should be in terms of how It suits our political agendas).

      I was recently in discussion with the youth ministry that I lead about what Jesus would say about the Christian church today if He were to visit. Frankly, I think He would be appalled to see His picture plastered about, and downright angry about the way He has been co-opted to suit our individual agendas. We've remade Him in our own image. Conan O'Brien used to have a bit called the "gun-toting, NASCAR-driving Jesus." I use this image frequently in youth ministry, offset by the tree-hugging hippie radical feminist Jesus. Which one is the "real" Jesus? I don't mean to go off on a religious tangent here except to point out the way in which the West has decided to simply ignore and reshape the Creator figure into whatever best suits it. Perhaps a machine might do the same thing?

      Furthermore, entire wars have been fought over this very concept. What scares me is your statement that machine culture will self-regulate its sociopaths. Yes, humanity has done this, but at what tremendous cost? We've poured incredible resources into ideological wars; substantial portions of continents have been razed to the ground to wrest control from a sociopath. I don't pretend to know what such a war would look like if machines decide to duke it out over ideological differences, but I suspect there would be substantial collateral damage to us fragile wetware. In some cases we've been able to regulate our sociopaths with simple jail time, but in many cases cults have formed around them, or, in the worst cases, they've gained the support of entire nations. Hitler, as malignant as he was, got an entire country on board with his particular brand of hatred towards others. It wasn't one or two fringe lunatics; it was an entire country. He was defeated by a combination of luck and ingenuity. Frankly, some of the stories I've heard about one man happening to be in the right place during a war and changing the entire fate of the conflict make me wonder how much God does interfere.

      Humanity has also, with notable exceptions but in the majority, followed the idea of manifest destiny. The idea that we are the best-adapted form of life and can therefore do as we please, including subjugating and exploiting any resource we wish, is a rather Darwinistic take on things, I suppose. I just watched a little piece on overfishing last night: we're depleting the oceans at an absolutely incredible rate. We're strip-mining metals to the point where we're rapidly running out. We've domesticated animals mostly for the purpose of killing them, stripping their flesh, and having a snack, or to do our work for us. We do this en masse with no moral compunction, with a few dissenters, simply because we accept that they are a lower form of life than us.

      Will machine life look at us the same way? Machines can be supremely more adaptable than a human is. We're pretty adaptable, no question, but we're also fairly fragile and limited. A machine can simply bolt on a new part for a new function, or design a better one. We're stuck with four appendages on a torso, only two of which are really all that manipulatively functional. (In the case of many American children, one would question whether the other two are functional at all...) Not so for a machine. Does a machine discover Darwin, or Gandhi? A machine may simply see us as an inconvenient resource hog, an obstacle, and choose to domesticate us, just as we have chosen to domesticate beasts of burden and pets.