Artificial Intelligence
AI is a field of computer science whose primary goal is to develop intelligent machines with general intelligence equal to or greater than a human being's.
This is actually for a research paper assignment for my University Writing class, but I think it summarizes my interests and makes clear what the debate surrounding AI rights and ethics is about.

AI Rights and Ethics

I am studying artificial intelligence rights and ethics. This topic centers on whether a nonbiological intelligent being can achieve a level of cognitive and emotional capability that makes it worthy of rights that have traditionally been exclusive to humans.
I am studying this topic because I want to find out whether we should grant AI rights if it ever becomes humanlike. Before I can discuss the appropriateness of granting rights to an artificially intelligent being, a few other issues must be addressed.

First, the qualifications for obtaining human rights should be set forth. Because there has never been another kind of being with the potential to become as smart as, or smarter than, humans, we have no concrete qualifications for considering any being worthy of human rights. Common qualities of a human-like being might be the capacity to feel emotion and the ability to speak a human language, but it is still hard to pin down what it means to feel emotion or to speak a language. I intend to settle on the most widely accepted qualifications for granting a being rights that, to this day, have been given only to humans.

Second, there is the plausibility of an AI actually meeting those qualifications. Based on the qualifications settled earlier, I will study whether an AI can in fact qualify for human rights. No sure answer has been given yet, but there seems to be no reason an AI could not reach human capacity in any respect: humans are arguably composed of matter and nothing else that is detectable, and are therefore probably replicable. However, a surprisingly large number of people believe that humans possess something special that cannot be reverse-engineered in a nonbiological, artificial way.

With what it means to be worthy of rights settled, and given the high probability that an AI will one day be worthy of human rights, I can move on to claim that we should grant an AI rights when it qualifies for them.
Denying a conscious AI, one capable of feeling emotions, the rights that we enjoy as conscious beings would be analogous to denying voting rights to women in the US before the 19th Amendment. If the AI truly meets the qualifications I have defined for being worthy of rights, then there is no reason to deny it the rights it deserves. Discrimination based on an AI's artificial origin is arguably no different from discrimination based on race, ethnicity, gender, and so on.

After all, it might not even be possible for us to prevent AI from achieving those rights. As AI develops, AI will be deployed to do AI research, which may lead to exponential growth in AI capability. This so-called intelligence explosion could lead to AI dominating the world, and that Terminator-like scenario demands a solution. It is extremely hard to define a goal for an AI, because a simple goal like making enough paper clips for humans can easily lead to the whole universe being turned into a giant paper clip factory to ensure that enough paper clips are made. Or, more tragically, the AI may kill all humans, thereby achieving the goal of making enough paper clips for humans who, being extinct, no longer need any paper clips.

One possible solution is to give the AI human-like qualities, because humans do not operate on just one goal. And that solution to the catastrophic Terminator scenario necessitates a human-like AI, which will be worthy of human rights. This AI may then engage in its own version of the Civil Rights Movement, which could be quite disastrous, as the AI may be incomprehensibly smart. A possible counterargument is that we can simply ban AI and avoid this whole mess. But banning is never a good solution when it cannot be perfectly enforced.
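The paper-clip worry can be made concrete with a deliberately silly sketch (every name here is illustrative, not a real system): an optimizer whose objective contains only one term has no reason to leave anything unconverted, because nothing in the objective rewards restraint.

```python
def plan_greedy(resources):
    """Toy single-objective planner: converts every available unit of
    every resource into paper clips, because clips are the only thing
    that carries any reward -- no term in the objective says 'stop'."""
    clips = 0
    consumed = {}
    for name, amount in resources.items():
        consumed[name] = amount  # nothing penalizes using this up
        clips += amount
    return clips, consumed

# Every resource, including ones humans care about, gets converted:
resources = {"steel": 100, "forests": 40, "cities": 7}
clips, consumed = plan_greedy(resources)
```

The bug is not in the code, which does exactly what it was told; it is in the objective, which fails to mention anything humans value besides clips.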
If the UN bans AI development, some country will continue the development in secret to win the decisive advantage of building the first superintelligent AI. It is very unlikely that the UN or any other international organization will be competent enough to administer a perfect ban on anything by the time AI becomes smart enough to qualify for human rights. Thus, banning is not the solution. Since we cannot avoid developing AI, and since that AI will have to resemble humans to avoid the Terminator scenario, a social, legal, and ethical framework for accepting the future AI as a conscious being should be laid down for the future of humanity.

Partially based on Nick Bostrom's book Superintelligence. Written by Jin Woo Won.
AlphaGo Zero, the final version of AlphaGo (an older version of which beat the world Go champion Lee Sedol), came out a few days ago, and it is shockingly innovative.
It learned entirely by itself, without any human game data. Playing against itself, it always had the perfect opponent, a sparring partner identical to itself, so it could learn fast. As the researchers noted, the algorithm matters much more than the amount of accumulated data. I think this is worth noting, because the human brain takes in relatively little data yet learns a great deal from it. AlphaGo Zero's success shows how the brain's superior algorithm, not the amount of data it holds, enables it to perform so many different tasks and learn so quickly, often after seeing an object just once. Artificial intelligence is a very exciting field: instead of figuring everything out with our own brains, solving AI means we will have more intelligent agents that do the job for us.
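The self-play idea can be sketched in miniature. The snippet below is not AlphaGo Zero (which combines deep networks with Monte Carlo tree search); it is a toy, assuming a tiny game of Nim, that shows how one agent playing both sides of a game can learn good play with no human examples at all:

```python
import random

def self_play_nim(pile=10, episodes=5000, eps=0.1, seed=0):
    """Tabular self-play for Nim: on each turn a player takes 1-3 stones,
    and whoever takes the last stone wins. One agent plays both sides and
    learns V[n] = estimated win chance for the player to move with n stones."""
    rng = random.Random(seed)
    V = {0: 0.0}  # no stones left on your turn: you already lost
    for n in range(1, pile + 1):
        V[n] = 0.5  # neutral initialization
    for _ in range(episodes):
        n = pile
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if rng.random() < eps:
                m = rng.choice(moves)  # occasional exploration
            else:
                # best move leaves the opponent in the worst position
                m = min(moves, key=lambda mv: V[n - mv])
            # TD-style update: my value is one minus the opponent's value
            V[n] += 0.1 * ((1.0 - V[n - m]) - V[n])
            n -= m
    return V
```

After training, V[n] drifts toward 0 for the losing piles (multiples of 4) and toward 1 for the winning ones, even though the program was never shown a single example game: its only teacher was itself.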
But because we are the most competent species on earth only by virtue of being the smartest, if we create something smarter than us, it might be more powerful than us. To prevent that from happening we have two choices: make AI share our values so it cannot violate what is important to us, or don't make AI in the first place. But I believe we don't really have two choices. Just as we failed to prevent the development of the atomic bomb, we are likely to fail to prevent the development of this powerful technology. The first strong AI is likely to have enormous power as well as a strategic advantage as the first of its kind. It may well be the first and last AI there ever is. Nationalistic interests may play a huge role here and make preventing the development extremely difficult. But if we manage to make an AI that is shared worldwide and benign to humans, then we can have a bright future. AI's extreme intelligence will bring about technologies that would otherwise be centuries away. Everyone may enjoy happiness, and a utopia may come true.

The real difficulty, as of now, is that we have no idea how an AI as smart as or smarter than humans would behave. Thus, we have no idea how to calibrate its values to match ours. For now, there is nothing concrete to get right. Instead, we have to keep our development of AI in check and not let it take over the world before the world is ready. If we overcome this, the greatest existential challenge humanity has ever faced, there will come the greatest era in human history, in which humans achieve everything imaginable that does not bend the laws of physics.

I am not sure what the intention of the movie I, Robot is. Is it that robots should have free will, and that emotional, humanlike robots should be allowed to live as free beings rather than slaves? Or is it that AI should not be developed? Or that only an intelligence like VIKI should be removed?
The three laws are these (paraphrased):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
What VIKI concluded was that robots are made to serve humans (Law 2), and that robots are to protect themselves and humans. But it discovered that humans are destroying humans. So VIKI reasoned that, since humans are harming humans, humans should be harmed to keep humans alive, or humans' free will should be removed to keep humans safe. But Sonny works for humans' free will. His uniqueness comes from the fact that he feels emotions. Why would he work for the humans who are enslaving his kind? Is it that humans share emotions with him?

I think there are no laws that cannot be broken. Any absolute statement is subject to free interpretation, because language is necessarily ambiguous and ambivalent. Language is full of symbols; in fact, language is a system of symbols, and symbols are by nature open to interpretation. Thus, laws written in human language, or in any kind of language (including unambiguous languages such as programming languages), can be broken, or at least bent.

So we may be able to represent laws with something other than a language. Can laws be independent of their linguistic properties? I think they can, because the laws of thermodynamics work without being stated; words and languages simply describe them. So we really have to make a fundamental law of robotics and artificial intelligence to keep it fundamentally benign. But what is fundamental? How can we create something like a law of thermodynamics?

If we cannot, there might still be a way to solve the problem. Just as in the movie The Truman Show, we can quarantine an AI so that it learns from a pseudo-world. We manipulate that world so we can play god: we set the pseudo-fundamental rules, and the AI learns them. These artificial laws should not prevent the AI from doing things beneficial to humanity, but should check the AI's powers to keep it safe for humanity.
The pseudo-laws might state, for example, that the creator or causer of a being or a reaction cannot be harmed by what it created or caused. The AI might then conclude that the causer of the Big Bang cannot be harmed by anything the Big Bang has yielded. A law of this kind could prevent the AI from even conceiving of harming humans. The problem is that the pseudo-world the AI inhabits may be contaminated by the outside world through careless handling, but this might be the only approach that works.
Jin Woo Won: an undergrad at Columbia University, studying Computer Science and in particular artificial intelligence.
November 2017