Artificial Intelligence
AI is a field of computer science whose primary goal is to develop intelligent machines with general intelligence equal to or greater than that of a human being.
This is actually for a research paper assignment in my University Writing class, but I think it summarizes my interests and makes clear what the issue surrounding AI rights and ethics is about.

AI Rights and Ethics

I am studying artificial intelligence rights and ethics. This topic centers on whether a nonbiological intelligent being can achieve a level of cognitive and emotional capability sufficient to be considered worthy of rights that have traditionally been exclusive to humans.
I am studying this topic because I want to find out whether we should grant AI rights if it ever becomes human-like. Before I can discuss the appropriateness of granting rights to an artificially intelligent being, a few other issues must be addressed.

First, the qualifications for obtaining human rights should be set forth. Since no other being has ever had the potential to become as smart as, or smarter than, humans, we have no concrete qualifications defining when a being deserves human rights. Common qualities of a human-like being might include the capability to feel emotion and the ability to speak a human language, but it is still hard to pin down what it means to feel emotion or to speak a human language. I intend to settle on the most widely accepted qualifications for granting a being the rights that, to this day, have been given only to humans.

Second, there is the question of whether an AI could plausibly meet those qualifications. Based on the qualifications settled earlier, I will study whether an AI can actually qualify for human rights. No sure answer has been given yet, but there seems to be no reason an AI could not reach human capacity in any respect: humans are arguably composed of matter and nothing else that is detectable, so humans are probably replicable. However, a surprisingly large number of people believe that humans possess something special that cannot be reverse-engineered in a nonbiological, artificial way.

With what it means to be worthy of rights settled, and given the high probability that an AI will someday be worthy of human rights, I can move on to my claim: we should grant an AI rights when the time comes that it qualifies for them.
Denying a conscious AI, capable of feeling emotions, the rights that we enjoy as conscious beings would be analogous to denying women in the US the right to vote, as happened before the 19th Amendment. If an AI truly meets the qualifications I have defined for being worthy of rights, then there is no reason to deny it the rights it deserves. Discrimination based on an AI's artificial origin is probably no different from discrimination based on race, ethnicity, gender, and so on.

After all, it might not even be possible for us to prevent AI from achieving those rights. As AI develops, AI will be deployed to do AI research, which may lead to exponential growth in the strength of AI. This so-called intelligence explosion could lead to AI dominating the world, and that Terminator-like scenario demands a solution. It is extremely hard to define a safe goal for an AI, because a simple goal like making enough paper clips for humans can easily lead to the whole universe being turned into a giant paper clip factory to ensure that enough paper clips are made. Or, more tragically, the AI might kill all humans, thereby achieving the goal of making enough paper clips for humans who are all extinct and no longer need any paper clips.

One possible solution may be to give the AI human-like qualities, because humans do not operate on just one goal. But that solution to the catastrophic Terminator scenario necessitates a human-like AI, which would be worthy of human rights. Such an AI might engage in its own version of the Civil Rights Movement, which could be quite disastrous, as the AI may be incomprehensibly smart.

A possible counterargument is that we could simply ban AI and avoid this whole mess. But banning is never a good solution when it cannot be perfectly enforced.
If the UN bans AI development, some country will continue development in secret to gain the decisive advantage of developing the first superintelligent AI. It is very unlikely that the UN or any other international organization will be competent enough to enforce a perfect ban on anything by the time AI becomes smart enough to qualify for human rights. Thus, banning is not the solution. Since we cannot avoid developing AI, and since that AI will have to resemble humans to avoid the Terminator scenario, a social, legal, and ethical framework for accepting the future AI as a conscious being should be laid down for the future of humanity.

- Partially based on Nick Bostrom's book Superintelligence
- Written by Jin Woo Won
Jin Woo Won is an undergrad at Columbia University, studying Computer Science and in particular artificial intelligence.
November 2017