Artificial intelligence (AI) is a field of computer science whose primary goal is to develop intelligent machines with general intelligence equal to or greater than a human's.
AI is an exciting field. Instead of working out every problem with our own minds, solving AI would give us intelligent agents that do the job for us.
But we are the dominant species on Earth only because we are the smartest; if we create something smarter than us, it may become more powerful than us.
To prevent that from happening, we have two choices: align AI with our values so it cannot violate what is important to us, or not build AI in the first place.
But I believe we do not truly have two choices. Just as we failed to prevent the development of the atomic bomb, we are likely to fail to prevent the development of such a powerful technology.
The first strong AI is likely to have enormous power, as well as a strategic advantage as the first of its kind. It may well be the first and last AI there ever is. Nationalistic interests may play a huge role here and make preventing its development extremely difficult.
But if we manage to make an AI that is shared worldwide and benign toward humans, we can have a bright future. Its extreme intelligence would bring about technologies that would otherwise be centuries away. Everyone may enjoy happiness, and a utopia may come true.
The real difficulty, as of now, is that we have no idea how an AI as smart as humans, or smarter, would behave. Thus, we have no way to calibrate its values to match ours.
For now, there is nothing concrete for us to get right. Instead, we have to keep our development of AI in check and not let it take over the world before the world is ready. If we overcome this greatest existential challenge humanity has ever faced, there will come the greatest era in human history, in which humans achieve everything imaginable that does not bend the laws of physics.