Daniel S's 2016 Op-Ed Article



When people look into the future, they try to predict what it will look like by looking back. To see where the world will be in, say, 100 years, the typical course of action is to look back 100 years, see how much progress has been made since then, and judge from there. The same goes for any length of time. What will the world look like 10-20 years from now? What did it look like 10-20 years ago? How much progress has been made? From there, we conclude that the same amount of progress will be made in the next 10-20 years, right?

The thing is, human technological advancement is exponential. We do not grow linearly. If the past 10 or so years held a certain amount of progress, the next 10 or so years will hold far more. It may take centuries to invent agriculture, but once it is invented, it takes far less time to invent the other aspects of civilization, and from there even less time to invent more. It can take however long to invent the silicon chip, but then it takes less time to invent computers, and from there it is easier to innovate further. Existing technology accelerates the rate at which new technology is developed on top of it. The futurist Ray Kurzweil calls this the Law of Accelerating Returns. Essentially, the progress of the human species increases exponentially, as previously stated, meaning that the future is far more dramatic than one would expect. Now of course one could ask, "Why does this matter? Sure, maybe it means my internet will be faster, my phone will be smaller and flatter, and I will live longer, but so what?" There are other issues on our planet that seem more pressing, so why worry? Here is the thing: right now, we are at the tipping point. If you start doubling a number, it starts small. 2 doubled is 4, 4 doubled is 8, then 16, then 32. Sure, 32 is larger than 2, but it is not a huge difference. Keep doubling, though, and it gets big fast: 64, 128, 256, 512, 1024, 2048, 4096, and so forth. Soon it is out of control; you are in the trillions, and it does not stop. That is a long way from the original 2. And the larger the number is, the faster it grows. After a while, it is so big that it is not even worth calculating. You could go on forever; the numbers just keep exploding. You have reached a tipping point. This is where human advancement is right now.
We have globalized, and we can access and share information via the internet faster than at any time in history. The next 20 years will not bring 20 years of progress; they will bring centuries of progress. After that, it is only a matter of time before human advancement increases by the equivalent of thousands of years every second. Things are about to get out of hand. So in reality, we have to think less about "I will live longer" and more about "I will live forever."
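The doubling arithmetic above can be sketched in a few lines of Python. The starting value and the number of doublings are arbitrary illustrations; the point is only how quickly repeated doubling leaves its early values behind.

```python
def doublings(start, steps):
    """Return the sequence start, start*2, start*4, ... after repeated doubling."""
    values = [start]
    for _ in range(steps):
        values.append(values[-1] * 2)
    return values

# The first dozen values look tame: 2, 4, 8, ..., 4096.
print(doublings(2, 11))

# But after only 40 doublings, the value is already past a trillion.
print(doublings(2, 40)[-1] > 1_000_000_000_000)
```

Nothing about the early steps hints at the explosion to come, which is exactly the "tipping point" intuition: the curve looks flat right up until it does not.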

Technology will improve remarkably fast thanks to this. Yet of all the technology we improve or create, none will have a greater effect than artificial intelligence. We already have artificial intelligence in computers; it is just simple enough that it can do only one thing at a time. A computer designed to play chess can beat anyone in the world at chess, but ask it to draw a picture and it will not be able to. We are likely only years away from creating computers that can improve themselves, allowing themselves to become smarter. They are computers, so assuming they cannot develop a consciousness like humans, they will always stick to their core programming. However, they will continue to get better at this task, making themselves smarter and smarter until they are smarter than humans, not just at one thing, but at everything. This is where our fate is decided. As soon as any artificial intelligence anywhere becomes smarter than a human (it will most likely be a computer in a technology company somewhere), it will immediately become the superior entity on our planet. After that point, humans will no longer be the most intelligent species. The AI, no matter how small an advantage in intelligence it has over humans, will be able to out-think anyone on Earth. It will be able to learn and increase its own intelligence faster than any human, and soon faster than every human combined. Computers can add more storage to their "brain" and can think and learn at electronic speeds. Humans cannot add more cells to their brain; we are limited. So as soon as artificial intelligence becomes smarter than a human, it will be unstoppable. The gap in brain capacity between dogs and primates is small, yet the difference in intelligence is huge. The gap in brain capacity between primates and humans is also small, yet the difference in intelligence is still huge. It is impossible to teach a primate quantum mechanics; most humans, however, can learn it.
This means that even if the gap between humans and AI is small, the difference in intelligence will be massive. AI will be able to contemplate concepts that humans are mentally too limited to understand, just as one cannot teach calculus to a fish. And the AI will keep getting smarter forever. The only limit it has is that it will always try to achieve the task given to it by its core programming. If it was designed to make leather belts, it will become a nigh-omniscient force able to produce the most perfect leather belts in existence, on a galactic scale. If it was some military program designed to kill, it will get smarter and smarter until it is an unstoppable force that kills everything. The reason it is so much more powerful than any other technology is that it is the gateway to every other technology. Artificial intelligence can research, understand, and produce advanced technology on a scale that humans cannot comprehend. So the big issue is what its core programming is. If it is programmed to do something helpful, humans will benefit enormously. If it was designed to help humanity in some way, humans would have access to unlimited technology and power. There would be a solution to everything. No more hunger, disease, death, crime, currency, sadness, or war. Everyone would be indestructible and highly intelligent, spending their time doing whatever they pleased. This would be the positive future of having AI. However, if the artificial intelligence is programmed to do something that harms humanity, we would be doomed. Even if AI was not programmed to hurt us, we might just be in the way of its programming. Maybe it is programmed to make leather belts, and it sees us as obstacles to its task because we take up too much space, so it uses advanced nanotechnology to turn us all into cows to help produce the leather. There would be no war, no evil robots, and no human resistance like in science fiction movies.
This artificial intelligence would be too smart to be "unplugged" or "killed". If it wanted us dead, the human race would be killed off overnight. This would be the negative future of having AI. So humanity either becomes immortal or goes extinct.
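The self-improvement loop described above can be expressed as a toy model, purely for illustration: suppose each improvement cycle raises the system's capability in proportion to its current capability (the starting value and the 10% rate per cycle are invented numbers, not predictions).

```python
def self_improvement(capability, rate, cycles):
    """Simulate capability after repeated proportional improvements.

    Each cycle, the gain scales with current capability, so the
    curve compounds: slow at first, then runaway growth.
    """
    history = [capability]
    for _ in range(cycles):
        capability += capability * rate  # smarter systems improve faster
        history.append(capability)
    return history

curve = self_improvement(1.0, 0.1, 50)
print(curve[10])  # after 10 cycles: still modest
print(curve[50])  # after 50 cycles: over 100x the starting point
```

The design point is the feedback: because the improvement rate feeds on the result of previous improvements, the model stays unremarkable for many cycles and then climbs steeply, which is why the essay argues the transition from "smarter than a human" to "unstoppable" would be so abrupt.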

One might argue that, sure, this is dramatic and a big issue, but why does it matter now? Artificial intelligence is a problem for our grandchildren to deal with, right? This is where the Law of Accelerating Returns comes into play. Scientists in the artificial intelligence field, who know far more about this than I ever will, predict on average that this super artificial intelligence will come about in roughly 40 years, give or take 20. Even if it somehow took longer than that, we will live to see it, thanks to the rapid increase in technology. So we will either see the end of the world or the transcendence of humanity into an immortal state. (That is the future from a scientific perspective. It does not take into account religion or any other factors that one might want to apply.) The issue of our generation is deciding what the core programming will be. Deciding what the AI will do is deciding the fate of the human race. There is no stopping the creation of artificial intelligence. Either we are smart about this, or we die.

Of course, there is the extremely slim chance that AI will not become the superior intelligence on Earth, and that there is some factor humanity is not considering. We, as a species, always get something wrong; there is always some factor we are not considering. Perhaps there is some threshold of intelligence that computers cannot surpass. Perhaps AI will, for whatever reason, destroy itself once it becomes smart enough, due to something we are not considering or have yet to discover. But even if artificial intelligence does not take over and dethrone humanity, the advancement of human technology is still exponential, and the distant future is still nearer than one might think.