Will Artificial intelligence surpass human intelligence? | Teen Ink


May 30, 2022
By Anonymous

Imagine a world where Artificial Intelligence (AI) has advanced to the point of becoming the superior species: a world where robots are so intelligent that they can think for themselves and beat humans in every way imaginable, or one where humans are enhanced by AI and are smarter than you could ever imagine. This is the singularity.

Artificial intelligence is getting more and more advanced. Some theories say that AI may even surpass human intelligence; this theory is called 'the singularity'. Artificial intelligence has long been a passion of mine, and I always ask questions such as: Will AI surpass us? If AI does surpass human intelligence, is that a problem? If so, why? What are some solutions? What are the risks? What are the benefits? And what is the future of artificial intelligence? I will explore these questions, giving the arguments for and against AI surpassing us, and end with my own opinion.

Some people say that the singularity will happen. There are multiple arguments for the theory, including the following:

One argument that the singularity will happen is that AI's progress is rapid and compounding, and will eventually carry it past our intelligence: "Artificial intelligence is not only becoming continually more advanced but is also improving at ever faster rates. Its rate of progress will eventually lead to an intelligence explosion" (Infobase). Through AI, we will be able to understand ourselves better. We should embrace the development of AI and understand it rather than treating it as a threat; that way we can live with it and keep it in our grasp.

Additionally, there is a theory that computers are already more intelligent than us in some ways. Elon Musk says that "computers are already much smarter than humans, in so many dimensions" (YouTube, CNBC Television). He went on to give an example: there was software called AlphaGo, designed to play the game Go, a game said to demand even more brainpower than chess. AlphaGo started off losing to mediocre players, but soon after, through experience, it was able to beat the world champion and other previous world champions. Then a newer model, AlphaZero, was trained purely by playing against itself for a short time and went on to defeat the earlier AlphaGo 100–0. Where we learn by reading instructions and studying others, AlphaZero learned only from self-play, picking the game up so fast that it invented strategies of its own within a matter of hours.
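The self-play idea described above can be shown in miniature. The sketch below is a toy illustration, not any real engine's code: a tabular learner teaches itself one-pile Nim (take 1 to 3 stones; whoever takes the last stone wins) purely by playing against itself, with no human examples, the same way AlphaZero learned Go. The function name, game, and parameters are all hypothetical choices for this example.

```python
import random

def self_play_nim(episodes=5000, pile=10, epsilon=0.2, alpha=0.5, seed=0):
    """Toy self-play learner for one-pile Nim (take 1-3 stones; taking
    the last stone wins). Both 'players' share one Q-table scored from
    the mover's perspective, so every game teaches both sides."""
    rng = random.Random(seed)
    # Q[(stones, action)] = estimated value of taking `action` stones.
    Q = {(s, a): 0.0 for s in range(1, pile + 1) for a in range(1, 4) if a <= s}

    def best(s):
        moves = [a for a in range(1, 4) if a <= s]
        return max(moves, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = pile
        while s > 0:
            moves = [a for a in range(1, 4) if a <= s]
            # Explore occasionally, otherwise play the current best move.
            a = rng.choice(moves) if rng.random() < epsilon else best(s)
            nxt = s - a
            # Value of a move: win now, or the negation of the
            # opponent's best reply (it is their turn next).
            target = 1.0 if nxt == 0 else -max(Q[(nxt, b)]
                                               for b in range(1, 4) if b <= nxt)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = nxt
    return Q, best
```

After a few thousand self-played games the learner discovers the known optimal strategy on its own: always leave the opponent a multiple of four stones. No one told it the rule; it emerged from experience, which is the point the paragraph above makes.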

There are also people who oppose these points, arguing that artificial intelligence will not surpass human intelligence and that we will remain the superior species:

One argument that the singularity will not happen is that the human brain is too complicated to break down and reverse-engineer, and it will take ages before it can be fully understood and replicated in AI; and even if that does happen, we may not be able to control the result. "Many critics have charged Singularitarians such as Kurzweil with underestimating the complexity of the human brain and the gargantuan effort that it would take to reverse-engineer it. Paul Myers, an associate professor of biology at the University of Minnesota in Morris, writes in his science blog Pharyngula that Kurzweil 'demonstrates little understanding of how the brain works,' and, as a result, his predicted timeline for reverse-engineering the human brain 'is absurd'" (Infobase).

There is another opinion that AI may be academically more intelligent than humans, but not artistically. It can never replicate artistic vision without a human telling it what to do; it relies on humans to supply the creative touch. Because of this, humans will always, in theory, remain generally more intelligent than AI.

On top of that, one may say that while computers may be faster than humans, that does not mean they are more clever. This argument uses the idea that you could make, for example, a dog's brain think and process faster, yet that would not make the dog a lawyer: "Say, for example, you could make a dog's brain process information more quickly. That faster processing power doesn't suddenly mean that the dog can play a game of chess, or compose a symphony, or weigh up an ethical dilemma" (ThinkAutomation).

In addition, one opinion states that AI will be limited by the limits of our own intelligence: "AI may hit a fundamental limit of intelligence. This is another argument surrounding the 'will the singularity happen' question that suggests the answer is no" (ThinkAutomation). In other words, because humans cannot conceive of something more intelligent than themselves, they cannot build something more intelligent than themselves: "a single human brain, on its own, is not capable of designing a greater intelligence than itself. This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so" (Chollet).

Now that I have explored some arguments for and against the singularity, what do I think? 

I think the singularity will come; it is inevitable. Even if governments were to recognise it and add precautions as if it were a threat, the singularity would still happen, so we should embrace it and try to understand it. Grammarly, for example, uses artificial intelligence to apply the rules of grammar and correct our work, and in that narrow way it is already more capable than the average person: I make far more mistakes than it misses. This leads to my next point: computers are already more intelligent than humans in some cases, and the gap keeps shrinking every single year. An automated chess program that can beat the world champion consistently shows this. One may object that the program was built explicitly for chess and knows all the moves, but it is still calculating possible outcomes and learning from experience against other players. That is artificial intelligence and machine learning: it improves with use, just as we learn as we go through life. The difference is who does it faster: the computer.
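The "calculating possible outcomes" that chess programs do can be sketched in a few lines. This is a minimal illustration on tic-tac-toe (a toy stand-in for chess, chosen so the full game tree fits in a short example), not the code of any real engine: the search simply tries every legal move, assumes the opponent replies with their own best move, and keeps the line that scores highest.

```python
def winner(b):
    """Return 'X' or 'O' if that player has three in a row on the
    9-character board string `b`, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def negamax(b, player):
    """Exhaustively calculate possible outcomes for `player`.
    Returns (score, move): +1 = forced win, 0 = draw, -1 = loss."""
    w = winner(b)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, c in enumerate(b) if c == ' ']
    if not moves:
        return 0, None  # board full: draw
    other = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        child = b[:m] + player + b[m + 1:]
        # The opponent's best outcome is our worst, hence the negation.
        score = -negamax(child, other)[0]
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move
```

For example, on the board `"XX OO    "` with X to move, the search returns the winning move at square 2. Real chess engines cannot enumerate every outcome the way this toy can, which is exactly why they combine search with the kind of learned evaluation described earlier.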

Famous people such as Elon Musk and Bill Gates say that AI could be a threat capable of beating us, perhaps a greater threat to humanity than nuclear bombs. These predictions seem highly likely to me. If so, the education system will have to change: it should include classes that help us understand what AI is, because once we understand it, we can embrace it. As I asked questions of one of my classmates, it became apparent that people think they understand AI but do not realise how much they already use it without noticing. Using AI may look like embracing it, but they still don't truly understand it. Simply teaching people what AI is could be of enormous value to the future.

Artificial intelligence is an issue we need to consider. It may well be something to be scared of, a genuine threat to us. The way to make it less frightening is to join it. As the saying often attributed to Jim Henson goes, "if you can't beat them, join them", and while this seems like a bad option, I think it will have to happen. One argument against merging humans with robots states that we would lose a vital part of being human: how far can it go before we are no longer human, just vessels for robots? I would argue that we already live like robots. In the past, the vast majority of people were made to perform a single function and do what their parents wanted them to do. That has lessened, and who knows where it may take us in the future.

So, in conclusion, AI will become more intelligent than us; I think it is inevitable. AI is advancing at an incredible rate, and it is only a matter of time before it becomes less dependent on humans. Newer AIs keep coming. Artificial intelligence has already made things far easier for us: we can spend more time on creative work rather than on tedious, time-consuming tasks. It is an issue we have to understand so that it can be made as safe as possible. Governments should start placing restrictions on AI, and schools should teach students about it so that we can understand and embrace it. Schools should also help students form better predictions about the future, because AI will ultimately be our downfall or our salvation. Through this, we can build a safer, more risk-free future for everyone.


