The Griff

Final Thoughts: Will AI end the world as we know it?


—“Hey, Siri: tell me when the world will be taken over by robots.”

—“I’m sorry, Sydney, I’m afraid I can’t do that.”

If Siri’s response sends a shiver down your spine, you can thank HAL 9000, the iconic, eerily calm, artificially intelligent computer in Stanley Kubrick’s 1968 sci-fi drama, 2001: A Space Odyssey. Since then, we’ve seen countless sci-fi characters symbolize the human fascination with artificial intelligence (AI). You might recall Terminator or the film I, Robot.

But the world of robots and artificial intelligence goes beyond the dystopian fiction we all know. In reality, if you own a smartphone, you use artificial intelligence every day.

In research laboratories around the world, powerful tech companies like Google and Samsung are experimenting with the “brains” of future robots. Tesla is producing self-driving cars, and Facebook has built chatbots that negotiate with each other in a language of their own invention.

Advancements in technology are moving at an alarming rate, which is where the idea of the singularity comes in.

If I’m ever quieter than usual, it’s probably because I’m thinking about the singularity. Maybe you’re wondering: “What is the singularity? Are you saying that the world will be ruled by robots?”

Yeah, that’s exactly what I’m saying. But the singularity is much more complex than just a point in the future when robots will take over the world.

There’s a common idea of the singularity as the moment when artificial intelligence reaches “superintelligence,” the highest form of AI according to Tim Urban, who writes extensively on the “AI revolution” on his blog Wait But Why. The advent of superintelligence would mark a point in history when machines can do anything humans can do, only a billion times better. The fear surrounding this is pretty justified.

It’s even more justified when the CEO of Tesla and SpaceX (and real-life Tony Stark), Elon Musk, tweets, “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”

Musk isn’t the only big name to drive home concern about AI. Although his claims about an “AI apocalypse” may sound a little far-fetched, this August he joined more than 100 founders of AI and robotics companies in an open letter urging the UN to address the dangers of autonomous weapons, echoing an earlier warning on the same threat signed by the likes of Stephen Hawking and Noam Chomsky.

Although you might be skeptical about human extinction by robot, when the world’s top scientists and innovators issue warnings like this, it’s hard not to take them seriously.

You may have heard the term “digital disruption.” It’s one of those buzzwords used so often that most people don’t actually know what it means, but according to one digital transformation strategy guide, digital disruption is “the change that occurs when new digital technologies and business models affect the value proposition of existing goods and services.” In terms of AI, it’s actually a suitable term: disruption happens wherever advancements in technology pop up. One of the best examples is the iPhone. From media to music, the iPhone’s innovation has disrupted almost every industry you can think of. Now it’s impossible to imagine our lives without it, and in the future, the same could be said about AI.

Stable jobs in almost every industry will be challenged as AI grows. This paradigm shift is something I’m sure many people aren’t ready to come to terms with, but it’s also an opportunity. Disruption is necessary to build new products and ideas that are better, faster, and stronger than their old-school counterparts.

AI forces us to think about life in a different way. The impending doom of the singularity challenges us to question our worth and our purpose here on this little planet. Humans are generally scared of change, which is why the idea of robots merging with our evolution sounds like a doomsday catastrophe. And although I have to admit this possibility is pretty freaky, it’s also something I find fascinating.

As a student, I have to wonder how the possibility of the singularity will affect my life in the future. How will the demands of my career transform while artificial intelligence emerges? Will I, as a human, be seen by AI as a virus on this planet? Will all the time and money I’ve invested in my university career even matter once robots take over?

How can I make friends with the robots?

I think all of us should be asking these questions. The point of AI is to advance humanity, and if we fear the advancement of humanity, it will be difficult for us to go anywhere as a species. If the singularity poses a threat, then it’s logical to be aware of it. However, it’s also equally important to understand what superintelligent AI has to offer, and how we can work with it.