Toastmasters Speech – Singularity

This is a speech I made at Toastmasters yesterday:


When I say the words artificial intelligence, what do you think of? Perhaps Skynet from Terminator 2. Or perhaps you think of The Matrix, machines using human beings as batteries to power themselves. Or HAL from 2001: A Space Odyssey, running amok in space. Or maybe something less sinister – the loyal android, Data, from Star Trek, working side-by-side with humans.

But are any of these visions, fed to us by Hollywood, realistic?

People like Elon Musk and Stephen Hawking have told us that AI represents the greatest danger that humanity will ever face. Others, like Ray Kurzweil of Google, believe that powerful AI will only serve to enhance humanity. Either way, artificial intelligence is upon us. It is a reality, and it is unavoidable.

How many of you have heard of the Turing Test?

It is this: if a person converses with a machine intelligence and can’t reliably distinguish the machine from a human being, then the machine has passed the Turing test. Kurzweil predicts that by 2029, we will have an AI which can pass the Turing test.

Many of you will have heard of Moore’s Law. Strictly, it says that the number of transistors on a chip doubles roughly every two years; the popular version is that computer processing power doubles every 18 months. Either way, the significance is huge. It means that our technology is growing at an exponential rate. Exponential, not linear.
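For the written version of this speech, here is a quick back-of-the-envelope calculation to make the exponential concrete (a sketch in Python, using the popular 18-month figure):

```python
# Back-of-the-envelope: if processing power doubles every 18 months,
# how much more powerful do computers get over 30 years?
years = 30
doubling_period = 1.5                 # 18 months, expressed in years
doublings = years / doubling_period   # 20 doublings
growth = 2 ** doublings               # 2**20 = 1,048,576

print(f"{doublings:.0f} doublings -> roughly {growth:,.0f}x the power")
```

Twenty doublings is a factor of over a million. Linear growth over the same period would give you a factor of twenty. That gap is what our intuition fails to grasp.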

Our human minds don’t naturally comprehend exponential growth. Inevitably, we will reach a point where the human mind cannot keep up. Technological progress, fueled by AI, will proceed at an unstoppable rate of self-replicating improvement, outside of human influence. Programs creating their own programs. Self-replicating, self-optimizing code. Our friend Kurzweil predicts that we will reach this point by 2045. In the world of AI, this is called the Singularity.

The Singularity. Runaway technological advances, constantly self-upgraded by super-intelligent AI.

Does this scare you?

One has the impression that AI is going to end up as some great, monolithic monster, a monster that will inevitably see humanity for the scourge on the planet that it is, and annihilate us, or reduce us to slavery. I personally don’t think this is going to be the case. However, I am also wary. Very wary. It is wise to view such technology from an impartial stance, with eyes of objectivity.

But what is AI?

In essence, current machine learning crunches vast arrays of data using matrix algebra and probability theory. To put it simplistically, your average artificial intelligence system is a glorified system of statistical inference.
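For the blog version, here is what that “glorified statistical inference” looks like in miniature: a logistic-regression sketch in Python with NumPy, learning to separate two clouds of points. The data and parameters are invented purely for illustration, not taken from any real system.

```python
import numpy as np

# "AI" as statistical inference built from matrix algebra:
# logistic regression trained by gradient descent on toy data.
rng = np.random.default_rng(0)

# Two clouds of 2-D points: class 0 near (0,0), class 1 near (3,3).
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities (matrix product)
    grad_w = X.T @ (p - y) / len(y)      # gradient of the log-loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("accuracy:", (preds == y).mean())
```

Matrix multiplications, probabilities, a loop. No ghost in the machine.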

In the old days, before it was fashionable, AI modelling was a chore. Doing any kind of machine learning involved a whole load of linear algebra, probability, some calculus, programming, and knowledge of various search-tree algorithms. Not only that, but doing any worthwhile machine learning took a LOT of processor power. In short, it was hard. That level of difficulty created a huge barrier to entry.

Picture this scenario. Rather than a single, great, monolithic AI, imagine instead millions of custom, individual AI systems, each crafted and tailored to its own purpose. A scenario where anybody with a phone and a bit of programming knowledge can incorporate AI into their own systems: create their own facial recognition system, or a custom security system, or anything that can make judgements based on swathes of data.

In short, the democratization of artificial intelligence, technology in the hands of the people.

Amazing? Unimaginable?

But that’s now! Right now! Anybody, right now, with a little bit of programming savvy, can use publicly available machine-learning libraries to train their very own AI network with data, and create their own AI-based apps. Heck, you don’t even need to know the math anymore to implement AI. You just plug it in, tweak a few parameters, and boom, you’re ready to go. AI out of the box.
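As one illustration for the blog (scikit-learn is one such publicly available library; the dataset and parameters here are just an example, not a recommendation), a handful of lines really is enough to train a working digit recognizer:

```python
# "AI out of the box": train a small neural network to recognize
# handwritten digits using scikit-learn's bundled 8x8 digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)   # 1,797 labelled digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Plug it in, tweak a few parameters": one hidden layer of 64 units.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
```

No calculus required from the user; the library handles the math behind the scenes.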

The progress of AI is inevitable. Democratization of AI will serve as a buffer against the potential tyranny of some kind of runaway system, or misuse by a centralized group of bad actors. The specialized nature of AI systems serves as another buffer. At the moment, common AI systems can only perform narrow, specialized tasks.

To create an AI capable of reasoning at the level of a human being would require processing power that isn’t yet achievable with our binary computing systems. Indeed, true human complexity may well be impossible to replicate on traditional binary computer systems due to the limits of physics. But it may very well be achievable with quantum computing. Or three-dimensional molecular computing. Those technologies, though, are years away.

This is where I differ from the fearmongers. The cynic in my mind tells me this:

Fear is money. Fear is power. When you put fear into people’s hearts, they will give away their freedom and money for the illusion of security. Instilling the fear of AI gives occasion to governments, and the corporations behind those governments, to extend their surveillance tentacles ever further into our lives.

Envision this scenario:

A populace, terrified of the artificial thinking machines. To ‘save the people from themselves’, governments legislate that only the military, government agencies, and select private contractors have the legal right to develop or implement AI. This would mean the end of democratization of AI. It would mean more money and power in the coffers of the governments and corporations.

Similar things are happening already, here, in this country, with the recent introduction of the outrageous anti-encryption Bill, all in the name of protecting us from the big bad terrorists. A different area of computing, yes, but the principle stands true.

I say this. Do not fear it. Learn about it. Take power by learning about that which you are told to fear, because in this shiny postmodern world, ever more Orwellian in its outlook, it’s important for your own power to learn, learn, learn, and to not follow blindly.
