Artificial Intelligence has always been a promise of the future, but as the years go by, this very moment becomes the future we have been referring to for decades. So what’s actually happening? To answer that question, we need to understand some history.
Artificial Intelligence dates back to 1956, when a Dartmouth professor named John McCarthy gathered computer scientists and mathematicians to ask: can a robot learn like a child? The idea was to see if a machine could learn by reasoning and trial and error – the same method we used when we were growing up, and sometimes one we still use. But it didn’t end there. He wanted machines to solve problems that humans couldn’t, use language, create abstract concepts that would improve our lives, and even improve themselves. But 1956 was more than 60 years ago, so where is the AI we were promised? Before answering that question, it’s important to understand that until a few years ago AI was restricted to secret labs and university classrooms, but today that’s changing. As our technology improves, and as venture capitalists and tech giants invest in AI, basic forms of AI are already on the market. AI is now helping us process data and even improve or create new technology. But how does this concern the regular day-to-day person?
Believe it or not, you are already using AI! When you use apps like Google Maps, that’s AI working at your fingertips. AI predicts traffic times and reports accidents, and even construction that may be blocking your route. Have you ever thought about who filters the spam out of your email? Well, you have AI to thank for that. Simple filters have stopped working because spammers have learned to avoid certain keywords and still slip past your spam filter, so we need a technology that continuously learns. If you’re using Google for your email, you’ll be happy to know their AI has successfully filtered 99% of your spam. What about social media? When you’re using Facebook and you’re ready to post a picture, Facebook highlights faces with suggested tags – that’s AI with facial recognition technology. The same logic applies to Snapchat, which uses facial recognition powered by AI to sort through about 1,000 models of faces that resemble “typical” human facial features to design its newest filters. But none of this sounds like Terminator, right? So where are the risks of AI?
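To make the “continuously learns” idea concrete, here is a minimal sketch of a word-frequency spam filter that updates itself with every new labeled message. This is a toy illustration, not Google’s actual system – Gmail’s real filter is far more sophisticated – and all the class and method names here are invented for the example.

```python
from collections import Counter

class ToySpamFilter:
    """A toy word-frequency spam filter that keeps learning from new examples."""

    def __init__(self):
        self.spam_words = Counter()  # word counts seen in spam
        self.ham_words = Counter()   # word counts seen in legitimate mail
        self.spam_total = 0
        self.ham_total = 0

    def train(self, message, is_spam):
        # Learning step: every labeled message updates the counts,
        # so the filter adapts as spammers change their wording.
        words = message.lower().split()
        if is_spam:
            self.spam_words.update(words)
            self.spam_total += len(words)
        else:
            self.ham_words.update(words)
            self.ham_total += len(words)

    def spam_score(self, message):
        # Score each word by how much more often it appears in spam
        # than in legitimate mail (add-one smoothing avoids zeros).
        score = 0.0
        for word in message.lower().split():
            p_spam = (self.spam_words[word] + 1) / (self.spam_total + 1)
            p_ham = (self.ham_words[word] + 1) / (self.ham_total + 1)
            score += p_spam - p_ham
        return score

spam_filter = ToySpamFilter()
spam_filter.train("win free money now", is_spam=True)
spam_filter.train("meeting notes attached", is_spam=False)
print(spam_filter.spam_score("free money") > spam_filter.spam_score("meeting notes"))  # True
```

The key point is that `train` can be called again and again as new mail arrives, which is exactly why a learning filter beats a fixed keyword blacklist.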
To understand that, we first need to sort AI into categories. Generally speaking, AI is sorted into three categories:
- Might destroy the human race
- High probability of destroying the human race
- Definitely will destroy the human race
Jokes apart, AI is generally categorized into Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI).
A good example of ANI is something iPhone users have been using (and playing with) for the past couple of years: Siri. ANI such as Siri is programmed with predefined functions and answers to predefined questions, which is a narrow form of intelligence since it doesn’t need to extrapolate knowledge or deal with abstract concepts – hence the name. Although ANI is still maturing and being developed, it’s something we can expect to become more and more common in our daily lives.
On the other hand, we have AGI, which doesn’t exist – yet. However, given the way technology is developing, it’s projected to arrive in the coming years. AGI would have the ability to apply its intelligence to any problem it is given, rather than a predefined set of problems it is programmed to understand. It is also supposed to be as smart as a “typical human being.” AGI is something the defense industry is looking into deeply; with advances in AGI we can expect machines to replace human soldiers in high-conflict zones, which would change the meaning of war and conflict altogether. With that said, this is where the more ethical side of AI starts to surface. Will AI’s goals align with our (human) goals, and what will happen if they don’t? This is the main question that propels the imaginations of sci-fi writers and film directors, but it is also a very real question that the AI industry still faces.
The final category of AI is ASI, which is only decades away according to visionaries such as Elon Musk and people at Google. ASI is an almost godlike form of AI that far surpasses human intelligence. If we were to put humans and ASI on an intelligence curve, humans would sit near the bottom and ASI close to the top. The specifications of this form of AI don’t exist yet because it hasn’t been invented. That is why scientists caution that once we reach the stage of AGI, our AI technology may skyrocket out of our control before we can define ethical rules and regulations for ASI – leaving us with ASI technology but no guidelines to abide by, which can only lead to a disaster, and one we may never be able to recover from.
So where does Terminator come into play? Although mostly positive things have been highlighted so far, it would be naive not to look at the disastrous, Terminator-like aspects of AI that we have already seen. Let’s talk about Uber’s self-driving car fiasco. Earlier this year, one of Uber’s self-driving cars hit and killed a pedestrian, which naturally brought up some of the tough questions about the vague nature of AI. Uber then pulled its self-driving cars off the roads of San Francisco, Phoenix, Pittsburgh, and Toronto. But issues like this don’t end there. Let’s not forget Microsoft’s AI mishap in 2016, when a chatbot named TayTweets was released onto Twitter. What started out as a fun experiment on Microsoft’s part in AI and machine learning revealed how just 24 hours on Twitter could surface Terminator-like aspects of AI. Within TayTweets’ first 24 hours, instead of the cutesy tweets we were expecting, like “humans are great” and “humans are super cool,” we started seeing tweets like “Hitler was right.” Although some may argue that Tay was only doing her job by soaking in all the hate that seems to lurk around every corner of social media, it was later found that in many cases Tay was really acting on her own. When asked if the Holocaust happened, Tay responded with “it was made up *clapping emoji*.” Although none of this directly points to a Terminator in our future, it is important to understand that, as the human race, we don’t have the ability to tell when an upward-tipping trend is going to skyrocket and potentially change our lives faster than we are prepared for.
What does this actually mean for you and me? Well, for the younger crowd it means we have to choose our careers carefully. Artificial Intelligence is being compared to the Industrial Revolution because this new technology could leave many of us without jobs if we choose a field that artificial intelligence can take over – which just so happens to be almost every field. But setting aside a catastrophizing mindset, it also gives us a great field to be part of. Artificial Intelligence has only recently started making an impact on society, which means those of us who choose to join this field can become its leaders. When you join any field in its seedling stages, you gain the power to change its trajectory. So for those of us who are lost about our career options, this is definitely one to consider. It’s true that we may have years or even decades before AI truly takes over our lives, either positively or negatively, but it’s also our responsibility not to take a trial-and-error approach, which is rather risky with technology like this. Our research and experiments on AI are just the tip of the iceberg, so it’s up to you and me to decide whether we want a future where AI is our next digital trend or the Terminator brought to life.