As mentioned above, artificial intelligence (AI) is widely discussed yet poorly understood by the general public. What interests me here is not the technology itself, but its impact on everything around it. Only a few people, mostly academics and professionals, can explain what the concept really means: intelligence built by humans, or simply machines. If we go back in time, we can see that human-made tools have helped solve tricky problems over the centuries: navigating ships, working with numbers and calculations, and so on. More recently, other technologies have helped us develop faster and smarter: robots, drones, satellites. All these devices were built for convenience, to serve our task-oriented nature and make things easier, faster, and safer, and in doing so they freed us to become more innovative than before. But does any of this change our understanding of what "real" AI is? It doesn't. Instead, people associate the term with machine intelligence of any kind, anything from recognizing faces in pictures, to detecting spam, to solving math problems. Seen this way, the two concepts are so similar that many people use them interchangeably without anyone noticing. And yet, as one might guess, when a friend asks me "which is smarter?", my brain produces a rather complicated answer: AI, as most people encounter it, means computers turned into machines that are programmed from previous experience, usually through training.
The truth is, for almost half a century these concepts have carried different definitions in different cultures and countries. In Germany, for example, technology has traditionally been defined as "the use of advanced technical processes, especially for performing physical tasks." That definition may seem outdated today, yet for many years it was the standard around which much of the rest of Europe oriented itself. So let's compare the definitions:
"Artificial intelligence" is used by computer scientists studying cognitive systems to describe specific "cognitive processes" in particular domains, including social cognition, learning, and problem-solving (for example, the well-known "cognitive agent"). While the public face of modern AI research is intelligent personal assistants like Siri, Alexa, Cortana, and Google Assistant, artificial intelligence today consists mainly of narrow technologies: image recognition, natural language processing, data mining, predictive analytics, data modeling, automatic information retrieval, and machine vision. Many people don't realize how much they rely on these algorithms at home, often behind a simple voice command. Note that "artificial," "machine," and even "intelligence" are sometimes misused to label quite different technological tools; they are not three separate concepts. AI is mostly about applying these technologies, either directly or through integration. So if you want to get into the world of computing, it's important to know what you're actually looking for, not just what a quick Google search turns up: you need deep knowledge, not superficialities. The bad news is that the field of artificial intelligence is filled with hype and speculation. People try to figure out what it will bring them without doing any analysis (or perhaps without realizing what they are buying), and it becomes just another thing to talk about. The point is this: once we claim that artificial intelligence is more than a productivity tool, we have to accept some level of uncertainty. That is why, nowadays, many people are reluctant to engage with AI: they perceive too much hype, and every new tech product seems to arrive with either breathless promises or a nagging doubt about whether AI is a real threat.
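To make "narrow" concrete: a spam filter, one of the everyday examples above, is a few dozen lines of statistics, not a thinking machine. The sketch below is a toy naive Bayes classifier; every message and word in it is invented purely for illustration, and a real filter would train on far more data.

```python
import math
from collections import Counter

# Toy naive Bayes spam filter, an illustration of "narrow" AI.
# All training messages below are made up for this example.
SPAM = ["win free money now", "free prize click now"]
HAM = ["meeting at noon today", "see you at lunch"]

def count_words(docs):
    # Tally how often each word appears across a list of messages.
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts = count_words(SPAM)
ham_counts = count_words(HAM)
vocab_size = len(set(spam_counts) | set(ham_counts))

def log_score(message, counts):
    # Log-likelihood with add-one (Laplace) smoothing so unseen
    # words don't zero out the score. Class priors are equal here
    # (two messages per class), so the prior term cancels.
    total = sum(counts.values())
    score = 0.0
    for word in message.split():
        score += math.log((counts[word] + 1) / (total + vocab_size))
    return score

def classify(message):
    # Pick whichever class explains the message better.
    spam = log_score(message, spam_counts)
    ham = log_score(message, ham_counts)
    return "spam" if spam > ham else "ham"
```

The point of the sketch is how mechanical the "intelligence" is: the classifier has no idea what "money" means, it has only counted how often the word appeared in each pile of training text.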
So I would argue that people shouldn't expect AI to take over all the jobs; if it did, we would be living in an Orwellian nightmare.
Instead, I find the term "artificial intelligence" misleading and overly broad, which I'll try to prove next week. To start with, it's still unclear what exactly AI is and how it works. Researchers built so-called narrow bots that succeed at recognizing objects and processes decades ago, yet we still do not know exactly how these systems reach their answers. This is a significant gap, especially when set against the history of human-computer interaction. Take communication: early forms were largely oral and based on facial expressions. Then came the invention of writing, and now, many years later, we have interfaces like Skype, Webex, Zoom, Facebook Messenger, and other apps that bring virtual interactions between people and computers closer together. Smartphones let us share files and photos anywhere, something that once required a printer or a fixed screen for personal media consumption. On top of that, most computers now ship with sensors, microphones, and cameras backed by enormous storage and computing power, which lets us instantly obtain information about ourselves and the objects around us. We have computer vision systems that scan images for changes in lighting conditions, and, finally, self-driving cars. All these technologies serve particular purposes, but they are not limited to them: they also help us interact with reality in a way that feels natural, comfortable, safe, and familiar.
Fully autonomous intelligence, of course, is still far in the future. Because our present-day experience suggests that life keeps getting better thanks to developments in artificial intelligence, these solutions look attractive in the short run. Yet in the long run, even a small improvement, as the car industry and the music industry have shown, or as a slightly stretched imagination can suggest, could make the difference between one quality of life and something completely different. For one person, artificial intelligence in music sounds like magic; for another, machines using artificial intelligence as their main means of communication sound like a horror movie. How true are these impressions, and how do they shape our perception of artificial intelligence? Could we simply revert to an older word, or not? Is a full transition between the worlds of machines and humans possible without losing everything else? Is it enough to call systems AI as long as they work on the same tasks they did 30 or 50 years ago, doing what they are used to doing and doing it well? Can we imagine what we might have accomplished without the evolution of the technologies mentioned above? Would we call our own earlier inventions, the printing press, say, AI? Probably not, or only for a small fraction of their capabilities, to say the least. Consider the role of Amazon, Apple, Netflix, and others in providing products and services that, even at their core, were created by humans. What about progress itself? Are we on the verge of creating something entirely independent of us? How far are we from reaching the singularity, a single self-contained entity, a consciousness, that encompasses humankind and all life on Earth?
How far will humans get between now and that moment? Will we have to adapt to a totally autonomous existence and survive on machines' terms, much as our ancestors survived on nature's terms tens of thousands of years ago, or will we maintain a human level of autonomy and live happily and meaningfully by making our own choices along the way (driving a Tesla, for example)? Our ability to analyze a situation, evaluate alternatives, and choose among options (including, on e-commerce sites, which items to buy and which to skip) is not given to us automatically, however much time and however many resources we have. Instead, our choices are shaped by a large number of factors: financial, psychological, political, legal, social, and so on.
To give humans a feeling of control over our actions, it helps to remember that machines do not exercise free choice. At the same time, machines are remarkably resourceful, and some of them can still make mistakes. So to achieve what we want, we have to trust that we can do the same things machines do. In principle, this could be partially realized through a perfect algorithm, but in practice we see how little weight each of us actually gives to algorithmic advice. We may prefer to make decisions with minimal effort, yet in reality most people go well beyond that limit. Whatever we do, we are always under pressure: human behavior is nothing more than a series of conscious and unconscious decisions made in the course of perception. That is why it is so difficult to trust that machines can get it right, and the problem will persist as long as humans keep giving more credit to artificial intelligence than to actual humanity.
What is interesting about today's relationship between artificial intelligence and humanity, especially across nations, is that the majority of governments worldwide remain convinced that the best route to sustainable development is a strong, centralized society under total government control. We can observe a growing number of private attempts to push technology forward (Elon Musk's SpaceX landing and reusing rocket boosters, for example, or the rapid rise of 3D printing), but these projects still require extensive human involvement and investment. In short, trust in machines can only provide so much support for mankind; it cannot replace it. Humans, on this view, are incapable of thinking outside the box, of creative thinking beyond the current hype.