Have we really misunderstood artificial intelligence?

The author of this article, Sangbae Kim, is the director of the Biomimetic Robotics Laboratory at MIT. His group previously developed an acrobatic robot, exploring gaits and simple four-legged stunts in the hope of eventually making robots' body movements more refined.

In his view, humans hold cognitive biases about artificial intelligence and robots. Unless we recognize these biases, we cannot set appropriate directions for the research, application, and policy of AI technology. And because artificial intelligence is fundamentally different from human intelligence, it is easy to misunderstand.

Most people associate artificial intelligence with robots and assume they are the same thing. In fact, the term “artificial intelligence” is rarely used in research laboratories; more specific terminology for particular kinds of AI and other intelligent technologies is usually more appropriate. Whenever someone asks me “Is this robot operated by artificial intelligence?”, I hesitate, wondering whether it is even appropriate to call the algorithms we develop “artificial intelligence.”

The term “artificial intelligence” was first used by scientists such as John McCarthy and Marvin Minsky in the 1950s, and it has appeared frequently in science fiction and film in the decades since. Today it is also applied to the virtual assistants in smartphones and the algorithms of autonomous vehicles. For a long time, then, “artificial intelligence” has covered many different things, and that breadth has always caused confusion.

However, people often hold the prejudice that artificial intelligence is an artificial realization of human intelligence, and this prejudice may stem from our cognitive biases as human beings.

1. Don’t judge robots or artificial intelligence by human standards

How did you feel when AlphaGo, the artificial intelligence developed by DeepMind, defeated the nine-dan Go player Lee Sedol in 2016? You may have been surprised or frightened, thinking that the capabilities of artificial intelligence had surpassed those of human geniuses. Yet winning a game with an exponentially large space of possible moves, like Go, only means that artificial intelligence has surpassed a very narrow slice of human intelligence.

I believe many people were impressed by the backflip performed by the Mini Cheetah developed by the MIT Biomimetic Robotics Laboratory. Although jumping backward and landing is difficult even for humans, the algorithm for such a discrete stunt is very simple compared with the feedback-driven algorithms required to achieve stable walking. Robotic tasks that look easy to us are often extremely difficult and complicated, because we instinctively judge a task's difficulty by human standards.
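
To see the difference, here is a minimal sketch, in Python and with made-up constants, of the kind of feedback loop that balance requires: the controller must read the robot's state and correct it at every timestep, whereas a flip can in principle be executed as a pre-scripted torque sequence. This is an illustrative toy model, not the lab's actual controller.

```python
import math

# Toy illustration (not MIT's actual controller): stabilizing an inverted
# pendulum with a PD feedback loop, a common stand-in for balance control.
# Unit mass and inertia are assumed; all constants are illustrative.

g, L, dt = 9.81, 1.0, 0.001   # gravity, pendulum length, timestep
Kp, Kd = 40.0, 8.0            # hand-tuned feedback gains (assumed values)

theta, omega = 0.3, 0.0       # initial tilt (rad) and angular velocity

for step in range(5000):
    # Feedback: the torque depends on the *measured* state at every step.
    torque = -Kp * theta - Kd * omega
    # Simplified pendulum dynamics: gravity tips it over, torque corrects.
    alpha = (g / L) * math.sin(theta) + torque
    omega += alpha * dt
    theta += omega * dt

print(f"final tilt: {theta:.5f} rad")  # driven close to upright (0.0)
```

A scripted backflip can be played back blind; balance cannot, because any unmeasured drift compounds until the robot falls.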

We are quick to generalize about all of an artificial intelligence's capabilities after watching a single robot demonstration. When we see someone do a backflip on the street, we infer that this person walks and runs well, is flexible and athletic, and is probably good at other sports too. Generally speaking, such a judgment about a person is not wrong.

However, can we apply the same judgment to robots? We readily generalize about an artificial intelligence's performance from observations of specific robot movements or functions, just as we do for humans. Watching the video of OpenAI's robot hand solving a Rubik's Cube, we conclude that since the AI can perform such a complex task, it must be able to complete all simpler tasks. But we overlook the fact that the neural network was trained only on a limited class of tasks (such as solving the Rubik's Cube). If the situation changes, for example if the cube is turned upside down while being manipulated, the algorithm does not work nearly as well as expected.

Unlike artificial intelligence, humans can combine individual skills and apply them to many complex tasks. Once we learn how to solve a Rubik's Cube, we can still solve it quickly even when it is handed to us upside down, though it may feel strange at first. Most robotic algorithms need new data or reprogramming to do the same. Likewise, self-driving cars require real data for every situation, whereas human drivers can make rational decisions in countless situations based on previously learned concepts. These examples show a sharp contrast between human intelligence and robotic algorithms: the algorithms cannot perform tasks for which they have insufficient data.
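
A toy numerical sketch (purely illustrative, unrelated to OpenAI's actual system) makes the point: a model fit only on a narrow range of situations can look accurate there and still fail badly the moment the situation shifts.

```python
import numpy as np

# Toy illustration of distribution shift: a model fit on a narrow slice
# of inputs degrades sharply once the situation changes.

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)          # training "situations"
y_train = np.sin(2 * np.pi * x_train)         # the true underlying task

# Fit a cubic polynomial: accurate enough inside the training range.
coeffs = np.polyfit(x_train, y_train, deg=3)

def model(x):
    return np.polyval(coeffs, x)

x_in  = np.linspace(0.0, 1.0, 100)            # familiar inputs
x_out = np.linspace(1.5, 2.5, 100)            # the "upside-down" case

err_in  = np.mean(np.abs(model(x_in)  - np.sin(2 * np.pi * x_in)))
err_out = np.mean(np.abs(model(x_out) - np.sin(2 * np.pi * x_out)))

print(f"error inside training range:  {err_in:.3f}")   # small
print(f"error outside training range: {err_out:.3f}")  # much larger
```

A human who understands the pattern generalizes past the training range; the fitted model, like the Rubik's Cube network, only covers the situations it has seen.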

(Image source: HYUNG TAEK YOON)

Mammals have been evolving continuously for more than 65 million years, while the total time humans have spent learning mathematics, using language, and playing games adds up to only about 10,000 years. In other words, humans spent most of that evolutionary time developing abilities directly tied to survival, such as walking, running, and using their hands. It is therefore not surprising that computers calculate much faster than humans: they were developed for exactly that purpose. By the same token, computers cannot naturally acquire the ability to use hands and feet as freely as humans do, because those skills are the product of tens of millions of years of evolution.

This is why it is unreasonable to compare the performance of robots or artificial intelligence directly with the capabilities of animals or humans. After watching a video of the MIT Cheetah robot running through a field and jumping over obstacles, it would be hasty to conclude that robots now walk and run as maturely as animals. Many robot demonstrations still rely on algorithms set up for special tasks in limited situations. In fact, researchers tend to choose demonstrations that look difficult because they impress the audience; but that level of difficulty is judged from a human point of view and may bear little relation to actual algorithmic performance.

Before any logical thinking takes place, humans are susceptible to instantaneous, reflexive perception. This cognitive bias is strengthened when the object under study is very complex and difficult to analyze logically, such as a robot that uses machine learning.

So where does this cognitive bias come from? In my opinion, it comes from our psychological tendency to subconsciously anthropomorphize the objects we see. Humans evolved as social animals and, in that process, probably developed the ability to understand and empathize with one another; our tendency to anthropomorphize may stem from the same evolutionary process. For example, people tend to say they are “teaching” a robot when they are really programming an algorithm, and even knowing the difference, we remain accustomed to such anthropomorphic expressions. As the 18th-century philosopher David Hume put it, humans have a universal tendency to conceive all beings as like themselves.

Of course, we anthropomorphize not only the appearance of research subjects but also their minds. When Boston Dynamics released a video of an engineer kicking a robot, many viewers reacted with “This is too cruel,” “I feel sorry for the robot,” and “One day the robot will take revenge on that engineer.” In fact, the engineer was simply testing the robot's balancing algorithm. But before any deliberate thought occurs, the rough kick combined with the robot's animal-like struggle to recover is transmitted instantly to our brains and leaves a strong impression. In this way, instantaneous anthropomorphizing profoundly shapes our cognitive process.

2. Humans process information qualitatively, while computers process information quantitatively

Look around: our daily lives are full of algorithms. Because algorithms operate on numbers, we use terms such as “objective function,” a numerical function that represents a specific goal. The sole purpose of many algorithms is to reach the maximum or minimum value of that function, and what distinguishes one algorithm from another is how it gets there.
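
As a minimal illustration of what an “objective function” means in practice, here is a hypothetical Python sketch in which an algorithm's sole purpose is to drive such a function to its minimum (the goal value and step size are made up):

```python
# Minimal sketch of an "objective function" and an algorithm whose sole
# purpose is to drive it to its minimum. All values are illustrative.

def objective(x):
    # A numerical score for how far we are from the goal x = 3.
    return (x - 3.0) ** 2

def gradient(x):
    # Derivative of the objective, pointing uphill.
    return 2.0 * (x - 3.0)

x = 0.0                      # arbitrary starting guess
learning_rate = 0.1
for _ in range(100):
    x -= learning_rate * gradient(x)  # step downhill each iteration

print(f"x = {x:.4f}, objective = {objective(x):.6f}")  # x converges to 3.0
```

Everything the algorithm “wants” is contained in that one number; if the goal cannot be written as such a function, this style of algorithm has nothing to optimize.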

The goal of winning games such as Go or chess is relatively easy to quantify, and the easier a goal is to quantify, the better algorithms perform. Humans, by contrast, often make decisions without any quantitative thinking.

Take “cleaning the room” as an example. The way we clean is slightly different every day, depending on the situation, on whose room it is, and on how we feel. Are we trying to maximize some function in the process? In fact, we are not. The act of “cleaning” is carried out toward the abstract goal of “clean enough.” Moreover, the criterion for how much is “enough” changes easily, and it may differ from person to person.

We are used to assembling the information we have into decisions that are merely good enough, and we rarely check whether each decision is optimal. That is usually the sensible course, because we must satisfy many conflicting criteria with limited data.

However, this operational difference between humans and algorithms can cause trouble when we design tasks or services that we expect robots to perform. Algorithms perform tasks by optimizing quantified values, but human satisfaction, the real outcome of the task, is difficult to quantify fully. Quantifying the goal of a task that must adapt to personal preferences or changing circumstances, like the room cleaning above, is not easy. In other words, to coexist with humans, robots may need to evolve not toward optimizing specific functions but toward achieving “good enough” results. The latter is much harder to achieve robustly in real life, because it means handling many conflicting goals and qualitative constraints.
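
The contrast can be sketched in a few lines of toy code (all numbers hypothetical): an optimizer keeps pushing a score toward its maximum, while a “good enough” strategy stops once an adjustable, person-dependent threshold is met.

```python
# Hypothetical sketch of the room-cleaning contrast: an optimizer keeps
# maximizing a numeric score, while a human-style satisficer stops at an
# adjustable "clean enough" threshold. Numbers are illustrative only.

def clean_one_pass(cleanliness):
    # Each pass recovers half of the remaining mess.
    return cleanliness + 0.5 * (1.0 - cleanliness)

def optimizer(cleanliness=0.0, passes=50):
    # Chases the maximum: keeps cleaning long after it stops mattering.
    for _ in range(passes):
        cleanliness = clean_one_pass(cleanliness)
    return cleanliness

def satisficer(cleanliness=0.0, good_enough=0.9):
    # Stops as soon as the abstract goal "clean enough" is met;
    # the threshold varies by person and by day.
    passes = 0
    while cleanliness < good_enough:
        cleanliness = clean_one_pass(cleanliness)
        passes += 1
    return cleanliness, passes

print(optimizer())                    # ~1.0 after 50 passes
print(satisficer(good_enough=0.9))    # meets the bar in 4 passes
```

The hard part in reality is that `good_enough` is not a single number at all: it shifts with circumstances and conflicts with other goals, which is exactly what makes it difficult to hand to an optimizer.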

3. Actually, we don’t know what we are doing

Try to recall the last meal you ate before reading this article. Do you remember what you had? Do you remember the process of chewing and swallowing the food? Do you know what your tongue was doing at that moment?

In fact, our tongue does a great deal for us: it helps us put food in our mouths, distributes food between our teeth, swallows chewed food, and even sends large pieces back to the teeth when needed. We do all of this naturally, and even while talking with friends, the tongue handles pronunciation. How much do our conscious decisions contribute to tongue movements that accomplish so many complex tasks at once? We seem to move our tongue at will, but most of the time it moves automatically, receiving only high-level commands from our consciousness. This is why we cannot remember the detailed movements of the tongue during a meal: we barely know what it is doing in the first place.

We may think of the hand as the organ most under our conscious control, but many hand movements also happen automatically and unconsciously, or at most subconsciously. If you doubt it, try putting a key or some other object in your pocket and then taking it out. In that brief moment, countless micro-operations coordinate seamlessly to complete the task in an instant.

We often cannot perceive each of these actions separately. We do not even know what units to divide them into, so we cover them all with abstract words such as sorting, washing, spreading, rubbing, and wiping. These verbs are defined qualitatively: each refers to a sum of fine movements and manipulations whose composition varies with the situation. Even children easily understand and reason with these concepts, but from the perspective of algorithm development, the words are hopelessly vague and abstract.

(Image source: HYUNG TAEK YOON)

For example, we can show how to make a sandwich by spreading peanut butter on bread and explain it in a few simple words. Now suppose there is an alien who speaks our language but knows nothing of human civilization or culture. (I know the assumption is contradictory, but bear with me.) Could we explain to him over the phone how to make a peanut butter sandwich? We would probably get stuck explaining how to scoop the peanut butter from the jar. Even holding a slice of bread is not trivial: we must grip it firmly enough to spread the peanut butter against it, but not so hard that we crush the soft bread. Doing this is not very difficult; expressing it in language or text is, let alone expressing it as a function. Even between humans, could we learn the work of a carpenter over the phone? Could we accurately correct a tennis or golf posture over the phone? The reality is that we can hardly tell which of the details we observe are performed consciously and which unconsciously.

In short, not everything we do with our hands and feet can be expressed directly in words. What happens between one deliberate action and the next usually occurs automatically and unconsciously, so we describe our actions far more simply than they really are. This is why our behavior looks simple while actually being remarkable: the limits of expression lead us to underestimate the real complexity. We should recognize that the difficulty of verbal description hinders research progress in fields whose vocabulary is still underdeveloped.

Until recently, artificial intelligence was applied mainly to information services built on data processing, with voice recognition and facial recognition as prominent examples. We are now entering a new era in which artificial intelligence can efficiently provide physical services. In other words, the era of automating complex physical tasks is coming.

In particular, an increasingly aging society poses a great challenge: labor shortages are becoming an obvious social problem. We therefore urgently need to discuss how to develop technologies that augment human capabilities, so that people can focus on more valuable work and pursue distinctly human lives. This is why engineers and members of society in every field should deepen their understanding of artificial intelligence and of their own unconscious cognitive biases. As noted above, artificial intelligence is easily misunderstood because it is fundamentally different from human intelligence.

All in all, it is natural for humans to hold cognitive biases about artificial intelligence and robots. But unless we understand these biases clearly, we cannot set appropriate directions for the research, application, and policy of the technology. For science to bear fruit, the scientific community needs to pay close attention to its own cognition as it guides the proper development and application of technology.

Reference: https://spectrum.ieee.org/automaton/robotics/robotics-software/humans-cognitive-biases-facing-ai
