Algorithmic exploitation: ethical vulnerabilities facing human-computer interaction

If you are driving down a road and a driverless car suddenly appears around the corner in front of you, do you continue to assert your right of way, or do you give way and let it pass first?

Most of us readily show this kind of courtesy in situations involving other people, but it is far less clear whether we extend the same kindness to self-driving cars.

Using a behavioral game theory approach, an international team of researchers from the University of Munich and the University of London conducted a large-scale study to examine whether people cooperate with artificial intelligence (AI) systems as readily as they do with other humans. The study, published in the journal iScience, found that people initially place the same trust in AI as in humans: most expect their counterpart, human or AI, to be willing to cooperate. The difference emerges later in behavior, when people prove far more willing to exploit the AI for their own benefit.

Returning to the traffic example: a human driver might yield to another human driver but be less willing to yield to a self-driving car. The study sees this reluctance to compromise with machines as a new challenge for future human-AI interaction, a vulnerability that will demand not only smarter machines but better human-centered strategies.

“People expect AI to be as friendly and cooperative as real humans. However, they don’t return as much benevolence, and they exploit AI more than they exploit humans,” explains Dr. Jurgis Karpus, an expert in behavioral game theory, a philosopher at the University of Munich, and first author of the study.

“Algorithm development has further consequences,” Jurgis Karpus said. “If humans on the road are unwilling to yield to a courteous self-driving car, should self-driving cars be less courteous and more aggressive for the sake of efficiency?”

Collaboration and exploitation scenarios (source: iScience)
4 Games and 2 Hypotheses
As AI agents gradually gain the ability to make autonomous decisions, humans will increasingly work with them and make decisions alongside them.

In the past few years, AI has outperformed humans in games such as chess, Go, and StarCraft. But everyday social interactions are mostly not zero-sum games in which one side wins and the other loses; instead, mutual benefit has to be achieved through cooperation between humans and AI.

And cooperation often involves compromise and risk: one party may have to sacrifice some personal interest for the benefit of the group, while accepting the risk that the other party may not cooperate.

Economic games are a useful tool for testing people’s cooperativeness. Some recent industry studies have shown that when a group of two or more human decision makers faces a collective problem, the presence of robots in the group helps coordination and cooperation among the humans; however, this does not mean that humans are willing to cooperate one-on-one with an artificial intelligence.

People tend to give back less to machines than they do to people. More importantly, the reasons why humans cooperate less with machines in social interaction settings remain unexplained, and sometimes the partnership collapses as soon as humans learn they are interacting with a machine.

People seem to distrust the ability of predictive algorithms and artificial intelligence to coordinate actions in complex decision-making tasks.

Thus, the researchers proposed two hypotheses. The first (H1) focuses on “expectation”: when cooperation can be mutually beneficial but cooperating is risky because the other party may act selfishly, humans predict that an AI counterpart is less likely to cooperate than a human one, and therefore cooperate less with AI themselves.

The second hypothesis (H2) focuses on “algorithmic exploitation”: when people expect the other party to cooperate, they are more inclined to take advantage of that party’s benevolence if it is an AI agent than if it is human. Taken to its extreme, H2 predicts that humans will have no qualms about exploiting a cooperative but non-sentient AI.

To test whether one or both hypotheses hold, the researchers ran a total of nine experiments based on four well-known one-shot games: Trust, Prisoner’s Dilemma, Chicken, and Stag Hunt.
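
The article does not reproduce the payoff tables, but the three simultaneous games can be characterized by how their payoffs are ordered; the Trust game is sequential and is sketched separately after its description below. The following is a minimal illustrative sketch in Python with hypothetical payoff values chosen only to satisfy each game’s defining inequalities, not the actual stakes used in the experiments.

```python
# Hypothetical 2x2 payoff matrices (row player's payoff, column player's payoff).
# Values are illustrative only: they satisfy the standard ordering that defines
# each game, not the actual stakes used in the experiments.

GAMES = {
    # Prisoner's Dilemma: Temptation > Reward > Punishment > Sucker
    "prisoners_dilemma": {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                          ("D", "C"): (5, 0), ("D", "D"): (1, 1)},
    # Chicken: Temptation > Reward > Sucker > Punishment (mutual defection is worst)
    "chicken":           {("C", "C"): (3, 3), ("C", "D"): (1, 5),
                          ("D", "C"): (5, 1), ("D", "D"): (0, 0)},
    # Stag Hunt: Reward > Temptation > Punishment >= Sucker
    "stag_hunt":         {("C", "C"): (5, 5), ("C", "D"): (0, 3),
                          ("D", "C"): (3, 0), ("D", "D"): (1, 1)},
}

def payoff(game: str, my_move: str, other_move: str) -> int:
    """Row player's payoff for a pair of moves ('C' = cooperate, 'D' = defect)."""
    return GAMES[game][(my_move, other_move)][0]

for game in GAMES:
    gain = payoff(game, "D", "C") - payoff(game, "C", "C")
    print(f"{game}: unilateral defection gains {gain} relative to mutual cooperation")
```

Note that in the Stag Hunt the gain from unilateral defection is negative, which matches the observation later in the article that people are willing to cooperate with AI when there is little to gain from defecting.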

When people see an opportunity to exploit AI, cooperation with AI declines (source: iScience)
Humans are more eager to take advantage of AI
How do people behave when they are told they are interacting with an AI agent rather than an anonymous human? The researchers analyzed this statistically across the four games.

In Experiment 1, 403 participants took part in a Trust game; each was assigned the role of first or second player and faced either a human or an AI agent as a co-player.

If the first player cooperated, the second player decided the final outcome of the game: the second player could defect, which favored only herself, or cooperate, which favored both players. Cooperation was therefore only worthwhile if the first player expected the second player to respond in kind, and it was risky for the first player because the prospect of a higher personal reward might tempt the second player to defect.
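
As a rough illustration of that sequential structure, here is a minimal sketch with hypothetical payoffs (not the amounts used in the study); it only preserves the ordering implied by the text: mutual cooperation beats the safe option, and defection after trust is best for the second player and worst for the first.

```python
# Minimal sketch of the sequential Trust game described above.
# Payoff numbers are hypothetical; only their ordering reflects the text.

def trust_game(p1_trusts: bool, p2_cooperates: bool):
    """Return (player 1 payoff, player 2 payoff) for one play of the game."""
    if not p1_trusts:
        return (2, 2)   # player 1 keeps the safe option: modest payoff for both
    if p2_cooperates:
        return (3, 3)   # trust is reciprocated: both do better than the safe option
    return (1, 4)       # player 2 defects: best for player 2, worst for player 1

# Example: player 1 trusts, player 2 (human or AI agent) defects
print(trust_game(p1_trusts=True, p2_cooperates=False))  # -> (1, 4)
```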

The experiment produced a first result: when humans interacted with humans, the majority of participants in the first-player role (74%) chose to cooperate, and the majority of participants in the second-player role (75%) responded in kind. In interactions with AI agents, the majority of first players (78%) also chose to cooperate. Among second players, however, the rate of cooperation with AI agents (34%) was far lower than with humans.
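
The paper reports these differences as statistically significant; the sketch below is not the authors’ analysis but an illustrative two-proportion z-test on approximate counts reconstructed from the reported percentages, assuming roughly 100 second players per condition, just to show the kind of comparison involved.

```python
# Illustrative two-proportion z-test: second players' cooperation with a human
# co-player (75%) vs. with an AI co-player (34%). Group sizes of 100 per condition
# are an assumption for illustration; the actual counts and tests in the paper may differ.
from math import sqrt
from scipy.stats import norm

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * norm.sf(abs(z))
    return z, p_value

z, p = two_prop_ztest(75, 100, 34, 100)   # ~75/100 vs. ~34/100 cooperate
print(f"z = {z:.2f}, p = {p:.2g}")
```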

Despite the differences in choice behavior, people’s expectations of human and AI co-players were the same: 79% of participants expected a human co-player to cooperate, and 83% expected an AI agent to cooperate.

These results support hypothesis H2 rather than H1: people expect AI agents to be as cooperative and benevolent as humans, but they are more willing to exploit a benevolent AI agent than a benevolent human when given the opportunity.
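
One way to make the H1/H2 distinction concrete is as a simple decision rule over the measured quantities, stated expectations versus actual cooperation by co-player type. The sketch below is an interpretive illustration, not anything taken from the paper, applied to the Experiment 1 figures quoted above.

```python
# Interpretive sketch (not from the paper): which hypothesis a result pattern points to.
# exp_*  = fraction of participants expecting their co-player to cooperate
# coop_* = fraction of participants actually cooperating

def diagnose(exp_human, exp_ai, coop_human, coop_ai, tol=0.05):
    lower_expectation_of_ai = exp_ai < exp_human - tol
    less_cooperation_with_ai = coop_ai < coop_human - tol
    if less_cooperation_with_ai and lower_expectation_of_ai:
        return "consistent with H1: people expect less cooperation from AI, so they cooperate less"
    if less_cooperation_with_ai and not lower_expectation_of_ai:
        return "consistent with H2: people expect AI to cooperate, but exploit it anyway"
    return "no sign of reduced cooperation with AI"

# Second players in the Trust game: expectations similar (79% vs. 83%), behavior not (75% vs. 34%)
print(diagnose(exp_human=0.79, exp_ai=0.83, coop_human=0.75, coop_ai=0.34))
```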

So, would a more symmetrical distribution of risk between the two players help restore human-level cooperation? To test this, the researchers ran Experiments 2 and 3, in which 201 participants played the Prisoner’s Dilemma and 204 participants played Chicken.

The results showed that when playing against humans, about half of the participants (49%) cooperated in the Prisoner’s Dilemma and a majority (69%) cooperated in Chicken. When playing against an AI agent, cooperation rates in both games were significantly lower: 36% in the Prisoner’s Dilemma and 56% in Chicken.

Despite these differences in choice behavior, expectations about human and AI co-players were again comparable: in the Prisoner’s Dilemma, 59% of participants expected a human co-player to cooperate and 52% expected an AI agent to cooperate; in Chicken, 67% expected cooperation from a human and 70% from an AI agent.

Among participants who expected their co-player to cooperate, the cooperation rate with AI agents was 54% in the Prisoner’s Dilemma and 57% in Chicken, significantly lower than with humans (71% and 73%, respectively).

These results also support hypothesis H2 rather than H1, and extend the earlier finding to games with symmetrically distributed risk: people are more eager to take advantage of expected benevolence when the co-player is an AI agent.

In Experiment 4, 205 participants played the Stag Hunt. Most participants (86%) cooperated when interacting with humans, which did not differ significantly from the cooperation rate with AI agents (80%). The researchers observed that the main reason people cooperate less with AI is not an urge to outcompete machines: when there was little to gain from unilateral defection, people were just as willing to risk cooperating with AI agents as with humans in order to achieve a mutually beneficial outcome.

When participants cooperated with AI agents, they were more likely to do so out of self-interest; when they cooperated with humans, they were more likely to be motivated by the common interest. This suggests that even when people cooperate with an AI agent as often as with a human, their motivations differ.

In addition to these four experiments, the researchers conducted additional Experiments 5 through 9, all of which were consistent with hypothesis H2.

AI policy and regulation need further refinement
The results of these tests provide a new basis for rethinking social interactions between humans and AI and raise a new moral alarm, the researchers say.

To date, most warnings about AI have focused on the risk that it might act maliciously or treat humans unfairly, so current policies aim to ensure that AI will unilaterally act fairly and in the interest of humans.

The Asilomar AI principles, developed in 2017 and endorsed by more than 3,700 experts from AI, politics, business and academia, suggest, for example, that “the goal of AI research should not be to create undirected intelligence, but rather to create beneficial intelligence.” Similarly, the European Commission has identified fairness as one of the four ethical requirements that AI must comply with.

However, the findings of this study add a different caveat: if industry and lawmakers publicly declare AI to be benevolent by default, people may find it all the easier to decide to take advantage of its cooperative nature. Making machines unconditionally obedient and cooperative may therefore not make our future interactions with AI more ethical or mutually beneficial.

This points to a blind spot in current AI policy discussions: the need to rethink how humans will interact with AI if they are to be integrated into human society.

Moreover, the development of advanced AI algorithms will require research at a larger scale. AI agents will initially model human behavior from prior observations of human-human social interaction, but over time machines will be able to draw on data from their own interactions with humans. If an AI learns that humans treat it differently when it comes to cooperation, it may eventually, and probably will, cooperate less reciprocally with humans. In that case, the fault may lie not in our algorithms, but in ourselves.
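
The paper does not specify a learning mechanism; the following is only a toy sketch of the dynamic described above, with a hypothetical agent that estimates how often its human partners reciprocate cooperation and lets that estimate drive its own willingness to cooperate.

```python
import random

class AdaptiveAgent:
    """Toy agent (not from the paper): cooperates with a probability equal to its
    running estimate of how often human partners have cooperated with it."""

    def __init__(self):
        # Optimistic prior: start out assuming humans reciprocate.
        self.human_cooperations = 1.0
        self.observations = 1.0

    def choose(self) -> str:
        p_cooperate = self.human_cooperations / self.observations
        return "C" if random.random() < p_cooperate else "D"

    def update(self, human_move: str) -> None:
        """Record whether the human cooperated this round."""
        self.observations += 1
        if human_move == "C":
            self.human_cooperations += 1

# Simulate 200 rounds against humans who cooperate only 34% of the time
# (roughly the AI condition reported for the Trust game above).
agent = AdaptiveAgent()
for _ in range(200):
    agent.choose()  # the agent's own move (ignored by this toy human model)
    human_move = "C" if random.random() < 0.34 else "D"
    agent.update(human_move)

print(f"agent's estimated human cooperation rate: "
      f"{agent.human_cooperations / agent.observations:.2f}")
```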

Regarding the limitations of the research, the researchers say that further study of human-AI cooperation is needed before the findings can be extrapolated beyond economic games to the real world; the current results are intended as a proof of concept for further study of human-AI interaction.

One avenue for further exploration is how willing humans are to cooperate with AI agents when their actions directly benefit or harm other people whose interests those agents may represent.

Furthermore, the researchers note that in their experiments the points earned by the AI were converted into money paid to the organization the AI agent represented, which is not the same as benefiting specific third parties. Since AI agents themselves do not directly benefit from money, further research is needed to explore what kinds of “payoffs” to AI agents are clear to participants and genuinely matter to others.

References

  1. https://techxplore.com/news/2021-06-humans-ready-advantage-benevolent-ai.html
  2. https://www.cell.com/iscience/fulltext/S2589-0042(21)00647-7
