Humans expect AI to be benevolent and trustworthy. At the same time, humans are unwilling to cooperate and compromise with machines, a new study finds. They even exploit them.
Imagine driving on a narrow road in the near future when suddenly another car emerges from a bend in front of you. It is an autonomous car with no passengers inside. Are you going to come forward and assert your right of way, or give way to let it pass? Today, most of us behave benevolently in such situations involving other humans. Are we going to show the same kindness towards autonomous vehicles?
Using methods from behavioral game theory, an international team of researchers from LMU and the University of London conducted large-scale online studies to see whether people would behave as cooperatively with artificial intelligence (AI) systems as they do with their fellow humans.
Cooperation keeps a society together. It often forces us to compromise with others and accept the risk that they will let us down. Traffic is a good example. We lose a bit of time when we let other people pass in front of us, and we are outraged when others fail to return the favor. Will we do the same with machines?
Exploiting the machine without feeling guilty
The study, published in the journal iScience, found that on first encounter people place the same level of trust in AI as in humans: most expect to meet someone who is willing to cooperate.
The difference comes later. People are much less willing to reciprocate with AI, and instead exploit its benevolence to their own advantage. Returning to the traffic example, a human driver would give way to another human but not to an autonomous car.
The study identifies this reluctance to compromise with machines as a new challenge for the future of human-AI interactions.
“We put people in the shoes of someone who is interacting with an artificial agent for the first time, as it could happen on the road,” says Dr Jurgis Karpus, behavioral game theorist and philosopher at LMU Munich and first author of the study. “We modeled different types of social encounters and found a consistent pattern. People expected artificial agents to be as cooperative as their fellow human beings. However, they did not return the agents’ benevolence as much, and exploited the AI more than they exploited humans.”
With perspectives from game theory, cognitive science, and philosophy, the researchers found that “algorithm exploitation” is a robust phenomenon. They replicated their results through nine experiments with nearly 2,000 human participants.
Each experiment examined a different type of social interaction and allowed the human to decide whether to compromise and cooperate or to act selfishly. The participants’ expectations of the other players were also measured. In the well-known Prisoner’s Dilemma, people must trust that the other player will not let them down. Participants accepted this risk with both humans and AI, but they betrayed the AI’s trust far more often in order to earn more money.
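The Prisoner’s Dilemma incentive structure described above can be sketched in a few lines of code. Note that the numeric payoffs below are standard textbook values chosen for illustration; they are assumptions, not the actual stakes used in the study’s experiments.

```python
# A minimal sketch of the Prisoner's Dilemma payoff structure.
# The payoff values (5, 3, 1, 0) are illustrative textbook numbers,
# not the stakes used in the iScience study.

PAYOFFS = {
    # (my move, other's move): my payoff
    ("cooperate", "cooperate"): 3,  # mutual trust rewarded
    ("cooperate", "defect"):    0,  # my trust is exploited
    ("defect",    "cooperate"): 5,  # I exploit the other's trust
    ("defect",    "defect"):    1,  # mutual betrayal
}

def payoff(my_move: str, other_move: str) -> int:
    """Return my payoff for one round of the game."""
    return PAYOFFS[(my_move, other_move)]

# The dilemma: whatever the other player does, defecting pays me more...
assert payoff("defect", "cooperate") > payoff("cooperate", "cooperate")
assert payoff("defect", "defect") > payoff("cooperate", "defect")
# ...yet both players are better off under mutual cooperation
# than under mutual defection.
assert payoff("cooperate", "cooperate") > payoff("defect", "defect")
```

The study’s finding maps onto this structure: participants played the “cooperate” move against humans whom they expected to cooperate, but chose “defect” against an AI they expected to cooperate, pocketing the higher exploitation payoff.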
“Cooperation is sustained by a mutual bet: I trust you will be kind to me, and you trust I will be kind to you. The biggest worry in our field is that people will not trust machines. But we show that they do!” notes Professor Bahador Bahrami, a social neuroscientist at LMU and one of the study’s principal investigators. “They are fine with letting the machine down, though, and that is the big difference. People do not even report much guilt when they do,” he adds.
A benevolent AI can backfire on you
Biased and unethical AI has made headlines – from the 2020 UK exam fiasco to court systems – but this new research raises a different caveat. Industry and lawmakers are striving to make artificial intelligence benevolent. But benevolence can backfire.
If people think AI is programmed to be kind to them, they will be less inclined to cooperate with it. Some accidents involving self-driving cars may already offer concrete examples: drivers recognize an autonomous vehicle on the road and expect it to give way, while the autonomous vehicle expects the normal compromises between drivers to hold.
“Algorithm exploitation has further consequences down the line. If humans are reluctant to let a polite self-driving car merge from a side road, should the self-driving car be less polite and more aggressive in order to be useful?” Jurgis Karpus asks.
“Benevolent and trustworthy AI is a buzzword that gets everyone excited. But fixing the AI is not the whole story. If we realize that the robot in front of us will be cooperative no matter what, we will use it in our own selfish interest,” says Professor Ophelia Deroy, philosopher and senior author of the study, who also works with the Peace Research Institute Oslo in Norway on the ethical implications of integrating autonomous robot soldiers alongside human soldiers. “Compromise is the oil that makes society work. For each of us, it looks like only a small act of self-interest. For society as a whole, it could have far greater repercussions. If no one lets autonomous cars join the traffic, they will create their own traffic jams on the side, and will not make transport easier.”