In a new set of experiments, artificial intelligence (AI) algorithms were able to influence people’s preferences for fictitious political candidates or potential romantic partners, with their success depending on whether the recommendations were explicit or covert. Ujué Agudo and Helena Matute of Universidad de Deusto in Bilbao, Spain, present these findings in the open-access journal PLOS ONE on April 21, 2021.
From Facebook to Google search results, many people encounter AI algorithms every day. Private companies conduct extensive research on their users’ data, generating insights into human behavior that are not publicly available. Academic social science research lags behind this private research, so public knowledge of how AI algorithms might shape people’s decisions is scarce.
To shed new light on this question, Agudo and Matute conducted a series of experiments testing the influence of AI algorithms in different contexts. They recruited participants to interact with algorithms that presented photos of fictitious political candidates or online dating candidates, and asked the participants to indicate which candidates they would vote for or message. The algorithms promoted some candidates over others, either explicitly (e.g., “90% compatibility”) or covertly, such as by showing their photos more often than others’.
Overall, the experiments showed that the algorithms significantly influenced participants’ choices of whom to vote for or message. For political decisions, explicit manipulation strongly swayed participants, while covert manipulation was not effective. The opposite pattern was observed for dating decisions.
The researchers suggest these results may reflect a preference for explicit human advice on subjective matters such as dating, whereas people may be more accepting of algorithmic advice for rational, logical decisions such as political choices.
In light of their findings, the authors voice support for initiatives to improve the trustworthiness of AI, such as the European Commission’s Ethics Guidelines for Trustworthy AI and DARPA’s Explainable AI (XAI) program. Nonetheless, they caution that more publicly available research is needed to understand human vulnerability to algorithms.
Meanwhile, the researchers call for efforts to educate the public about the risks of blindly trusting algorithms’ recommendations. They also highlight the need for discussion about who owns the data that powers these algorithms.
The authors add: “If a fictitious and simplistic algorithm like ours can achieve such a level of persuasion without establishing truly personalized profiles of the participants (and using the same photographs in all cases), a more sophisticated algorithm, such as those people interact with in their daily lives, should certainly be able to exert a much stronger influence.”
Story source:
Materials provided by PLOS.