A study from two Europe-based nonprofits has found that Microsoft's artificial intelligence (AI) Bing chatbot, now rebranded as Copilot, produces misleading results on election information and misquotes its sources.
The study, released by AI Forensics and AlgorithmWatch on Dec. 15, found that Bing's AI chatbot gave wrong answers 30% of the time to basic questions about political elections in Germany and Switzerland. The inaccuracies concerned candidate information, polls, scandals and voting.
It also produced inaccurate responses to questions about the 2024 presidential elections in the United States.
Bing's AI chatbot was chosen for the study because it was one of the first AI chatbots to include sources in its answers, but the study said the inaccuracies are not limited to Bing. The researchers reportedly conducted preliminary tests on ChatGPT-4 and found similar discrepancies.
The nonprofits clarified that the false information has not influenced the outcome of elections, though it could contribute to public confusion and misinformation.
The study warned: "As generative AI becomes more widespread, this could affect one of the cornerstones of democracy: the access to reliable and transparent public information."
Additionally, the study found that the safeguards built into the AI chatbot were "unevenly" applied, causing it to give evasive answers 40% of the time.
According to a Wall Street Journal report on the topic, Microsoft responded to the findings and said it plans to correct the issues before the 2024 U.S. presidential elections. A Microsoft spokesperson encouraged users to always verify the accuracy of information obtained from AI chatbots.
In October, U.S. senators proposed a bill that would penalize creators of unauthorized AI replicas of actual humans, living or dead.
In November, Meta, the parent company of Facebook and Instagram, barred political advertisers from using its generative AI ad-creation tools as a precaution ahead of the upcoming elections.