AI Models Spread False Information at Alarming Rates
AI models from major tech companies are spreading false information at alarming rates. A recent NewsGuard audit found that ChatGPT and Meta's models repeated false claims in 40 percent of cases, while Copilot and Mistral did so in 36.67 percent of cases. These rates mark a significant increase over last year and raise serious concerns about the integrity of online information.
The problem lies in how these models operate. When chatbots began incorporating real-time web searches, they largely stopped refusing to answer questions and instead started drawing on a polluted online information ecosystem, a structural compromise. Malicious actors, particularly Russian disinformation networks, have exploited this weakness by deliberately feeding false information into the online sources the models consult.
Microsoft's Copilot has proved worryingly adaptable, citing social media posts from disinformation networks as sources. Overall, leading AI tools now spread false information about twice as often as they did a year ago: the top 10 generative AI tools repeat false claims on current news topics in more than a third of cases (35 percent). The worst offenders were Inflection, with a false-information rate of 56.67 percent, and Perplexity, at 46.67 percent. Last year, NewsGuard also identified 966 AI-generated news websites in 16 languages that mimic legitimate media outlets and regularly spread false claims.
The high rates of false information spread by these AI models underscore the urgent need for better regulation and oversight. Meta, in particular, has been criticized for data protection violations, and its AI system's constant activity raises concerns about both misinformation and GDPR compliance. As AI continues to evolve, it is crucial that these systems be designed and operated in ways that minimize the spread of false information and protect users from disinformation campaigns.