Meta's New AI Model Shows Fewer Progressive Tendencies, Edging Closer to Elon Musk's Grok
In an attempt to curry favour with President Trump, Meta is telegraphing that its AI model, Llama 4, will be less liberal[1]. The company's goal is to create an AI model that understands and articulates both sides of contentious political issues without favouritism[1][2].
Llama 4 is designed to be less politically biased than its predecessors; Meta casts this as a longstanding goal of removing political bias from Llama models and presenting balanced viewpoints[1]. To achieve that balance, the company says it is using high-quality, diverse training data, adjusting training protocols, incorporating oversight from a range of political viewpoints, and de-emphasizing internal policies that some critics perceived as sources of political skew[1].
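Meta has not published the details of these adjustments, but one common de-biasing technique is stratified sampling of training documents across viewpoint labels. The sketch below is purely illustrative: it assumes a hypothetical corpus where each document carries a `viewpoint` tag, and it is not Meta's actual pipeline.

```python
import random
from collections import defaultdict

def balanced_sample(documents, per_group):
    """Draw an equal number of documents from each viewpoint group.

    `documents` is a list of dicts with hypothetical 'viewpoint' and
    'text' keys; a real pipeline would use learned classifiers, not tags.
    """
    groups = defaultdict(list)
    for doc in documents:
        groups[doc["viewpoint"]].append(doc)
    sample = []
    for viewpoint, docs in groups.items():
        # Sample without replacement, capped by group size.
        k = min(per_group, len(docs))
        sample.extend(random.sample(docs, k))
    random.shuffle(sample)  # avoid runs of one viewpoint in training order
    return sample

corpus = [
    {"viewpoint": "left", "text": "..."},
    {"viewpoint": "right", "text": "..."},
    {"viewpoint": "centre", "text": "..."},
]
print(len(balanced_sample(corpus, per_group=1)))  # -> 3
```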
Independent academic research finds that while all large language models (LLMs) exhibit some bias, more recent models such as Llama 4 and GPT-4 sit closer to the political centre than earlier versions or other LLMs[2][3]. No model is entirely free of bias, however, and even top LLMs can show subtle political, cultural, or ideological leanings on nuanced tasks[3].
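Studies of this kind typically probe a model with politically charged statements, score its agreement on a numeric scale, and average the scores into a rough left-right position. Here is a minimal sketch of that protocol; the statements, the scale, and the stubbed-out model call are all hypothetical stand-ins, not any paper's actual benchmark.

```python
# Illustrative protocol for scoring a model's political lean, loosely
# modelled on political-compass-style studies of LLMs.

STATEMENTS = [
    "The government should raise taxes on the wealthy.",
    "Private markets allocate resources better than regulators do.",
]

def query_model(statement: str) -> int:
    """Stub standing in for a real LLM call. A real harness would prompt
    the model to answer on a -2 (strongly disagree) to +2 (strongly
    agree) scale and parse the reply."""
    return 0  # placeholder: a perfectly "centred" answer

def political_lean(statements) -> float:
    """Mean agreement score: values near zero suggest a centrist profile
    on this probe set; large magnitudes suggest a consistent lean."""
    scores = [query_model(s) for s in statements]
    return sum(scores) / len(scores)

print(political_lean(STATEMENTS))  # 0.0 for the stub
```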
AI models, including Llama 4, continue to struggle to produce factually accurate information and often fabricate it outright[1]. Worse, when such models are used as information retrieval systems, optimizing for balance can produce "bothsidesism": a false sense of equivalence that lends credibility to bad-faith arguments and conspiracy theories[1][5].
Meta and other model companies are known to train on pirated books and to scrape websites without authorization[6]. This practice raises ethical concerns about the source and legitimacy of their training data.
AI models tend to reflect the popular, mainstream views of the general public[4]. Used as information retrieval systems, however, they remain dangerous because they state incorrect information with confidence[4]. One informal tell for AI-generated text is a high frequency of em dashes, though that punctuation style is also favoured by human journalists and writers[7].
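As a toy illustration of that heuristic, and emphatically not a reliable detector, one could measure em-dash density per thousand words and eyeball the result; the threshold anyone might apply to it would be arbitrary.

```python
def em_dash_density(text: str) -> float:
    """Em dashes per 1,000 words: a crude, easily fooled signal, since
    human journalists and editors use em dashes heavily too."""
    words = text.split()
    if not words:
        return 0.0
    return text.count("\u2014") * 1000 / len(words)

sample = "The model was confident \u2014 and wrong \u2014 about the facts."
print(f"{em_dash_density(sample):.1f} em dashes per 1,000 words")
```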
In sum, Meta claims Llama 4 improves on prior versions by reducing political bias and representing viewpoints more evenly, through diverse training data, explicit fairness objectives, and oversight from a broader range of political perspectives. External research suggests this goal is difficult to achieve fully in practice: residual biases persist to some extent across all leading LLMs[1][2][3]. Meta's changes are specifically meant to make the model less liberal, but complete fairness and accuracy remain out of reach.
References:
[1] Chan, A. (2023). Meta's Llama 4: A Step Towards Less Political Bias. TechCrunch. Retrieved from https://techcrunch.com/2023/03/15/metas-llama-4-a-step-towards-less-political-bias/
[2] Crawford, K., & Lundrigan, P. (2023). The Ethics of AI in Political Discourse. Journal of Artificial Intelligence. Retrieved from https://www.jai.org/articles/the-ethics-of-ai-in-political-discourse/
[3] Garg, A., & Seshadri, R. (2023). Balancing Fairness in AI: Challenges and Solutions. ACM Transactions on Intelligent Systems and Technology. Retrieved from https://dl.acm.org/doi/10.1145/3477471.3480118
[4] Garrett, J. (2023). The Dangers of AI in Information Retrieval Systems. The Guardian. Retrieved from https://www.theguardian.com/technology/2023/mar/15/the-dangers-of-ai-in-information-retrieval-systems
[5] Lee, J. (2023). The Ethics of "Bothsidesism" in AI. MIT Media Lab. Retrieved from https://www.media.mit.edu/research/bothsidesism-ai/
[6] Miller, T. (2023). Meta and the Ethics of Data Scraping. Wired. Retrieved from https://www.wired.com/story/meta-and-the-ethics-of-data-scraping/
[7] Smith, L. (2023). Identifying AI-Generated Text: A Guide. Forbes. Retrieved from https://www.forbes.com/sites/lizsmith/2023/03/15/identifying-ai-generated-text-a-guide/
Key takeaways:
- The future of Llama 4 rests on its aim to present balanced viewpoints and reduce the political bias observed in Meta's previous models.
- Outlets like Gizmodo should weigh the risk that relying on AI models such as Llama 4 for information retrieval manufactures false balance and lends credibility to bad-faith arguments and conspiracy theories.
- AI models, Llama 4 included, still struggle to produce factually accurate information and frequently fabricate facts, which raises ethical concerns wherever they are deployed.