Are Computers Capable of Astonishing Us?
In the realm of technology, few questions have sparked as much debate as those posed by Alan Turing and Ada Lovelace, the former in 1950 and the latter more than a century earlier: can machines think, and do they have the capacity to originate anything? While modern learning machines, including advanced AI and machine learning models, have made significant strides, they still fall short when it comes to interpreting and reappropriating sociocultural expectations to generate originality and non-trivial surprises.
Lady Lovelace's famous objection, emphasising the crucial role of interpretive capabilities in originating something, remains relevant today. Turing, in his enquiry into whether machines can think, raised the question of computers and surprise, a query that still resonates in the AI community. He dismissed as unsubstantiated the claim that computers could never take him by surprise, remarking that machines surprised him with great frequency.
The core limitations of today's learning machines stem from their dependency on data, lack of true comprehension, and difficulties in contextual and creative reasoning. Machine learning models rely heavily on the quality, diversity, and quantity of training data. Poor, missing, or biased data can lead to models that reproduce social biases or fail to understand nuanced sociocultural contexts, restricting their ability to authentically interpret cultural subtleties or generate novel insights that diverge from training patterns.
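To make this dependency concrete, here is a minimal sketch using entirely synthetic data and a standard scikit-learn classifier (the "group" and "signal" features and the sampling scheme are invented for illustration): the underlying outcome does not depend on group membership at all, yet because positive examples from one group are over-collected, the trained model reproduces the collection bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# The "real world": the outcome depends only on the genuine feature.
signal = rng.normal(size=20000)
group = rng.integers(0, 2, size=20000)          # e.g. a demographic proxy
y = (signal + rng.normal(scale=0.3, size=20000) > 0).astype(int)

# Biased collection: positive examples from group 1 are far more likely
# to end up in the training set than anything else.
keep_prob = np.where((group == 1) & (y == 1), 0.9, 0.3)
keep = rng.random(20000) < keep_prob

X = np.column_stack([signal, group])[keep]
model = LogisticRegression().fit(X, y[keep])

# Identical genuine feature, different group membership: the model has
# learned the collection bias, so group 1 receives a higher score.
probe = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(probe)[:, 1])
```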
Moreover, AI systems learn statistical correlations rather than meaning or intent. They do not possess consciousness or lived experience, so they cannot internalise sociocultural values or expectations dynamically. This results in responses that may appear original but are ultimately derivative or superficial.
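A toy illustration of learning correlations without meaning (the corpus below is invented for this sketch): a bigram model records only which word tends to follow which, and can then emit fluent-looking recombinations of its training text with no representation of what any word refers to.

```python
import random
from collections import defaultdict

# Toy corpus (invented); a real system would use vastly more text. The model
# records nothing but which word follows which -- surface-level correlation.
corpus = (
    "machines learn patterns from data . "
    "humans learn meaning from experience . "
    "machines reproduce patterns from experience ."
).split()

successors = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    successors[a].append(b)

def generate(start, length=8):
    """Emit a sequence by repeatedly sampling a recorded successor word."""
    out = [start]
    for _ in range(length):
        options = successors.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

random.seed(0)
print(generate("machines"))   # a fluent-looking recombination of the corpus,
                              # produced with no notion of what any word means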
Another challenge lies in the black box nature of many AI models, making it hard to understand how decisions relate to sociocultural norms or why certain outputs are produced. This limits trust and the capacity to innovate beyond encoded patterns.
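The contrast can be sketched with standard scikit-learn components on toy data (an illustration only, not a claim about any particular deployed system): a shallow decision tree can print its decision process as human-readable rules, while even a very small neural network exposes nothing more than weight matrices.

```python
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# An interpretable model: its decision process can be printed as rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# A small neural network: its "reasoning" is a stack of weight matrices.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print([w.shape for w in mlp.coefs_])   # e.g. [(4, 16), (16, 3)]
# No individual weight corresponds to a rule or a norm; explaining a given
# prediction requires separate post-hoc tooling.
```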
Achieving real-time, context-aware interpretation of sociocultural complexities requires immense computational resources and advanced algorithms, posing technical barriers to scalability and efficiency. Handling sensitive social data responsibly while fostering originality is a delicate balance that current systems have yet to strike.
While machines can generate new combinations of existing data, they lack true creativity that stems from human experience, emotions, and unpredictable insight. They are limited in producing genuinely surprising or original ideas because they do not possess intuition or consciousness.
The human-machine analogy in building learning agents assumes that they should mimic how children learn. However, learning machines, unlike humans, may have no intrinsic need for originality. The standard learning goal of minimising prediction errors serves perception and motor control well, but it may leave out aspects of learning that matter for originality and surprise.
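What minimising prediction error amounts to can be spelled out in a few lines (synthetic data, a linear predictor, and mean squared error chosen purely for simplicity): the objective drives the learner towards reproducing the expected pattern, and by itself gives no credit for producing anything surprising.

```python
import numpy as np

# Minimal sketch of the "minimise prediction error" objective: a linear
# predictor fitted by gradient descent on mean squared error.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=500)

w = np.zeros(3)
lr = 0.1
for step in range(200):
    pred = X @ w
    error = pred - y
    grad = X.T @ error / len(y)      # gradient of 0.5 * mean squared error
    w -= lr * grad

print(w)   # converges towards true_w: the objective rewards reproducing the
           # expected pattern, and deviation from it is what gets penalised
```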
Interactive, collective contestability mechanisms are still in their infancy and should be a priority. Systems deployed in ethically significant contexts should be designed to foster, rather than compromise, our drive to question norms. The difficulty of preserving a system's ability to recognise the extent to which an input challenges its current model is commonly associated with Bayesian learning methods.
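The idea of recognising how strongly an input challenges a model can be made concrete with a small Bayesian sketch (a Beta-Bernoulli model, with "Bayesian surprise" measured as the divergence between posterior and prior; the numbers are invented): data that confirm the model barely move its beliefs, while data that contradict it produce a large shift.

```python
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    """KL divergence KL(Beta(a1, b1) || Beta(a2, b2))."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

def surprise(prior_a, prior_b, successes, failures):
    """Bayesian surprise: how far the posterior moves away from the prior."""
    post_a, post_b = prior_a + successes, prior_b + failures
    return kl_beta(post_a, post_b, prior_a, prior_b)

# A model that strongly expects "success" (prior mean of about 0.95).
prior_a, prior_b = 19.0, 1.0

print(surprise(prior_a, prior_b, successes=10, failures=0))   # confirming data: tiny shift
print(surprise(prior_a, prior_b, successes=0, failures=10))   # challenging data: large shift
```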
In conclusion, compared to humans, today's learning machines are constrained by their reliance on existing data, their lack of true semantic understanding and creativity, computational and integration challenges, and ethical complexities. These collectively limit their ability to interpret sociocultural expectations in a richly original and surprising manner. Humans’ embodied experiences, emotional intelligence, and cultural immersion enable them to transcend patterns, producing authenticity and novelty that machines cannot yet match.