Can computers think?
An essay written as part of my application to the Robotics program at TU Munich.
A brief discussion of the nature of computation and the prospective relevance of consciousness in the pursuit of Artificial Strong Intelligence (ASI).
Introduction / The nature of thought
Perhaps the most defining characteristic of the human species among living things is its exceptional ability to model its environment and adjust its behavior accordingly to increase its chances of survival. In fact, human beings have become so good at modeling their physical surroundings that they have learned to exploit them for their own purposes. Insights about materials, energy, biology, gravity, mathematics and astronomy have led to the invention of strategies, tools and machines that simplify the satisfaction of basic human needs and have extended human life expectancy dramatically.
Among the shrinking set of tasks that humans had not yet managed to automate at any given point in time, a shared property began to emerge: all of them required some degree of thought or cognition.
This observation would have been entirely uncontroversial until about 1937, when Konrad Zuse built the Z1, the world's first freely programmable mechanical computer. For the first time in their 30,000-year history, humans had built a machine whose sole purpose was the fulfillment of a task within the domain of thought: mathematical calculation.
This obviously required an adjustment of the perceived characteristic of the tasks that supposedly would never be fulfilled by machines: no longer tasks that merely required thought, but tasks that required some degree of intelligence or creativity. While Frank Rosenblatt at the Cornell Aeronautical Laboratory was already running the perceptron algorithm on an IBM 704 in 1957, directly mimicking observations about the functionality of the neuron, it took until 1997 for creativity to be undeniably demonstrated by a machine with public effect: when IBM's "Deep Blue" beat Garry Kasparov in a series of chess games, the idea of human superiority in the domain of cognition was exposed as fundamentally flawed.
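To make the neuron analogy concrete, here is a minimal sketch of the perceptron learning rule in Python; the toy AND dataset, the learning rate and the epoch count are illustrative assumptions of mine rather than details of Rosenblatt's original IBM 704 setup.

    # Minimal sketch of Rosenblatt's perceptron learning rule: a weighted sum of
    # inputs is passed through a hard threshold, loosely mimicking a neuron firing,
    # and the weights are nudged whenever the prediction is wrong.

    def train_perceptron(samples, epochs=10, lr=0.1):
        w = [0.0, 0.0]  # synaptic weights for the two inputs
        b = 0.0         # bias (negative firing threshold)
        for _ in range(epochs):
            for (x1, x2), target in samples:
                fired = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                error = target - fired
                w[0] += lr * error * x1
                w[1] += lr * error * x2
                b += lr * error
        return w, b

    # Toy example: learning the (linearly separable) logical AND function.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = train_perceptron(data)
    print(weights, bias)  # after a few epochs the weights separate AND correctly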
With the recent onset of the "AI Storm" brought about by convolutional [1] and recurrent [2] deep neural networks [3][4][5][6], it is increasingly of interest to humans as a species to ask the question of where it will end, and whether discrete computation will turn out to be sufficient for human and human-level cognition. This leads to the question of the nature of computation itself.
Discussion / The nature of computation
In 1936, Alan Turing introduced the Turing machine as the universal model of discrete, deterministic computation, essentially defining computation as an automaton over a tape of memory, an internal state, and a transition function Δ: (I, S_n) → (O, S_(n+1)) that, given a memory input I and a current state S_n, specifies an output O and the successor state S_(n+1). This turned out to be not only a handy mathematical tool for proving the undecidability of the Entscheidungsproblem, but also the foundation of the current Information Age.
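To make this definition concrete, the following is a minimal sketch of such an automaton in Python; the language and the particular bit-inverting transition table are illustrative choices of mine, not part of Turing's formulation.

    # Minimal sketch of a deterministic Turing machine: a sparse tape, a head,
    # an internal state, and a transition table delta. Here delta maps
    # (state, read symbol) -> (symbol to write, head move, next state); the
    # written symbol and the head move together play the role of the output O.

    def run_turing_machine(tape, delta, state="start", halt="halt", blank="_", max_steps=10_000):
        cells = dict(enumerate(tape))  # tape as a sparse mapping: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == halt:
                break
            symbol = cells.get(head, blank)
            write, move, state = delta[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Hypothetical example: invert every bit on the tape, halting at the first blank.
    delta = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine("1011", delta))  # prints "0100_"

The transition table is the machine's entire "logic"; every step of the run is nothing more than a table lookup followed by a write and a head movement.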
The repeated application of a transition function Δ over a tape of memory is obviously a highly mechanical process, and we would be unlikely to attribute any amount of humanity to such a device, no matter how human its utterances may seem; this includes denying it the recognition of thought. However, under the application of Occam's razor, if the behavior of a Turing machine is objectively indistinguishable from that of a human being, there are no scientific grounds for denying it the properties of thought or consciousness.
A counter to this line of argument is the Chinese Room thought experiment introduced by John Searle in 1980 [7]. Imagine a native English speaker in a room. She does not speak Chinese, but has a book written in English that tells her the correct Chinese response to any Chinese question. As in the Turing test, the room is treated as a black box, and native Chinese speakers may converse with it only to find that, by all indications, the person inside seems perfectly capable of reasoning in Chinese. In reality, though, all she does is match Chinese script, with no understanding of the underlying semantics whatsoever.
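The purely mechanical character of the room can be made explicit in a few lines of Python; the miniature phrase book below is a hypothetical stand-in for Searle's rule book.

    # Sketch of the Chinese Room as a pure lookup: every incoming question is
    # matched against a rule book and the prescribed reply is returned, with no
    # representation of meaning anywhere in the process.

    RULE_BOOK = {  # hypothetical stand-in for Searle's book of rules
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
        "你会思考吗？": "当然会。",      # "Can you think?" -> "Of course."
    }

    def chinese_room(question):
        # The operator only matches symbols; she never interprets them.
        return RULE_BOOK.get(question, "请再说一遍。")  # fallback: "Please say that again."

    print(chinese_room("你会思考吗？"))  # fluent output, zero understanding

By construction, the replies are exactly as fluent as the rule book allows, yet nothing in the process could be said to understand Chinese.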
This indicates a difference between (conscious) human thought and "mere" computation that cannot be captured by any currently known objective feature.
Conclusion
Until the Turing test is truly passed by a machine (no disrespect to Eugene Goostman [8]), the question of whether computers are capable of thought can still be answered with "no". However, it is undeniable that computers are fulfilling tasks that have long belonged to the domain of thought, and we can therefore argue that computation is at least necessary for cognition.
However, given the recent advances in Artificial Intelligence towards solving ever more complex tasks, it seems improbable that Strong (human-level) Artificial Intelligence (ASI) will not eventually be achieved. As soon as it is, computers will objectively be thinking machines. Whether or not machine thought involves consciousness is a question that remains to be answered, but so is the question of its relevance.
References
[1] LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1989). Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4), 541-551.
[2] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780.
[3] Abdel-Hamid, O., Mohamed, A. R., Jiang, H., & Penn, G. (2012, March). Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4277-4280). IEEE.
[4] Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (pp. 3111-3119).
[5] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1-9).
[6] Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., ... & Zhang, X. (2016). End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316.
[7] Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424.
[8] http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx (Accessed 2017/5/1)