What is Artificial Intelligence in the 21st Century?

[1] Dr. Siim Karus and Dr. Remo Reginold 

Artificial Intelligence (AI) is a trendy notion, generally labelled a high-tech superpower. This misleads commentators into either apotheosizing or demonizing AI. It is often forgotten that AI subsumes different technologies which have completely different applications and effects. This article questions the term AI and lays out its technological developments. This brief technological overview helps us to understand the foundations of AI and its potential implications for society and human interactions at large.

We are astounded by how often the term Artificial Intelligence (AI) is used in public discourse. The expression AI is adopted for almost any kind of computing feature that imitates human beings, interacts with humans or makes smart predictions. Indeed, AI is heavily overstretched and serves as a refuge for a great deal. Popular business and science magazines write about AI. Hardly a week goes by without somebody organising a business conference, a talk or a round-table discussion on AI. For some years now, the feuilleton sections have also been busy printing opinion pieces on AI and its impact on society, politics and economics. Hence, it is trendy to talk about AI. In these articles and at these conferences, AI technology is generally described as a single entity, a monolithic bloc, a singularity. These commentators thereby end up with conclusions oscillating between utopian and dystopian visions.[2] Observing this unreflective use of the term makes us uncomfortable.

AI is predominantly a marketing buzzword that puts a lot of different computing concepts, technologies and practices under one umbrella. The public discourse is somewhat trapped in this narrow understanding. This indiscriminate use of the term AI creates a muddled public perception of the underlying technologies, whereas AI should be considered a multitude of technologies which function and behave differently in different realms and applications. It is therefore worthwhile to reconsider the term AI. Historical insight and a mapping of the technology give us a better understanding of AI and how it might affect society and politics at large.

The History of the Term

Creating machines which imitate human reasoning is a very old human ambition and has its sources in different cultural and philosophical traditions (McCorduck 2004: xxiii-xxx and 3-35). The start of modern, computing-based AI imitating humans, however, can be found in the 1950s. John McCarthy coined the term AI in 1955 for a working group at Dartmouth College (Rajaraman 2014: 201). At the research gathering in 1956, the participants agreed that AI is the discipline of how features of intelligence such as learning and problem solving can be simulated by machines (McCarthy et al. 1955). It was the start of two decades of fundamental research. Triggered by the aim of designing a human-level AI, research between 1960 and 1974 focused on Natural Language Processing (NLP)[3], Machine Learning (ML)[4] and symbolic reasoning[5]. Generally, the goals of AI are driven by excelling at efficiency and effectiveness in automated processes[6]. Expert Systems[7] were probably the most visible outcome and the most commercial AI service of that time. However, the efforts were not as fruitful as expected, since hardware technology was not as powerful as needed[8]. The endeavour of mechanizing human thought fell into the so-called AI winter between 1974 and 1993. With the development of faster hardware and parallel computing, the availability of large amounts of data, and the emergence of cloud computing, AI experienced a revival. NLP and ML in particular gained momentum.

Artificial Intelligence in 2018

Today, AI is largely about learning and behaving in ways that maximise the chance of successfully achieving clearly targeted goals. This is what genetic programming – a subset of AI – is all about: self-learning features that develop strategies by learning from past errors or successes[9]. Thus, if we consider AI in 2018, it is much more about purpose-oriented applications and much less about imitating human reasoning. The current stage of the technology can be labelled narrow AI. The pioneering efforts of the fellows at Dartmouth have been given up in favour of more mainstream and quickly deployable applications – the dreamt-of computational autonomy has been redirected into more pragmatic directions: neural networks, computer vision and data science are examples of such applications (Poole et al. 1998: 25 f.). AI in 2018 is facet intelligence and is not geared towards a comprehensive notion of intelligence. Most of the academic and industrial research money goes to this kind of solution, which excels at smart solutions in specific domains. Apple’s Siri voice recognition (Capes et al. 2017) and Alphabet’s DeepMind neural network programmes (Desjardins et al. 2015: 2071-2079) are prominent and mediatised examples of this kind of research application. These applications demonstrate contemporary AI’s ability to learn through deduction. On the other facet, we find applications like the complex automatic programmes on cooking ovens (e.g. Miele mChef[10]), battery optimisation algorithms on mobile devices, and other largely rule-based intelligence which, despite its simplicity, enhances our daily lives beyond mere human capability. Hence, if we talk of AI today, it is either a specific form of ML that feeds algorithms with large amounts of data (so-called Big Data) or a complex set of reasoning rules devised by human programmers. The former allows the system to learn by adjusting to and reasoning over the delivered datasets, mimicking human reactions, whereas the latter is the formalised experience of humans. Contemporary mainstream research has narrowed the scope of AI even further to very specific sub-fields of ML (e.g. deep learning techniques, convolutional neural networks), which leaves the truly impressive and influential capabilities of AI not yet diffused to practitioners and the general public (Brynjolfsson et al. 2017).
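
To make this contrast concrete, the following minimal sketch (our own illustration, not taken from any of the systems cited above) sets a hand-written rule against a tiny classifier that learns the same decision from example data; all names, thresholds and data points are hypothetical.

```python
# Toy contrast between the two forms of "AI" described above:
# (1) rules formalised by a human programmer, (2) behaviour learned from data.
# Everything here is an illustrative, hypothetical example.

# (1) Rule-based intelligence: a human expert encodes the decision directly.
def rule_based_power_save(battery_frac: float, screen_on: bool) -> bool:
    return battery_frac < 0.2 and not screen_on  # threshold chosen by a human

# (2) Learned intelligence: a minimal perceptron adjusts its weights from examples.
def train_perceptron(samples, labels, max_epochs=1000, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # update only when the prediction is wrong
            if err:
                mistakes += 1
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        if mistakes == 0:  # every training example is now classified correctly
            break
    return w, b

# Hypothetical training data: (battery fraction, screen on as 0/1) -> power save (1) or not (0)
samples = [(0.10, 0), (0.15, 0), (0.80, 0), (0.90, 1), (0.05, 1), (0.50, 0)]
labels = [1, 1, 0, 0, 1, 0]
w, b = train_perceptron(samples, labels)

print(rule_based_power_save(0.12, screen_on=False))  # decision from the human-made rule
print(1 if w[0] * 0.12 + w[1] * 0 + b > 0 else 0)    # decision learned from the data
```

The two routes reach similar answers on this toy task, but only the second changes its behaviour when the data changes, which is precisely why it needs large, representative datasets.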

In general, ML-based solutions aim at very specific problems. It is training by design, which is very powerful for facet intelligence. Hence, the public discourse in 2018 refers to the historic notion of AI, but the current technological applications are much simpler and narrower. Robots that are sentient and take over most human jobs are thus still utopian visions à la Hollywood. Conceptually, the most elaborate and comprehensive AI technology comes from Cognitive Computing (CC) and its knowledge engineering capacities. This powerful technological paradigm aims for holistic reasoning by reasoning on the contexts and relationships which exist in the world. CC tries to reason and interact with human beings while representing the relationships between objects in all their dimensionality. Its formal knowledge representation relies on description logic, thereby emphasising space and time. By applying this logic, AI can unleash its power and be applied to content retrieval, contextual analysis, knowledge interpretation, and recommendation engines. As such, the AI gains the ability to use induction in its reasoning – an absolute necessity for coping with unexpected circumstances (e.g. the ability to operate an interstellar spaceship without prior first-hand experience). Most importantly, instead of copying humans, the technology allows the AI to evolve by maximising its own natural strengths. This trend, although not heavily publicised, can already be seen in the kind of AI-related patents filed more and more often by technology pioneers (Fujii and Managi 2018: 60-69).
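
To give a concrete feel for what "reasoning on contexts and relationships" means at its simplest, here is a minimal sketch of relationship-based inference over subject–relation–object facts. It is our own illustration, not a description-logic engine or any of the patented systems mentioned above, and all entities and relations are hypothetical.

```python
# Illustrative sketch only: facts stored as subject-relation-object triples, plus one
# hand-written rule that makes "located_in" transitive. It shows, in miniature, what
# reasoning over explicit relationships (rather than learning from raw data) looks like.

facts = {
    ("Matterhorn", "located_in", "Valais"),
    ("Valais", "located_in", "Switzerland"),
    ("Switzerland", "located_in", "Europe"),
    ("Matterhorn", "is_a", "Mountain"),
}

def infer_transitive(facts, relation="located_in"):
    """Repeatedly apply (a r b) and (b r c) => (a r c) until nothing new is added."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for a, r1, b in list(inferred):
            if r1 != relation:
                continue
            for b2, r2, c in list(inferred):
                if r2 == relation and b2 == b and (a, relation, c) not in inferred:
                    inferred.add((a, relation, c))
                    changed = True
    return inferred

closure = infer_transitive(facts)
# The system can now answer a question it was never explicitly told:
print(("Matterhorn", "located_in", "Europe") in closure)  # True
```

Compared with the data-hungry sketch above, nothing here is learned; the power comes from the explicit structure, which is what allows such a system to answer questions it was never directly given.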

Having mapped the different technological approaches, we can observe that AI is not clearly defined and leaves room for various definitions. Today, we are very pragmatic about AI, and ongoing research is sponsored to feed business needs. Hence, AI is becoming much more mundane and is still programmed, geared and supervised by human beings. However, this does not mean that AI is merely a set of simple computational applications in software evolution: AI has the potential to unleash fundamental changes in society and politics.

Living with AI

Having briefly mapped the current technological subsets of AI, we can state that copying human intelligence in its encompassing sense is not yet in sight. In the near future, AI research and its respective technological subsets will continue evolving from the bottom of narrow AI towards the top of strong AI. Strong AI, also labelled Artificial General Intelligence (AGI), therefore won’t be a reality for quite a long time[11]. In this regard, the real question of future trends and their implications for society won’t be that of AGI. Nonetheless, the advancement of narrow AI can trap human expectations. Reinforcement-based ML can learn bad behaviours that lead to biased conclusions and rebellious attitudes. Politics as the means of social engagement needs answers for AI systems where (1) machine-to-machine interactions ignore human beings. In addition, it needs to recognise that (2) computational knowledge does not act and reason with the same perception as human beings. Moreover, (3) human intelligence as a concept is relative and not clearly defined[12]. Indeed, human knowledge is closely knit with not-knowing – the lack of facts available and the inability to consider all facts available. This ability to work with limited or unreliable data (commonly referred to as intuition) allows us to operate beyond one-dimensional aspects and in an unforeseen future. This black box gives humans an advantage that AI has not achieved. Intuition, as shaped by human history and biological evolution, has advanced humankind even more than we might be aware of. In political terms, AI is in that sense another black box with multidimensional agency. AI’s impressive capability to process and deliver ever more information has changed, and will continue to change, the nature of human and social decision making further than we can foresee. From the perspective of technology critique, this points to the fact that the politics of the multitude is more than simply the evolution of 0s and 1s and less than the singularity of AI. The anthropological rejoinder would be that human beings are caught in a doubled Promethean shame[13]: the shame of accepting that we – human beings – want to be like machines yet are less powerful than the envisioned technology is doubled insofar as we want to translate human capacities into machines while ignoring the fact that machines are, in the end, still machines.

Therefore, the socio-political question will be: will we ever understand what the intellectual capacities of strong AI really are? It is like with animals or children: do we really understand an animal’s or a child’s behaviour? AI is not about copying human brains. If we really want to cope with strong AI, we – but also AI – need to think outside the box (beyond neural networks, cognitive reasoning and pattern recognition). We need to understand that stronger AI is not about deduction or induction alone, but about the ability to combine these reasoning techniques in the AI way. From a political point of view, the next big milestone will be when AGI is entitled to citizenship. Robots already are (Hatmaker 2017), but imagine when machine consciousness obtains citizenship and uses it in the AI way. The geographical fluidity of the cybersphere – the natural environment of AGI – will challenge the geographical boundaries of states. AGI cannot be confined but might create new political entities, such as independent enclaves of the Internet which are no longer under the control of geopolitical states.


[1] Original article “What is Artificial Intelligence in 2018?” in swissfuture Nr. 02/2018

[2] It is interesting to observe how opinion leaders like Bill Gates (or the media portraying him) present both positive and negative AI visions, cf. Peter Holley’s (2015) “Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’” and Catherine Clifford’s (2018) “Bill Gates: ‘A.I. can be our friend’”.

[3] “Traditionally, work in natural language processing has tended to view the process of language analysis as being decomposable into a number of stages, mirroring the theoretical linguistic distinctions drawn between syntax, semantics and pragmatics” (Dale 2010: 4).

[4] ML refers to algorithms that try to emulate the characteristics and behaviour of learning. Henrik H. Martens explains, in reference to learning machines, that “(…) L-automaton is introduced via a formal, behaviouristic definition, in an attempt to give an abstract characterization of machine ‘learning’” (1959: 364).

[5] Symbolic reasoning can be outlined by the following three principles: “(1) a model representing an intelligent system can be defined in an explicit way, (2) knowledge in such a model is represented in a symbolic way and (3) mental/cognitive operations can be described as formal operations over symbolic expressions and structures which belong to a knowledge model” (Flasiński 2016: 15).

[6] Automation refers to how human thought can be mechanized by applying formal reasoning processes. For example, the need for automated translation of Russian texts throughout the Cold War was such an application; a linguistic overview of this topic can be found in Harry Josselson’s research (1971: 1-58).

[7] “Expert Systems are programs for reconstructing the expertise and reasoning capabilities of qualified specialists within limited domains” (Puppe 1993: 3).                 

[8] However, the label AI has often been dropped as soon as a technique was successfully applied. This is the so-called AI effect: “AI is whatever hasn’t been done” (Hofstadter 1979: 601).

[9] For further reading, see the guest editorial in Machine Learning (Goldberg and Holland 1988: 95-99).

[10] Cf. press release of Miele (2018): “Im Dialog mit dem Lebensmittel: Miele enthüllt zur IFA revolutionäres Garverfahren” [In dialogue with the food: Miele unveils a revolutionary cooking method at IFA]. https://www.miele.ch/de/m/1544.html

[11] Cf. the number of publications on the topic of AGI: http://aminer.org/topic/artificial%20general%20intelligence

[12] Intelligence in general is often equated with rational intelligence. However, there are other forms of human intelligence (e.g. emotional, organic or social intelligence); these are concurrent forms of knowledge which are not homogeneous.

[13] A concept outlined by the philosopher Günther Anders (1961: 21-95).


Bibliography:

Anders, Günther (1961): Die Antiquiertheit des Menschen. Über die Seele im Zeitalter der zweiten industriellen Revolution. München: Beck.

 

Brynjolfsson, Erik, Daniel Rock and Chad Syverson (2017): Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics, in: Ajay K. Agrawal, Joshua S. Gans and Avi Goldfarb: Economics of Artificial Intelligence. Chicago: University of Chicago Press.

 

Clifford, Catherine (16 February 2018): Bill Gates: ‘A.I. can be our friend’, in: CNBC, https://www.cnbc.com/2018/02/16/bill-gates-artificial-intelligence-is-good-for-society.html (30 July 2018).

 

Dale, Robert (2010): Classical Approaches to Natural Language Processing, in: Nitin Indurkhya and Fred J. Damerau: Handbook of Natural Language Processing. Boca Raton: CRC Press.

 

Desjardins, Guillaume, Karen Simonyan, Razvan Pascanu and Koray Kavukcuoglu (2015): Natural Neural Networks, in: NIPS'15 Proceedings of the 28th International Conference on Neural Information Processing Systems – Volume 2: 2071-2079.

 

Flasiński, Mariusz (2016): Introduction to Artificial Intelligence. Basel: Springer International Publishing.

 

Fujii, Hidemichi and Shunsuke Managi (2018): Trends and priority shifts in artificial intelligence technology invention: A global patent analysis, in: Economic Analysis and Policy, 58: 60-69.

 

Goldberg, David E. and John H. Holland (1988): Genetic Algorithms and Machine Learning, in: Machine Learning, 3/2-3: 95-99.

 

Hatmaker, Taylor (27 October 2017): Saudi Arabia bestows citizenship on a robot named Sophia, in: Techcrunch, https://techcrunch.com/2017/10/26/saudi-arabia-robot-citizen-sophia/?ncid=rss (30 July 2018).

 

Hofstadter, Douglas (1979): Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books.

 

Holley, Peter (29 January 2015): Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’, in: The Washington Post, https://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/ (30 July 2018).

 

Josselson, Harry H. (1971): Automatic Translation of Language Since 1960: A Linguist’s View, in:  Advances in Computers, 11: 1-58.

 

Martens, Henrik H. (1959): Two Notes on Machine “Learning”, in: Information and Control, 2/4: 364-379.

 

McCarthy, John, Marvin Minsky, Nathaniel Rochester and Claude Shannon (1955): A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html (30 July 2018).

 

McCorduck, Pamela (2004): Machines Who Think. A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick, MA: A K Peters.

 

Poole, David, Alan Mackworth and Randy Goebel (1998): Computational Intelligence: A Logical Approach. New York: Oxford University Press.

 

Puppe, Frank (1993): Systematic Introduction to Expert Systems. Knowledge Representations and Problem-Solving Methods. Berlin: Springer-Verlag.

 

Rajaraman, V. (2014): John McCarthy – Father of Artificial Intelligence, in: Resonance – Journal of Science Education, 19/3: 198-207.

 

Capes, T., P. Coles, A. Conkie, L. Golipour, A. Hadjitarkhani, Q. Hu, N. Huddleston, M. Hunt, J. Li, M. Neeracher, K. Prahallad, T. Raitio, R. Rasipuram, G. Townsend, B. Williamson, D. Winarsky, Z. Wu and H. Zhang (2017): Siri On-Device Deep Learning-Guided Unit Selection Text-to-Speech System, in: Interspeech 2017: 4011-4015.