
The Language of AI: How Borrowed Terms Shape Our Understanding

The language we use to describe complex concepts shapes our understanding, and nowhere is this more evident than in the rapidly evolving field of Artificial Intelligence. As AI seeks to define itself, it inevitably borrows terminology from other disciplines, particularly those that study the human mind and brain. But this borrowing comes at a cost. Are we unintentionally imbuing these systems with qualities they don’t possess, fostering misconceptions and hindering genuine progress? Exploring this phenomenon reveals the subtle yet powerful ways in which language influences our perception of AI, potentially leading us down unexpected paths.

Why does the technical vocabulary of Artificial Intelligence contribute to confusion within society?

The confusion surrounding Artificial Intelligence isn’t due solely to its rapid technological advancement; it is arguably rooted more fundamentally in its technical vocabulary. AI, as a relatively nascent discipline, has heavily borrowed terminology from other fields, particularly the brain and cognitive sciences (BCS). This conceptual borrowing, while necessary for initial development, has resulted in AI systems being described anthropomorphically, imbuing them with unwarranted biological and cognitive properties. This, in turn, leads society to misunderstand AI’s capabilities and limitations, producing inflated expectations and potential misapplications. The use of terms like “machine learning,” “hallucinations,” and “attention” in AI contexts, despite their significantly different meanings in BCS, is a prime example of this phenomenon.

The problem extends beyond mere metaphorical liberty. The application of psychological terminology to AI systems subtly shapes public perception, fostering the misconception that machines possess genuine cognitive abilities. For instance, the term “machine learning” suggests a parallel to human learning, prompting the assumption that machines acquire knowledge and understanding in a similar manner. This anthropomorphism can lead to unfounded beliefs about the sentience and autonomy of AI, creating ethical dilemmas and fueling anxieties about the potential consequences of advanced AI systems. The widespread use of these terms can be attributed to AI’s rapid growth as a discipline, which made it expedient to press any related terminology into service to describe novel concepts. Unfortunately, those terms often carry ingrained meanings that do not properly apply to the systems being developed.
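To make the contrast concrete, here is a minimal, illustrative sketch (in Python, with made-up data and parameters) of what “learning” typically amounts to in machine learning: iteratively adjusting numerical parameters to reduce a measured error, with no acquisition of understanding involved.

```python
# A minimal sketch (not any particular system's implementation): "learning" here
# means fitting the parameters of a straight line by gradient descent on
# synthetic data. The data, learning rate, and iteration count are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)  # noisy line: y = 3x + 0.5

w, b = 0.0, 0.0   # the parameters the machine "learns"
lr = 0.1          # learning rate (step size)

for _ in range(500):
    y_hat = w * x + b                       # model prediction
    grad_w = 2 * np.mean((y_hat - y) * x)   # gradient of mean squared error
    grad_b = 2 * np.mean(y_hat - y)
    w -= lr * grad_w                        # "learning" = nudging numbers to
    b -= lr * grad_b                        #  reduce a measured error, nothing more

print(f"learned w={w:.2f}, b={b:.2f}")      # approximately 3.00 and 0.50
```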

What is the influence of Carl Schmitt’s observation on the phenomenon of conceptual borrowing across disciplines?

Carl Schmitt’s insightful observation regarding the theological roots of modern political concepts offers a powerful lens through which to examine conceptual borrowing across various disciplines. Schmitt argued that key political ideas like “sovereignty” and “state of exception” were secularized versions of theological concepts. This borrowing, he claimed, was not merely a historical curiosity: the secularized concepts retained the underlying structure and influence of the original theological frameworks. Similarly, when new scientific disciplines emerge, they often lack the established vocabulary to describe their unique phenomena. As a result, they frequently borrow existing terms from neighboring fields to fill this gap, a practice that, like the secularization of theological concepts, is far from neutral.

This process of conceptual borrowing in science, while often necessary for establishing a coherent technical language, carries with it the inherent risk of importing unintended semantic baggage. Every technical term is enmeshed within a network of conceptual structures, exerting subtle but significant influences on its meaning and application in a new context. When terms are grafted from one discipline to another, they bring with them contextual constraints and implications that can either enrich understanding or introduce confusion. This is far from a neutral act, and the extent to which the added baggage can assist or hinder the scientific process depends upon the alignment and relationship between the fields engaged in the borrowing.

How does the borrowing of concepts from other fields function within the new science of artificial intelligence?

The emergence of artificial intelligence as a distinct field of study necessitated the development of its own technical vocabulary. However, given the rapid pace of its development, AI has frequently relied on conceptual borrowing from other established disciplines to bridge the terminology gap. This phenomenon, observed across various scientific domains, involves adopting and adapting terms from neighboring fields to describe AI’s unique phenomena, problems, hypotheses, and theories. While borrowing can provide a foundation for communication and understanding, it also introduces potential complications, particularly when the borrowed terms carry baggage and implications from their original context that may not perfectly align with their application in AI.

Specifically, AI has heavily borrowed from fields linked to human and animal agency, behavior, and their biological underpinnings, most notably cognitive/psychological sciences and neuroscience. This borrowing is evident in the pervasive use of terms like “machine learning,” “hallucinations,” “attention,” and “neurons,” originally conceived within the context of biological or psychological processes. The adoption of these terms, while seemingly providing intuitive handles for complex AI concepts, can inadvertently lead to anthropomorphism, imbuing AI systems with unwarranted biological and cognitive properties. For instance, describing AI errors as “hallucinations” or algorithms capable of paying “attention” might suggest a level of subjective experience or conscious awareness that does not exist, potentially misguiding public perception and even influencing scientific research directions within AI.
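As an illustration of how far the technical meaning sits from the psychological one, the following minimal sketch (plain NumPy, with arbitrary toy values) shows what “attention” denotes in many modern AI systems: a softmax-weighted average of vectors, pure arithmetic rather than a mental state.

```python
# A minimal sketch of scaled dot-product attention, the mechanism the term
# "attention" usually refers to in AI. All shapes and values are toy examples.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how strongly each query matches each key
    weights = softmax(scores)         # the "attention" pattern: just probabilities
    return weights @ V, weights       # a weighted sum of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries of dimension 4
K = rng.normal(size=(3, 4))   # 3 keys
V = rng.normal(size=(3, 4))   # 3 values

out, weights = scaled_dot_product_attention(Q, K, V)
print(weights)   # each row sums to 1: a mixture of vectors, not awareness
```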

Potential Issues with Borrowed Terminology

Using seemingly intuitive terms such as “machine learning,” “hallucinations,” or “attention” exerts a semantic influence on how the qualities of computational systems are interpreted, and it links those systems to related concepts in ways that invite anthropomorphic readings. It then becomes natural to wonder whether machines really can learn or pay attention in the biological or psychological sense, and to accept such claims with too little scrutiny. Scientists, in turn, risk being diverted from exploring the biological and psychological processes that matter most. Language will inevitably change over time, but the understanding and application of concepts within AI must remain grounded in fact, with the risk of inappropriate translation across disciplines kept firmly in view.

What are the potential consequences of using biological and psychological terms within Artificial Intelligence?

The appropriation of biological and psychological terminology within the field of Artificial Intelligence carries significant consequences as it risks imbuing AI systems with unwarranted characteristics and interpretations. For instance, the term “machine learning” carries the positive connotations associated with human learning, influencing the understanding and expectations surrounding these systems. This, in turn, can lead to the assumption that machines possess similar cognitive abilities to humans or animals, minimizing essential differences and diverting attention from researching the unique informational and computational processes at play within both AI and Brain & Cognitive Sciences. The application of terms like “hallucination” to describe errors in AI models further blurs the line between machine function and human experience, potentially misleading the public and stakeholders about the true capabilities and limitations of these systems.
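The gap is easiest to see in a toy example. The sketch below (an invented bigram table, not any real model) shows why generative errors get labelled “hallucinations”: the system samples whatever its learned word statistics make fluent, with no mechanism for checking the result against reality, and certainly no perceptual experience.

```python
# A minimal sketch of why model errors get called "hallucinations": a generative
# model emits whatever its learned probabilities make fluent. The bigram table
# below is entirely invented for illustration.
import random

random.seed(1)

# Toy "language model": next-word probabilities supposedly learned from a corpus.
bigrams = {
    "the":       [("capital", 1.0)],
    "capital":   [("of", 1.0)],
    "of":        [("australia", 0.6), ("france", 0.4)],
    "australia": [("is", 1.0)],
    "france":    [("is", 1.0)],
    "is":        [("sydney", 0.7), ("paris", 0.3)],  # fluent, but often false
}

def generate(word, steps=5):
    out = [word]
    for _ in range(steps):
        options = bigrams.get(out[-1])
        if not options:
            break
        words, probs = zip(*options)
        out.append(random.choices(words, weights=probs)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the capital of australia is sydney": plausible
                        # word statistics, not a belief, a memory, or a percept
```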

This conceptual borrowing fuels a cycle of anthropomorphism that can have far-reaching effects. When AI systems are described using terms laden with psychological and biological meaning, it becomes easier to assume that they possess capabilities and characteristics similar to those of humans. This not only distorts public perception but also influences AI research programs and business strategies, often leading to exaggerated expectations and subsequent disappointment, sometimes manifested as “AI winters” when systems fail to deliver on inflated promises. Moreover, the semantic crosswiring with Brain and Cognitive Sciences encourages impoverished, reductionist views of the mind. This short circuit breeds confusion, unfounded faith that AI will soon produce super-intelligent systems, and opportunities for exploitation that trade on anthropomorphic interpretations.

How do the two avenues of conceptual borrowing, by AI and Cognitive Science, affect understanding in each respective field?

The impact of conceptual borrowing manifests differently in AI and Cognitive Science. In Artificial Intelligence, the appropriation of terms from Brain and Cognitive Sciences (BCS), such as “machine learning,” “hallucinations,” and “attention,” imbues computational systems with unwarranted cognitive and biological properties. This anthropomorphism risks a misinterpretation of AI’s capabilities, both within society and among researchers. By using language laden with psychological and biological connotations, the field inadvertently obfuscates the fundamental differences between computational processes and genuine biological or cognitive phenomena. This can lead to inflated expectations, misdirected research efforts, and, ultimately, disillusionment, as exemplified by the historical “AI winters.” The borrowed terms, carrying positive connotations and exerting subtle influence, risk obscuring the true nature of machine learning and other AI processes.

Conversely, Cognitive Science and Neuroscience (collectively, BCS) have increasingly adopted technical constructs from information theory and computer science, framing the brain and mind as mere computational and information-processing systems. Concepts such as “encoding,” “decoding,” “information processing,” and “architecture” are now common in BCS, used in the investigation of neural systems and their functions. While this approach has yielded valuable scientific and empirical insights into the biological basis of cognition, it simultaneously risks an oversimplified and reductionist understanding of the human mind. The subjective, experiential qualities of consciousness are often flattened into measurable brain activity, neuronal populations, and algorithmic functions, potentially overlooking the richness and complexity inherent in biological intelligence and leading to an impoverished view of the mind.
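As a rough illustration of what “decoding” operationally means in much of this work, the following sketch uses entirely synthetic “firing rates” and a simple nearest-mean classifier: the term refers to a statistical readout of stimulus identity from neural responses, not to an account of experience.

```python
# A minimal sketch of neural "decoding" in the operational sense: predicting a
# stimulus label from (here, simulated) population responses. All numbers are
# synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Simulated firing rates of 20 "neurons" for two stimuli, 50 trials each.
stim_a = rng.normal(loc=5.0, scale=1.0, size=(50, 20))
stim_b = rng.normal(loc=6.0, scale=1.0, size=(50, 20))

# "Encoding": each stimulus is associated with a mean population response.
centroid_a, centroid_b = stim_a.mean(axis=0), stim_b.mean(axis=0)

# "Decoding": classify a new trial by which mean response it lies closer to.
def decode(trial):
    dist_a = np.linalg.norm(trial - centroid_a)
    dist_b = np.linalg.norm(trial - centroid_b)
    return "A" if dist_a < dist_b else "B"

new_trial = rng.normal(loc=5.0, scale=1.0, size=20)  # drawn from stimulus A
print(decode(new_trial))  # "A": a statistical readout, not an account of experience
```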

What are the prospects for addressing the conceptual issues arising from the crosswiring of languages between AI and Cognitive Science?

While a comprehensive linguistic reform seems unlikely, there’s a basis for optimism, rooted in the natural evolution of language itself. Languages, including technical vocabularies, are like powerful social currents resistant to external control. AI and cognitive science will likely continue using existing, potentially misleading terms despite their pitfalls. AI will press on with anthropomorphic descriptions of computer functions, characterizing them as “minds” with capabilities like attending, learning, and reasoning. Simultaneously, brain and cognitive sciences will persist in flattening the complexities of biological agency, viewing the brain as an information-processing machine, encoding, storing, and decoding signals via input-output mechanisms. This tension seems intrinsic to the nature of these evolving scientific domains.

Despite the entrenchment of these linguistic habits, historical precedent suggests that understanding and factual knowledge can subtly reshape the meaning and usage of words. Even deeply ingrained habits can adjust to new realities. A classic example is the continued use of “sunrise” and “sunset,” relics of a geocentric understanding long disproven. While the literal meaning is inaccurate in light of our heliocentric model, the terms persist, but the understanding behind them has been updated. The hope is that a similar evolution can occur in the AI and cognitive science domains, where increased understanding of the systems can temper the anthropomorphic and reductionist tendencies currently present in the language.

The Horsepower Analogy

The case of “horsepower,” a unit of measure that originated in the need to compare steam engines to the familiar labor provided by horses, offers an analogy for a potential future outcome. Despite its origins deeply rooted in animal labor, horsepower remains the standard unit of measuring mechanical power output. Today, we readily accept that there are no actual horses involved, and we do not try to find equine qualities within an engine. This acceptance holds promise for a similar future for AI: as our understanding of AI evolves, the field may become less burdened by anthropomorphic expectations. Instead, perhaps people will treat AI less like a biological mimic and more like a tool with defined capabilities, shedding misleading cognitive or psychological associations.

Ultimately, disentangling AI’s technical language from its cognitive science roots appears improbable, akin to halting a rushing river. Complete vocabulary reform is unlikely. However, history offers a glimmer of hope. Just as we still use “sunrise” despite knowing the sun doesn’t literally rise, greater understanding of AI systems can subtly reshape our interpretation and usage of existing terms. Perhaps, like “horsepower,” AI terminology, though initially anthropomorphic, can evolve towards a more accurate and less misleading characterization of these powerful computational tools. The key lies not in eradicating the borrowed language, but in fostering a deeper understanding that transcends its inherent limitations, allowing us to perceive AI for what it truly is, not what our inherited vocabulary might imply.