AI and Persons
Saint Louis University
November 12, 2024
What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call “strong” AI from “weak” or “cautious” AI (Artificial Intelligence).
— John Searle (1980, 417)
According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion.
— John Searle (1980, 417)
But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.
— John Searle (1980, 417)
In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations.
— John Searle (1980, 417)
But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism.
— Thomas Nagel (1974)
Replace “organism” above with “AI system” to get a definition of conscious AI.
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
— OpenAI Charter (2018)
Is AGI Strong AI or Weak AI?
If we follow OpenAI’s definition, then AGI requires neither consciousness nor strong AI. In other words, it is (conceptually) possible to have Unconscious, Weak AGI.
We’re making tools, not colleagues.
— Dennett (2020)
In an influential paper, Bryson (2010) argues that “Robots should be slaves.” Here is a short outline of the argument.
A “natural” person has
Quite unlike pets, our AIs’ apparent personalities will be entirely at our disposal. By design and by social context (we will own them), their raison-d’être will be the service of our interests. Yet even if we know the AI to be a reflection rather than a subject, we will not feel that way. That is, we will rightly see our AIs’ behavior as a consumable product rather than an expression of personal life, but our instinctive empathy for our AI tools will make us experience them as if they were natural persons—whose behavior we consume.
— Gunkel and Wales (2021, sec. 2.3)
We cannot avoid experiencing these tools as personal, but our empathy will not be mistaken if, after the initial and unavoidable moment of empathizing with or personalizing our AI tools, we “refer” that empathy, extending the AI’s horizon of meaning by consciously engaging in a second moment of empathic recognition toward all the unknowable real persons whose interactions have unwittingly sculpted its persuasive personality.
— Gunkel and Wales (2021, sec. 4.3)
Habitually engaged, these two moments may become one. By tying our empathy for AIs to this persistent and grateful recollection, we will preserve our own personhood by resisting the fantasy of superbia.
— Gunkel and Wales (2021, sec. 4.3)
Should some AIs be legal persons?