Computer Ethics

AI and Persons

Christopher L. Holland

Saint Louis University

November 12, 2024

Weak AI, Strong AI, Conscious AI and AGI

John Searle on Strong vs Weak AI

What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call “strong” AI from “weak” or “cautious” AI (Artificial Intelligence).

   — John Searle (1980, 417)

Weak AI

According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion.

   — John Searle (1980, 417)

Strong AI

But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.

   — John Searle (1980, 417)

Strong AI

In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations.

   — John Searle (1980, 417)

Consciousness and Conscious AI

But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism.

   — Thomas Nagel (1974)

Replace “organism” above with “AI system” to get a definition of conscious AI.

AGI and OpenAI’s Mission

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

https://openai.com/charter

Distinguishing Four Types of AI

  • Strong AI
  • Conscious AI
  • Weak AI
  • AGI (Artificial General Intelligence)

Is AGI Strong AI or Weak AI?

If we follow OpenAI’s definition, then AGI requires neither consciousness nor strong AI: the definition is stated purely in terms of performance (outperforming humans at most economically valuable work), not in terms of understanding or experience. In other words, it is (conceptually) possible to have Unconscious, Weak AGI.

Dangers of Strong AI

Dan Dennett

We’re making tools, not colleagues.

   — Dennett (2020)

Joanna J. Bryson (2010)

In an influential paper, Bryson argues that “Robots should be slaves.” Here is a short outline of the argument.

  1. Having servants is good and useful, provided no one is dehumanized.
  2. A robot can be a servant without being a person.
  3. It is right and natural for people to own robots.
  4. It would be wrong to let people think that their robots are persons.

Special Issues for Social AI

Jordan Joseph Wales

A “natural” person has

  • “subjectivity”: being a subject of experience
  • “intersubjectivity”: “a consciousness that voluntarily reaches out to make contact with the consciousness of others as an act of self-giving; it is subjectivity oriented to inter-subjectivity” (Gunkel and Wales 2021, sec. 2.2).

Wales on the Dangers of Social AI

Quite unlike pets, our AIs’ apparent personalities will be entirely at our disposal. By design and by social context (we will own them), their raison-d’être will be the service of our interests. Yet even if we know the AI to be a reflection rather than a subject, we will not feel that way. That is, we will rightly see our AIs’ behavior as a consumable product rather than an expression of personal life, but our instinctive empathy for our AI tools will make us experience them as if they were natural persons—whose behavior we consume.

   — Gunkel and Wales (2021, sec. 2.3)

Wales on the Dangers of Social AI

  • Our interaction with Social AI may make it easier to treat other humans as consumables. (Growing accustomed to slaveholding.)
  • If unsatisfactory AI behavior is a faulty product, what is unsatisfactory human behavior?
  • Root Issue: Social AI has the potential to cultivate our superbia (pride).

Wales’s Solution to the Dangers of Social AI

We cannot avoid experiencing these tools as personal, but our empathy will not be mistaken if, after the initial and unavoidable moment of empathizing with or personalizing our AI tools, we “refer” that empathy, extending the AI’s horizon of meaning by consciously engaging in a second moment of empathic recognition toward all the unknowable real persons whose interactions have unwittingly sculpted its persuasive personality.

   — Gunkel and Wales (2021, sec. 4.3)

Wales’s Solution to the Dangers of Social AI

Habitually engaged, these two moments may become one. By tying our empathy for AIs to this persistent and grateful recollection, we will preserve our own personhood by resisting the fantasy of superbia.

   — Gunkel and Wales (2021, sec. 4.3)

Should some AIs be legal persons? (See Hildebrandt 2019.)

Additional Reading on AI and Consciousness

Chalmers, David J. 2023. “Could a Large Language Model Be Conscious?” Boston Review, August 9, 2023. https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/.

Sources

Bryson, Joanna J. 2010. “Robots Should Be Slaves.” In Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues, edited by Yorick Wilks, 63–74. Natural Language Processing 8. Amsterdam: John Benjamins Publishing Company.
Dennett, Daniel C. 2020. “What Can We Do?” In Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman, 41–53. Penguin.
Gunkel, David J., and Jordan Joseph Wales. 2021. “Debate: What Is Personhood in the Age of AI?” AI & Society 36 (2): 473–86. https://doi.org/10.1007/s00146-020-01129-1.
Hildebrandt, Mireille. 2019. “Legal Personhood for AI?” Chap. 9 in Law for Computer Scientists. https://lawforcomputerscientists.pubpub.org/pub/4swyxhx5/release/5.
Nagel, Thomas. 1974. “What Is It Like to Be a Bat?” The Philosophical Review 83 (4): 435–50. http://www.jstor.org/stable/2183914.
Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3 (3): 417–24. https://doi.org/10.1017/S0140525X00005756.