Computer Ethics

James Moor, Just Consequentialism and Computing

Christopher L. Holland

Saint Louis University

September 26, 2024

Consequentialism,
Deontology and
Just Consequentialism

Consequentialism

  • consequences alone determine moral right and wrong
  • moral imperatives are hypothetical imperatives
  • the good is prior to the right

Deontological Ethics

  • consequences alone do not determine moral right and wrong
  • at least some moral obligations are unconditional/categorical
  • at least some moral imperatives are categorical imperatives
  • the right is prior to the good

Just Consequentialism

  • An attempt to blend the consequentialist and deontological approaches
  • the right/just constrains the good
  • Moral rules are policies

Moral Rules as Policies

  • Policies are “rules of conduct ranging from formal laws to informal, implicit guidelines for action” (Moor 1999, 65).
  • As policies, moral rules have pro tanto force
    • there can be justified exemptions (not absolute rules)
    • they are otherwise obligatory (not mere suggestions)

Moral Rules as Policies (Continued)

  • Policies make certain actions pro tanto
    • allowed (permissible)
    • required (obligatory)
    • forbidden (impermissible)
  • Moral Policies are
    • impartial (deontic element)
    • more beneficial than their alternatives (consequentialist element)

Consequentialist Elements

Core Goods

  • Life
  • Happiness
  • Autonomy

Core Evils

  • Death
  • Unhappiness
  • Lack of Autonomy

Moor on Autonomy (ASK FOR)

The goods of autonomy are just the goods we would ask for in order to complete our projects. (Moor 1999, 66)

  • Ability
  • Security
  • Knowledge
  • Freedom
  • Opportunity
  • Resources

Deontic Elements

Just policies (based on work by Bernard Gert)

  • Just policies are impartial policies: “it is unjust for someone to use a kind of policy that he would not allow others to use” (Moor 1999, 66–67).
  • The impartiality test can also be used to evaluate potential policy exceptions. (What would follow if we allowed everyone in a similar circumstance to violate the policy?)

Rehg’s Summary of Moor on Justice

A policy is just only if no rational impartial person would reject it as doing unjustifiable harm to the core human values of life, happiness, and autonomy, or the human rights that protect those values.

    — Rehg (2017, ch. 6)

Greater Burden Standard

It would be unreasonable to reject a principle because it imposed a burden on you when every alternative principle would impose much greater burdens on others [at least one person].

    — Gordon-Solmon (2019, 159)

Just Consequentialism Policy Selection Procedure

Procedure Outline

  1. Moral disclosure
  2. Deliberation
  3. Policy selection

1. Moral disclosure

A cyberpractice appears to us as morally dubious.

  • To make a case that it is morally unacceptable and requires reform, we need to describe those features of the practice that call for closer moral evaluation. (The most obvious problems involve threats or harms to a core value or human right.) If the problem is real, then the harm must be one that no rational impartial person would accept.
  • We might also consider a policy problematic if its consequences are not beneficial to society as a whole, or if they lower overall aggregate utility.

2. Deliberation

Examine possible policy reforms from an impartial point of view, in order to see if those policies pass the deontological test for justice.

  1. Does the policy under consideration “cause any unnecessary harms to individuals and groups,” that is, to their pursuit of core values, which any rational impartial person would reject? If so, then the policy is unjust and thus unethical.
  2. Does the policy violate or undermine individual rights, responsibilities, duties, or the like? If it does so in a way that rational impartial persons would reject, then the policy is unjust.

3. Policy selection

In many cases, several policies will pass the test for justice. So we must select among these in light of their likely consequences, the various benefits and (non-critical) harms. Critical harms—harms to core values and basic rights—would be dealt with in the deliberation stage, so here we are examining lesser types of harm, such as reduced profits in some sector of the economy. Such harms may in some cases be ones that rational impartial persons would accept as justifiable, in light of other values at stake.

3. Policy selection (Continued)

Again, we have two steps in this stage:

  1. Weigh the foreseeable or likely good and bad consequences of the policy under consideration.
  2. Finally, clarify the disagreements that persist among rational impartial persons about the right ethical policy: is the disagreement over facts (e.g., which consequences are likely) or over the interpretation of values and rights? Moor is quite aware that rational impartial persons can disagree about such matters.
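Since this is a computing-ethics course, the procedure above can be modeled as a toy filter-then-optimize sketch. This is my own illustration, not Moor's formalism: justice (stage 2) acts as a hard constraint, and consequences (stage 3) decide only among the policies that survive it, so "the right constrains the good." All policy names, harm labels, and benefit scores are hypothetical.

```python
# Toy model of just-consequentialist policy selection (illustrative only).
# Stage 2 (deliberation) filters out unjust policies; stage 3 (selection)
# weighs consequences among the just remainder.
from dataclasses import dataclass

# Moor's core values, which just policies must not unjustifiably harm.
CORE_VALUES = ("life", "happiness", "autonomy")

@dataclass
class Policy:
    name: str
    # Harms a rational impartial person would reject (stage 2 input).
    unjustifiable_harms: tuple = ()
    # Stand-in score for benefits minus non-critical harms (stage 3 input);
    # not a metric Moor himself provides.
    net_benefit: float = 0.0

def is_just(policy: Policy) -> bool:
    """Stage 2: unjust if the policy harms a core value in a way
    no rational impartial person would accept."""
    return not any(h in CORE_VALUES for h in policy.unjustifiable_harms)

def select_policy(candidates):
    """Stages 2-3: filter by justice first, then pick the just policy
    with the best overall consequences."""
    just = [p for p in candidates if is_just(p)]
    if not just:
        return None  # no acceptable policy: the practice itself needs reform
    return max(just, key=lambda p: p.net_benefit)

candidates = [
    Policy("mandatory spyware", unjustifiable_harms=("autonomy",), net_benefit=9.0),
    Policy("opt-in monitoring", net_benefit=4.0),
    Policy("no monitoring", net_benefit=2.0),
]
best = select_policy(candidates)
print(best.name)  # prints "opt-in monitoring"
```

Note the order of operations: "mandatory spyware" has the highest benefit score but is eliminated at the justice stage, which is exactly what distinguishes just consequentialism from a purely consequentialist maximization.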

Sources

Gordon-Solmon, Kerah. 2019. “Should Contractualists Decompose?” Philosophy & Public Affairs 47 (3): 259–87. https://doi.org/10.1111/papa.12146.
Moor, James H. 1999. “Just Consequentialism and Computing.” Ethics and Information Technology 1 (1): 61–65. https://doi.org/10.1023/A:1010078828842.
Rehg, William. 2017. Cogent Cyberethics. Unpublished manuscript.