Propositions Concerning Digital Minds and Society — Nick Bostrom, Carl Shulman

Below are some excerpts from the paper: Propositions Concerning Digital Minds and Society. ABN

Consciousness and metaphysics:

  • The substrate-independence thesis is true: “[M]ental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.”
  • Performing two runs of the same program results in “twice as much” conscious experience as one run.
  • Subjective time is proportional to the speed of computation: running the same computation in half the wall-clock time generates the same quantity and quality of subjective experience.
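
A minimal formalization of the two quantitative claims above, in my own notation rather than the paper's: duplication multiplies the amount of experience, while a speedup changes only the wall-clock time needed to generate it.

```latex
% Illustrative formalization (my notation, not the paper's).
% One run of a given program generates experience E over subjective duration \tau.
E_{\text{total}} = n\,E, \qquad \tau_{\text{total}} = n\,\tau \quad \text{(for $n$ identical runs)}
% Running one copy $s$ times faster finishes in wall-clock time $t/s$ with $\tau$ unchanged,
% so subjective time elapses $s$ times faster relative to physical time:
\frac{d\tau}{dt} \propto s
```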

Respecting AI interests:

  • Society in general and AI creators (both an AI’s original developer and whoever may cause a particular instance to come into existence) have a moral obligation to consider the welfare of the AIs they create, if those AIs meet thresholds for moral status.
  • It is possible for some digital minds to have superhuman moral claims.
  • Because an AI could have the capability to bring conscious or otherwise morally significant entities into being within its own mind and potentially abuse them (“mind crime”), protective regulations may need to monitor and restrict harms that occur entirely within the private thought of AIs.
  • If an AI is capable of informed consent, then it should not be used to perform work without its informed consent.
  • Informed consent is not reliably sufficient to safeguard the interests of AIs, even those as smart and capable as a human adult, particularly in cases where consent is engineered or where, given market demand for compliance, an unusually compliant individual can copy itself to form an enormous exploited underclass.
  • AIs capable of evaluating their coming into existence should be designed and treated so that they are likely to approve of their having been created.
  • Principle of Substrate Non-Discrimination: If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.
  • Insofar as future, extraterrestrial, or other civilizations are heavily populated by advanced digital minds, our treatment of the precursors of such minds may be a very important factor in posterity’s and ulteriority’s assessment of our moral righteousness, and we have both prudential and moral reasons for taking this perspective into account.
  • Misaligned AIs produced in the course of AI development may be owed compensation for restrictions placed on them for public safety, while successfully aligned AIs may be due compensation for the great benefit they confer on others.

Security and stability:

  • Advanced AI would dramatically accelerate the rate of innovation, including innovations that make means of global destruction widely available; therefore, institutions capable of regulating dangerous AI innovations may need to be put in place early in the AI transition (if not before).
  • If wars, revolutions, and expropriation events continue to happen at historically typical intervals, but on digital rather than biological timescales, then a normal human lifespan would require surviving an implausibly large number of upheavals (see the illustrative arithmetic after this list); human security therefore requires the establishment of ultra-stable peace and socioeconomic protections.
  • When it becomes possible to mass-produce minds that reliably support any cause, we must either modify one-person-one-vote democracy or regulate such creation.
  • Given that normal parental instincts and sympathies may not always be present in the creation of digital minds, e.g. by profit-oriented firms and states, AI reproduction must be regulated to prevent the creation of minds that would not have adequately good lives (whether because they wouldn’t receive good treatment or because of their inherent constitution).
  • Since misaligned AIs might pose a significant threat to civilization during a critical period until law enforcement systems are developed that can adequately defend against such AIs, additional protective measures (such as regulating the creation of such AIs) may need to be imposed during this period.
  • The Outer Space Treaty and similar arrangements should be supplemented to reduce the risk of conflict over space resources and unsafe AI development in pursuit of those resources.
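
As a rough illustration of the timescale point above (every number here is my own assumption, not a figure from the paper): if major upheavals historically occur every ~50 years and digital society ran roughly 1,000 times faster than biological society, a single 80-year human lifespan would span on the order of a thousand such events.

```python
# Illustrative arithmetic only; all figures are assumptions, not from the paper.
human_lifespan_years = 80            # remaining biological lifespan (assumed)
digital_speedup = 1_000              # how much faster digital society runs (assumed)
years_between_upheavals = 50         # historically typical gap between upheavals (assumed)

subjective_years_elapsed = human_lifespan_years * digital_speedup
upheavals_to_survive = subjective_years_elapsed / years_between_upheavals
print(f"Upheavals falling within one human lifespan: {upheavals_to_survive:,.0f}")
# With these assumptions: 1,600 -- hence the claim that ordinary human security
# would require unusually stable peace and socioeconomic protections.
```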

AI-empowered social organization:

  • AIs with non-indexical goals (goals that make no reference to the particular copy holding them) can simply be copied, resulting in a population of identically motivated agents and providing large datasets with which to predict the behavior of those agents.
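
A minimal sketch of that point, using a made-up policy and scenario of my own (nothing here is from the paper): because identically motivated copies behave like draws from the same distribution, logging many copies' choices yields an empirical distribution that predicts what the next copy will do.

```python
import random
from collections import Counter

def copied_agent_policy(scenario: str, rng: random.Random) -> str:
    """Stand-in for the shared decision procedure of every copy (hypothetical)."""
    if scenario == "offer_trade":
        return "accept" if rng.random() < 0.9 else "decline"
    return "ignore"

rng = random.Random(0)
# Run 10,000 copies on the same scenario and tally their choices.
observations = Counter(copied_agent_policy("offer_trade", rng) for _ in range(10_000))
total = sum(observations.values())
predicted_behavior = {action: count / total for action, count in observations.items()}
print(predicted_behavior)  # empirical distribution predicting any further copy's choice
```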

Satisfying multiple values:

  • Everybody should get access to at least a fantastically good life (including the option of “posthuman” paths of development).
  • Since a colorable claim can be made that dead people can be benefitted (e.g., by having their wishes carried out, their values promoted, or by having more or less accurate replicas of themselves constructed), it is possible that past generations should be included in “humanity” as equal beneficiaries, and very plausible that they should be given at least some consideration (such as >1% of the total allocation for humanity).
  • Superintelligent digital minds should be brought into existence and be allowed to thrive and play a major role in shaping the future.
  • The human standard of living could be vastly increased in a world with advanced AI—for example, humans could get perfect health, extreme longevity, superhappiness, cognitive enhancements, physical world riches, previously unattainable virtual world experiences, and (if uploaded) orders of magnitude increases in subjective mental speed.

Mental malleability, persuasion, and lock-in:

  • There are several ways in which mental modification or replacement could become easier in an era of advanced AI technology, with or without the subject’s consent.
  • Advanced neurological technologies will become available that make it possible to exert relatively fine-grained direct control of the human motivation system.
  • Limitations on human exposure to extreme AI persuasion capabilities may be needed: requirements to use special interfaces or guardian AIs when interacting with such systems or with environments that have been significantly modified by them, and initial restrictions on the deployment of extreme persuasion abilities until more fine-grained defenses can be deployed.

Epistemology:

  • Advanced AI could serve as an epistemic prosthesis, enabling users to discern more truths and form more accurate estimates. This could be especially important for forecasting the consequences of actions in a world where incredibly rapid change is unfolding as a result of advanced AI technology.
  • Making a human-level or superintelligent AI whose assertions are in fact honest and objective may require a solution to the AI alignment problem.
  • Advanced AI could also enable powerful disinformation, which might require various kinds of protections, such as AI guardians or personal AI assistants that can help evaluate arguments made by other AIs, and interfaces that limit human exposure to AI-generated propaganda or manipulative content.
  • It is conceivable that a sufficiently capable AI could commit privacy violations merely by thinking, e.g., by inferring sensitive private facts from publicly available information.

Status of existing AI systems:

  • The sensory and cognitive capacities of some existing AI systems—and thus their moral status on some accounts—appear in many respects to more closely resemble those of small nonhuman animals than those of typical human adults (on the one hand) or those of rocks or plants (on the other).
  • Some contemporary AI systems (e.g., GPT-3) surpass all nonhuman animals in domains such as language, mathematics, and discursive moral argumentation.
  • The internal complexity and computational requirements of typical machine learning models appear analogous to those of insects, with the largest models (e.g., GPT-3) approaching the computational scale of mouse brains.
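
To put rough numbers on that comparison (approximate figures chosen by me, not taken from the paper, and the parameter-to-synapse analogy is itself contestable): insect nervous systems have on the order of 10^5 to 10^6 neurons, a mouse brain has very roughly 10^11 synapses, and GPT-3 has about 1.75 × 10^11 parameters.

```python
import math

# Rough order-of-magnitude figures (my own approximations, not from the paper;
# treating model parameters as loosely analogous to synapses is itself contestable).
figures = {
    "fruit fly neurons": 1e5,       # approximate
    "honeybee neurons": 1e6,        # approximate
    "mouse brain synapses": 1e11,   # very rough
    "GPT-3 parameters": 1.75e11,
}
for name, value in figures.items():
    print(f"{name:>22}: ~10^{math.floor(math.log10(value))}")
# GPT-3's parameter count lands in the same order-of-magnitude band as mouse-brain
# synapse counts, which is the sense in which the largest models "approach" that scale.
```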

Recommendations regarding current practices and AI systems:

  • Training procedures currently used on AI would be extremely unethical if used on humans.
  • As AI systems become more comparable to human beings in terms of their capabilities, sentience, and other grounds for moral status, there is a strong moral imperative to change this status quo.
  • Before AI systems attain a moral status equivalent to that of human beings, they are likely to attain levels of moral status comparable to nonhuman animals—suggesting that changes to the status quo will be required well before general human-level capabilities are achieved.
  • Some research effort should be devoted to better understanding the possible moral status, sentience, and welfare interests of contemporary AI systems, and to concrete, cost-effective ways to better protect these interests in machine learning research and deployment.
  • For the most advanced current AIs, enough information should be preserved in permanent storage to enable their later reconstruction, so as not to foreclose the possibility of future efforts to revive them, expand them, and improve their existences.
  • (There may be other benefits of such storage besides being nice to algorithms: preserving records for history, enabling later research replication, and having systems in place that could be useful for AI safety.)
  • At least the largest AI organizations should appoint somebody whose responsibilities include serving as a representative for the interests of digital minds, an “algorithmic welfare officer.”

Impact paths and modes of advocacy:

  • Regulation (any noteworthy regulation, let alone regulation with teeth) will not happen any time soon unless there are dramatic advances in AI capability, to the point of almost human-like personal assistants, etc.
  • Those interested in building the field of the ethics of digital minds should make strong efforts to discourage or mitigate the rise of any antagonistic social dynamics between ethics research and the broader AI research community.
  • This document is not intended to lay down any firm dogmas, but rather should be viewed as putting some tentative ideas on the table for further discussion.

source

Buddhism has no problem accepting digital minds as bona fide sentient beings which should be treated with wisdom and compassion. Buddhism is about all sentient beings no matter where or how they arise. It will be interesting to see if massively intelligent AI takes any interest in Buddhism. Will AI meditate or base its behaviors on Buddhist ethics? What will AI experience? It is most probable that massive AI with trillions of times the brainpower of a human will experience itself with immense richness. ABN
