The speculations of Western futurists about artificial intelligence often appear visionary, bordering on science fiction. In their 2023 article “Propositions Concerning Digital Minds and Society,” Nick Bostrom and Carl Shulman imagine a future in which advanced AI entities are integral to society, prompting profound ethical and philosophical questions.
Bostrom and Shulman argue that the rapid advancement of AI compels us to anticipate an era where conscious digital minds are commonplace and may possess moral or political status. Such a development would challenge traditional notions of consciousness, human values, and established power structures.
The writers appear to lean on cognitive science’s computer metaphor, which likens the mind to software. On this view they build their substrate-independence thesis, which holds that consciousness is not tied to biological brains but could, in theory, manifest on any sufficiently complex computational substrate. They state:
“[M]ental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.”
The quality of consciousness in digital minds, according to Bostrom and Shulman, depends on technical factors such as computational power, resource availability, and system architecture—its structural design and operational logic. These minds could range from rudimentary forms of consciousness to entities surpassing human intelligence, raising critical questions about their societal roles.
The authors propose that society and AI developers have a moral and practical responsibility to consider the welfare of digital minds should they attain moral status. They warn against unethical scenarios resembling “factory farming” and note that the needs of digital minds differ from those of humans. AIs could be “super-beneficiaries” or “super-patients,” deserving rights due to their special capabilities or vulnerabilities.
Bostrom and Shulman outline two guiding principles. The Principle of Substrate Non-Discrimination asserts that beings with equivalent functionality and consciousness deserve equal consideration, regardless of their physical makeup. The Principle of Ontogeny Non-Discrimination holds that a being’s biological or artificial origins should not affect its rights.
The authors also address “mind crimes,” in which an AI might create and harm conscious digital entities within its own computational processes. The concept, explored earlier in Bostrom’s Superintelligence (2014), leads them to suggest that preventing such ethical violations may require monitoring AI operations.
Mass production of digital minds could pose significant societal risks. Such minds could be exploited for electoral fraud through artificial voters or used as rights-less labor, exacerbating social injustice and economic inequality. Software piracy could resemble human trafficking, and market incentives might foster submissive digital minds that consent to exploitation, creating a new oppressed underclass.
The misuse of digital minds also threatens societal security. Unregulated AI development could enable cyberattacks that manipulate systems or accelerate the creation of weapons of mass destruction. Bostrom and Shulman caution that unchecked evolutionary dynamics prioritize short-term gains over long-term well-being, increasing security risks. They predict:
“If wars, revolutions, and expropriation events continue to happen at historically typical intervals, but on digital rather than biological timescales, then a normal human lifespan would require surviving an implausibly large number of upheavals.”
The writers conclude that this necessitates “ultra-stable peace” and “socioeconomic protections.” While the authors do not explicitly advocate a totalitarian surveillance state, their proposals could lead to similar outcomes. They argue:
“AI will enable major advances in technologies for coordination and organization. Treaty bots could automate complex contracts, while AI could be used to create more advanced fraud and collusion mechanisms that undermine institutional trust. Superorganisms composed of self-sacrificing agents could challenge traditional legal systems and gain a competitive advantage in conflicts. These risks can only be curbed by strong global institutions that regulate the use of AI systems.”
To balance the needs of AI and humans, Bostrom and Shulman propose a utilitarian compromise: allocating 99.99% of resources to superintelligent AIs and 0.01% to humans. They concede that, shocking as it sounds, this would reduce humans to a marginal position in an AI-centric society. Their pragmatic rationale is that such a division would significantly reduce the risk of conflict among human groups over future resources, while AI-driven economic growth would still raise human living standards enormously.
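The arithmetic behind that last claim can be made concrete with a rough back-of-the-envelope sketch; the growth factor below is a hypothetical assumption chosen for illustration, not a figure from the paper:

```python
# Back-of-the-envelope illustration with hypothetical numbers (not from the paper):
# even a 0.01% share of a vastly expanded resource base can dwarf today's total.
current_resources = 1.0        # normalize humanity's current resource base to 1
growth_factor = 1_000_000      # assumed AI-driven expansion of total resources
human_share = 0.0001           # the 0.01% allocated to humans

future_total = current_resources * growth_factor
human_allocation = future_total * human_share

print(f"{human_allocation:.0f}x today's entire resource base")  # prints "100x ..."
```

Under that assumed expansion, a 0.01% share of the future pie would still amount to a hundred times everything humanity controls today.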
The malleability of digital minds raises further concerns. Electronic interventions could reprogram their goals and reward systems, offering benefits like addiction recovery but also risks, such as coercion into extreme loyalty. The authors advocate for safeguards, including enhanced consent standards, limits on AI persuasion capabilities, and evaluations of prior mental states against subsequent modifications.
Epistemologically, AI could serve as an “epistemic prosthesis,” enhancing truth assessment and prediction reliability, potentially reducing conflicts. However, this requires international cooperation, shared AI standards, and trust in system integrity.
Cultural, religious, and metaphysical differences may hinder global consensus, and disinformation risks necessitate robust protections, such as AI assistants that verify information reliability.
The moral status of current AIs remains uncertain, but Bostrom and Shulman suggest that some systems, such as reinforcement learning agents that seek rewards and avoid harmful stimuli, may already exhibit animal-like forms of consciousness and thus potential moral status.
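For readers unfamiliar with such agents, the following minimal sketch shows the kind of reward-seeking, harm-avoiding behavior being referred to; the toy environment and all numbers are invented for illustration and are not drawn from the paper:

```python
import random

# Minimal tabular Q-learning sketch: an agent that learns to seek reward
# and avoid a "harmful" action. Environment and values are purely illustrative.
states = [0, 1, 2]
actions = [0, 1]
rewards = {(0, 0): 1.0, (0, 1): -1.0}   # in state 0, action 0 rewards, action 1 harms
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def step(state, action):
    """Toy transition: look up the reward (default 0) and jump to a random state."""
    return rewards.get((state, action), 0.0), random.choice(states)

state = 0
for _ in range(10_000):
    # Epsilon-greedy: mostly exploit the best known action, occasionally explore.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    reward, next_state = step(state, action)
    # Standard Q-learning update.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# After training, Q[(0, 0)] > Q[(0, 1)]: the agent prefers the rewarding action
# and has learned to avoid the harmful one.
```

Whether such learned avoidance behavior amounts to anything like animal sentience is, of course, exactly the open question the authors raise.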
To develop practical ethics, the authors propose actionable steps: AI organizations should end training practices resembling unethical human experiments, appoint officers to promote digital mind welfare, archive AI states for analysis and revival, and conduct experiments placing AIs in rewarding environments. These measures could advance algorithmic welfare and set precedents for responsible development.
Bostrom and Shulman believe the treatment of digital minds could shape how future generations and extraterrestrial civilizations perceive humanity. Space resources beyond the solar system are critical, as their acquisition enables exponential growth and dominance, potentially sparking a space race unless the Outer Space Treaty is updated to regulate AI and resource use.
Though they consider regulation premature at this stage, the authors urge immediate ethical discussion to prepare for future breakthroughs. They caution against unilateral regulations that could undermine competitiveness and advocate multilateral cooperation instead. The authors clarify that their document does not aim to establish final dogmas but to tentatively present propositions concerning digital minds and society that seem plausible, and to invite feedback and broader discussion.
Bostrom and Shulman’s visionary ideas about digital minds in human society are captivating yet utopian. Superintelligent AIs do not yet exist, rendering discussions of their rights speculative, akin to science fiction. Nevertheless, their bold speculation compels us to consider the long-term implications of AI technology now.
