4. Objections.
"HI claims that once the biological substrates of suffering have been abolished, it is 'inconceivable' that suffering will ever be recreated. But this isn't so. According to the Simulation Argument, there is a significant likelihood that we ourselves are living in an ancestor-simulation run by our advanced descendants. If this is the case, then our simulated status entails that posthumans will not eradicate suffering. The Simulation Argument implies that our descendants will re-introduce suffering via their ancestor-simulations, or they never opted to abolish suffering in the first instance." No 35
The Simulation Argument (SA) is perhaps the first interesting argument for the existence of a Creator in 2000 years. It is worth noting that SA is distinct from the traditional sceptical challenge of how one can ever know that one's senses aren't being manipulated by an evil Cartesian demon, or be sure that one isn't just a brain in a nefarious neurosurgeon's vat, and so forth. SA is also distinct from the controversial but non-sceptical inferential realist theory of perception: inferential realists believe that each of us lives in egocentric simulations of the natural world run by a real organic computer, i.e. the mind-brain. Instead, SA claims that given exponential growth in processing power and storage capacity, the entire universe as commonly understood could be a simulation run on an ultrapowerful computer built by our distant descendants. We may really be living in one of posterity's versions of The Matrix. SA's important subtlety - the subtlety that catapults SA from idle philosophical fancy to serious scientific metaphysics - is that if multiple ancestor-simulations are destined to be created whose inhabitants are subjectively indistinguishable from ourselves, then statistically it is much more likely that we are living with the great majority in one of these indistinguishable simulations than with the minority in pre-simulation Reality. Or rather, SA concludes that at least one of the following three propositions must be true: 1. Almost all civilisations at our level of development become extinct before becoming technologically mature; 2. The fraction of technologically mature civilisations that are interested in creating ancestor-simulations is almost zero; 3. You are almost certainly living in a computer simulation. Actually, SA's proposed trilemma may soon be simplified. The first of SA's three disjuncts, the extinction scenario, can be effectively excluded within a century or two - an exclusion that ostensibly increases the likelihood one is living in a cosmic mega-simulation. For humans are poised to colonise worlds beyond the home planet, thereby rendering global thermonuclear war, giant asteroid impacts, a nanotech "grey goo" incident, superlethal viral pandemics and other Earth-ravaging catastrophes impotent to extinguish intelligent life itself. Even on the most apocalyptic end-of-the-world prophecies, intelligent life will presumably survive in at least low-density branches of the universal wave function. In the far future, superintelligent posthumans may mass-produce ancestor-simulations. If so, these computer simulations of ancestral life may include billions of human primates whose inner lives, the simulation hypothesis suggests, are subjectively indistinguishable from our own.
What should we make of this? First, a familiar sociological point. The dominant technology of an age typically supplies its root-metaphor of mind - and often its root-metaphor of Life, The Universe and Everything. Currently our dominant technology is the digital computer. We may have finally struck lucky. Yet what digital computers have to tell us about the ultimate mysteries of consciousness and existence remains elusive. At any rate, no attempt will be made here to discuss SA exhaustively, except insofar as its conclusion bears on the abolition of suffering. But it's first worth raising a few doubts about the technical feasibility of any kind of simulation hypothesis. These doubts will then be set aside to consider how likely it is that a notional superintelligence which did have the computing technology to run full-blown ancestor-simulations would ever choose to do so.
One problem with SA is that it rests on a philosophical premise for which there is no evidence, namely the substrate-independence of qualia - the introspectively accessible "raw feels" of our mental lives. This premise is probably best rephrased as the substrate-neutrality or substrate-invariance of qualia: SA's functionalism doesn't claim that the colours, sounds, smells, emotions, etc, of subjective first-person consciousness can be free-floating, merely that any substrate that can "implement" the computations performed by our neural networks will conserve the textures of human experience. The substrate-neutrality assumption is intended to rule out a [seemingly] arbitrary "carbon chauvinism": take care of the computations, so to speak, and the qualia will take care of themselves. SA aims to quantify the likelihood of our living in an ancestor-simulation with a principle of indifference: the probability that we are living in a simulated universe rather than primordial Reality is equal to the fraction of all people that are actually simulated people. Critically for the argument, SA assumes the subjective indistinguishability of "real" from hypothetical post-biological "simulated" experiences. SA proposes that the power of posthuman supercomputers may allow vastly more simulated copies of people to exist than ever walked the Earth in the ancestral population. This is because once a single "master program" is written, copying its ancestor-files is trivially easy if storage space is available. Hence SA's claim that if posthumans ever run ancestor-simulations, then we are almost certainly in one of them. But here is the rub. The prior probability to be assigned to our living in a simulated universe depends on the probability one assigns to the existence of superadvanced civilisations that are both able and willing to create multitudes of sentience-supporting ancestor-simulations. And there is simply no evidence that such computationally simulated virtual "people", if they ever exist, will be endowed with phenomenal consciousness - any more than computationally simulated hurricanes feel wet. SA postulates that consciousness will supervene on, or "result" from, supercomputer programs emulating organic mind/brains with the right causal-functional organisation at some suitably fine-grained level of detail. The physical substrates of the putative supercomputer used to simulate sentient creatures like us will supposedly influence our kinds of consciousness only via their influence on computational activities. But it's worth noting that silicon etc robots/computers can already emulate and exceed human performance in many domain-specific fields of expertise without any hint of consciousness. It's unclear how or why generalising or extending this domain-specific competence will switch on inorganic sentience - short of the physical "bionization" of our robots/computers via organic implants. Without qualia, we ourselves would just be brainy zombies; yet qualia are neither necessary nor sufficient for the manifestation of behavioural intelligence. Thus some very stupid organic creatures suffer horribly; and some very smart silicon systems and digital sims that can defeat the human world-champion at chess aren't sentient. We're clearly missing something: but where are we going wrong?
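Before pressing on, it may help to make SA's indifference reasoning explicit. The formula below is a sketch of the core equation of Bostrom's 2003 paper, as it is standardly presented:

```latex
% The fraction of all observers with human-type experiences
% who live in computer simulations:
\[
  f_{\mathrm{sim}}
    \;=\; \frac{f_{P}\,\bar{N}\,\bar{H}}{f_{P}\,\bar{N}\,\bar{H} + \bar{H}}
    \;=\; \frac{f_{P}\,\bar{N}}{f_{P}\,\bar{N} + 1}
\]
% f_P     : fraction of human-level civilisations reaching a posthuman stage
% \bar{N} : average number of ancestor-simulations such a civilisation runs
% \bar{H} : average number of individuals who live before that stage is reached
```

Unless the product f_P·N̄ is close to zero, f_sim approaches 1 - which is just SA's third disjunct. The complaint pressed here targets the hidden premise that those N̄ simulated populations contain phenomenally conscious observers at all: if they don't, they drop out of the reference class and the fraction collapses.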
For SA to work in the absence of a scientific explanation of consciousness, some kind of cross-substrate qualia-conservation postulate must be assumed on faith. Yet if phenomenal consciousness is really feasible in other substrates or virtual machines, does this synthetic consciousness have the same generic texture as ours - or might synthetic consciousness be as different from ours as waking is from dreaming (or LSD-like) consciousness? Assuming conscious minds can be "implemented", "uploaded" or "emulated" in other substrates, what grounds are there for supposing that the uploads/simulated minds retain all, or any, particular qualia at every virtual level - assuming their specific textures are as computationally incidental to the mind as the material composition of the pieces is to a game of chess? Granted that biological minds can be scanned, digitised and uploaded to/simulated in another medium, will the hypothetical sentience generated be sub-atomic, nano-, micro- (or pan-galactic?) in scale? Can abstract virtual machines really generate spatio-temporally located modes of consciousness? Are multiple layers of qualia supposed to be generated by virtual beings in a nested hierarchy of simulations? Are the stacked qualia supposed to be epiphenomenal, i.e. without causal effect? If so, what causes subjects like us to refer to their existence? By what mechanism? If ancestor-simulations are being run, then what grounds exist for assuming the conservation of type-identical qualia across multiple layers of abstraction? Are these layers of computational abstraction supposed to be strict or, more realistically, "leaky"? SA undercuts the [ontological] unity of science by treating Reality as though it literally has levels. Yet there is no evidence that virtual machines could have the causal power to generate real qualia; and the existence of "virtual" qualia would be a contradiction in terms.
None of the above considerations entails that phenomenal consciousness or unitary conscious minds are substrate-specific. Perhaps the problem is that there are microfunctional differences between organic and silicon etc computers/robots - microfunctional differences that our putative Simulators might emulate on their supercomputers with software that captures the fine-grained functionality which coarser-grained simulations omit. After all, it's question-begging to describe carbon merely as a "substrate". The carbon atom has functionally unique valence properties and a unique chemistry. The only primordial information-bearing self-replicators in the natural world are organic precisely in virtue of carbon's functional uniqueness. Perhaps the functional uniqueness of organic macromolecules extends to biological sentience. These microfunctional differences may be computationally irrelevant or inessential to a game of chess; but not in other realms. Suppose, for example, that the binding problem [i.e. how the unity of conscious perception is generated by the distributed activities of the brain] and the unitary experiential manifolds of waking/dreaming experience can be explained only by invoking quantum-coherent states in organic mind-brains. Admittedly, this hypothesis resolves the Hard Problem of consciousness only if one grants a monistic idealism/panpsychism that most scientists would find too high a price to pay. But on this account, the fundamental difference between conscious biological minds and silicon etc computers is that conscious minds are quantum-coherent entities, whereas silicon etc computers (and brains in a dreamless sleep, etc) are effectively mere classical aggregates of microqualia. Counterintuitively, a naturalistic panpsychism actually entails that silicon etc robots are zombies.
A proponent of the simulation hypothesis might respond: So what? A functionally unique organic neurochemistry needn't pose an insurmountable problem for a Simulator. After all, there is no reason to suppose that a classical computer can't formally calculate anything computable on a quantum computer, since (complications aside) a quantum computer is computationally equivalent to a Turing machine, albeit vastly faster for some problems. So if silicon etc supercomputers could simulate biological mind-brains, putative quantum-coherence and all, then qualia might still "emerge" at this layer of abstraction. The technicalities of SA's original, classical formulation aren't essential to the validity of its argument. SA still works if it's recast to treat the organic mind/brain as a quantum computer. The snag is that this defence of SA conflates the simulation of extrinsic and intrinsic properties: formal input-output relationships and the felt textures of experience. Computational activity that takes milliseconds will not feel the same as computational activity that takes millennia - quite aside from any substrate-specific differences in texture, or absence thereof. If quantum coherence is the signature of conscious mind, then conscious biological minds are implicated in the fundamental hardware of the universe itself - the computationally expensive, program-resistant stuff of the world. As David Deutsch has stressed, the computations of a quantum computer must be done somewhere. If our minds by their very nature tap into the quantum substrate of basement reality, then this dependence undercuts the grounds for believing that we are statistically likely to inhabit an ancestor-simulation - though it doesn't exclude traditional brain-in-a-vat-style scepticism.
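Deutsch's point can be given rough numbers. The following sketch - a toy illustration only, not a claim about what post-silicon hardware might optimise away - shows why brute-force classical emulation of quantum-coherent states is so computationally expensive: a pure state of n entangled qubits takes 2^n complex amplitudes to store.

```python
# Toy estimate: memory needed to hold the full state vector of n qubits
# on a classical machine, at 16 bytes per complex amplitude (complex128).
def state_vector_bytes(n_qubits: int) -> int:
    """Bytes required for a dense 2**n-amplitude state vector."""
    return (2 ** n_qubits) * 16

for n in (10, 30, 50, 300):
    print(f"{n:>3} qubits -> {state_vector_bytes(n):.3e} bytes")
# ~16 KB at 10 qubits; ~18 petabytes at 50 qubits; at 300 qubits the
# amplitude count exceeds the number of atoms in the observable universe.
```

If conscious minds really are quantum-coherent, a Simulator pays something like this exponential overhead for every mind it emulates faithfully - unless it skimps, a "reality-shortcut" whose risks are taken up below.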
Of course, none of the above reasoning is decisive. We simply don't understand consciousness. Many scientists and philosophers would dispute that quantum theory is even relevant to the problem. Or perhaps we are simulated quantum mind/brains running on a post-silicon quantum supercomputer. Or perhaps the laws of quantum mechanics themselves are an artefact of our simulation in some kind of posthuman "computronium". Who knows. Here we are veering into more radical forms of scepticism. But if insentient simulations of humans (etc) are feasible, then one may reasonably doubt all three disjuncts of SA. Maybe neither the premises nor the conclusion of SA is true: intelligent life is not headed for extinction; some of our descendants may conceivably run multiple ancestor-simulations in low-density branches of the universal wave function; and yet it is exceedingly unlikely that we are participants in one of them.
However, let's set aside technical doubts about computationally simulated sentience. Assume that posthumans have solved the Hard Problem of consciousness. The explanatory gap has been closed without unravelling our entire conceptual scheme in the process. Or perhaps qualia can themselves be digitally encoded and computationally re-created at will. Assume too that some analogue of Moore's Law of computer power is not just a temporary empirical generalisation: computer power continues to increase indefinitely until superintelligence has to grapple with the Bekenstein bound - unless this limit on the entropy or information that can be contained within a three-dimensional volume is itself supposed to disclose the granularity of our simulation. Assume further that a supercivilisation reaches a stage of development where it has the technical capacity to run an abundance of ancestor-simulations and simulate [a fragment of] the multiverse disclosed by contemporary physical science - though computationally simulating the infinite-dimensional Hilbert space of quantum mechanics is no task for the faint-hearted. Finally, if the ancestor-simulations being run are supposed to be cheap simulacra rather than faithful replications, let's assume, like SA, that the computational savings from taking "reality-shortcuts" outweigh the computational cost of the supervisory software - although in practice the price of intervening whenever ancestor-simulants get too close to discovering their ersatz status could make skimping on our Matrix a false economy. Granted all the above, consider the scenario proposed in SA. Of all the immense range of alternative activities that future Superbeings might undertake - most presumably inconceivable to us - running ancestor-simulations is one theoretical possibility in a vast state-space of options. Posthumans could instead opt to run paradises for the artificial lifeforms they evolve or create. Presumably they can engineer such heavenly magic for themselves. But for SA purposes, we must imagine that (some of) our successors elect to run malware: to program and replay all the errors, horrors and follies of their distant evolutionary past - possibly in all its classically inequivalent histories, assuming universal QM and maximally faithful ancestor-simulations: there is no unique classical ancestral history in QM. But why would posthumans decide to do this? Are our Simulators supposed to be ignorant of the implications of what they are doing - like dysfunctional children who can't look after their pets? Even the superficial plausibility of "running an ancestor-simulation" depends on the description under which the choice is posed. This plausibility evaporates when the option is rephrased. Compare the referentially equivalent question: are our posthuman descendants likely to recreate/emulate Auschwitz? AIDS? Ageing? Torture? Slavery? Child-abuse? Rape? Witch-burning? Genocide? Today a sociopath who announced he planned to stage a terrorist attack in the guise of "running an ancestor-simulation" would be locked up, not given a research grant. SA invites us to consider the possibility that the Holocaust and daily small-scale horrors will be recreated in future, at least on our local chronology - a grotesque echo of Nietzschean "eternal recurrence" in digital guise. Worse, since such simulations are so computationally cheap, even the most bestial acts may be re-enacted an untold multitude of times by premeditated posthuman design.
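As an aside, the Bekenstein bound invoked above can be given rough numbers too. The sketch below is a back-of-envelope illustration only; the brain's mass and radius are loose assumptions, not measurements.

```python
import math

# Bekenstein bound in bits: I <= 2*pi*R*M*c / (hbar * ln 2),
# for a system of mass M (kg) bounded by a sphere of radius R (m).
HBAR = 1.0546e-34   # reduced Planck constant, J*s
C = 2.998e8         # speed of light, m/s

def bekenstein_bits(mass_kg: float, radius_m: float) -> float:
    """Upper bound on the information (bits) the system can contain."""
    return 2 * math.pi * radius_m * mass_kg * C / (HBAR * math.log(2))

# A human brain, taken very roughly as 1.4 kg inside a 7 cm radius:
print(f"{bekenstein_bits(1.4, 0.07):.1e} bits")   # ~2.5e42 bits
```

Whatever the exact figure, the bound bites only astronomically far beyond present-day hardware; well short of it, once a master program exists, further simulated copies remain computationally cheap.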
It is this hypothetical abundance of computational copies that gives SA's proposal - that one may be living in a simulation - its argumentative bite. At least the traditional Judeo-Christian Deity was supposed to be benevolent, albeit in defiance of the empirical evidence and discrepancies in the Biblical text. But any Creator/Simulator who opts to run prerecorded ancestor-simulations presumably knows of the deceit practised on the sentient beings it simulates. If the Simulators have indeed deceived us on this score, then what can we be expected to know of unsimulated Reality that transcends our simulation? What trans-simulation linguistic apparatus of meaning and reference can we devise to speak of what our Deceiver(s) are purportedly up to? Intuitively, one might suppose posthumans may be running copies of us because they find ancestral Darwinian life interesting in some way. After all, we experiment on "inferior" non-human animals and untermenschen with whom we share a common ancestry. Might not intellectual curiosity entitle superintelligent beings to treat us in like manner? Or perhaps observing our antics somehow amuses our Simulators - if the homely dramaturgical metaphor really makes any sense. Or perhaps they just enjoy running snuff movies. Yet this whole approach seems misconceived. It treats posthumans as though they were akin to classical Greek gods - just larger-than-life versions of ourselves. Even if advanced beings were to behave in such a manner, would they really choose to create simulated beings that suffered - as distinct from formally simulating their ancestral behaviour in the way we computationally simulate the weather?
Unfortunately, this line of thought is long on rhetorical questions and short on definitive proof. A counterargument might be that most humans strongly value life, despite the world's tragedies and its everyday woes. So wouldn't a "like-minded" Superbeing be justified in computationally replaying as many sentient ancestral lives as possible, including Darwinian worlds like our own? Even Darwinian life is sometimes fun, even beautiful. Might not our Simulators regard the episodic nastiness of such worlds as a price worth paying for their blessings - a judgement shared by most non-depressive humans here on Earth? Yet this scenario is problematic even on its own terms. Unless the computing resources accessible to our Simulators were literally infinite - a claim of dubious physical meaning - every simulation has an opportunity-cost in terms of simulated worlds forgone. If one were going to set about creating sentient-life-supporting worlds in a supercomputer, then why not program and run the greatest number of maximally valuable paradises - rather than mediocre or malignant worlds like ours? Presumably posthumans will have mastered the technologies of building super-paradises for themselves, whether physically or via immersive VR. They'll presumably appreciate how sublimely wonderful life can be at its best. So why recreate the ugliness from which they emerged - a perverse descent from posthuman Heaven into Darwinian purgatory? Our own conviction that existing life is worthwhile is itself less a product of disinterested reflection than a (partially) heritable expression of status quo bias. If asked, we don't believe the world's worst scourges, past or present, should be proliferated if the technical opportunity ever arises. Thus we aim to cure and/or care for the brain-damaged, the mentally ill and victims of genetic disease; but we don't set out to create more brain-damaged, mentally ill and terminally sick children. Even moral primitives like contemporary Darwinian humans would find abhorrent the notion of resurrecting the nastier cruelties of the past. One wouldn't choose to recreate one's last toothache, let alone replay the world's sufferings to date. How likely is it that posthumans will ever be more backward-looking, in this sense, than us?
Of course, predictions of "progress" in anything but the most amoral, technocratic sense can sound naïve. Extrapolating an exponential growth in computing power, weapons technology or the like sounds reasonable; extrapolating an expanding circle of compassion to embrace all sentient life sounds fuzzy-minded and utopian. Certainly, given the historical record, dystopian possibilities seem a great deal more plausible than a transition to paradise-engineering. However, a reflex cynicism is itself one of the pathologies of the Darwinian mind. As our descendants rewrite their own code and become progressively smarter, their conception of intelligence will be enriched too. Not least, enriched intelligence will presumably include an enhanced capacity for empathy: a deeper understanding of what it is like to be others, beyond the self-centred perspective of Darwinian minds evolved under pressure of natural selection. An enhanced capacity for empathetic understanding doesn't feature in conventional measures of intelligence. Yet this deficit reflects the inadequacy of our Aspergersish "IQ tests", not the cognitive unimportance of smarter mind-reading and posthuman supersentience. Failure to appreciate the experience of others, whether human or nonhuman, is not just a moral limitation: it is a profound intellectual limitation too; and collective transcendence of humanity's intellectual limitations is an indispensable part of becoming posthuman. If our descendants have any inkling of what it is like to be, say, burned alive as a witch, or to spend all one's life in a veal crate, or simply to be a mouse tormented by a cat, then it seems inconceivable they would set out to (re-)create such terrible states in computer "simulations", ancestral or otherwise. Achieving a God's-eye view that impartially encompasses all sentience may be impossible, even for our most godlike descendants. But posthuman cognitive capacities will presumably transcend the anthropocentric biases of human life. HI argues that posthuman benevolence will extend to the well-being of all sentience - a technically feasible outcome, though admittedly a speculative one.
However, there is a counter to such reassuring arguments. It runs roughly as follows. We can have no insight into the nature of a hypothetical posthuman civilisation that might be capable of running subjectively realistic ancestor-simulations in their supercomputers. Therefore we have no insight into the motivational structure of our Simulators, or into why they might do this to us. Or perhaps we are merely incidental to their simulation(s) - which exist for a Higher Purpose that we lack the concepts even to express. For instance, perhaps advanced posthumans can command the Planck-scale energies hypothetically needed to create a "universe-in-the-laboratory". For inscrutable reasons, such posthumans might decide to spin off a plethora of baby multiverses, making it statistically more likely that we are living in one of them rather than in the primordial multiverse. If so, we are emulating/simulating our ancestors in another multiverse that spawned us; and we are destined in turn to emulate/simulate our descendants in baby multiverses to come. This scenario contrasts with messy "interventionist" or conspiratorial simulations where posthuman supercomputers are supposed to be constantly rearranging stuff in our simulated world to keep us in ignorance of our artificial status. The point here is that we can't rule out any such scenario, because we know absolutely nothing of posthuman ethics - or posthuman values of any kind. Posthuman psychology may simply be unfathomable to Homo sapiens, as our purposes are to lesser primates - or to beetles. Or an explanation of our simulated status may be inaccessible to us simply in virtue of our being the ancestor-simulations of real historical people. Our ignorance could be written into the script.
We can't be sure this argument is false. There is nonetheless a problem with the unfathomability response. The prospect of using supercomputers to run ancestor-simulations belongs to the conceptual framework of early 21st-century human primates. The idea resonates with at least a small sub-set of social primates because running ancestor-simulations seems - pre-reflectively, at any rate - the kind of interesting activity that more advanced versions of ourselves might like to pursue. Yet if we have no insight into truly posthuman motivations or purposes, or indeed into whether such anthropomorphic folk-psychological terms can bear posthuman meaning, then it's hard to assign any significant probability to our successors opting to run sentient ancestor-simulations. In fact, given the immense state-space of potential options, and the intrinsic squalor of so much Darwinian life, the prior probability we should assign to their doing so might seem vanishingly small - even if the technological obstacles could be overcome.
Contrary to the Objection, then, the existence of a world full of suffering is not evidence that our advanced descendants will never abolish its substrates. Rather, the existence of suffering is strong presumptive evidence that our descendants will never run sentience-supporting ancestor-simulations - and hence that we are not living in one.